Large file optimization: block-level syncing and no-cache or block-level caching?

Hi,

Very large files, on the order of 100 GB, when in a synced directory, get copied to the cloud in full when modified locally.
In some cases the modification touches only a few MB of the file, so syncing the full file is a waste of time and bandwidth, or in some cases simply not feasible.
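
To make question 1 below concrete, here is roughly what I mean by block-level sync: hash the file in fixed-size blocks, keep the digests from the last sync, and only re-upload the blocks whose digest changed. A minimal Python sketch (the block size and function names are just for illustration, not anything Icedrive actually does):

```python
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024  # 4 MiB blocks, arbitrary choice for illustration

def block_digests(path):
    """Return one SHA-256 digest per fixed-size block of the file."""
    digests = []
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            digests.append(hashlib.sha256(block).hexdigest())
    return digests

def changed_blocks(old_digests, new_digests):
    """Indices of blocks that are new or whose digest differs since the last sync."""
    return [
        i for i, digest in enumerate(new_digests)
        if i >= len(old_digests) or old_digests[i] != digest
    ]

# Usage idea: compare the digests recorded at the last sync with the current ones,
# then upload only the changed block indices instead of the whole 100 GB file.
```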

1/ Do you plan on implementing block-level sync for large files?

2/ When reading only small parts of a large file from the mounted drive, it seems the whole file gets downloaded into the cache. Could this be improved or avoided? (Rough sketch of what I mean below.)
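
By "improved" I mean something like range reads: fetch only the byte range that was requested instead of pulling the whole file into the cache. A rough Python sketch using a plain HTTP Range request (the URL is hypothetical, and I have no idea how the real backend exposes files):

```python
import requests

def read_range(url, offset, length):
    """Fetch only `length` bytes starting at `offset` via an HTTP Range request,
    instead of downloading (and caching) the whole file."""
    headers = {"Range": f"bytes={offset}-{offset + length - 1}"}
    resp = requests.get(url, headers=headers, timeout=30)
    resp.raise_for_status()
    # 206 Partial Content means the server honoured the range;
    # 200 means it sent the full file anyway.
    if resp.status_code != 206:
        raise RuntimeError("server did not honour the Range request")
    return resp.content

# Hypothetical usage: read the 1 MiB at offset 2 GiB of a large remote file.
# chunk = read_range("https://example.com/container.hc", 2 * 1024**3, 1024**2)
```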

In my particular case, it’s an 80 GB VeraCrypt container with small daily modifications, which I had to remove from sync and stop reading from the cloud, because I ended up with multiple copies of it in my cache directory!

Another use case would be a database file, which is modified or appended to, and read at random offsets.

Cheers

NB: Icedrive is one of the few services that actually accept any file size!


That would be great to have… But such a system is very complex and prone to errors at the block level, which could result in corrupted files.

It’s also difficult to implement and develop, so it might not be soon that we see such a feature, in my opinion…