I have been using IceDrive for quite some time now, and overall my experience has been excellent.
Recently, I have been making sure I have a backup of my IceDrive contents, which so far has meant manually copying everything over to an external drive.
At some point my inner self said this was insane and that I should automate it (I’m a software engineer in my day job). The plan is to remove the whole manual process and set up a local server, possibly hosting a NAS, that auto-syncs with IceDrive on a daily basis.
I have done a little research but was hoping that someone else might have attempted or done this, and if so, what they ended up doing.
I did a bit more digging, and you could probably do this with a WebDAV file-transfer client. There are a few around, including a free and open-source option called Cyberduck, which may be able to sync on a daily, weekly, or monthly basis (I haven’t dug into the docs enough to say for sure).
If it doesn’t natively support scheduled syncing, it has a CLI (command-line interface) called duck, so with a bit of scripting you could probably get it to automatically download any new files. I read that it has an option to download only files it doesn’t already have locally; here’s a snippet from their docs:
Only Download Files
Files are downloaded that match one of the following criteria:
Do not exist on the local filesystem
Have a different checksum
Have a newer timestamp on the server
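For example, a one-way pull with the duck CLI plus a cron entry might look something like the sketch below. The hostname, remote path, script name, and environment variables are all placeholders, and `--existing compare` is my reading of the behaviour quoted above, so check Cyberduck’s current documentation before relying on it:

```shell
# One-way pull from IceDrive (over WebDAV) into a local mirror directory.
# "davs://example.webdav.host/myfiles/" is a placeholder -- use the WebDAV
# endpoint and path shown in your own IceDrive account settings.
duck --download "davs://example.webdav.host/myfiles/" /mnt/nas/icedrive-mirror/ \
     --existing compare \
     --username "$ICEDRIVE_USER" \
     --password "$ICEDRIVE_PASS"

# Then run it daily at 03:00 via cron (crontab -e), e.g.:
# 0 3 * * * /usr/local/bin/icedrive-pull.sh >> /var/log/icedrive-pull.log 2>&1
```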
Note #1 - Anything automated like this can also backfire, so I would recommend only a one-way sync, just so you don’t end up deleting precious files in IceDrive.
Note #2 - I would also have the WebDAV client keep a few versions of your backup on the local server, so if you do accidentally delete something in IceDrive, it isn’t automatically nuked on your local server too. Granted, this requires more local storage!
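A simple way to keep those versions is dated snapshot directories with a retention limit. A minimal sketch (the paths and retention count are just examples; on Linux, `cp -al` instead of `cp -a` would hard-link unchanged files to save space):

```shell
#!/bin/sh
# Rotate dated snapshots of the local mirror so an accidental deletion
# upstream doesn't wipe out your only copy. Paths are examples only.
MIRROR="/tmp/demo-mirror"       # where the WebDAV client downloads to
SNAPDIR="/tmp/demo-snapshots"   # where dated copies live
KEEP=7                          # how many snapshots to retain

mkdir -p "$MIRROR" "$SNAPDIR"
today=$(date +%Y-%m-%d)

# Copy today's mirror into a dated snapshot directory (skip if it exists).
[ -d "$SNAPDIR/$today" ] || cp -a "$MIRROR" "$SNAPDIR/$today"

# Prune the oldest snapshots beyond the retention count (GNU head/xargs).
ls -1d "$SNAPDIR"/*/ | sort | head -n -"$KEEP" | xargs -r rm -rf
```

Hook this into the same cron schedule, after the download step, and an accidental delete in IceDrive can be recovered from any snapshot still inside the retention window.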
I hope that all makes sense, feel free to reply if not!
Until about a month ago, I naively assumed all my files in IceDrive’s cloud were safe, even if it was difficult to get them back out of the cloud. But then I downloaded some critical files for work and discovered half of them had been terminally corrupted somewhere during the upload/download process. I still have the originals, so I know the problem wasn’t on my end. I’ve also been reading the literature on cloud storage, and there’s a strong sentiment among professionals in the field that cloud providers have focused mainly on data redundancy and not nearly enough on data integrity. Redundancy doesn’t help, for example, if a file is corrupted on upload: all ten copies will be equally corrupt.
So I’ve since invested in building my own redundant file storage (and streaming) system across home, school, and work. I’m sure there’s a more efficient way to set things up, but the key to my sync/backup system is performing a checksum at every sync (two-way) or mirror (one-way). I can’t believe most cloud services don’t do this. Without it, it’s hard to know which of the thousands and thousands of uploaded files have problems, until it’s too late. I had Backblaze, Sync, and four other cloud storage services (OK, Backblaze is just backup) and have since learned that a significant portion of my files on ALL of them became corrupted. What’s saved me every single time? Copies at home on external disks. But if I have to rely on external disks that I buy and service myself, cloud storage is far less valuable to me.
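For anyone wanting to do this without special tooling, the basic pattern is a checksum manifest taken before upload and verified after download. A toy sketch with sha256sum (the file names are made up, and the `cp` stands in for the cloud round-trip):

```shell
# Create a sample file and a stand-in for its cloud round-trip.
mkdir -p /tmp/demo-src /tmp/demo-dst
printf 'important data\n' > /tmp/demo-src/report.txt
cp /tmp/demo-src/report.txt /tmp/demo-dst/report.txt  # pretend upload/download

# Record checksums before "upload"...
( cd /tmp/demo-src && sha256sum report.txt ) > /tmp/demo-manifest.txt

# ...and verify them after "download". Any mismatch means silent corruption.
( cd /tmp/demo-dst && sha256sum -c /tmp/demo-manifest.txt )
```

Run over a whole directory tree (e.g. `find . -type f -exec sha256sum {} +`), this catches exactly the kind of in-transit corruption described above, at the cost of reading every file once per verification.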
Next time, if ever, I invest in cloud storage, checksum verification or a similar data-integrity guarantee will be a MUST HAVE among the cloud’s features.