Windows desktop app sync fails trying to upload file too early

Hi,
I have some large files (5-10GB) that are created through a 3rd party backup application each day. I have created a sync pair to upload these files from a source directory to my Icedrive account. Unfortunately the sync app tries to sync the files before they have been fully copied to the sync source directory. It then fails with a series of errors. The only way to get it to work after that is to cancel the existing transfers and to then “touch” the file in the source directory to initiate a change. The sync then works. Is there any way to programmatically control the sequence of the sync process, or at least for it to wait until the file has completed the copy before proceeding? I see that there used to be some sort of sync intervals option - could that be included as an option instead of instant sync?
Thanks

i had this problem too. it depends on your backup program (as long as you can add a post-backup event to it),
but my solution is to back up to a temp dir, which must be on the same drive as the folder you are syncing to the cloud, then add a post event to rename the (i assume) zip backup file to its true location in your local sync dir. because a rename on the same drive is almost instant, the cloud sync client doesn’t have an issue
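in script terms the post event boils down to a single same-volume rename. a rough python sketch (the paths are made up, substitute your own):

```python
import os

def publish(temp_path, sync_dir):
    """Move a finished backup from the temp dir into the synced dir.
    Because both dirs are on the same drive, os.replace is a single
    rename, effectively atomic, so the sync client never sees a
    half-written file."""
    dest = os.path.join(sync_dir, os.path.basename(temp_path))
    os.replace(temp_path, dest)  # atomic on the same volume
    return dest
```

a copy between different drives would not be atomic, which is why the temp dir has to live on the same volume as the sync dir.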

Hi @jjoneil ,
Thanks for your feedback, appreciate you taking the time to reply. Yes I have a few options around the post-backup scenario, I was just trying to avoid having too many moving parts in the process. Initially I thought that it might be something that the Icedrive app would just manage, but it just gets itself into knots. So given that fact, it would be more useful to me to have a simple scheduled time “blunt” approach rather than real-time, as my backups all run on a schedule in any case. As I understand from other posts, this used to be available, so I’d find it useful as a separate option. For now I’ll go with the automated temp dir/rename process that you suggested, which is essentially what I am doing now manually.
cheers.

no probs, welcome.
another option, if you really want to avoid renaming and instead write your backup in place (eg if your backup program manages the retention of said backups), would be to kill the icedrive client from a pre-backup event and then restart it from a post-backup event, but that may be riskier.
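a rough sketch of what those pre/post events would run (the process name and install path below are guesses, check task manager for the real ones on your install):

```python
import subprocess

ICEDRIVE_EXE = "Icedrive.exe"  # assumed process name, not verified
ICEDRIVE_PATH = r"C:\Program Files\Icedrive\Icedrive.exe"  # assumed install path

def stop_cmd():
    # pre-backup event: force-stop the sync client so it never
    # sees the backup file while it's being written
    return ["taskkill", "/IM", ICEDRIVE_EXE, "/F"]

def start_cmd():
    # post-backup event: relaunch the client once the backup is in place
    return [ICEDRIVE_PATH]

def run_event(cmd):
    # wire this into your backup program's pre/post event hooks
    subprocess.run(cmd, check=False)
```

the risk being that a force-kill mid-transfer could leave a sync in a half-finished state, hence “riskier”.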

also, be aware: i was using a one-way sync to cloud for my backups with the ‘do not delete’ option, with the expectation that i could have my backup app only retain a few weeks locally but have icedrive keep the backups longer. unfortunately there’s a bug in the client: it honours a single local file delete (and warns it hasn’t deleted the cloud copy), but for multiple local file deletes it goes into a ‘batch delete’ mode, disregards the ‘do not delete’ setting and obliterates the backups on their cloud. i reported this many months ago in the bug channel but icedrive have not fixed it or even acknowledged it. afraid i have no trust in their ability to reliably keep my backups, but that’s just me, ymmv

Thanks for that information - I use a Sync Type of Backup so local copies are never deleted and I manage a rolling set of local backups myself. I haven’t seen the behaviour that you describe (so far!) but I do see strange behaviour where the client will show multiple upload entries for the same filename. When I finally abort the processing for the upload entries (sometimes they are running all day for a 4GB file) I look at my Icedrive account through a browser, and the file has been uploaded, so I’m not entirely sure what is happening, but it is very confusing.
Killing and restarting the Icedrive Client as part of the process might work but it just seems like I’m working around a known issue that should be fixed by Icedrive. I can’t be the only person that this is happening to.
Thanks again for your input - appreciated.

not sure if icedrive can fix the issue as it’s really an OS feature. when you’re creating a huge file, the OS takes a lock on the file purposely while it’s being updated. the sync client has to wait until the file creation has finished before it can get a clean read of the file to upload. the question for the devs is how long to wait before giving up.
i’m backing up a 12G dir (in 3 x 4ish G sub-dirs) and had this problem. but when i changed my backup to create 3 separate 4ish G zips, the ID client waited long enough to retry by itself, so it’s working for me now… apart from the batch delete issue.
because of the delete issue, i also one-way sync my zips to another cloud, and that one didn’t wait at all, it just gave up, forcing me to regularly restart the sync manually. so that’s why i did the local rename, which works for me

Yep, I get that. Bear in mind that these files are less than 5GB and I’m only backing up one of them a day. Also, the Icedrive app provides a message telling me specifically that the file is currently in use by another process, so if it can detect the new file and also know that it’s currently in use, then I would have thought that it could also go to sleep for a minute and then try again. (Sure, fail after three attempts or something, in case it’s not the copy/move itself that’s locking the file.) I’ve written file transfer systems myself (to be fair, that was on Linux) so I’m pretty sure this can all be done. I’m wondering whether a variation on your suggestion of killing/restarting the sync app might be the simplest way to go, i.e. instead of starting the sync app automatically with Windows, add it to the Windows Task Scheduler so that it starts at a time when I know my backups and file copy have completed. Not particularly elegant, but it will probably work.
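The retry policy I have in mind is nothing clever. A Python sketch of the idea (the “is it locked” test is simply trying to open the file, and `upload` stands in for whatever the client does internally; none of this is Icedrive’s actual code):

```python
import time

def upload_with_retry(path, upload, attempts=3, wait=60):
    """Try to upload a file; if it's still in use, sleep and retry.
    Gives up after `attempts` tries in case it isn't the copy/move
    itself that is holding the lock."""
    for n in range(1, attempts + 1):
        try:
            # on Windows this open fails with PermissionError while
            # another process still holds the file exclusively
            with open(path, "rb") as f:
                return upload(f)
        except OSError:
            if n == attempts:
                raise  # not a transient lock: give up and surface the error
            time.sleep(wait)
```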