Hello,
I'm currently evaluating MSP360 backup on Linux (Ubuntu 16.04 LTS).
I've run a backup of files from a mapped network share (77GB of data, 10,400 files) to a local disk (1TB) for testing. This backup completed successfully.
To test failure detection, I then deleted the contents of the local disk I had previously backed up to. When I ran the backup job again, the GUI indicated it was looking for modified files (there were none), and the job completed successfully even though every backup file on the local disk had been deleted! I even received an email saying that 77GB of data (10,400 files) had been scanned and no files needed backing up.
It seems like somewhat of a deficiency to rely solely on the database to determine file changes without at least checking that the files still exist in the target location.
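To illustrate the kind of check I mean, here is a minimal sketch (not MSP360's actual code; the paths and the dict-based "database" are hypothetical) of verifying that files recorded in a backup database still exist at the target before skipping them as unchanged:

```python
import os

def find_missing_targets(db: dict[str, str]) -> list[str]:
    """Return source paths whose backed-up copy is gone from the target.

    `db` is a stand-in for the backup database: a mapping from
    source path to the path of its backed-up copy on the target.
    """
    return [src for src, dst in db.items() if not os.path.exists(dst)]

# Hypothetical example: one entry whose target copy has been deleted.
db = {"/share/report.txt": "/backup/report.txt"}
for src in find_missing_targets(db):
    print(f"target copy missing, should re-back up: {src}")
```

A pass like this over the target location would catch the scenario above, where the database says nothing changed but the backup files themselves are gone.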
Hello Matt,
Thank you for your reply. We are now backing up again! Is this behaviour true across all platforms (Linux/Windows/Mac OS)?
I think this does highlight the need to include some target file validation and I would encourage your company to consider putting this on the development road map.
It would also be worth updating the wording on the GUI dialogue where synchronisation is initiated, as the current text led me to believe it was synchronising between the cloud and local storage, rather than between the database and either local or cloud storage.
Yes, this behavior applies to all product versions and platforms. We've got a lot of improvements on the road map regarding everything you mentioned in your post. Most of these things will be enhanced with the introduction of the new archive mode. The details are kept under wraps for now.