• Google Storage Price Increase
    In any event, I am offering free consultation regarding backend storage costs, as I am familiar with and use Wasabi, Backblaze, Google, and Amazon.
  • Google Storage Price Increase
    Thanks David,
    My understanding is that the suspension of the transfer fees applies only to bulk data conversion from another backend platform. I'm not sure why anybody would want to do that, given the much lower cost of Wasabi and Backblaze.
    I suspect very few people use Google, and in this case that's a good thing. I just wonder how many people are using standard Amazon S3 for backups instead of One Zone-IA: $0.023/GB vs. $0.01/GB per month.
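    To put rough numbers on that gap, here is a quick Python sketch; the per-GB rates are the published list prices as I understand them, so check current AWS pricing for your region:

        # Rough monthly cost comparison: S3 Standard vs. S3 One Zone-IA.
        # Illustrative $/GB-month rates; verify against current AWS pricing.
        STANDARD_PER_GB = 0.023
        ONE_ZONE_IA_PER_GB = 0.01

        for tb in (1, 5, 10):
            gb = tb * 1000
            std = gb * STANDARD_PER_GB
            oz = gb * ONE_ZONE_IA_PER_GB
            print(f"{tb} TB: Standard ${std:.2f}/mo vs One Zone-IA ${oz:.2f}/mo"
                  f" (save ${std - oz:.2f})")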
  • New Backup Format - Size comparison on the bucket vs legacy
    We design our systems to have a hypervisor, a separate VHDx disk for the DC, and two VHDx disks for the file server: one with the OS (C:) and the other with the data (D:).
    Using the new format, we do nightly incrementals and weekly synthetic fulls of all VHDx files on a given server to Backblaze B2 (not the Backblaze S3-compatible endpoint, as it does not support synthetic backups). Backblaze is half the cost of the cheapest viable Amazon price.
    We also do legacy-format file backups locally and to a separate cloud provider (Google Nearline or Wasabi).
    So a complete system restore in a disaster situation requires only that we restore the VHDx files, which is a lot faster than restoring the files individually. A synthetic full backup of a 2 TB data VHDx file can be completed in 12-15 hours each weekend, depending on your upload speed and the data change rate. The incrementals run in five hours tops each night.
    So I suggest legacy individual file backups both locally (one-year retention, not guaranteed) and in the cloud (90-day retention, guaranteed) for operational recovery, and new-format VHDx/image backups for DR. We keep only the most recent full VHDx files and any subsequent incrementals. Each week we start a new set, and the previous week's set gets purged.
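    If you want to sanity-check those windows against your own connection, here is a minimal Python sketch of the arithmetic; the change rates and the 40 Mbps upload speed are assumptions to replace with your own numbers:

        # Estimate upload time from data size and link speed.
        def upload_hours(data_gb: float, upload_mbps: float) -> float:
            bits = data_gb * 1e9 * 8              # payload in bits
            return bits / (upload_mbps * 1e6) / 3600

        # Weekly synthetic full: only changed blocks travel over the wire;
        # assume ~10% of a 2 TB VHDx changed since the last full.
        print(f"synthetic full: {upload_hours(0.10 * 2000, 40):.1f} h")

        # Nightly incremental: assume ~2% daily change on the same disk.
        print(f"incremental: {upload_hours(0.02 * 2000, 40):.1f} h")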
  • Tracking deleted objects
    It's more of a nice-to-have: being able to see which files are going to get purged once the retention period for deleted files is reached.
    The bigger issue is that we are required to restore all files flagged as deleted if we select "Restore deleted files".
    We keep two years of deleted files on local storage, so it would be nice to have a setting where we could specify "restore files deleted before or after a specific date".
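    No such setting exists today; conceptually it is just a date filter over the deleted-file list. A toy Python sketch of the behavior I am asking for (paths and dates are made up):

        # Hypothetical date filter: restore only files deleted after a
        # cutoff, instead of everything flagged as deleted.
        from datetime import datetime

        deleted_files = [  # (path, deletion date) - sample data
            ("D:/shares/old_budget.xlsx", datetime(2023, 2, 14)),
            ("D:/shares/q3_report.docx", datetime(2024, 11, 2)),
        ]

        cutoff = datetime(2024, 1, 1)
        to_restore = [p for p, d in deleted_files if d >= cutoff]
        print(to_restore)  # only files deleted on/after the cutoff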
  • New Backup Format - Size comparison on the bucket vs legacy
    We use legacy format for file backups. It is incremental forever; there is no need to keep multiple "full" copies, as is required with the new format.
    We use the new format exclusively for disaster recovery VHDx and image backups. Now that we can keep a single version of the image/VHDx in the cloud, it has worked out great. We do daily incrementals and weekly synthetic fulls to Backblaze, which does not have a minimum retention period.
    It makes zero sense to go with the new format for file backups unless you are required to keep, say, an annual full, a quarterly full, etc. We have no such requirement.
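    To illustrate the storage difference, a back-of-the-envelope Python sketch; every size and change rate below is an assumption, not a measurement:

        # Storage footprint: legacy incremental-forever vs. new format
        # keeping two full generations. All figures hypothetical.
        dataset_gb = 500        # protected file set
        daily_change_gb = 5     # data modified per day
        retention_days = 30

        # Legacy: one copy of everything plus retained file versions.
        legacy_gb = dataset_gb + daily_change_gb * retention_days

        # New format: two fulls plus a week of incrementals between them.
        new_format_gb = 2 * dataset_gb + daily_change_gb * 7

        print(f"legacy ~{legacy_gb} GB, new format ~{new_format_gb} GB")
        # legacy ~650 GB, new format ~1035 GB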
  • Tracking deleted objects
    For troubleshooting purposes, we would like to be able to see which files were deleted from the source on any given day. A report showing the files/folders deleted on a particular client machine on a certain day or range of dates would be great.
    Does the backup storage tab have a different icon or a flag that shows a file/folder has been deleted?
    It would be nice to get ahead of large accidental file/folder deletions rather than waiting for the client to realize a file/folder is missing.
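    Until such a report exists, a possible workaround is to diff two days' file listings yourself; a toy Python sketch (the listings are made-up sample data):

        # Detect deletions by diffing yesterday's and today's listings,
        # captured daily on the source machine.
        yesterday = {"D:/data/a.docx", "D:/data/b.xlsx", "D:/data/c.pdf"}
        today = {"D:/data/a.docx", "D:/data/c.pdf"}

        deleted_today = sorted(yesterday - today)
        print(deleted_today)  # ['D:/data/b.xlsx'] - candidates for an alert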
    And on a somewhat related note, has there been any progress on the true point-in-time restore model that we discussed a while back?
  • Backup Agent 7.5.1 Released
    Backup Fan
    Summary: I believe the major fix makes the new backup format retention/purge settings work the way they were supposed to in the last release. When a synthetic full completes successfully, the prior synthetic full can now be purged, and we can end up with only one version/generation in cloud storage. I tested it and it works.
    Detail:
    We do weekly new-format synthetic fulls of all client VHDx and image files to Backblaze B2. We only need the latest generation kept, but even though we set the retention to 1 day, the purge process did not delete the prior week's full. This resulted in two weeks of fulls being kept, effectively doubling our storage requirements compared to the legacy format, which let us keep only one version with no problem. The new-format synthetic fulls are such a massive improvement (runtime-wise) that it was still worth it, but this is now fixed.
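    For reference, the fixed purge behavior boils down to keeping only the newest successfully completed full; a simplified Python model, not MSP360's actual code:

        # Once a new synthetic full completes successfully, every
        # generation older than it can be dropped.
        generations = [
            {"full": "2024-11-03", "status": "completed"},
            {"full": "2024-11-10", "status": "completed"},  # latest
        ]

        latest_ok = max(g["full"] for g in generations
                        if g["status"] == "completed")
        kept = [g for g in generations if g["full"] >= latest_ok]
        print(kept)  # only the 2024-11-10 generation survives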
  • Retention Policy Problem with V7
    Thank you. I will test tonight.
  • Retention Policy Problem with V7
    Any word from development as to whether this can be fixed?
  • How does CBB local file backups handle moved data?
    The reason we use legacy format for file backups, both cloud and local, is that they are incremental forever, meaning once a file gets backed up, it never gets backed up again unless it is modified.
    The new backup format requires periodic "true full" backups, meaning that even unchanged files have to be backed up again. It is true that the synthetic backup process shortens the time to complete a "true full" backup, but why keep two copies of files that have not changed?
    As I have said before, unless you feel some compelling need to imitate tape backups with the GFS paradigm, the legacy format's incremental-forever approach is the only way to go for file-level backups.
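    For the curious, "incremental forever" in practice just means the agent re-uploads a file only when its change markers move; a toy Python sketch using size and mtime (the real agent may rely on different signals):

        import os

        # Skip any file whose size and modification time match what was
        # recorded at the last backup; upload only new/modified files.
        def needs_backup(path: str, last_seen: dict) -> bool:
            st = os.stat(path)
            sig = (st.st_size, st.st_mtime)
            if last_seen.get(path) == sig:
                return False   # unchanged: never uploaded again
            last_seen[path] = sig
            return True        # new or modified: upload this version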
  • Retention Policy Problem with V7
    I updated the server to 7.5 and made sure that my retention was set to 1 day (as it had been). I forced a synthetic Full Image Backup.
    I expected that, since it had been four days since the last full backup, the four-day-old full would get purged, but that did not happen.
    It is behaving the same as the prior version: it always keeps two generations, where I was expecting to now only have to keep one.
    See attached file for a screenshot of the Backup storage showing that both generations are still there.
    Attachment: PUT5 (9K)
  • Retention Policy Problem with V7
    David,
    Any chance this issue was addressed in the latest release, 7.5? We are really tired of spending $80+ per month to keep two full image/VHDx copies in the cloud when we really only need one.
  • CloudBerry Backup Questions
    No, you cannot. We use legacy format for file backups and new format for image and VHDx backups.
  • How does CBB local file backups handle moved data?
    No need to do a repository sync. In either format, the moved files would be backed up all over again, and the backups from the original location would be deleted from backup storage based on your retention setting for deleted files.
    Synthetic fulls do not work for local storage, so move the files, run the backups, make sure purging of deleted files is configured, and you will be fine.
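    The reason moved files get re-uploaded is that the backup catalog keys files by full path, so a move looks like a delete plus a brand-new file; a toy Python illustration (paths hypothetical):

        # The catalog knows files by full path only.
        catalog = {"D:/projects/plan.docx": "v1"}  # already backed up

        # After moving the file to D:/archive/, the old key is eventually
        # purged per deleted-file retention, and the new key is unknown,
        # so the file is uploaded again from scratch.
        moved_to = "D:/archive/plan.docx"
        print(moved_to in catalog)  # False -> treated as a new file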
    The problem with the new format is that you have to do periodic true full backups (where all files get backed up regardless of whether they have been modified), and you have to keep at least two generations. That is not the case for the legacy format, so we use legacy for local and cloud file backups. We use the new format for cloud image and VHDx backups, as Backblaze supports synthetic fulls, but we still need to keep two generations, which takes up twice the storage.
    But it is worth it to get the backups completed in 75% less time thanks to the synthetic full capability.
    Sorry if this is confusing.
  • Full backup to two storage accounts - how do they interact
    No problem; recommended, in fact. We do this all the time. Local storage does not support synthetic backups, but with locally attached storage, they aren't necessary.
  • Restore from cloud to Hyper-V VM on on-premises hardware?
    So the short answer is that MSP360 does not have the ability to do what Recovery Console does (basically continuous replication via software).
    You can certainly create a backup/restore sequence that would periodically back up and then restore VHDx files, but it might get tricky from a timing standpoint. Perhaps David G. can comment on how one might set that up.
    For us, the daily local VHDx backups, combined with every-four-hour local file backups, provide an acceptable RPO/RTO for the majority of our SMB customers.
    For our larger, more mission-critical clients, we use Hyper-V replication.
    Typically we use what we call the "trickle-down servernomics" model.
    When a server needs to be replaced, the old one becomes the Hyper-V replica, which provides recovery points every 10 minutes going back 8 hours.
    It has served us well, and the cost is relatively low, given that the customer already owned the replica hardware. It does not perform as well, but in a failover situation it works adequately.
  • Changing drive letter for backed up Windows directory hierarchy
    See the knowledge base article at the link below. Yes, you can change the drive letter, and as long as the folder structure remains the same, it will not back up everything again.
    https://www.msp360.com/resources/blog/how-to-continue-backup-on-another-computer/amp/
  • MSP360 Restore Without Internet Connection
    You need an internet connection to do a restore. If you have an internet connection, you can do any type of restore, cloud or local. I do agree that if it is technically possible to permit a local restore from a USB/NAS device without an internet connection, that would be nice. Question: if a local restore starts and then we lose the internet connection, will the restore finish?
  • Retention Policy Problem with V7
    Thanks, David, for following up on this. FYI, I did discover that if I delete the older of the two generations and uncheck the "Enable Full Consistency Check" box, the synthetic full runs just fine, with only a warning that some data is missing from storage.
    Not worth the time and effort to do that every week, for sure.
  • Retention Policy Problem with V7
    Thanks. No need for incrementals, as these are disaster recovery images or VHDx files. A week- or month-old image is fine to get someone back up and running, using these as a base plus daily file-based backups.