• MSP360 Managed Backup 5.2 with New Backup Format, GFS, Restore Verification and more
    Alex,
    So far I am loving the Synthetic Backup for Files and Images. It will make a tremendous difference in our ability to complete the backups overnight/weekends for clients with slow upstream connections.
    Right now our biggest outstanding issue is with the situation where a client reorganizes their files by moving them to a new folder - often to sort by month or to an "Archive" folder. All of the files must be re-uploaded which can take days, not to mention the extra storage space that is consumed for the 90 days until the original files get purged.
    I believe that other backup solutions have solved this, I am wondering if client side dedupe addresses this situation, and if not, could it be designed to?
  • Backblaze Synthetic Full
    Question: Are there any BackBlaze transaction fees for in-cloud copying of data during a synthetic full?
  • Retention Policy Question
    Mike S. -
    Are you talking about data files or images?
    For data files, we keep 90 days' worth of versions. A version will never get deleted before it has aged for 90 days.
    If you are using Wasabi, 90-day retention works out fine - you will not get any "early delete" charges.
    However, your statement that:
    "Once a full runs on 11/01, it will delete 10/01 backup, then when the incremental runs on 11/02, it will delete the 10/02 incremental",is incorrect.
    You cannot delete a full until all of the dependent incrementals have reached the retention setting, in this case 30 days.
    If you are running a full once per month, incremental versions that were created during October would not be deleted until the last incremental on October 31 has reached 30 days old on November 30.
    At that point the October 1 full and all of the October incrementals will get purged at once.
    If you were to run fulls each week, on Nov 7th the Fulls and incrementals from Oct 1-7 would get purged.
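For anyone who wants to sanity-check those dates, here is a minimal sketch of the purge rule as I understand it (my own illustration, not MSP360's actual code): a full plus its dependent incrementals form a "set", and the whole set is purged only once its newest member has aged past the retention period.

```python
from datetime import date, timedelta

def set_purge_date(member_dates, retention_days):
    """A full and its dependent incrementals form a 'set'; the whole
    set can be purged only when the NEWEST member has aged past the
    retention period, because the incrementals depend on the full."""
    return max(member_dates) + timedelta(days=retention_days)

# Monthly full: full on Oct 1, daily incrementals through Oct 31,
# 30-day retention. The set hangs around until the Oct 31 incremental
# is 30 days old.
oct_set = [date(2021, 10, d) for d in range(1, 32)]
print(set_purge_date(oct_set, 30))  # 2021-11-30
```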
    For image backups (and VHDx files), we only back up once per month and want to keep one version.
    We only want to keep a full for 30 days so we do not use Wasabi as we would get charged for 90 days whether we use it or not.
    We have been using BackBlaze for the once-a-month Full Image backups as they have no minimum retention period.
  • Backblaze Synthetic Full
    Unfortunately there appears to be a problem with V7 and BackBlaze B2. We rolled out V7 to two clients whose VHDx/Images are so large that we could not complete the upload in a single weekend.
    Both ran for a while but then halted with "Could not create SSL/TLS secure channel" errors.
    I understand from support that this is a known issue, one which BackBlaze has provided little assistance in solving.
    Apparently there is a workaround (developed by and available from Support) that we will test this week, but since it requires a server reboot (which requires approval from each client), it will slow our rollout significantly.
  • Retention Policy Question
    The most recent version will stay in the Cloud until another run of the backup plan completes successfully, and the prior version meets the purge criteria. It’s the same as the data/ file plans.
    We used to use # of versions for image and VHDx backups, but we found that the purges of old versions were not happening (since fixed, I suspect), so we went to a # of days approach.
    We do a full Image Cloud backup once per month for Disaster Recovery purposes, and daily local Image/VHDx backups to a USB drive.
    For retention settings, we use 1 week for the Cloud backups, and since it is a month between backups, the prior version gets purged and we end up with only one version in the Cloud.
    Certainly you could try the # of versions approach for Cloud Image backups, but assuming you are taking daily or weekly Local Image/VM backups, there is no real reason for more than one copy in the cloud (IMHO).
  • Retention Policy Question
    I took so long to write this that David G. beat me to the punch. My [lengthy] explanation below takes into consideration the implications of the Full/Block Backup sets I described in a prior post.

    Backup Fan - Yes you are correct. It will purge based on the most aggressive purge setting, be it # of versions or days.
    Keep in mind that the latest version of each file is always saved unless you deliberately uncheck that setting. There are situations where we do this, but that is beyond the scope of this discussion.

    If you back up a file such as a picture or pdf that never gets modified, it will remain in backup storage forever, regardless of any retention settings.
    The settings only apply to files that get modified, creating multiple versions.

    To continue for those hell-bent on using # of versions, I will try to explain how it works using a different example.
    Example
    - Retention set to Keep 2 versions
    - Retention set to keep 30 days
    - Fulls set to run weekly
    • Day 1 - Full backup = V1
    • Day 2 - Block incremental = V2
    • Day 3 - Block Incremental = V3
    You would think that V1 could be deleted, but it cannot since you still need V2 (in order to still have 2 versions) and V2 depends on the Full (V1).
    • Now let's say that the file is not touched again until the Weekly Full Backup runs, creating V4.
    You still cannot delete any of the V1-V3 versions since they are considered a "set" and you cannot delete V3 for the same reason as above.
    • The next day you run a block level backup (V5)
    NOW V1 - V3 will be purged, since we now have two versions (V4 & V5).
    Let's say that you do not modify this file again.
    • The following week a new Full (V6) will get created as a full is taken for any files that have had modifications since the last full.
    When V5 reaches 30 days old, can V4 and V5 be deleted? No, because V5 represents Version #2, and since V5 is associated with V4, it cannot be deleted either.

    Now if you did only fulls (by turning off block-level backups), you would always have just the two most recent full backups of the file, but that will chew up more storage than an incremental does, particularly if you have files in the hundreds of MBs.
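The walkthrough above can be simulated in a few lines of Python (a hypothetical sketch of the rule as I understand it, not MSP360's implementation): versions are grouped into sets (a full plus its incrementals), and a set can be purged only when none of its members fall inside the N most recent versions.

```python
def purgeable_sets(versions, keep):
    """versions: list of 'F' (full) or 'I' (block incremental), in
    creation order. Group them into sets (a full plus its dependent
    incrementals); a whole set is purgeable only when none of its
    members are among the `keep` most recent versions."""
    sets, current = [], []
    for i, kind in enumerate(versions):
        if kind == 'F' and current:   # a new full starts a new set
            sets.append(current)
            current = []
        current.append(i)
    sets.append(current)
    recent = set(range(len(versions) - keep, len(versions)))
    return [s for s in sets if not (set(s) & recent)]

# V1(F) V2(I) V3(I) V4(F) V5(I), keep 2 versions:
# only after V5 exists does the V1-V3 set become purgeable.
print(purgeable_sets(['F', 'I', 'I', 'F', 'I'], 2))  # [[0, 1, 2]]
# Right after the V4 full (before V5), nothing can go yet:
print(purgeable_sets(['F', 'I', 'I', 'F'], 2))       # []
```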
    Summary
    If you are backing up files/versions for paying clients, I strongly recommend a 30-day retention - at a minimum - and do not use # of versions. We use 90 days as our standard, with some clients paying extra for 15 months' retention to protect versions of files that are only touched once per year or so (accounting, law offices, etc.)
    This way it does not matter how many times each month the file gets updated; you can always go back to a version from 30 or 90 days ago. For some files it might be two versions, for others there may be 30 or 90 versions, but in all honesty, at the hideously low cost per TB of Cloud storage available these days, storage costs should be the least of your concerns.
    Not being able to recover a file from a month ago because you only have the last two days due to your "# of versions" setting will be far more painful.
  • Retention Policy Question
    So if I am correct (and David G. will correct me if I am wrong), if you specify 30 days retention and 45 versions, all versions older than 30 days will be purged even if there are only 5 versions in storage.
    If you had a 90 day retention and the same 45 versions setting, as soon as Version 46 is created, which could be on day 46, version #1 would get deleted, even though it is not yet 90 days old.
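Both scenarios can be sketched as follows (a hypothetical illustration of the OR semantics, ignoring the full/incremental set dependencies discussed elsewhere in the thread for simplicity):

```python
def purged(ages_days, retention_days, keep_versions):
    """OR semantics: a version is purged if it exceeds the retention
    age OR falls outside the keep_versions most recent versions.
    ages_days is ordered oldest first; returns purged indexes."""
    n = len(ages_days)
    return [i for i, age in enumerate(ages_days)
            if age > retention_days or i < n - keep_versions]

# 90-day retention, keep 45 versions, 46 daily versions: the version
# count kicks in first and version #1 (index 0) goes on day 46.
print(purged(list(range(45, -1, -1)), 90, 45))  # [0]
# 30-day retention, keep 45 versions, only 5 versions in storage:
# the age limit kicks in first and purges the two oldest.
print(purged([40, 35, 20, 10, 0], 30, 45))      # [0, 1]
```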
    I don't trust/like the "# of versions" setting for this reason.
    We sell 90 days of version recoverability to our clients for file/data backups. For accounting and law firms that only touch customer files once per year, we sell an extended retention period for a very small uplift that provides 15 months of versions. As you would expect, Image/VHDx backups are different.
  • Retention Policy Question
    1. I admit this is confusing, but if you think of your backups on storage as a set containing a "full" backup plus any incrementals based on that full, this gets easier to understand.
    Example:
    - Your retention period is set to 30 days
    - You have a file that gets changed every day
    - You run the full backup of that file each Sunday and block level incrementals on the other six days
    - At the end of 30 days you would have four "sets" of 1 full/ 6 incrementals in Backup storage and one partial set with a full and one day of incrementals:
    • Set #1 = Day 1-7
    • Set #2 = Day 8-14
    • Set #3 = Day 15-21
    • Set #4 = Day 22-28
    • Set #5 = Day 29-30
    - On day 31 you run another block level backup
    - Set #1 cannot be purged until all of the elements in the set have aged to 30 days.
    - So until the Day 7 block incremental is 30 days old, you cannot purge anything from Set #1.
    - But on Day 37, the entire Set 1 will be purged as all components have aged to 30 days (or more).

    If you choose to do monthly fulls with block incrementals every other day of the month, Set #1 will not be purged until Day 60 - the point where the last incremental in the set reaches 30 days old.
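The day-37 and day-60 figures both fall out of a one-line rule (my own sketch, not MSP360's code): a set is purged once its newest member reaches the retention age.

```python
def purge_day(set_last_day, retention_days):
    """Day on which a set becomes purgeable: the day its newest
    member (the last incremental before the next full) reaches
    the retention age."""
    return set_last_day + retention_days

# Weekly fulls, 30-day retention: Set #1 covers days 1-7,
# so it is purged on day 37.
print(purge_day(7, 30))   # 37
# Monthly fulls, 30-day retention: the last incremental lands on
# day 30, so the set is not purged until day 60.
print(purge_day(30, 30))  # 60
```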

    2. My experience/understanding is that it is an "OR" condition, meaning that if either condition is met, the files will be purged. I agree that if it was an "AND" operation it would be more useful for the strategy that you are looking to employ.
  • Retention Policy Question
    We do not use the “keep # of versions” setting as that complicates our retention scheme.
    If we set the plan to keep 30 versions, a file that changes every day will have 30 versions in 30 days. For files that change once per month, it will keep 30 months of versions.
    We set our retention for file backups to 90 days, with one “full” each month and daily block level backups.
    In the above example we would have 90 versions of the daily- updated file, and 3 versions of the monthly-updated file.
    Because we do “full” (what MSP360 calls incremental) backups only once per month, it takes an extra 29 days for the oldest set of full/block-level versions to age to 90 days. If we did a full (incremental) each week, we would only have an extra week to wait until the oldest set is purged. The trade-off is that we would be storing 12 “full” versions vs 3, and if there are large, frequently updated files (pst, QuickBooks files, etc.) it can increase your storage consumption even more than keeping an extra 21 days of block-level backups.
    Confusing? Yes.
    But it has worked well for us.
    Happy to discuss further.
    -Cloud Steve
  • Backup Plan Email Notifications - Error message causing agita
    To answer your first question: yes, please eliminate that information altogether. Include error messages for failed/warning backups, but not skipped files.
    My clients only want to know that the backup happened successfully. They don’t care about anything else. Frankly, all the rest of the info provided does nothing but elicit questions. And the “download here” link shows what backed up successfully, but doesn’t show what failed.
    Perhaps a discussion about the best options for client email notification is in order.
  • Backblaze Synthetic Full
    Just completed a couple of tests of the V7 Synthetic full feature going to BackBlaze.
    VHDx file
    Original Full (76GB) took 9 hours 5 mins (9:05) over a 1MB/s uplink.
    Subsequent Synthetic Full took only 1:38 with 4GB of changed data uploaded.
    82% reduction in runtime

    Image
    Original Full (72GB) = 8 hrs 26 mins
    Synthetic Full - 8GB uploaded - Remainder copied in cloud = 2 Hrs 15 mins
    73% reduction in runtime
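The reduction percentages above can be double-checked with a quick calculation:

```python
def pct_reduction(original_min, synthetic_min):
    """Percent reduction in runtime, given both times in minutes."""
    return round(100 * (original_min - synthetic_min) / original_min)

# VHDx: 9:05 full vs 1:38 synthetic
print(pct_reduction(9 * 60 + 5, 1 * 60 + 38))   # 82
# Image: 8:26 full vs 2:15 synthetic
print(pct_reduction(8 * 60 + 26, 2 * 60 + 15))  # 73
```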

    The more data that has changed since the last backup, and thus has to be uploaded, the longer the synthetic will take, but regardless, this will dramatically reduce the backup time.
    Hopefully we can now get all of our full backups completed over the weekend, rather than running for 3-4 days in some cases.
  • Backblaze Synthetic Full
    Is there a plan to allow Synthetic fulls for Local backup accounts?
  • File and Folder - Confused About Chained Backups
    Well, the short answer is, we don’t need the feature.
    We run three backups of client data each night - local, Cloud 1, and Cloud 2.
    They can all run at the same time, and sometimes do, but we typically spread the start times out.
    Unless there is a specific dependency that file x has to be backed up prior to file y, I see no reason to bother with chained backups.
    It may be that I am missing something, so I am open to suggestions as to why I should be using it.
  • File and Folder - Confused About Chained Backups
    ”I only want the second backup to run at the completion of the first backup and never independently on its own”
    We too have separate backups/retention periods for pst files, but do not use chained backups at all. The plans are scheduled to run at different times of the night, but even if they overlap with one another it does not cause any problems as they are backing up different files/folders.
  • New Version - Backup Agent 7.2 for Windows
    An image backup will have to restart any uncompleted partitions from the beginning.
  • New Version - Backup Agent 7.2 for Windows
    That works fine, thanks. Per my other post, we will not be reuploading 50TB of data just to get to the new format. We will use it for VHDx files and Images, where the synthetic full adds a lot of value and we only keep one version so switching formats is a no brainer.
    But please impress upon upper management that you can never deprecate the legacy format without negatively impacting all of your MBS customers.
  • New Version - Backup Agent 7.2 for Windows
    Surprised that you would release an MBS version that does not allow editing of backup plans that are in the new format using the admin portal. We will not roll out the new format until we can manage the plans from the portal.
  • New Version - Backup Agent 7.2 for Windows
    Thanks,
    I participated in the beta and was given the indication that at some point, the only supported format would be the new one. That would be a problem. That being said, for image and HyperV/VMware backups, the new version is fantastic.
    Concerned that the banner is telling me that all of my clients are running an unsupported version.
  • New Version - Backup Agent 7.2 for Windows
    Does this require a re-upload of all data to conform to the new format? If that is the case, we will need to keep the old format indefinitely, as re-uploading everything is not viable.