• Alexander Negrash
    We are happy to announce that the new backup format in MSP360 Managed Backup is finally out with a number of amazing features that make your data protection job a lot faster and easier.

    Primary features:

    • New Backup Format. The new backup format works as follows: every dataset is kept as a separate entity. No matter what type of backup you run, every backup plan with its dataset will be stored separately from all other backups. Thus, there is no interference between the data from different backups and backup plans. This approach to data structuring allows us to introduce a bunch of useful features that will make your backups run even faster – along with some other improvements.
      Please note: you cannot convert your existing backup plans to the new format. All data should be backed up from scratch. Please consider using a new bucket to make the switch to the new format easier. The old format will still be available for backup and restore.
    • The Grandfather-Father-Son (GFS) Retention Policy. Grandfather-father-son, or GFS for short, is a backup retention policy that is based on keeping several full backup copies. These full backup sets have different retention routines: weekly, monthly, and yearly (where the grandfather is the oldest, yearly backup, the father is a monthly one, and the son is the youngest, weekly backup). Only a full backup that has been completed without any errors can become a GFS backup. A minimal sketch of how such a policy picks which fulls to keep appears after this list.
    • Restore Verification for Image-Based Backups. Restore Verification is a feature that allows you to check the recoverability of system image backups. A system image backup is a complete backup of everything on your PC or server, including the operating system, installed applications, system settings, and drivers, as well as files created or downloaded by users. When you perform such a backup, MSP360 Managed Backup creates a special file structure that can later be restored as a fully-fledged system on a virtual machine to test your backup for recoverability.
    • Mandatory and Full Consistency Checks. At the same step as Restore Verification, you can enable another option: the consistency check. This feature helps you check the state of the data backed up in your storage and verify its consistency. Of course, you usually send undamaged files there, but there is no 100% guarantee that these files won’t be corrupted in transit or once in the cloud. Data corruption might happen, for example, because of technical problems on the server (since, although major storage providers have “11 nines” durability, that’s still not 100%) or human error. A checksum-verification sketch appears after this list.
    • Client-Side Deduplication. Datasets uploaded during a backup can be huge, and transferring identical files located in different folders is redundant and can take a lot of time. To reduce this time, along with bandwidth and storage consumption, we have implemented client-side deduplication in the new backup format. A minimal illustration appears after this list.
    • Synthetic Full Backup for File-Level and VMware. Synthetic full backup is another feature that saves time and traffic, and now it is available for file-level and VMware backups, along with image-based backups. After the first-generation full backup, only the changed data blocks get uploaded into the cloud, forming incremental backups. When a new full backup starts as scheduled, MSP360 Managed Backup creates a new restore point based on the previous one. Synthetic full backup is enabled by default and works for the following cloud storage services: Amazon S3, Microsoft Azure, Backblaze B2, and Wasabi.
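
    To make the GFS selection concrete, here is a minimal sketch of how such a policy can decide which full backups to keep. It assumes a flat list of dates of error-free full backups; the function name and keep counts are illustrative, not actual MSP360 settings:

    ```python
    from datetime import date

    def gfs_keep(full_backups, keep_weekly=4, keep_monthly=12, keep_yearly=3):
        """Return the dates of full backups a GFS policy would retain."""
        backups = sorted(full_backups, reverse=True)   # newest first
        sons, fathers, grandfathers = {}, {}, {}
        for d in backups:
            sons.setdefault(d.isocalendar()[:2], d)    # newest full per ISO week
            fathers.setdefault((d.year, d.month), d)   # newest full per month
            grandfathers.setdefault(d.year, d)         # newest full per year
        keep = set(list(sons.values())[:keep_weekly])            # sons
        keep |= set(list(fathers.values())[:keep_monthly])       # fathers
        keep |= set(list(grandfathers.values())[:keep_yearly])   # grandfathers
        return keep

    # Example: with one full backup on the 7th of every month in 2021, the four
    # newest become "sons", all twelve are "fathers", and the December full
    # also serves as the yearly "grandfather".
    fulls = [date(2021, m, 7) for m in range(1, 13)]
    print(sorted(gfs_keep(fulls)))
    ```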
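
    Likewise, the idea behind the consistency check can be sketched in a few lines, assuming the client kept a manifest of SHA-256 digests recorded at upload time. The manifest shape and function names below are hypothetical, not MSP360’s actual bookkeeping:

    ```python
    import hashlib

    def sha256_of(path, chunk=1 << 20):
        """Stream a file and return its SHA-256 hex digest."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(chunk), b""):
                h.update(block)
        return h.hexdigest()

    def verify(manifest):
        """manifest maps a retrieved object's local path to the digest that was
        recorded at upload time; a mismatch means the object was corrupted in
        transit or at rest."""
        for path, expected in manifest.items():
            print("OK" if sha256_of(path) == expected else "CORRUPTED", path)
    ```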
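
    And client-side deduplication, in its simplest form, boils down to hashing blocks before upload and sending each unique block only once. The fixed 4 MiB block size and the upload() stub below are illustrative assumptions, not the actual implementation:

    ```python
    import hashlib

    BLOCK_SIZE = 4 * 1024 * 1024  # 4 MiB; an arbitrary choice for this sketch

    def upload(digest, block):
        """Stand-in for the real transfer to cloud storage."""
        print(f"uploading {digest[:12]}... ({len(block)} bytes)")

    def backup(paths, seen=None):
        """Upload each unique block once; repeats are referenced, not re-sent."""
        seen = set() if seen is None else seen   # digests already in storage
        manifest = {}                            # path -> ordered block digests
        for path in paths:
            digests = []
            with open(path, "rb") as f:
                for block in iter(lambda: f.read(BLOCK_SIZE), b""):
                    digest = hashlib.sha256(block).hexdigest()
                    if digest not in seen:       # unseen content: transfer it
                        upload(digest, block)
                        seen.add(digest)
                    digests.append(digest)       # duplicate: reference only
            manifest[path] = digests
        return manifest
    ```

    Because identical content in different folders produces identical digests, only one copy ever crosses the wire, no matter how many paths reference it.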

    Additional features:

    • Restore points. A restore point represents a set of backup data. If the restore point is valid, then all the data in the appropriate dataset is valid as well and can be recovered. All the restore points can be found on the Backup Storage tab of the MSP360 Managed Backup Agent.
    • Faster purge. Because objects in the storage are saved as data parts, purging is faster: each time the purging mechanism runs, it deletes a whole data part, not a single file.
    • Faster synchronization. As there are fewer objects in the backup storage, the number of API requests is also reduced. Thus, you pay less to your storage provider if these calls are not free.
    • Plan configuration is always included in backups. This means you don’t need to recreate your plans when using them on a new PC; you can use the already-configured existing ones.
    • Object size limits are increased from 5 TB to 256 TB, regardless of storage provider limitations. This is achieved using data partitioning; no matter how big the object is, it is divided and stored in smaller parts.
    • Support for any filename characters and extra-long filenames. You (and your users) no longer need to worry about file names when backing them up.
    • Filename encryption out-of-the-box
    • Password hint. When you use encryption for your backups, there’s always a risk of forgetting the password. In this case, you lose access to your data. Now you can add a hint that will help you recall the password. Bear in mind that it should be something that makes sense only to you.
    • Changed block tracking for image-based backups. This algorithm identifies the new and modified blocks of data, and only these blocks are uploaded to backup storage. Thus, incremental image-based backups run much faster. Please note: only NTFS file systems support changed block tracking. A sketch of the idea follows below.
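
    As a rough sketch of the changed-block-tracking idea, the loop below compares per-block digests against the map saved by the previous backup and yields only the blocks that differ. A real implementation tracks writes at the driver level instead of rescanning the disk, and the 1 MiB block size here is an arbitrary assumption:

    ```python
    import hashlib

    BLOCK = 1024 * 1024  # 1 MiB; block size is an illustrative choice

    def changed_blocks(image_path, previous):
        """Yield (offset, data) for each block whose digest differs from
        `previous`, a dict of offset -> digest saved by the last backup."""
        with open(image_path, "rb") as disk:
            offset = 0
            while True:
                data = disk.read(BLOCK)
                if not data:
                    break
                if previous.get(offset) != hashlib.sha256(data).hexdigest():
                    yield offset, data       # new or modified block
                offset += len(data)
    ```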
  • Steve Putnam
    Alex,
    So far I am loving the Synthetic Backup for Files and Images. It will make a tremendous difference in our ability to complete the backups overnight/weekends for clients with slow upstream connections.
    Right now our biggest outstanding issue is with the situation where a client reorganizes their files by moving them to a new folder - often to sort by month or to an "Archive" folder. All of the files must be re-uploaded which can take days, not to mention the extra storage space that is consumed for the 90 days until the original files get purged.
    I believe that other backup solutions have solved this. I am wondering whether client-side dedupe addresses this situation, and if not, whether it could be designed to.
  • BackupFan
    Is there a way to see which files were uploaded, and how much data was transferred, in the course of a New Backup Format synthetic backup? I too am seeing a large amount of data uploaded by some computers that don't have much new file creation or many file changes daily. Knowing what is taking bandwidth and storage might help me optimize what to exclude from the backup.
  • Steve Putnam
    Haven’t found a way using the MBS portal to see exactly how much data gets uploaded for a specific file or partition during a synthetic backup, but the plan details on the actual server let me know how much got copied to the cloud.
    For image backups, we exclude folders that get backed up via file-level backups.
    For Virtual Disks, we exclude the D: drive data VHDXs, since again that data gets backed up via file backups.
  • BackupFan
    Thanks Steve Putnam. This helps some. We see a few computers whose daily backups should not be large but that actually upload 20 GB or more. Files are primarily stored on external file servers, so users’ work during the day should not involve many new or changed files. The large upload causes increased storage usage and a longer backup duration, making it difficult to complete backups of all computers overnight with the available bandwidth. I would like to pinpoint what contributes most to this large upload size.
  • David Gugick
    For File Backup, you can go to Reporting - Backup History and search for the endpoint and find the backup execution in question. Click the execution on the date in question. Once the sidebar opens, click the Details tab for a list of files that were backed up.

    If you're referring to Image backup, then we are not backing up files, so there's nothing to report in that regard. The best you can do is to exclude files/folders that are not needed for the restore: temp folders, browser cache folders, log folders, hibernation file and pagefile/swapfile (if not needed), etc.
  • BackupFan
    Thanks David! In another discussion someone spoke about OST email data files. I think it was mentioned that we don't need to back these up, as the email is stored on the email server, and to recover the OST file the emails need to be re-downloaded from the email server. If these OST files are not excluded from the backup, will MSP360 try to back them up? In other words, will excluding the OST files from MSP360 backups make backups run faster and use less bandwidth?
  • David Gugick
    I think the answer to that is: it depends. It's going to depend on how large the OST files are in relation to the rest of the data being backed up. But before you exclude any files, please make sure that they're OST files, which are just cached copies of emails, calendars, and contacts from the server, versus a PST file, which is an actual email archive and may even be used as the main mailbox when using POP3 email servers. But I think you'll have more luck excluding other folders that are absolutely not needed for backup, whether that backup is file-level or image-based.
  • BackupFan
    Thank you David. I understand your distinction between OST and PST files. So MSP360 will back up the *.ost files if they are not excluded from the backup, correct? As you mentioned, these OST backups are not likely very usable, as the data normally needs to be re-downloaded from the mail server if lost or damaged. Just trying to determine, if a user has an OST mailbox with 20 GB of data, whether MSP360 will be backing this up, and whether it will be skipped if excluded.
  • David Gugick
    We back everything up by default and leave it to the customer to decide what does not need to be backed up. There are no default exclusions unless you have the “do not back up system and hidden files” option checked in the backup wizard. You can Google what file types and locations are not likely needed for backup, and you'll probably find a number of resources listing Windows folders and the like that may help. I've been speaking to the team about whether we should post such a list ourselves, but I haven't made a final decision yet - it's under discussion.
  • BackupFan
    Understand. Thanks for the helpful information.