Comments

  • Prefilled Encryption Key in MBS portal - Big Problem
    I will send the logs to the open ticket. Thanks for your help.
  • Prefilled Encryption Key in MBS portal - Big Problem
    I have been avoiding looking at the keys for that reason, but the prefilled password managed to get saved in a plan, forcing a full backup to occur. It could be that someone on the team looked at it without realizing the implications.
    I am just glad that it throws a notification that the encryption key has been changed. Otherwise we’d never know until a restore didn’t work, and I’d be looking for another job.
  • Prefilled Encryption Key in MBS portal - Big Problem
    Edit a V7 file plan in the portal and go to Compression and Encryption. Sometimes the default encryption key is visible; other times it shows jlx40..6mp…qZnu9wVJQ==
    If you then save the plan, it apparently changes the key.
  • Delete Cloud Data When No Longer Using A Backup Plan
    Yes, I forgot that part - you can resync the repository via the MBS Portal by selecting Edit > Options > Repository. Another note: if you switch from legacy to V7 format on a machine, the new-format data is in a folder called CBB_Archive; the old data will be in the folder named with a drive letter.
    Question for David G.:
    I have a server with 1 million files. The initial backup two years ago took a week, but ever since, every backup has been "incremental" - meaning only changed files get backed up. With the new format, do we actually have to periodically upload everything again? Even JPGs and PDFs that have never changed?
    I suppose a synthetic full backup would reduce the time to do that, but we use Google Nearline, which currently does not support synthetic fulls.
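    The incremental behavior described above can be sketched as a simple changed-file scan. This is a minimal illustration, not MSP360's actual implementation; the file names and the mtime-based change test are assumptions for the example:

```python
def select_for_incremental(files, last_backup_time):
    """Return only the files modified since the last backup.

    `files` maps a path to its last-modified timestamp; real backup
    agents also compare size, attributes, or block-level hashes.
    """
    return [path for path, mtime in files.items() if mtime > last_backup_time]

# Hypothetical catalog: only report.docx changed since the last run.
last_run = 1_700_000_000
catalog = {
    "photos/vacation.jpg": last_run - 86_400,   # unchanged JPG
    "docs/manual.pdf": last_run - 3_600,        # unchanged PDF
    "docs/report.docx": last_run + 600,         # modified after last run
}
print(select_for_incremental(catalog, last_run))  # ['docs/report.docx']
```

    The point of the sketch: unchanged JPGs and PDFs never reappear in an incremental pass, which is why a periodic full upload feels redundant when the destination cannot build synthetic fulls.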
  • Delete Cloud Data When No Longer Using A Backup Plan
    We use Cloudberry Explorer to delete old Cloud backups in scenarios such as you described.
    Be sure to put a password on the Explorer app for security purposes.
    For local backups we log in to the server/device being backed up and manually delete the old backups. Since we started using 5 TB external USB drives for local backups, space has rarely been an issue, so sometimes we just leave the old ones there. We use our RMM tool to monitor available space on the drive.
  • Configuring incremental backups with periodic full backup
    So help me understand: if we use the new backup format for a server with 1 million files (800 GB), do we have to re-upload all of the files periodically? I get that if one uses a cloud storage vendor that supports synthetic fulls, the full would act like an incremental, but we back up to Google Nearline, which does not support synthetic fulls.
    Is it true that the entire set of files would have to be re-uploaded with each full? Even PDFs, JPGs, etc. that never change?
  • MSPBackups.com Website Slow
    Performance is MUCH better. Kudos to MSP360 for upgrading the backend.
  • Unfinished Large Files
    Great, thank you. Saves me a lot of hassle having to manually go in and delete them each week.
  • Unfinished Large Files
    David-
    Can you confirm that the latest MSP360 Version 7 now properly deletes unfinished file uploads from BackBlaze B2/S3 compatible?
  • Retention Policy Problem with V7
    So I finally got the answer from support:

    You are absolutely right about new backup format retention. In new backup format, retention is the period of time a backup generation is retained once it has been replaced. This means in your Full Backup Only configuration you will always have two Full Backups. It is not currently possible to delete your previous Full Backup when you perform your next Full Backup.

    This is different from how retention worked in legacy backup format. Legacy format retention period was based on backup date.

    I'm going to submit a feature request on your behalf.
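    Support's description amounts to a rule where a generation's retention clock starts only when the next full replaces it, not at its own backup date. A minimal sketch of that rule, assuming illustrative dates and an illustrative retention value:

```python
from datetime import date, timedelta

def deletion_date(replaced_on, retention_days):
    """New-format rule: a generation is kept for `retention_days` after
    it has been *replaced* by the next full - not for `retention_days`
    after its own backup date, which was the legacy behavior."""
    return replaced_on + timedelta(days=retention_days)

# Weekly fulls with a 3-day retention period (hypothetical schedule):
first_full = date(2021, 10, 30)
second_full = first_full + timedelta(days=7)   # replaces the first on 11/6
print(deletion_date(second_full, 3))           # 2021-11-09
```

    Under this model there are always at least two fulls in storage during the retention window, which matches the "cannot delete the previous full" behavior described above.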
  • How to configure backup to use the less possible volume on destination ?
    David/MHC - A couple of points:
    1. Wasabi has a 90 day timed-storage policy, meaning that if you purge data prior to 90 days, you still get charged for 90 days. BackBlaze has no such timed-storage restrictions.
    2. I am working with support to understand why the retention/purge process behaves differently with the New backup format (NBF) compared to the old format.
    Simply put, in the old format I could run a weekly full image backup to BackBlaze with a 3 day retention period, and when the next weekly full ran, the previous week's image would get purged (as it is over three days old). I end up with one full image.
    That is NOT what is happening in NBF.
    Using weekly synthetic fulls only - no scheduled incrementals - with the same 3-day retention period, the previous week's generation is NOT getting purged at the completion of the new synthetic full.
    I am in the process of trying a one-day retention setting to see if it changes the behavior, but for the life of me I do not understand why it doesn't work the way it used to. Once the synthetic full completes there is ZERO dependency on the prior generation.
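    Point 1 above can be made concrete: under a minimum-storage-duration policy, deleting an object early still incurs charges for the remainder of the minimum period. A sketch of the arithmetic, where the per-GB rate is a made-up placeholder rather than any vendor's actual price:

```python
def billed_days(days_stored, minimum_days=90):
    """Timed-storage policy: you are billed for at least `minimum_days`
    of storage, even if the object is deleted earlier."""
    return max(days_stored, minimum_days)

rate_per_gb_day = 0.0002  # hypothetical rate, for illustration only
size_gb = 100

# An object purged after 30 days is still billed as if stored 90 days.
print(billed_days(30))                                        # 90
print(round(billed_days(30) * size_gb * rate_per_gb_day, 2))  # 1.8
```

    This is why short retention periods buy you nothing on a destination with a 90-day minimum: the purged generation keeps accruing charges anyway.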
  • Retention Policy Problem with V7
    See the screenshot below - taken today 11/7.
    First generation from 10/30-10/31 says it will be deleted in two days.
    Yet the retention period is only 3 days and the job runs weekly so the first generation should have been deleted at the completion of the second generation on 11/6.
    Now if the data would actually get purged without having to wait until the next week's plan runs, it would be OK. But it appears there is no way to prevent there always being two generations in cloud storage - doubling my cost.
  • MSPBackups.com Website Slow
    Overall, the website response time is two to three times longer than normal.
  • Internet connection is not available (Code: 1020)
    We got this error with BackBlaze B2 as well. Support has a fix that disables two problematic weak ciphers. After applying the fix below, we have uploaded over 4 TB with no failures.
    Please follow the instructions below to troubleshoot this issue:
    1) Download Nartac IIS Crypto tool:

    https://www.nartac.com/Products/IISCrypto

    2) Go to Cipher Suites and turn off these 2 cipher suites:
    TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
    TLS_DHE_RSA_WITH_AES_256_GCM_SHA384
    then click the “Apply” button at the bottom (you may have to move the window around to see it).
    3) Reboot the server. Note that a reboot is mandatory for the changes to take effect; make sure you reboot the server before trying to run the backup again.
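    IIS Crypto applies this change for you; conceptually, it just removes those two suites from the server's allowed cipher-suite list. A sketch of that filtering step (the sample list is illustrative, and on a real server the change must go through IIS Crypto or Windows policy, followed by the mandatory reboot):

```python
# The two problem suites named in the fix above.
WEAK_SUITES = {
    "TLS_DHE_RSA_WITH_AES_128_GCM_SHA256",
    "TLS_DHE_RSA_WITH_AES_256_GCM_SHA384",
}

def disable_weak_suites(enabled):
    """Return the cipher-suite order with the two problem suites removed."""
    return [s for s in enabled if s not in WEAK_SUITES]

# Illustrative current order; a real server lists many more suites.
current = [
    "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
    "TLS_DHE_RSA_WITH_AES_128_GCM_SHA256",
    "TLS_DHE_RSA_WITH_AES_256_GCM_SHA384",
    "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
]
print(disable_weak_suites(current))
```

    The ECDHE suites survive the filter, so TLS connectivity remains; only the DHE variants that were causing the upload failures are dropped.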
  • New backup format in V7 Cloudberry Backup
    We have been using the new format for several weeks for VHDx and Image Backups to BackBlaze to take advantage of the synthetic full backups.
    We are using the MSP version, and the thought of having to re-upload all of our clients' files is frightening.
    But for a single user it makes sense to do a parallel backup in the new format now, and once your retention period has expired, delete the old backup set.
  • File & Folder Restore Flaw with Deselected Folders
    James is correct. If you set up the restore plan to go to a different location and select only some subfolders/files, it does not retain the folder structure; it just dumps the files in the destination folder. I agree with James that there should be an option to "retain original file/folder structure".
  • Retention for file based S3 backup (new format)
    Understand that retention policies apply to versions of files.
    If you back up a photo, which will never be modified, it will never get purged.
    Files that don’t change don’t get purged.
    The setting should say “Keep old versions for” x days/weeks/months/years.
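    The distinction above, retention applying to versions rather than files, can be sketched as follows. This is a simplified model of version-based retention, not MSP360's actual purge logic; the catalog and timestamps are invented for illustration:

```python
def purge_old_versions(versions, keep_days, now):
    """Keep the latest version of each file unconditionally; purge older
    versions once they exceed `keep_days`. A file with a single,
    never-modified version (e.g. a photo) is therefore never purged.

    `versions` maps a path to a list of version timestamps (in days)."""
    kept = {}
    for path, stamps in versions.items():
        stamps = sorted(stamps)
        latest = stamps[-1]
        old_enough = [t for t in stamps[:-1] if now - t <= keep_days]
        kept[path] = old_enough + [latest]
    return kept

catalog = {
    "photo.jpg": [0],            # one version, never modified
    "notes.txt": [0, 50, 99],    # edited twice
}
print(purge_old_versions(catalog, keep_days=30, now=100))
```

    The photo keeps its single version forever, while the edited file loses superseded versions that have aged past the retention window, which is exactly why "Keep old versions for" would be the clearer label.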
  • delete option missing
    One of the most recent version updates changed the default Agent Options to NOT allow deletion of backup storage from the agent console. This was done to prevent hackers from deleting backups during ransomware attacks.
    What we do when we need to delete items from storage is go to Organization > Companies, edit the company, select "Agent Options", select "Use Custom options", and check "Enable ability to delete files from Storage".
    Close and re-open the agent console and you will be able to delete files from backup storage.
    Strongly recommend putting it back to default when you are done.
  • MSP360 Managed Backup 5.2 with New Backup Format, GFS, Restore Verification and more
    Haven’t found a way using the MBS portal to see exactly how much data gets uploaded for a specific file or partition during a synthetic backup, but the plan details on the actual server let me know how much was copied to the cloud.
    For image backups we exclude folders that get backed up via file level backups.
    For Virtual Disks, we exclude the D: drive data vhdx’s since again that gets backed up via file backups.
  • Trying to restore a outlook pst file
    Does the plan have "Do not backup System and Hidden files" checked? The AppData folder is a hidden folder.