Comments

  • How Can I Restore *all* Versions of *all* files?
    Are you using legacy or New Backup format?
    Either way, you would need to restore them individually to a different location since they all have the same file name. With legacy this is a lot easier, as it lists all of the versions both in the Web console and on the device agent console. It is a lot harder with the new Backup Format, since right now there is no place that lists all of the versions of a file.
  • Local Backup seems to be have become unusable
    I would first ask how much data are you backing up?
    You are using what is called New Backup Format (NBF) instead of the “legacy” file backup option.
    Legacy backup is ideal for large volumes (>500 GB) on local storage and/or backups with a long retention period. There is no such thing as "fulls" with the Legacy format, in the sense that once a file is backed up, it is never backed up again unless it is modified.
    NBF, on the other hand, does full backups periodically, meaning that every single file has to be backed up again to create a new generation.
    You can use GFS options that allow you to keep daily fulls/incrementals for, say, 90 days, then keep weekly, monthly, and annual fulls only. So after one year you would have one full for the recent dailies, 12 monthly fulls, then an annual full.
    And since local storage cannot do "synthetic fulls" as is done in the cloud destination, you have to re-backup every single file when doing a full to NAS.
    Even with GFS, after ten years you will have 9 annual fulls, 12 monthly fulls, and one "daily" full. You could reduce this somewhat by doing monthly fulls every three months, but you would still be consuming a LOT of storage. You actually have to do a full backup of every file each time a full is scheduled.
    Since you need to keep the data for ten years, I highly recommend that you re-backup your data to the NAS using the legacy backup format and set the retention period to 10 years. Files that are never touched will stay forever, as they are the "most recent version of a file".
    So in essence you will only be storing one full plus all of the incrementals that were created in ten years.
    If the files that you are backing up are mostly static in nature (pictures, pdf's, audio/video, exe's, etc.), your total storage consumed after ten years is likely to be less than 1.5x the amount of data you are backing up.
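    To put rough numbers on it (my own assumptions, not MSP360 figures), here is a quick back-of-the-envelope comparison for 1 TB of mostly static data over ten years:

```python
# Rough 10-year storage comparison: legacy format vs NBF with GFS.
# Assumptions (mine, not MSP360's): 1 TB of mostly static source data,
# ~1.5x legacy overhead for modified-file versions, and the GFS scheme
# described above (9 annual + 12 monthly + 1 recent daily full).

source_tb = 1.0

# Legacy format: one copy of each file plus version history for the
# files that actually change.
legacy_tb = source_tb * 1.5

# NBF with GFS on local storage: no synthetic fulls, so every retained
# full is a complete re-copy of all the data.
fulls = 9 + 12 + 1  # annual + monthly + most recent daily full
nbf_tb = fulls * source_tb

print(f"Legacy: ~{legacy_tb:.1f} TB, NBF with GFS: ~{nbf_tb:.0f} TB")
```

    For mostly static data the difference is dramatic, which is why I keep steering long-retention local backups toward the legacy format.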
    Glad to answer any questions,
    Steve (not an MSP360 employee)
  • How to properly configure MSP360 Desktop Edition to minimize the cost to Google Cloud Storage?
    Disregard my previous statement - I just learned that Google Archive is not currently supported. Changing my recommendation to go with Wasabi or Backblaze, as they do not charge API fees. So while they MAY be more expensive than Google Coldline (depending on the region you select), the absence of API fees will offset most if not all of the difference.
  • How to properly configure MSP360 Desktop Edition to minimize the cost to Google Cloud Storage?
    Yuri,
    For that amount of storage, I strongly suggest using Legacy format backups, as it will save you a tremendous amount of space. With the Legacy format, once you back up a file, it never gets backed up again unless it is changed, which is rare for videos/pictures.
    I would recommend Google Archive or Coldline. The cost per month for Archive is $.0012/GB/Mo in the cheapest Google regions (US-Central,East1,East5,South1,West1). That is 70% less than the $.004/GB/Mo for Coldline in the same regions. The data is required to be retained for 365 days but in your case that is not an issue.
    5TB of data in Archive is a mere $6 per month vs $20/Mo with Coldline.
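    For anyone who wants to check my math, here is the arithmetic (treating 5 TB as 5,000 GB and using the per-GB prices quoted above):

```python
# Monthly storage cost check for 5 TB in the cheapest Google US regions.
# Prices are the per-GB/month figures quoted above.

ARCHIVE_PER_GB = 0.0012   # $/GB/month, Google Archive
COLDLINE_PER_GB = 0.004   # $/GB/month, Google Coldline

data_gb = 5_000  # treating 5 TB as 5,000 GB

archive_cost = data_gb * ARCHIVE_PER_GB
coldline_cost = data_gb * COLDLINE_PER_GB

print(f"Archive: ${archive_cost:.2f}/mo, Coldline: ${coldline_cost:.2f}/mo")
```

    That works out to roughly $6/month vs $20/month, i.e. the 70% saving mentioned above. Note this is storage cost only; API operation fees are a separate question.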
    The issue that I am not clear on is how much the initial upload will cost to each of the destinations for Class A and B API operations. It might cost a lot more in API ops to upload the data to Archive vs Coldline, such that it might take a long time to break even using Archive storage.
    Perhaps someone more knowledgeable of API costs can weigh in.
  • Retention/Deletion Question
    Bill,
    You need to understand how the retention settings work. There are settings for deletion of versions of an existing file, and a separate setting for the retention of deleted files. We set version retention to 90 days and deleted-files retention to 90 days.
    I would be happy to speak with you directly to help you understand, as it is not intuitive (at least not to my old brain).
  • Computer Post Support Restore
    The answer is yes, you can restore from old backups, but they will be old. You can restore files from the MBS portal or from the agent console as long as there is a license for the machine.
  • New Web Portal UI
    So to add my two cents - the new Portal UI does NOT have all of the Legacy features that are vital for daily monitoring. Thank you for not deprecating it just yet.
  • Convert Incremental backup plan to include fulls
    Question: What is your retention period requirement for versions and deleted files?
    Since you have not run a full since day 1, you have saved every single version of every modified file for an entire year. We do Monthly incrementals (aka fulls) and daily block-level incrementals, with a retention period of 90 days. Keep in mind that with the legacy format, a "Full" backup is not a complete set of all of your data (like it would be with the new format using synthetic fulls).
    A legacy format full backup simply does a backup of each individual file that is new or has been modified since the last full backup. It never has to re-backup files that don't change such as pictures or pdf's.
    My recommendation would be to go into the schedule and set the retention to whatever number of days you want to keep old versions of modified files, and set "keep deleted files" to that same number.
    Set up an advanced schedule to do a monthly "incremental" full and daily "block-level" backups.
    Would be happy to discuss at greater length if you have more questions.
  • Rotating Drive Strategy documentation?
    I guess I would ask why you or your customer feels the need to have rotating drives?
    We backup our clients' data files to two cloud locations each night (Backblaze/Wasabi) and also do an Image backup each night to Backblaze for Disaster Recovery. We keep roughly one week's worth of images in the cloud for each client.
    We also backup the files and images to a large local external USB drive.
    The cloud file/Image backups truly eliminate the need for rotating offsite USB drives. And those drives can (and will) fail at some point.
    We do have one customer (out of 100) who wishes to keep their own "offsite" copy of the data, so we have them bring the device into their office every 90 days and plug it in, and we run manual file and Image backups to it, then they take the drive home.
    It is of little real value other than to appease the "old-school" client who grew up using only local tape drives for backups.
  • Changing Drive Letters of Data
    Are you using the new Backup Format for your Cloud backups? If yes, then I am afraid that you cannot change the drive letter like you can with the Legacy format. I do not know if using the New Backup format will make the reupload go faster (deduplication impact). Perhaps someone else has experience with this issue.
  • David Gugick
    David is no longer with MSP360
  • An error occurred (code: 1003) on several servers since upgrade to 7.9.4.83
    Others have reported this issue to tech support - what do you use for cloud storage?
  • Which backup schedule to choose?
    Not sure what your retention period is, and whether you have a requirement for using GFS.
    But assuming you don’t need GFS, if you want to keep say 90 days of backups, you would need to start backing up using legacy format going forward and keep the existing new format .bak files for 90 days until you can safely delete them from cloud storage. I use Cloudberry Explorer to delete the files that are no longer needed.
    I assume that you are not using the MSP360 SQL backup, so it is a little trickier to keep only 90 days of .bak files since as you stated, they are all unique files and would never get purged. There is a way to do it and I would be happy to explain how to set it up if you want.
    Just to clarify, you can't change existing backups from one format to another.
    If you still have the .bak files from the past x days on primary storage, you could back them up again using legacy format and then you could delete the NBF .bak files right away.
  • Which backup schedule to choose?
    If you deleted all the legacy format data from the cloud, then you'd have to re-upload everything, but it may be worth it. Question: are you using a SQL license from MSP360, or are you just backing up the .bak files?
  • Which backup schedule to choose?
    The new format is not really suited for file backups such as you describe. IMHO, you should stick with the legacy format for SQL backup files and any other file-based cloud backups. We use the new format for Image/Hyper-V VM backups, which have a short retention period (1-2 weeks), as it allows us to do synthetic fulls, which reduces the full backup runtime by 75-80%.
  • Can i perform file based backup for virtual machine.
    Can you show me your backup plans settings?
  • Backup plan configuration
    Go to the MBS portal Computers page, select the client/computers using the search and checkboxes, then select "Plan Settings Report" from the menu.
    It will prompt for an email address and will send a link to the .csv settings report.
    It shows the following:
    • Company
    • User
    • Login
    • Computer
    • Profile
    • Plan Name
    • Backup Format Type
    • Storage Destination
    • Source Folders
    • Excluded files
    • Encryption (Yes/No)
    • Compression (Yes/No)
    • Advanced Settings
    • Retention Policy
    • Notification (Yes/No)
    • Schedule
    • Full Backup Schedule
    It is not the prettiest documentation, but it gets most of what you need.
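    If you need to slice the report across a lot of endpoints, the emailed .csv loads easily with a few lines of Python. This is just a sketch - the sample column names below mirror the field list above but may not match the actual export headers exactly:

```python
import csv
import io

# Hypothetical sample mimicking the Plan Settings Report export; the
# real export's column names may differ from this sketch.
sample = io.StringIO(
    "Company,Computer,Plan Name,Backup Format Type,Encryption\n"
    "Acme,SRV01,Nightly Files,Legacy,Yes\n"
    "Acme,SRV02,VM Images,New Backup Format,No\n"
)

rows = list(csv.DictReader(sample))

# Example: flag any plan that is not encrypted.
unencrypted = [r["Plan Name"] for r in rows if r["Encryption"] != "Yes"]
print(unencrypted)
```

    Point `csv.DictReader` at the downloaded file instead of the sample string and you can filter on any of the columns the same way.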
  • Backups not working, bug in 7.9.4.83 gives 1003 error
    What is your cloud storage location?
  • Error 1003: Unable to write data to the transport connection
    We experienced a rash of 1003 errors with Backblaze a few months back.
    I created a new account with an East coast Data Center and have not had the problem since.
  • Deleting Orphaned Data
    I don't use the MSP360 data deletion option; rather, I use Cloudberry Explorer to delete orphaned data, then run a repo sync to get things right (if necessary).
    If you go to Users and click on the green icon, it shows you the MBS prefix for the client. I then go into CB Explorer and delete the data directly from storage. It can take a while, so you need to leave CB Explorer open, but at least I know what is getting deleted. I then run a repo sync if the machine in question is still active; otherwise there is nothing else to do once the data is deleted from the backend storage.