• Why Do I Have 4 Different Versions of an Outlook .PST in Amazon S3?
    For our clients in the Accounting/Tax fields we sell an "Extended Retention" option which keeps versions and deleted files in the backup location for 15 months (vs our standard 90-day retention). We price this option at under $10/month, and it has become very popular, as it addresses the situation you described and generates more revenue.
  • Backup History Search
    I apologize; I failed to specify that the issue is with Backup History on the MSP portal, not the client console.
  • Backup History Search
    Individual endpoint. The search box does not appear to support wildcards, unless the "*" is not the right character for wildcard search :)
  • Moving from S3 to Wasabi
    There are ways to migrate S3 backups to Wasabi or Backblaze, but they are complicated and expensive. For a 3 TB client, we took a USB hard drive MSP360 backup of the client data and brought it to our location, where we have 100 Mbps upstream bandwidth. We then uploaded the backup to Backblaze (it took 7 days), but we were able to keep running the S3 backups until the upload was done. We then connected the client's server to the Backblaze bucket, did a repo sync (which took over a day itself), and ran the backups. Since we had 90 days of version retention in the Amazon bucket, we did not delete the S3 bucket until 90 days after the cutover.
    The extra cost was maintaining the two buckets for 3 months plus the tech time to oversee the migration, but very low compared to a Snowball/Fireball method.
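    For what it's worth, a pre-cutover sanity check like the one below can be scripted; this is only a sketch, not part of MSP360, and the path, bucket, prefix and endpoint are made-up examples (it assumes the destination bucket is reachable over Backblaze's S3-compatible API). It simply compares the seeded backup set on the USB drive against what actually landed in the destination bucket before you point the client's server at it.

      # Sketch: compare a seeded local backup set against an S3-compatible bucket
      # before cutover. All names below are hypothetical examples.
      import os
      import boto3

      LOCAL_ROOT = r"E:\SeedBackup"                           # USB-drive copy of the backup set
      ENDPOINT   = "https://s3.us-west-002.backblazeb2.com"   # example B2 S3-compatible endpoint
      BUCKET     = "client-backups"                           # hypothetical bucket name
      PREFIX     = ""                                         # narrow this if the data sits under a folder

      def local_totals(root):
          count, size = 0, 0
          for dirpath, _dirs, files in os.walk(root):
              for name in files:
                  count += 1
                  size += os.path.getsize(os.path.join(dirpath, name))
          return count, size

      def bucket_totals(endpoint, bucket, prefix):
          s3 = boto3.client("s3", endpoint_url=endpoint)      # credentials come from the environment
          count, size = 0, 0
          for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket, Prefix=prefix):
              for obj in page.get("Contents", []):
                  count += 1
                  size += obj["Size"]
          return count, size

      lc, ls = local_totals(LOCAL_ROOT)
      rc, rs = bucket_totals(ENDPOINT, BUCKET, PREFIX)
      print(f"Local : {lc} files, {ls / 1e9:.1f} GB")
      print(f"Bucket: {rc} files, {rs / 1e9:.1f} GB")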
  • Windows 10 Pro Desktop marked as a server.
    I find that every install defaults to the Server version. I always go back and change it to the appropriate license as part of our installation process.
  • File List With Sizes
    From the agent console Storage tab, I select the “Capacity” view, which sorts by folder size. Typically what chews up a lot of space is PST files, and if you keep a lot of full backups, storage gets consumed fast.
  • Best solution for worst case
    We don't disable the agent console on the client devices, as there are times when we need to use it.
    Here is what we do:
    1. Protect the agent/CLI with a password (as David suggested)
    2. Disable the ability to delete backups from storage from the console (it is now disabled by default in the latest version). This necessitates using Cloudberry Explorer or the BackBlaze web portal to delete unwanted backups, but it is significantly better from a security standpoint.
    3. Disable the ability to change backup/restore plans (which protects retention policies) using the console. There are rare times when we need to edit plans on the device console itself, so we change the company agent settings to allow it and push/install an updated agent on the machine. 99% of the time we edit the plans from the web portal.
    Prior to these features being implemented in MSP360, we had that worst case actually happen. What saved us was that they forgot to delete one of our three backups.
  • Image Backups of Virtual Servers
    Thanks David. We did a test of Option #4 and it worked great.
  • Time Discrepancies and Overdue Backups
    This has been happening to us, particularly for SQL Backup Plans, for over a year.
    If I open the plan and save it, the overdue status goes away.
  • Status of Backblaze Synthetic Backups
    Thanks for the update. We don't have a lot of Image Backups as we have moved to Hyper-V virtuals for the bulk of our clients. But I just moved all of the Image/VHDx backups from Standard Backblaze to BB S3 so that I can use Cloudberry Explorer to manage it (vs the awful Backblaze Web Portal that takes several minutes to load the list of files).
    On a semi-related note, how is the new Backup format going in the Standalone product?
  • Optimal Retention Policy for Wasabi
    Another option is to utilize Backblaze S3 Compatible storage. Cost is $.005 per GB per month and there is no minimum retention period. Since we only keep one monthly version of images/HyperV Vhdx files in the Cloud (vs. 90 days for files), Backblaze is ideal. We keep local Daily image/VHDx copies, and consider the monthly VHDx/Image to be a Disaster recovery solution.
  • Interrupted Image based backup: Graceful continue?
    David,
    I did not know this was a feature. Can you explain how it works technically? The article you referenced is light on details.
  • Size Mismatch Between MSP Space Used and Amazon AWS
    Alternative approach:
    We do not allow deletion of storage at the client MSP360 console, since a hacker can delete your backups when installing ransomware. It is a setting in the Advanced Branding. It actually happened to one of our clients - the hacker installed ransomware and deleted backups, but fortunately they failed to remove all THREE of our backup copies (one local, two cloud). After dodging that howitzer, we disabled deletion of backups from the console and instead use Cloudberry Explorer (or the Backblaze portal), then run a repository sync on the client to update the storage usage.
    Yes the repository syncs take a long time, and unfortunately no backups can run until repo syncs are complete.
    It would be great to separate the repo syncs so that, for example, we could still run a local backup while the Amazon repository sync is in progress.
    And if all that seems like too much trouble and you want to use the console to delete backups as David suggests, please be sure that you have an MBS console password set (including for the CLI) and that it is different from the server password.
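    For anyone who would rather script it than use Cloudberry Explorer, here is a minimal sketch of removing an old backup prefix directly in the bucket; the bucket name and prefix are hypothetical, and the same approach works against Amazon S3 or any S3-compatible endpoint. As above, run a repository sync afterwards so the console reflects what is actually left.

      # Sketch: delete everything under one backup prefix directly in the bucket
      # (instead of from the agent console). Names are hypothetical examples.
      import boto3

      BUCKET = "client-backups"            # hypothetical bucket
      PREFIX = "CBB_SERVER01/OldPlan/"     # hypothetical prefix to remove

      s3 = boto3.client("s3")              # add endpoint_url=... for Backblaze/Wasabi S3 endpoints

      for page in s3.get_paginator("list_objects_v2").paginate(Bucket=BUCKET, Prefix=PREFIX):
          keys = [{"Key": obj["Key"]} for obj in page.get("Contents", [])]
          if keys:
              # each listing page holds at most 1000 keys, the same limit delete_objects accepts
              s3.delete_objects(Bucket=BUCKET, Delete={"Objects": keys})
              print(f"Deleted {len(keys)} objects")

      # Then run a repository sync in MSP360 so the reported storage usage
      # matches what is actually left in the bucket.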
  • Unfinished Large Files
    The best way to see what unfinished files you have is the Backblaze portal (https://secure.backblaze.com/b2_buckets.htm), which lists any unfinished large files. I then just go to that particular folder and delete the files that show 0 bytes.
    Ultimately it would be great to have that done automatically, but for now it is worth a bit of manual effort to get the savings that Backblaze affords (without the 90-day minimum that Wasabi has).
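    If the bucket is accessed through Backblaze's S3-compatible API, unfinished large files show up as incomplete multipart uploads, so the cleanup can also be scripted instead of done in the portal. A rough boto3 sketch (the endpoint and bucket name are examples, not from my setup):

      # Sketch: list and abort incomplete multipart uploads (B2 "unfinished large files")
      # via the S3-compatible API. Endpoint and bucket are example values.
      import boto3

      ENDPOINT = "https://s3.us-west-002.backblazeb2.com"   # example B2 S3 endpoint
      BUCKET   = "client-backups"                           # hypothetical bucket

      s3 = boto3.client("s3", endpoint_url=ENDPOINT)

      for page in s3.get_paginator("list_multipart_uploads").paginate(Bucket=BUCKET):
          for upload in page.get("Uploads", []):
              print(f"Aborting {upload['Key']} (started {upload['Initiated']})")
              s3.abort_multipart_upload(Bucket=BUCKET, Key=upload["Key"],
                                        UploadId=upload["UploadId"])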
  • Wasabi Data Deletion Charges
    Or you could send the backups to Backblaze, which has no minimum retention period and costs only $5.00 per TB per month. While we keep our data file backups for 90 days, we run new Image/VHDx backups to the cloud each month and only keep one copy in the Cloud.

    Yes, Backblaze does charge $0.01 per GB for downloads (vs Wasabi's free downloads), but we only do large restores a few times a year - a 200 GB image download costs a whopping $2.00.
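    The back-of-the-envelope math, using only the prices mentioned above (check current pricing before relying on it):

      # Rough cost comparison using the figures from this thread (verify current pricing).
      STORAGE_PER_TB_MONTH = 5.00    # Backblaze B2 storage: $5 per TB per month
      EGRESS_PER_GB        = 0.01    # Backblaze B2 download charge per GB

      image_gb   = 200               # one monthly image kept in the cloud
      restore_gb = 200               # occasional full restore of that image

      monthly_storage = image_gb / 1024 * STORAGE_PER_TB_MONTH
      restore_cost    = restore_gb * EGRESS_PER_GB

      print(f"Storing one 200 GB image: ${monthly_storage:.2f} per month")
      print(f"One 200 GB restore:       ${restore_cost:.2f}")

      # Under a 90-day minimum retention policy, an image deleted after one month
      # is still billed for the remaining ~2 months, so a rotate-monthly strategy
      # effectively pays for ~3 months of storage per image.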
  • Portal Usage - Backup Size
    Starting with the easy one - The 6 files are the individual components of the image. If you look at the backup history detail and select "files" you will see that they are the drive partitions.
    And yes the numbers are different depending on where you look.
    There are at least three different size metrics: one is the sum of the partition sizes, another is the used size of the partitions, and the third is the actual compressed uploaded size of those partitions. In your case, I would expect the actual backed-up size to be 90-100 GB (110 GB minus compression), not 4 GB. The only way it would be 4 GB is if you ran a block-level incremental backup after the full image backup completed.
    If the 4GB is the actual full image then the only explanation is that you excluded a large number of folders from the image.
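    To put illustrative numbers on those three metrics (the used size and compression ratio below are just assumptions; real results depend on the data):

      # Illustrative only: the three size figures you may see for one image backup.
      allocated_gb = 110        # sum of the partition sizes shown in the drive layout
      used_gb      = 100        # space actually in use on those partitions (assumed here)
      compression  = 0.90       # assumed ratio; varies a lot with the data

      print(f"Allocated partitions:             {allocated_gb} GB")
      print(f"Used space:                       {used_gb} GB")
      print(f"Full image uploaded (compressed): ~{used_gb * compression:.0f} GB")

      # A figure like 4 GB only makes sense for a block-level incremental run after
      # the full image completed - changed blocks only, not the whole image.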
  • How to Ensure Local Backups While Cloud Backup Runs
    What is the internet upload speed of your client? We require our clients to have at least 8 Mbps upload speed in order for us to provide Image backups to the cloud. 8 Mbps (1 MB/s) translates to roughly 3 GB of backup per hour, so a 62 GB upload could be done in ~20 hours. A client with 2 TB of data and/or image Cloud backups cannot possibly be supported if they have DSL or a 3 Mbps upstream speed.
    For one large client, it took us two weeks to finish the initial upload of 2 TB to the Cloud over a 15 Mbps upstream connection, but after the initial upload was complete, the nightly block-level file changes amounted to no more than 10 GB or so, usually less.
    We first set up a local file backup to an external 5 TB hard drive, and that cranked along at 20 MB per second - that runs every night, so at least they were getting local backups during the two weeks that the initial cloud upload was running.
    For this client we actually run two Cloud file/data backups in addition to the local backup each night. One Cloud Backup goes to Amazon One Zone IA, and the other goes to Google Nearline. We schedule them to run at different times each night, and they finish in 1-2 hours each.
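    Here is the quick math I use when sizing these plans, as a small sketch (it just restates the figures above; real-world throughput usually lands a bit below line speed, hence the efficiency factor):

      # Rough upload-time estimator: backup size at a given upstream speed.
      def upload_hours(size_gb, upstream_mbps, efficiency=0.8):
          mb_per_sec = upstream_mbps / 8 * efficiency   # Mbps -> MB/s, minus protocol overhead
          return size_gb * 1024 / mb_per_sec / 3600

      for size_gb, speed in [(62, 8), (2048, 8), (2048, 16)]:
          hours = upload_hours(size_gb, speed)
          print(f"{size_gb:>5} GB at {speed:>2} Mbps: ~{hours:.0f} hours (~{hours / 24:.1f} days)")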
    Summary:
    • To provide Disaster Recovery Images and to back up that much data, we would insist on at least 8 Mbps upstream.
    • Once the client gets a faster connection, you should run the Local Backup of both the 2 TB and the Image (minus data) and set up the File backup to run nightly. Schedule the Local Image backup to run, say, Mon-Thurs block level and a Full on Friday night.
    • Start the initial 2 TB Cloud File backup (at 1 MB/s it will still take ~30 days to complete; at 2 MB/s, ~15 days).
    • Once the Initial 2TB upload is complete, schedule the File/Data Cloud Backup to run each night
    • Run the (62 GB) Image backup to the Cloud. Start it on Saturday morning and it should complete easily before Monday morning.
    • Set up the Monthly Cloud Image plan to run on the first Saturday of the month, and if you want, run weekly block-level image backups on the other weekends.
    Let me know how you make out with your client. I am happy to assist in designing your backup plans.
    - Steve
  • How to Ensure Local Backups While Cloud Backup Runs
    We too have some large images, so we have adopted the following approach:
    • We do both Image and file backups.
    • We exclude the data folders from the image backups to keep the images to a manageable size (primarily the OS and application installs).
    • We run separate file data backup plans nightly - one to the local drive and another plan to the Cloud
    • We run the Full image (with data excluded) to the Local drive each Saturday, with incremental image backups each weeknight.
    • Once per month we run an image backup to the Cloud. If the image is still too large to get done in a weekend, we run a monthly incremental and periodically do the Full Image backups (usually over three-day weekends).
    This way, the actual data is backed up every day to both Cloud and Local drive, and the Local image is only a few days old in the worst case. For DR, having an up-to-one-month-old image is fine for our situation - we can apply any program/OS updates after recovery.
    The key principle is that separating the OS and Apps image backups from the data backups allows you to run the data backups every night to both locations regardless of how often and for how long the Image backups run.
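    If it helps to see the cadence side by side, here it is restated in code form (purely a summary of the bullets above, not an MSP360 configuration format):

      # Summary of the plan cadence described above (illustrative only).
      plans = [
          ("File backup -> local drive",      "nightly"),
          ("File backup -> cloud",            "nightly"),
          ("Image (data excluded) -> local",  "full each Saturday, block-level incrementals on weeknights"),
          ("Image (data excluded) -> cloud",  "monthly (full when a long weekend allows, otherwise incremental)"),
      ]

      for plan, schedule in plans:
          print(f"{plan:<34} {schedule}")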
  • Problem trying to configure backup for g suite
    David - We are trying to test the Google App Backup but are getting the same error message: temporarily disabled. Sent in a ticket, but so far no response. Can you send instructions?
    Steve P.
  • Files and folders were skipped during backup, error 1603
    It is a very recently added message. I've been using the SW for 6 years and it only showed up this year.