• BackupFan
    Hello all,

    We have an ongoing issue with a client whose bandwidth is limited: their cloud image backups seem to be far larger than they need to be. The daily (incremental) cloud image backups for these workstations range from 5 GB up to about 20 GB, and the average is probably about 15 GB a day. We are quite positive that the users are not making such massive changes on a daily basis. Unfortunately, this issue is pretty much universal for computers with daily incremental image backups, but this particular client has fairly low bandwidth, and so has a much harder time finishing their backups each day.

    We have tried to relieve some of the pressure by gradually switching about half of these computers to run their cloud image plans only on Saturdays, with a cloud file plan running on the other days. In these cloud file plans we back up little more than the Users folder, where presumably the users would be making all of their changes. These file plans result in very reasonable amounts of data being backed up (less than 10 KB a day).

    However, even with only about half of the computers running cloud image incremental backups daily, we still find machines that have been running their cloud incremental backups for a day and a half and are nowhere near finishing (this morning I found a computer that had been running its cloud image plan for 1 day and 16 hours and was at about 7% completion).
    We are using agent version 7.8.1.147.
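
    For what it's worth, here is the back-of-the-envelope math (Python; the link speeds are illustrative, since I would rather not post the client's actual uplink):

    ```python
    # Ideal upload times for a daily incremental image, ignoring protocol
    # overhead and line contention. Link speeds here are illustrative.

    def hours_to_upload(gigabytes: float, mbps: float) -> float:
        bits = gigabytes * 8 * 1000**3         # GB -> bits (decimal units)
        return bits / (mbps * 1000**2) / 3600  # bits / (bits per second) -> hours

    for size_gb in (5, 15, 20):
        for link_mbps in (5, 10, 25):
            print(f"{size_gb:>2} GB over {link_mbps:>2} Mbps: "
                  f"{hours_to_upload(size_gb, link_mbps):5.1f} h")

    # The stuck machine: 7% done after 40 hours extrapolates to
    # roughly 40 / 0.07 = 571 hours if throughput stays the same.
    print(f"ETA for the 7% machine: {40 / 0.07:.0f} hours")
    ```

    Even at the ideal rate, a 20 GB incremental over a 5 Mbps uplink is roughly a 9-hour upload before any overhead, so one bad day can spill into the next backup window.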

    Does anyone have any recommendations for us, or is anyone having similar problems? Obviously image plans are larger than file plans by definition, since they include the OS as well as files, but this still seems excessive. An average of 15 GB a day seems very unnecessary for the average workstation, and not every network can handle such large daily transfers. We would prefer to keep using image plans, so that we could easily restore these machines in the event of a failure.
  • Steve Putnam
    First of all, we use redirected folders for the majority of our clients, and only back up the server. Workstations are standard builds that can be reloaded fairly quickly, and because we encourage clients to maintain a spare (or two), the rebuild of an individual PC is not an emergency.

    We do local image backups of the server on a daily basis, usually weekly fulls with daily incrementals.
    This provides full operational recovery in the event of a failure of the OS/Hardware.
    Prior to the availability of Synthetic full backups, we did a full cloud image backup only once per month.
    We would exclude from the image backup all data folders as well as temp, the recycle bin, etc. to keep the size down. In a true disaster, having a one-month-old image was acceptable, as the OS and apps typically do not change significantly in a month.
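
    If you want to see what those folders are costing you, a quick Python sketch like this will size up the usual suspects (the paths are illustrative for a stock Windows build, not an MSP360 default list; run it elevated):

    ```python
    # Rough sizing of folders commonly excluded from image backups.
    # Paths are typical for a stock Windows install; adjust to your builds.
    import os

    EXCLUDE_CANDIDATES = [
        r"C:\Windows\Temp",
        r"C:\Windows\SoftwareDistribution\Download",  # Windows Update cache
        r"C:\$Recycle.Bin",
        os.path.expandvars(r"%LOCALAPPDATA%\Temp"),
    ]

    def folder_size_gb(root: str) -> float:
        total = 0
        for dirpath, _dirnames, filenames in os.walk(root, onerror=lambda e: None):
            for name in filenames:
                try:
                    total += os.path.getsize(os.path.join(dirpath, name))
                except OSError:
                    pass  # locked/system files; skip them
        return total / 1024**3

    for path in EXCLUDE_CANDIDATES:
        print(f"{path}: {folder_size_gb(path):.2f} GB")
    ```

    The page file and hibernation file (pagefile.sys, hiberfil.sys) are also worth checking, if your imaging tool does not already skip them.
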
    We also do daily cloud and local file backups, which would be used to bring the server up to date after the image is restored.
    The daily delta for our image backups is typically in the 5-15 GB range, because any change in the OS, the location of temp files, etc. results in changed blocks which need to be backed up again.
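
    A toy sketch of the mechanics, with made-up block and volume sizes, shows the multiplier: a few dozen scattered one-byte changes re-upload a few dozen full blocks.

    ```python
    # Toy illustration of why small, scattered changes inflate an image delta.
    # Image backups track fixed-size blocks; touch one byte in a block and
    # the whole block is re-uploaded. Sizes here are purely illustrative.
    import hashlib

    BLOCK = 1024 * 1024  # 1 MiB

    def block_hashes(data: bytes) -> list:
        return [hashlib.sha256(data[i:i + BLOCK]).digest()
                for i in range(0, len(data), BLOCK)]

    disk = bytearray(100 * BLOCK)        # pretend 100 MiB volume
    before = block_hashes(bytes(disk))

    # Simulate the OS touching one byte in 30 different places.
    for i in range(30):
        disk[i * 3 * BLOCK] ^= 0xFF

    after = block_hashes(bytes(disk))
    changed = sum(a != b for a, b in zip(before, after))
    print(f"{changed} of {len(after)} blocks changed -> "
          f"{changed * BLOCK / 1024**2:.0f} MiB re-uploaded for 30 bytes of change")
    ```
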
    With the synthetic full capability we now run image backups every night for all clients except those with the very slowest link speeds (<5 Mbps).
    The synthetic full gets run on the weekend and takes a tenth of the time that a true full would take.
    For those with slow links, we do a monthly synthetic full and weekly incrementals on the weekends.
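
    As I understand it, the speedup comes from the synthetic full being stitched together on the storage side out of blocks that are already in the cloud, so almost nothing new crosses the wire. A toy model (conceptual only, not MSP360's actual internals):

    ```python
    # Toy model of a synthetic full: the new full is assembled storage-side
    # from blocks already uploaded, so only the incrementals' changed blocks
    # ever crossed the wire.

    full = {0: "base0", 1: "base1", 2: "base2", 3: "base3"}   # last true full
    incrementals = [
        {1: "mon1"},             # Monday: block 1 changed
        {2: "tue2", 3: "tue3"},  # Tuesday: blocks 2 and 3 changed
    ]

    synthetic = dict(full)
    for inc in incrementals:
        synthetic.update(inc)    # overlay newer blocks on the base

    print(synthetic)  # {0: 'base0', 1: 'mon1', 2: 'tue2', 3: 'tue3'}
    # Upload cost: 3 changed blocks total, vs all 4 blocks for a new true full.
    ```
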
    For our clients who are using P2P devices for file sharing, again, we only do an image of the P2P server, not individual workstations on the network.
    Not knowing how your clients are set up, it is hard to make a recommendation, but you should certainly have local and cloud image backups, and utilize synthetic cloud fulls. I recommend using BackBlaze, as there is no minimum retention period. And for disaster recovery, there is no real need to keep more than one or two versions of the image in the cloud.
    For our clients that have only individual machines with the data stored locally, we simply back up the data files to the cloud (and locally if they have a USB HD device). We do not do image backups unless they are willing to pay extra for that service ($10/month).
    Brevity is not my strong suit :)
  • Jim Richardson
    Steve,

    Thanks for sharing this. Some of our image backups from workstations are quite big. I will go back and check whether we are excluding the temp, recycle bin, etc. folders. Can you think of any other folders that you typically exclude from the image backup?

    I see what you are saying about keeping a ready-to-use computer in case of a failure. I like the idea of being able to restore the image to the original hardware, or to a loaner computer, when or if a failure occurs. It seems that this can often save time.

    I am surprised, though, that on a computer with 100 GB on the drive, some daily image backups need to back up 20 GB. Even with the temp OS changes, user file changes, and AV updates, 20 GB seems quite large. I would have expected a backup size closer to 2 GB for the daily image backups.

    Ironically, Windows servers seem to have smaller daily image backups than Windows workstations, in spite of many servers running Active Directory and/or serving as file servers for multiple users.

    Many of the computers that we work with have solid state drives now, so the defrag process should not be a factor. I assume that MSP360 does not back up any data in the areas of the SSD set aside for TRIM and active garbage collection, is that correct?