• BackupFan
    2
    It looks like there are some interesting upgrades available in Backup Agent 7.8 and Management Console 6.3. I see that there are some limitations concerning the GFS retention policy and Immutability if the Forever Forward Incremental schedule is used. I'd be curious to hear any feedback.

    Does anyone know if the GFS retention policy and the Immutability feature are expected to be made available with the Forever Forward Incremental schedule in the near future?
  • Steve Putnam
    35
    The way I see it, they are mutually exclusive. The only value of FFI is to those of us who do not use GFS or immutability.
    It will hopefully give NBF file backups storage consumption similar to what the legacy format has.
    I am still trying to fully understand the new features, but when I do, I will write up my assessment in this forum.
  • BackupFan
    2
    Thanks Steve.
  • Alexander Negrash
    23
    Forever Forward Incremental backup is designed to keep only one backup generation on the storage, i.e., one full and a series of increments. Are you looking for an FFI scheme together with an option to keep several full backups?

    Speaking of Immutability: currently it works only with backup plans that have a GFS schedule enabled. We plan to make it available outside of GFS plans, meaning it will work with Forever Forward Incremental too.
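
    If it helps, here is a toy illustration of how an FFI chain behaves (purely conceptual Python, not our product code; the 7-day retention is just an example): once the retention window fills up, each new run folds the oldest increment into the full, so there is always exactly one full plus a rolling window of increments.

        # Conceptual sketch of a Forever Forward Incremental (FFI) chain.
        # Not MSP360 code -- just an illustration of why only one backup
        # generation (one full + a rolling window of increments) exists.

        RETENTION_DAYS = 7  # assumed retention setting for the illustration

        chain = ["full(day0)"]                       # the single (synthetic) full
        for day in range(1, 15):
            chain.append(f"inc(day{day})")           # every run adds an increment
            if len(chain) - 1 > RETENTION_DAYS:      # retention window exceeded?
                oldest_inc = chain.pop(1)            # take the oldest increment...
                chain[0] = f"full(merged<{oldest_inc})"  # ...and fold it into the full
            print(f"day {day:2}: {len(chain) - 1} increments, full = {chain[0]}")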
  • WSpeed
    0
    Hi,

    Does FFI work with Glacier storage? This would be a game changer for us. We thought we were going to use NBF for long-term storage, but since it required many full uploads for data that is only ever added to, it didn't work for us.

    Hopefully this new FFI works so we can migrate all our customers to the new format and save money.
  • Alexander Negrash
    23
    Hi, FFI should work with S3 Glacier Instant Retrieval. NBF doesn't support the S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive storage classes because they don't provide the instant access to storage that the software needs to perform in-cloud copy operations and synthesize full backups inside the cloud.
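
    To illustrate what "in-cloud copy" means (a conceptual sketch with made-up bucket and key names, not our actual implementation): a synthetic full is assembled from blocks that are already in the bucket via server-side copies, and S3 refuses those copies for objects sitting in the archive-only Glacier classes until they are restored.

        # Hedged illustration (not MSP360 code): a synthetic full is built by
        # server-side copies of blocks that already exist in the bucket.
        import boto3
        from botocore.exceptions import ClientError

        s3 = boto3.client("s3")
        BUCKET = "example-backup-bucket"        # hypothetical names
        SRC_KEY = "chains/chain1/block-0001"
        DST_KEY = "chains/chain2/block-0001"

        try:
            # Works for S3 Standard / IA / Glacier Instant Retrieval objects.
            s3.copy_object(
                Bucket=BUCKET,
                Key=DST_KEY,
                CopySource={"Bucket": BUCKET, "Key": SRC_KEY},
            )
        except ClientError as err:
            # For Glacier Flexible Retrieval / Deep Archive the object is not
            # immediately readable, so the copy fails (InvalidObjectState)
            # unless the object is restored first -- hence no synthetic fulls there.
            print("in-cloud copy refused:", err.response["Error"]["Code"])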
  • WSpeed
    0
    Understood. Are there any plans to support this in the future? We heavily use Flexible Retrieval.
  • Alexander Negrash
    23
    How frequently do you run your backups? As I mentioned in my previous post, synthetic full backup, one of the core features of NBF, wouldn't work with long-term archival storage in the cloud due to technology limitations. In other words, if we added an option to disable synthetic fulls for NBF, you would need to upload the entire data set every time the software runs a full backup. With FFI, a synthetic full happens on every backup plan run after your initial retention period expires.

    Can you give an example of your typical backup plan schedule and retention settings?
  • WSpeed
    0
    In this scenario for long-term storage, we're talking about image exams that need to be stored forever.
    That is to say, no files are ever deleted, only added.

    We run daily backups using the legacy format.
    We haven't used NBF because IMO it's pointless to have multiple full backups of a dataset that we're only adding data to.

    In summary, we have a backup plan that backs up a folder every day at 8 pm and never deletes any files.
  • Alexander Negrash
    23
    Thanks for the details. I will discuss your use case with the R&D team and see what we can do for you.
  • WSpeed
    0
    That's awesome.
    Another thing to add here: imagine a backup plan that uploads 5 TB of data. It may take 3-6 weeks depending on the internet connection, and during business hours we must throttle usage to 30%, so it takes a while to upload everything.

    Sometimes the connection drops or the server gets rebooted and the first full backup fails.

    We would also like your help to be able to continue a failed full backup (the data is already in the cloud), knowing that something happened in the middle of the operation.

    I'm not sure I'm making myself clear, but with the legacy backup format, in cases like these we sometimes have to run the plan 5-6 times over a two-month span until all the data is uploaded. As you might know, s*** happens, and that's what we have to take into account on the first full backup, so we don't have to create a brand-new generation to replace the first one (a full that failed in the middle is pointless to keep in the cloud).
  • Steve Putnam
    35
    The new backup format, as I understand it, will continue "where it left off" rather than having to start all over again. But the big thing is that synthetic fulls process data at a rate of 200-300 GB per hour using in-cloud copying. In my experience synthetic fulls take less than 20% of the time that the initial full took. So a full that took 4 days gets done in less than 15 hours - something easily done over the weekend.
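
    As a rough back-of-the-envelope example (the dataset size and WAN rate here are just assumptions; the 200-300 GB/hour figure is from my own observations):

        # Rough arithmetic behind the estimate above. Dataset size and upload
        # rate are assumed for illustration.
        dataset_gb = 3000                    # hypothetical 3 TB backup set
        upload_gbph = 30                     # assumed WAN upload rate, GB/hour
        synthetic_gbph = 250                 # mid-point of the 200-300 GB/h in-cloud rate

        initial_full_hours = dataset_gb / upload_gbph       # ~100 h, about 4 days
        synthetic_full_hours = dataset_gb / synthetic_gbph  # ~12 h

        print(f"initial full  : {initial_full_hours:.0f} h")
        print(f"synthetic full: {synthetic_full_hours:.0f} h "
              f"({synthetic_full_hours / initial_full_hours:.0%} of the initial full)")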
  • Alexander Negrash
    23
    Steve is correct that NBF with synthetic full takes significantly less time to upload and allows resuming backups. Unfortunately, with S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive storage classes, it's technically impossible to perform synthetic fulls.

    One possible approach is to use one of the S3 hot or cool storage classes for the initial data upload and then use S3 Lifecycle rules to transition objects to archive storage. Still, I think you will have at least two copies of your data in the cloud with this approach. Also, AWS itself has some limitations on how S3 Lifecycle and the Glacier storage classes work together; check out:
    https://docs.aws.amazon.com/AmazonS3/latest/userguide/lifecycle-transition-general-considerations.html
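
    For illustration, such a transition rule looks roughly like this with boto3 (the bucket name, prefix, and the 7-day threshold are placeholders, not recommendations):

        # Hedged sketch of the approach above: land the backup data in a
        # standard class, then let an S3 Lifecycle rule move it to an
        # archive class after a number of days.
        import boto3

        s3 = boto3.client("s3")
        s3.put_bucket_lifecycle_configuration(
            Bucket="example-backup-bucket",          # hypothetical bucket
            LifecycleConfiguration={
                "Rules": [
                    {
                        "ID": "archive-old-backup-objects",
                        "Status": "Enabled",
                        "Filter": {"Prefix": "backups/"},
                        "Transitions": [
                            # "GLACIER" = S3 Glacier Flexible Retrieval
                            {"Days": 7, "StorageClass": "GLACIER"}
                        ],
                    }
                ]
            },
        )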
  • WSpeed
    0
    Hi, thanks for your feedback, Steve. That was the same assumption I had, until I uploaded a lot of data and the backup failed because the MSP360 maintenance window fell in the middle of uploading 2-3 TB of initial data.

    I opened a support ticket, and this is the reply I got when I asked why it didn't pick up where it left off and why it was starting a new full backup, meaning that all the days of data I had already sent were going in the trash:

    "If the backup fails, it needs to be started anew (just the latest run, not the whole chain), leaving the data that was uploaded on the storage and marking it as a failed incremental run. The backup can not be continued, from the same spot, neither in legacy nor in the new backup format plan.
    What actually happens when the incremental backup fails and we start a new backup run: The software checks if the previous backup execution was successful or not and most importantly if the restore point created during the previous backup execution is valid. If the restore point due to backup failure is invalid the software looks for the last valid restore point before it. When the valid restore point is finally found, the software starts listing the information about the data that is on the machine and in the cloud. Once this is done it starts the upload of the data. (some steps like shadow copy creation are skipped to shorten the description of the process)
    That said, according to the screenshot you have provided, the full backup run failed and the last restore point before the full was in a previous chain, hence all your current incrementals that were successfully uploaded belong to a previous backup chain. (the last successful full backup)"

    Later I asked whether, if the first full backup fails, it would reuse the data...

    "Unfortunately no, since the shadow copies are different, and the data on the storage might have an unfinished upload which means that the file is basically useless since part of it is missing. There is a way to use the data that is on the storage already, but it requires a few things such as a successful first full backup and Backup storage that supports synthetic backups. You can read about this option here: https://help.mspbackups.com/backup/about/backup-format/synthetic-full-backup""

    As for the synthetic full, I'm aware of it, and it's wonderful since it doesn't add cost either.
  • WSpeed
    0
    I guess the Lifecycle transition would be the way to go, but the tool would need to handle it when we need to restore data.

    We already have a storage setup in place and working:
    7 days in hot storage,
    then on the 8th day it goes to cold storage (Glacier). With the legacy format it works fine.

    With the new backup format, we could upload to S3 and then send the data to Glacier. That's a test I'm going to try next week to see if it works, especially the download part.
  • Steve Putnam
    35
    WSpeed, given that the files you upload are never modified and never deleted, the old backup format is probably a better fit. One of the things that has tripped me up is the cost of lifecycle transitions from S3 to Glacier: they charge five cents for every thousand objects migrated.
    I did some calculations a while back and determined that transitioning to Glacier was only cost-effective if the average file size was over half a megabyte.
    I suspect the average size of your images is significantly more than that, so even in legacy format it's worth it.
    Going forward, we are sending all backups, file and image, to Backblaze, as it only costs about half a cent ($0.005) per gigabyte per month, supports synthetic fulls, and has no minimum retention.
    The API call charges are also significantly lower than Amazon's.
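
    For anyone curious, here is the rough math behind that half-a-megabyte figure. The five-cents-per-thousand transition fee is from my note above; the per-GB prices are approximate list prices that vary by region and over time, so treat this as a sketch:

        # Rough break-even math for S3 -> Glacier lifecycle transitions.
        # $0.05 per 1,000 transitions is the figure quoted above; per-GB
        # prices are approximate us-east-1 list prices.
        transition_cost_per_object = 0.05 / 1000          # USD per object
        s3_standard_per_gb_month   = 0.023                # USD, approx.
        glacier_flex_per_gb_month  = 0.0036               # USD, approx.
        saving_per_gb_month = s3_standard_per_gb_month - glacier_flex_per_gb_month

        file_size_gb = 0.5 / 1024                         # a 0.5 MB file
        monthly_saving = file_size_gb * saving_per_gb_month
        months_to_break_even = transition_cost_per_object / monthly_saving

        # A 0.5 MB file pays the transition fee back in roughly 5 months of
        # storage savings; files much smaller than that take years.
        print(f"payback on the transition fee: ~{months_to_break_even:.1f} months")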
  • WSpeed
    0
    Thanks Steve. I'm considering using other storage options, but as these are very important files, we still use Amazon.

    The bottom line is that Backblaze is not even mentioned by Gartner: https://aws.amazon.com/resources/analyst-reports/22-global-gartner-mq-cips/?nc1=h_ls

    Regarding the lifecycle transition, if we could use NBF the number of requests would be greatly reduced. For example, in a case where we have 5.8 million files, NBF would make it much cheaper to store the data, run the transactions, and change the lifecycle class.

    Anyway, hopefully the team makes some small changes to FFI and intelligent tracking to allow the use of long-term storage.

    As we say here, we already have the knife and the cheese in hand; maybe with a couple of changes it could become possible.

    Until then, we still use legacy.
  • WSpeed
    0
    Hi Team,
    Are you able to use the new backup format and Glacier Instant Retrieval for synthetic fulls?
    I tried to configure it, but it prompted me to disable the synthetic full despite using a supported storage class. As per the documentation, https://help.mspbackups.com/backup/about/backup-format/synthetic-full-backup :
    Support for Major Storage Providers
    A synthetic backup type is supported by the following storage providers:

    Amazon S3 (except S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive storage classes)

    I opened a ticket and they said the documentation needed to be updated.

    Just checking whether I'm confused here, or maybe I'm doing something wrong in trying to combine NBF and Glacier Instant Retrieval (which is the new Glacier storage class).
  • Steve Putnam
    35
    I do not believe that any of the Glacier options support the in-cloud copying necessary for synthetic fulls. Curious as to why you want to use Glacier - I know it is less expensive, but Backblaze and Wasabi are not much more per GB and they do support synthetic fulls.
  • WSpeed
    0

    Thanks for your valuable inputs as always Steve.

    I asked because they mentioned it is supported, but in fact it's not in the agent. I received a reply that in an upcoming version it will be available, but only for Instant Retrieval, which is good.

    The reason we want to use Glacier for specific projects is that we have customers with huge amounts of archival data, something like 6, 7, even 10 million files, which need to be stored for life. If we use NBF, we get a lower cost for the initial upload (fewer transactions), the archival requirement satisfied, and a fast download. The way the agent works with the legacy backup format, unarchiving files 1,000 at a time, it would take a year to download the entire dataset, whereas with NBF it would be much faster.
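
    Just to show why I say it would take around a year (the 1,000-files-per-restore behaviour is what we see with the legacy format; the hours per batch is my assumption for a standard Glacier retrieval, and strictly serial is the worst case):

        # Back-of-envelope for the restore problem described above.
        total_files = 6_000_000      # one of the archives mentioned above
        files_per_batch = 1_000      # legacy-format restore batch size
        hours_per_batch = 4          # assumption: standard Glacier restore latency

        batches = total_files // files_per_batch            # 6,000 batches
        serial_days = batches * hours_per_batch / 24         # ~1,000 days if strictly serial
        print(f"{batches} batches, ~{serial_days:.0f} days if restored one batch at a time")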

    Finally, the reason we don't use Wasabi or Backblaze (despite the fact that I know they're a lot cheaper than any of the AWS services) is that they are not even on the Gartner Magic Quadrant:
    https://d2908q01vomqb2.cloudfront.net/da4b9237bacccdf19c0760cab7aec4a8359010b0/2022/10/28/CIPS-MQ-991x1024.png

    Those players don't provide the SLA and security we require for our customers.

    Thanks again for the inputs.

    Best regards,
  • Alexander Negrash
    23
    Synthetic full for Glacier IR is currently supported only at the agent level, starting from v7.8.2. You can try to configure backup plans there, and we are working to make it available in the web management console.
  • Steve Putnam
    35
    BackupFan - I understand your concerns with BB and Wasabi - we actually use them as a SECOND cloud backup after our primary, Amazon S3 IA.