• WSpeed
    Hi Team,
    We started using the NBF and have had great results with restore times.

    We are considering using it on long-term storage projects that use Glacier.

    Everything is pretty much the same; the only thing that isn't clear to us is how the in-cloud copy works with Glacier, which requires waiting for files to be moved from cold to hot storage before they become available.

    How does this new backup format work with Glacier? Does the synthetic full work, or do we always need to re-upload those massive terabytes of data whenever the NBF requires a full backup?

    Our concerns are:
    A - Do we need to run a full backup from time to time and re-upload everything?
    B - If not, do we incur retrieval-from-cold-storage costs for the in-cloud mechanism to work?

    Thanks again for the clarifications.

    Best regards,
  • David Gugick
    I do not believe synthetic fulls work in S3 Glacier. Glacier is designed for archival storage, and I do not think it has the APIs necessary to perform the synthetic full. When it comes time to restore from Glacier, you can use standard or expedited retrieval depending on your SLAs for getting access to the data; expedited is more expensive. There is a newer storage class called S3 Glacier Instant Retrieval that we support, and I believe that storage class will allow you to access the data immediately.
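    For context, restoring from Glacier is a two-step operation: you first request a temporary restore at a given tier, then wait for it to complete. Here is a minimal boto3 sketch; the bucket and key names are placeholders:

    ```python
    import boto3

    s3 = boto3.client("s3")

    # Step 1: ask S3 to stage a temporary, readable copy of the archived object.
    s3.restore_object(
        Bucket="my-backup-bucket",          # placeholder
        Key="backups/full-backup-001.dat",  # placeholder
        RestoreRequest={
            "Days": 7,  # how long the restored copy stays available
            # "Expedited" is faster but pricier; "Bulk" is the cheapest tier.
            "GlacierJobParameters": {"Tier": "Standard"},
        },
    )

    # Step 2: poll until the restore completes; the Restore header flips
    # from ongoing-request="true" to ongoing-request="false".
    resp = s3.head_object(Bucket="my-backup-bucket",
                          Key="backups/full-backup-001.dat")
    print(resp.get("Restore"))
    ```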

    I do not generally recommend backing up directly to S3 Glacier. You can do it, but in my experience Glacier is best used for archival storage; in other words, long-term storage of data that you do not plan to change and do not plan to restore from unless absolutely necessary, i.e., for compliance and emergencies. You could back up to a storage class like S3 Standard or S3 Standard-IA (Infrequent Access) and use a lifecycle policy to automatically move that data to Glacier for long-term storage after some time has passed, for example 30 or 60 days. If you do that and the transition window falls outside your full backup schedule, then you are effectively moving only data that will never change.
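    If you want to script that lifecycle approach rather than set it in the console, a minimal boto3 sketch looks like the following; the bucket name, prefix, and 60-day window are placeholders:

    ```python
    import boto3

    s3 = boto3.client("s3")

    # Transition everything under the backups/ prefix to Glacier after 60 days.
    s3.put_bucket_lifecycle_configuration(
        Bucket="my-backup-bucket",  # placeholder
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "move-backups-to-glacier",
                    "Filter": {"Prefix": "backups/"},  # placeholder prefix
                    "Status": "Enabled",
                    "Transitions": [
                        # "GLACIER" = Flexible Retrieval; "GLACIER_IR" is Instant
                        # Retrieval and "DEEP_ARCHIVE" is Deep Archive.
                        {"Days": 60, "StorageClass": "GLACIER"}
                    ],
                }
            ]
        },
    )
    ```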

    Glacier does have minimum retention periods. If I'm not mistaken, regular Glacier is 90 days and Glacier Deep Archive is 180 days. You can use standard or expedited retrieval, and then there is the new Glacier Instant Retrieval storage class. It can get a little confusing, and pricing varies between the different storage classes and restore options, not only in what you pay for the storage but in how much it will cost you to restore the data - egress from Glacier.

    So I would strongly encourage you to use the AWS calculator for Glacier: type in some examples of how much data you might be restoring, across the different storage classes and restore types, so you can better understand what you will pay to restore that data, and so you can negotiate these prices properly, in advance, with your customers so there are no surprises. (There is a rough cost sketch after the links below.)

    https://calculator.aws/#/
    https://aws.amazon.com/s3/storage-classes
    https://aws.amazon.com/s3/storage-classes-infographic/
    https://aws.amazon.com/s3/storage-classes/glacier/
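    As a back-of-the-envelope illustration of the arithmetic the calculator does for you, here is a tiny sketch; the per-GB rates below are placeholders, not real AWS prices, so plug in current numbers from the calculator:

    ```python
    def restore_cost_estimate(data_gb: float,
                              retrieval_per_gb: float,
                              egress_per_gb: float,
                              request_fees: float = 0.0) -> float:
        """Rough Glacier restore cost: retrieval + egress + request fees."""
        return data_gb * (retrieval_per_gb + egress_per_gb) + request_fees

    # Placeholder rates only -- check the calculator links above for real pricing.
    tb = 1024  # GB per TB; close enough for a rough estimate
    print(f"~${restore_cost_estimate(5 * tb, 0.01, 0.09):,.2f} to restore 5 TB")
    ```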
  • WSpeed
    Thanks David for the clarification.
    We already use it for archival purposes, but with the legacy backup format, which works fine.
    I was thinking of using the NBF because of the reduced number of files, which would be cheaper.

    The problem is that we don't have a folder that is frozen and will never change again. If we did, we could run a full backup with NBF and forget about it.

    However, we do have one folder where all the files (images) will never change, and they can be archived directly. Although the NBF could be a very good solution, I don't think it would be a fit for this scenario.

    It would be great if it could, as it would be much cheaper for us and faster to recover from in a disaster.