Comments

  • Incremental plus periodic full -- oldest backup data set "days to purge" is "keep forever"
    I don't think that is what I did. I have several other backup plans that are similar, and they all show the same property: the oldest backup is marked keep forever. I created them in one pass, setting all the backup and retention properties at the same time. However, I know how to delete the old backup set when it is past its time. I just thought someone could confirm that what I was seeing is expected; apparently it is not. When the time comes I'll report back on this forum.
  • How long will legacy file backup be supported?
    According to an entry currently at the end of https://forum.msp360.com/discussion/2421/configuring-incremental-backups-with-periodic-full-backup, there is no current plan to retire the legacy backup format.
  • Immutable Backups
    Here's my proposal for immutability. You know the time to purge for each full backup (it shows in the storage view), so when you do an incremental backup on top of that full backup, set the immutable time on the incremental to the time to purge of the underlying full backup.
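
    To make the mechanism concrete, here is a minimal sketch of the idea using the S3 Object Lock API via boto3. The bucket name, object key, file name, and purge date are all hypothetical, and this only illustrates the rule I'm proposing, not anything the product actually does today.

    ```python
    from datetime import datetime, timezone
    import boto3

    s3 = boto3.client("s3")

    # Hypothetical: the full backup this incremental depends on is due to purge on this date.
    full_backup_purge_date = datetime(2025, 12, 31, tzinfo=timezone.utc)

    # Upload the incremental with an Object Lock retention matching that purge date,
    # so it cannot be deleted or overwritten before its parent full backup is allowed to go.
    with open("incremental-0042.cbl", "rb") as data:        # hypothetical local file
        s3.put_object(
            Bucket="my-backup-bucket",                      # hypothetical bucket with Object Lock enabled
            Key="backups/incremental-0042.cbl",             # hypothetical object key
            Body=data,
            ObjectLockMode="COMPLIANCE",
            ObjectLockRetainUntilDate=full_backup_purge_date,
        )
    ```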
  • New backup format in V7 Cloudberry Backup
    While you can use the new format without GFS, the retention flexibility of the legacy backup format is basically lost. You can have one full backup starting out and incrementals on top of that forever, but nothing is ever deleted. If you want to specify some limit on retention time, you need to use GFS and have multiple full backups. Each full backup costs another full amount of storage and (for me, a Desktop Pro owner) eats into my 5TB limit. Further, if I wanted to take advantage of immutability, it only applies to the full backups, not to incremental backups, so one would want rather frequent full backups, like weekly with a 2 or 3 week retention. That's a large storage multiplier and eats into my 5TB limit too. I'm backing up a few hundred GB on one computer. Maybe I have to buy Ultimate? When I look at how CrashPlan worked (I used it before they went enterprise-only), I think there really should be a different way of doing this. What am I missing?
  • Immutable Backups
    I was looking at Arq and how they handled this, just as a reference. If I'm reading this correctly, they set the object lock time period to be the sum of the specified "keeping" time for the backup plus the interval between full backups or their equivalent "refresh" (though I'm not certain I understand it perfectly). Anyway, the proposal I'd make is that you set the object lock time period to be the sum of the keeping time and the interval between equivalent backups, and for each incremental you set the object lock time to basically the keeping time for the relevant full backup plus the time to the next full backup. That way all files in the full+incremental chain are kept for at least the GFS keeping time, and every incremental restore point stays available. If one is trying to protect against ransomware by having backups, this is what protects the files created between full backups. I kind of assume in the ransomware case that the last backups are suspect and I will need to go back to a restore point that precedes the attack. Protecting the incrementals with immutability means I can pick a restore point that is closer to the time of attack and rescue more files.
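
    To put numbers on that rule, here is a small sketch of the retention arithmetic as I understand it. The 30-day keeping period and 7-day interval between fulls are made-up values, and this reflects my reading of Arq's scheme rather than anything MSP360 currently computes.

    ```python
    from datetime import datetime, timedelta, timezone

    # Made-up numbers, purely for illustration.
    keeping_period = timedelta(days=30)   # GFS "keep backups for" setting
    full_interval  = timedelta(days=7)    # time between full backups (Arq's "refresh" interval)

    def full_lock_until(full_time: datetime) -> datetime:
        # Lock a full backup for the keeping period plus one full-backup interval,
        # so it cannot disappear before its replacement exists and its retention expires.
        return full_time + keeping_period + full_interval

    def incremental_lock_until(parent_full_time: datetime) -> datetime:
        # Lock each incremental until its parent full's keeping period plus the time
        # to the next full has passed -- the same date as the parent's lock, which is
        # the point: the whole chain stays immutable as a unit.
        return parent_full_time + keeping_period + full_interval

    full_time = datetime(2025, 1, 5, tzinfo=timezone.utc)
    print(full_lock_until(full_time))          # 2025-02-11 00:00:00+00:00
    print(incremental_lock_until(full_time))   # same date: every restore point is protected
    ```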

    I haven't set up real backups with the new backup format yet. I'm not 100% clear on whether you recommend switching to it now. That is, is the feature fully ironed out? I know it is just out of beta; it was in beta until fairly recently.

    Given that each full backup adds a substantial amount to my storage, I haven't settled on the right compromise between having a lot of GFS full backups to restore to and the extra storage that requires. I might think that doing only, say, annual full backups, with incrementals providing the restore points across the year and a keeping period of just under a year, would be a good solution: the backup storage would only be about twice my dataset size, plus some for all the incrementals. Perhaps you might suggest weekly full backups with a keeping period of, say, 2 weeks. I don't really know. My point is that not protecting the incrementals with immutability seems to make the feature less complete.
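
    To put rough numbers on that trade-off, here is a back-of-the-envelope estimate. The dataset size, incremental size, and retention counts are assumptions I'm plugging in, not measurements from my own plans.

    ```python
    # Assumed figures, for illustration only.
    dataset_gb     = 300   # size of the data being backed up
    incremental_gb = 5     # average size of one incremental run

    def storage_gb(fulls_retained, incrementals_retained):
        # Each retained full costs roughly a whole copy of the dataset;
        # each retained incremental only costs the changed data.
        return fulls_retained * dataset_gb + incrementals_retained * incremental_gb

    # Annual full with weekly incrementals kept just under a year: at most two fulls
    # coexist (the old one purges only after the new one lands) plus ~52 incrementals.
    print(storage_gb(2, 52))    # 860 GB, roughly 2.9x the dataset

    # Weekly fulls with a 2-3 week keeping period: roughly four fulls coexist,
    # plus a handful of incrementals each.
    print(storage_gb(4, 12))    # 1260 GB, roughly 4.2x the dataset
    ```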

    for reference https://www.arqbackup.com/documentation/arq7/English.lproj/objectLock.html
  • Immutable Backups
    It seems that only the GFS full backups are made immutable. Is there any way to apply this to the incremental backups based on a given full backup?
  • New backup format in V7 Cloudberry Backup
    While I'm at it, I'll ask: when will the new backup format be supported on Linux?
  • New backup format in V7 Cloudberry Backup
    Is now a good time to start switching to the new backup format?
  • Need Clarification About 1 TB Storage Limit
    I had trouble finding the information on how the limit is totaled and applied, and I filed a ticket to get clarification. First, today, the desktop edition limit is 5TB: https://help.msp360.com/cloudberry-backup/overview/compare-editions . Each backup plan to the cloud is summed up; for example, I run 3 backup plans on one computer, each to a different bucket on a different server. The total across those is calculated, and if it exceeds 5TB, backups are not allowed until I do something to reduce that total. I believe that it is the cloud total that matters, so with compression it could be less, and with versions backed up it could be more (relative to what is on the local disk that is being backed up).

    The exception is backup plans to the local file system. Backups to the local file system (including, I'm pretty sure, a local NAS that is mounted as part of your file system rather than a local S3 server) do not count toward the limit.

    Backups to locally sited S3 such as Minio are considered cloud and are part of the 5TB limit, even if on the local LAN subnet.
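
    In case it helps, here is how I understand the totaling works, sketched with made-up plan names and sizes; this is just my reading of the rules above, not anything official.

    ```python
    LIMIT_TB = 5.0

    # Hypothetical plans on one computer; "stored_tb" is what each plan occupies in its bucket.
    plans = [
        {"name": "documents-to-s3",  "destination": "cloud", "stored_tb": 1.8},
        {"name": "photos-to-wasabi", "destination": "cloud", "stored_tb": 2.4},
        {"name": "media-to-minio",   "destination": "cloud", "stored_tb": 1.1},  # local MinIO still counts as cloud
        {"name": "nightly-to-nas",   "destination": "local", "stored_tb": 3.0},  # local file system: exempt
    ]

    # Only cloud destinations count, summed across every plan on the machine.
    cloud_total = sum(p["stored_tb"] for p in plans if p["destination"] == "cloud")
    print(f"cloud total: {cloud_total:.1f} TB")                       # 5.3 TB
    print("over the limit" if cloud_total > LIMIT_TB else "within the limit")
    ```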

    I'm only posting this to help others looking for the same information. Hope it helps.
  • New backup format in V7 Cloudberry Backup
    When would be a good time to start using the new format? I'm thinking I'll run my backups in parallel: continue my current plans but start a new plan backing up the same set of files and directories in the new format. Then when the new format is stable, I can switch over and trash the old buckets. But I don't want to start until you think it is stable. I guess beta is not truly stable, so when is the new format expected to exit beta?
  • CloudBerry Explorer and CloudBerry Backup trust Amazon Trust Services?
    I looked at my Windows trusted root certificates and saw I have an Amazon cert there. Also, I found this: https://aws.amazon.com/blogs/security/how-to-prepare-for-aws-move-to-its-own-certificate-authority/ and when I clicked on the test links, other Amazon certs got added. It appears that if your Windows 10 is updated properly, this should be seamless.
  • CloudBerry Explorer and CloudBerry Backup trust Amazon Trust Services?
    Just following up: I have the same concern. We have until March 2021.