Trying Backblaze for the first time (was using S3). I started the backup last night, checked it this morning, and it had apparently gotten to about 1% and then just stopped sending data. The backup was still running but showed no indication of transferring: the elapsed time kept counting and it showed 4 files in various progress states, but apparently nothing had been transferred for 10 hours. I tried checking the log files but I don't see any particular errors; I may not be looking at the right logs, though. As far as I know the internet didn't go out. I paused the backup, then resumed it, and it's back to transferring again. I hope this isn't common with Backblaze. Any thoughts on what happened?
I would ask that you submit the logs to support, either via the diagnostic option on the agent's Tools toolbar or from the management interface under the gear icon for that endpoint. There may be something in the logs that support can spot.
I'll move the post. Without some error in the logs, it's hard to know whether the problem you had was just an intermittent one. Can you clarify the backup version, the type of backup (file, image, VM, etc.), and whether you are using the new backup format released with 7.0 or the legacy backup format? You can tell quickly on the backup wizard's Retention tab: if you see GFS options, you are using the new format; if you see file-version-based retention options, you are using the legacy format. Thanks.
Thank you David, and sorry for any confusion. The version is 7.1.3.28. It's doing a file backup. The backup format is legacy, I believe. Which is interesting: I cloned the S3 plan to make the Backblaze plan, and I guess it carried over the legacy format, because it doesn't have the same retention options as when I use the wizard from scratch. Since it has barely backed up any of the 360 GB it has queued, I may stop it and start over by creating a new plan from scratch.
The S3 backup has been running for a few years without an issue, so I was surprised to find the Backblaze backup stalled. Hopefully it was just a fluke.
Using the legacy backup format is fine and fully supported, and it will continue to be supported for a long time, so I don't suspect that's the issue. If you continue to run into the same problem, please reply here.
Well, it keeps happening. I submitted a support request. I also figured out how to get the logs, and I see a lot of:
2021-09-09 10:32:48,330 [CL] [10] WARN - Generating chunks is idle for 675 minutes. Next chunk: 3. Object: xxxx
and
2021-09-09 10:33:18,036 [CL] [36] ERROR - On chunk finish. Chunk info: Status:Cancelled Number:2 Length:1048576000 IsLast:False Offset:1048576000, Exception: CloudBerryLab.Client.Chunks.NotAllChunksUploadedException: Not all chunks were uploaded. False
and
Unable to write data to the transport connection: An established connection was aborted by the software in your host machine.
No disk errors in the Windows event log.
The other log, for the S3 plan, goes back to 2018, and I don't see anything like the above.
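If I'm reading those errors right, the client splits each file into fixed-size chunks (mine was set to 1 GB, matching the Length:1048576000 in the log) and uploads them in turn, and when one chunk's connection gets aborted mid-transfer the whole upload ends up cancelled with that NotAllChunksUploadedException. A rough sketch of the general pattern in Python - to be clear, this is not the agent's actual code, and upload_chunk is a hypothetical stand-in for whatever the real transport call is:

# Rough sketch of a chunked-upload loop; upload_chunk(number, data) is a
# hypothetical stand-in for the backup agent's real transport call.
from concurrent.futures import ThreadPoolExecutor

CHUNK_SIZE = 1_048_576_000  # 1 GB, matching the Length in the log above

def upload_file_in_chunks(path, upload_chunk, per_chunk_timeout=600):
    with open(path, "rb") as f, ThreadPoolExecutor(max_workers=1) as pool:
        number = 0
        while True:
            data = f.read(CHUNK_SIZE)
            if not data:
                break
            number += 1
            # A per-chunk timeout would surface a stall as an error instead
            # of letting the job sit idle for 675 minutes like mine did.
            pool.submit(upload_chunk, number, data).result(timeout=per_chunk_timeout)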
What sort of irks me is that the program doesn't stop with some message that there's a problem. It just... stops and sits there like it's still doing the backup. This is all just to test Backblaze, and so far it's not looking good.
Well, before I even finished the above post, I got a response from support. They said the chunk size was too high; it's set to 1 GB. They recommend setting it back to the default of 10 MB. I'll see if that's the issue!
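Out of curiosity I did the arithmetic on what that change means for my ~360 GB backlog (just back-of-the-envelope math, nothing official):

# Back-of-the-envelope chunk counts for a ~360 GB backlog at each
# chunk size; purely illustrative arithmetic.
backlog_mb = 360 * 1024
for chunk_mb in (1000, 10):  # my old 1 GB setting vs. the 10 MB default
    print(f"{chunk_mb:>4} MB chunks -> ~{backlog_mb / chunk_mb:,.0f} chunks")
# 1000 MB -> ~369 chunks; 10 MB -> ~36,864 chunks

Many more chunks, but each one is small enough that a failed or stalled transfer costs little to retry.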
How many threads are you using for uploads? Is this a purchased Pro edition, or are you using the freeware? I ask because the freeware is limited to single-threaded uploads. I don't believe Backblaze has posted any upstream bandwidth limits for B2; any limits are usually single-stream limits, but products like ours that can do multi-stream uploads through multi-threading can get far superior speed.
1. Try increasing the number of threads - maybe to 7. While I agree that 20 Mbit is low if your upstream bandwidth is higher and available, you may have more luck running some additional streams (rough numbers in the sketch after this list). I assume you have at least 8 CPU cores; if you have more, you could try 8 threads as a test.
2. Try using a larger chunk size. A 10 MB chunk size may be too low, as it will create a large number of chunks, and that requires additional I/O. Maybe try 50 MB and see how that goes.
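To illustrate the thread suggestion in point 1, here's a crude ceiling estimate; the 20 Mbit/s per-stream cap is just an assumed figure for the example, not a documented B2 limit:

# Crude aggregate-throughput ceiling if each upload stream were capped.
# The 20 Mbit/s per-stream figure is an assumption for illustration.
PER_STREAM_MBIT = 20
for threads in (1, 7, 8):
    print(f"{threads} thread(s) -> up to ~{threads * PER_STREAM_MBIT} Mbit/s aggregate")

In practice the real ceiling also depends on CPU, disk I/O, and your actual upstream bandwidth, so treat those numbers as an upper bound.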