Bareos and Backblaze integration
For this to work, the bareos-storage-droplet package must be installed.
Unfortunately, this package is not available on Ubuntu (which packages are available for which distribution is listed in the documentation).
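On a distribution where the package is shipped, installation is straightforward; a minimal sketch, assuming the official Bareos repository is already configured on an RPM-based system:
dnf install bareos-storage-droplet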
Create a device on the storage daemon in /etc/bareos/bareos-sd.d/device/S3_ObjectStorage.conf with the following contents:
Device {
  Name = Backblaze_S3_Object1
  Media Type = S3_Object1
  Archive Device = S3 Object Storage
  Device Options = "profile=/etc/bareos/bareos-sd.d/device/droplet/backblaze.profile,bucket=test-bareos,chunksize=100M,iothreads=8"
  Device Type = droplet
  Label Media = yes
  Random Access = yes
  Automatic Mount = yes
  Removable Media = no
  Always Open = no
  Description = "S3 device"
  Maximum Concurrent Jobs = 1
}
Where test-bareos is the name of the bucket.
Pay attention to the chunk size, because restore speed depends on it. As a rule of thumb, the chunk should be about twice the size of the largest file being backed up. Setting it too small or too large is not ideal; the value is tuned during initial setup and operation.
In the bucket the data is stored as <volume name>/<chunk id>, where <chunk id> is a four-digit number.
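For illustration (the volume name Full-0001 below is hypothetical), a listing of the test-bareos bucket might look like this:
Full-0001/0000
Full-0001/0001
Full-0001/0002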
The file /etc/bareos/bareos-sd.d/device/droplet/backblaze.profile looks like this:
host = s3.us-west-001.backblazeb2.com:443
use_https = true
backend = s3
access_key = "<keyID>"
secret_key = "<applicationKey>"
pricing_dir = ""
Before filling it out, you need to create an Application key.
Where:
host - copy the address from S3 Endpoint, not forgetting to specify the HTTPS port (443); otherwise it won't work.
access_key - copy from keyID.
secret_key - copy from applicationKey.
pricing_dir - leave it blank.
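Once the device and the profile are in place, the storage daemon has to be restarted so it picks up the new configuration; a minimal sketch, assuming a systemd-based system:
systemctl restart bareos-sd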
This integration has several advantages.
When a backup runs, the data is uploaded to S3 immediately.
Thanks to the iothreads option, write failures during upload can be avoided, because the data is buffered.
Next, we move on to the director and configure it.
In the file /etc/bareos/bareos-dir.d/storage/S3_Object.conf we write:
Storage {
  Name = S3_Object
  Address = ""
  Password = ""
  Device = "Backblaze_S3_Object1"
  Media Type = S3_Object1
}
Where:
Address - the address of your storage server.
Password - the connection password (it must match the password set for the director in the storage daemon's configuration).
Device and Media Type - taken from the storage daemon configuration.
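A filled-in sketch (the address and password below are placeholder values, not taken from the article):
Storage {
  Name = S3_Object
  Address = "bareos-sd.example.com"
  Password = "some-strong-password"
  Device = "Backblaze_S3_Object1"
  Media Type = S3_Object1
}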
Next, in order to use this new storage, you need to register it in the pools. Example:
Pool {
  ...
  Storage = S3_Object
  ...
}
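After reloading the director configuration, the new storage can be checked from bconsole; a minimal sketch (the job name backup-example-fd is hypothetical):
reload
status storage=S3_Object
run job=backup-example-fd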