Some of our community members have repeatedly asked for Backblaze B2 Cloud Storage and MinIO integration. B2 is competitively priced and has a huge fan following, and we also heard from the Backblaze team that they are actively expanding their B2 cloud storage service. We have added experimental support for a Backblaze B2 backend in MinIO, bringing S3-compatible API support to their B2 service.
MinIO internally translates all incoming S3 API calls into equivalent B2 storage API calls, which means that all MinIO buckets and objects are stored as native B2 buckets and objects. The S3 object layer is transparent to applications that use the S3 API. This way you can use both the Amazon S3 and B2 APIs simultaneously without compromising any features.
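To make the translation concrete, here is a rough, illustrative sketch of how a few common S3 operations map onto native B2 API calls. This mapping is our simplification for explanation purposes; the exact calls the gateway issues internally may differ.

```python
# Illustrative (simplified) mapping of S3 operations to native B2 API
# calls. The gateway performs this kind of translation internally;
# applications only ever see the S3 side.
S3_TO_B2 = {
    'ListBuckets':  'b2_list_buckets',
    'MakeBucket':   'b2_create_bucket',
    'PutObject':    'b2_upload_file',
    'GetObject':    'b2_download_file_by_name',
    'DeleteObject': 'b2_delete_file_version',
}

for s3_call, b2_call in S3_TO_B2.items():
    print(f'{s3_call:12s} -> {b2_call}')
```

Because the objects are stored natively, data written through the gateway remains directly accessible through the B2 APIs as well.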
Download pre-built binaries from:
- Linux amd64 (https://data.minio.io:10000/minio-b2/linux-amd64/minio)
- MacOS amd64 (https://data.minio.io:10000/minio-b2/darwin-amd64/minio)
Source is available at https://github.com/minio/minio/pull/5002
Install on Linux amd64
wget https://data.minio.io:10000/minio-b2/linux-amd64/minio
chmod +x minio
Start MinIO Server
Once you have your B2 account ID and application key, export them as environment variables and start the gateway.
export MINIO_ACCESS_KEY=<your_b2_account_id>
export MINIO_SECRET_KEY=<your_b2_application_key>
minio gateway b2
To test your setup, point your browser to http://localhost:9000. Use the same B2 credentials to log in and access your data.
Download the MinIO client (mc), which provides a modern alternative to UNIX coreutils such as ls, cat, cp, mirror, diff etc. It supports filesystems and Amazon S3 compatible cloud storage services.
mc config host add myb2 http://localhost:9000 b2_account_id b2_application_key
Once you have configured mc, you can use subcommands like ls, cp and mirror to manage your data.
mc ls myb2
[2017-02-22 01:50:43 PST]     0B b2-bucket/
[2017-02-26 21:43:51 PST]     0B my-bucket/
[2017-02-26 22:10:11 PST]     0B test-bucket1/
from minio import Minio
from minio.error import ResponseError

client = Minio('localhost:9000',
               access_key='b2_account_id',
               secret_key='b2_application_key',
               secure=False)

# Get a full object
try:
    data = client.get_object('my-bucketname', 'my-objectname')
    with open('my-testfile', 'wb') as file_data:
        for d in data.stream(32*1024):
            file_data.write(d)
except ResponseError as err:
    print(err)
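Uploading works the same way in reverse. A minimal sketch (the helper name and the commented-out bucket/object names are ours, not part of any MinIO API) that wraps put_object for in-memory data:

```python
import io


def upload_bytes(client, bucket, name, payload):
    """Upload an in-memory bytes payload through the S3 API; the
    gateway stores it as a native B2 object. put_object expects a
    stream plus an explicit length."""
    client.put_object(bucket, name, io.BytesIO(payload), len(payload))


# Usage against a running gateway (requires the server started above):
# from minio import Minio
# client = Minio('localhost:9000', access_key='b2_account_id',
#                secret_key='b2_application_key', secure=False)
# upload_bytes(client, 'my-bucketname', 'my-objectname', b'hello b2')
```

Any S3 client that lets you override the endpoint should work the same way against the gateway.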
Backblaze B2 does not support the CopyObject and CopyObjectPart APIs, which are rarely used. Though it would be possible to emulate them at the MinIO layer using GetObject and PutObject, doing so would incur additional data transfer costs.
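If your application does need a copy, you can perform the same GetObject + PutObject round trip yourself. A minimal sketch (the helper name is ours, not a MinIO API); note the object bytes travel gateway → client → gateway, so B2 download charges apply:

```python
import io


def emulate_copy(client, src_bucket, src_name, dst_bucket, dst_name):
    """Emulate CopyObject with a GetObject followed by a PutObject.
    The data makes a full round trip through the client, so transfer
    costs apply, and the whole object is buffered in memory here."""
    resp = client.get_object(src_bucket, src_name)
    data = resp.read()
    client.put_object(dst_bucket, dst_name, io.BytesIO(data), len(data))
```

For large objects you would want to stream in chunks rather than buffer the whole payload, but the cost characteristics stay the same.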
Backblaze B2 support is available now in the MinIO master branch; please test with the latest code:
docker run -p 9000:9000 --name b2-s3 \
  -e "MINIO_ACCESS_KEY=b2_account_id" \
  -e "MINIO_SECRET_KEY=b2_application_key" \
  minio/minio:edge gateway b2