Percona Streaming Backup

Percona started out as an enhanced, enterprise-grade distribution of MySQL. With its advanced features and outstanding performance, it served a valuable role in the days before big data, distributed systems and scalable infrastructure were commonplace. You had one primary DB and maybe two secondary nodes, and that was the entire infrastructure storing your valuable data. These days things have arguably gotten more complex. Gone are the days of 2-3 node clusters. At large Percona installations, MySQL databases are sharded across thousand-node clusters where everything is automated. Nodes are taken out of service by an automated process that first flushes the node, then rehydrates a new replica node, before finally taking the old one offline.

To back up all this data, Percona provides the XtraBackup utility, which can take backups in streaming mode. What is streaming mode? Essentially, it allows you to back up the data without touching the disk on the local node where the database is running. The advantage is that you can stream the backup data directly to MinIO Jumbo, which is designed to upload and retrieve large objects from a MinIO cluster. MinIO efficiently uploads these large objects by chunking them into smaller parts and sending them over multiple parallel streams (which can be configured) using multipart upload. Jumbo is a simple Go binary, so it can be integrated into your existing toolset, and it accepts piped streams from almost any data source. This simplicity allows you to use Jumbo with any of your applications to store their large data sets in MinIO.
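
For example, since Jumbo reads the object body from stdin, you could stream a tarball of any directory straight into a bucket. This is just a sketch using the put syntax shown later in this post; the binary name, endpoint and bucket here are placeholders:

tar czf - /path/to/data | ./jumbo_0.1-rc2_linux_amd64 put http://localhost:20091/testbackup123/data.tar.gz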

Next, let's see how to set up Jumbo and then use it with XtraBackup to back up your Percona data to a MinIO bucket.

MinIO Jumbo

You can use an existing installation of MinIO and create a bucket there if you want. But if you are just spinning up something quick to test, you can follow the instructions below.

We'll bring up a MinIO node with 4 disks. MinIO runs anywhere - physical, virtual or containers - and in this overview, we will use containers created using Docker.

For the 4 disks, create directories on the host for MinIO:

mkdir -p /home/aj/minio/disk-1 \
         /home/aj/minio/disk-2 \
         /home/aj/minio/disk-3 \
         /home/aj/minio/disk-4

Launch the Docker container with the following specifications for the MinIO node:

docker run -d \
  -p 20091:9000 \
  -p 20092:9001 \
  -v /home/aj/minio/disk-1:/mnt/disk1 \
  -v /home/aj/minio/disk-2:/mnt/disk2 \
  -v /home/aj/minio/disk-3:/mnt/disk3 \
  -v /home/aj/minio/disk-4:/mnt/disk4 \
  --name minio \
  --hostname minio \
  quay.io/minio/minio server http://minio/mnt/disk{1...4}/minio --console-address ":9001"

Note that host port 20091 maps to the S3 API, which Jumbo will talk to below, while 20092 maps to the web console.
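
With the server running, create the bucket that the backup will land in. Any S3 client works; here is one way using the MinIO Client, where the alias name myminio is arbitrary and testbackup123 matches the bucket used later in this post:

mc alias set myminio http://localhost:20091 minioadmin minioadmin
mc mb myminio/testbackup123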

Now let's go ahead and set up Jumbo. To get the Jumbo binary please email us at hello@min.io.

Once the Jumbo binary is installed, use the following command to set the environment variables for authentication:

export JUMBO_ACCESS_KEY="minioadmin" JUMBO_SECRET_KEY="minioadmin"
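
As a quick smoke test (assuming, as in the backup pipeline later in this post, that Jumbo reads the object body from stdin), you can pipe a small string into the bucket:

echo "hello jumbo" | ./jumbo_0.1-rc2_linux_amd64 put http://localhost:20091/testbackup123/smoke-test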

Using MinIO Jumbo with Percona Streaming

Before we get started, install the Percona XtraBackup binary in your environment of choice using these instructions.

The command below will quickly get a container up and running in Docker:

sudo docker run --name percona-xtrabackup --volumes-from percona-server-mysql \
percona/percona-xtrabackup \
xtrabackup --backup --datadir=/var/lib/mysql --target-dir=/backup --user=root --password=mysql
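
If the backup succeeds, xtrabackup prints "completed OK!" as its last log line, which you can confirm from the container logs:

sudo docker logs percona-xtrabackup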

xtrabackup can take backups in several ways. For example, backups can be written to disk, or streamed, compressed and encrypted on the fly. This makes it easy out of the box to get your database backups into a MinIO bucket.

We'll do a few things here:

  • Compress the backup so it uses as little space as possible
  • Encrypt it so that it cannot be read or tampered with
  • Finally, copy it to a MinIO bucket, using more threads for a faster backup

Compress the backup

xtrabackup --backup --compress --compress-threads=8 --stream=xbstream --parallel=4 --target-dir=./ | gzip - | \

Next, encrypt the backup. You can use a library such as GPG, but in this case we'll show you how with openssl:

openssl des3 -salt -k "password" | \

Once it's encrypted, we can put it in MinIO using Jumbo. MinIO's Key Encryption Service (KES) brings this all together: server-side encryption (SSE) uses KES and a KMS to perform cryptographic operations. The KES service itself is stateless and acts as a middle layer, storing its data in the KMS. With MinIO you can set up encryption at several levels of granularity. You can always choose to encrypt on a per-object basis; however, we strongly recommend setting up SSE-KMS encryption on the bucket so all objects are encrypted by default. Encryption is accomplished using a specific External Key (EK) stored in the KMS, which can be overridden on a per-object basis with a unique key.
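
For example, once KES and a KMS are configured (a setup we assume here), default bucket encryption can be turned on with the MinIO Client; the key name minio-backup-key is a placeholder for a key that exists in your KMS:

mc encrypt set sse-kms minio-backup-key myminio/testbackup123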

In addition, MinIO supports IAM S3-style policies with a built-in IDP (see MinIO Best Practices - Security and Access Control for more information), as well as Object Locking and Retention, which enforces write-once, read-many semantics for duration-based and indefinite legal holds. This supports key data retention compliance requirements such as SEC Rule 17a-4(f), FINRA Rule 4511(c), and CFTC Regulation 1.31(c)-(d).
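
As a sketch of how that looks in practice, object locking must be enabled when the bucket is created, after which a default retention rule can be applied; the bucket name and duration below are illustrative:

mc mb --with-lock myminio/compliance-backups
mc retention set --default COMPLIANCE "180d" myminio/compliance-backups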

./jumbo_0.1-rc2_linux_amd64 put http://localhost:20091/testbackup123/percona-xtrabackup

The binary name will change with each release, but this command puts the XtraBackup stream into MinIO without touching any part of the local disk. Everything happens in memory, inside the stream.
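
You can verify that the stream landed as a single object with the MinIO Client, assuming the myminio alias set up earlier:

mc stat myminio/testbackup123/percona-xtrabackup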

Bringing it all together, the full pipeline looks like this:

xtrabackup --backup --compress --compress-threads=8 --stream=xbstream --parallel=4 --target-dir=./ | gzip - | \
openssl des3 -salt -k "password" | \
./jumbo_0.1-rc2_linux_amd64 put http://localhost:20091/testbackup123/percona-xtrabackup

Storage Storage Storage

It's not only important to back up your databases, but to back them up to storage that is reliable and scalable. MinIO is capable of PUT throughput of 1.32 Tbps and GET throughput of 2.6 Tbps on a single 32-node NVMe cluster. This means that backup and restore operations run faster, decreasing the potential business impact of downtime. In disaster recovery scenarios, recovery time matters even more than backup time: if you back up to slow storage, your restores are going to be slow. It's also important to test the integrity of your backups by restoring and verifying them at random, to ensure that in a real DR scenario the data can be restored safely and completely.
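
As a sketch of such a test, a restore is the backup pipeline in reverse: download, decrypt, decompress and unpack the stream, then prepare the backup. The myminio alias, the /restore path and the passphrase are assumptions carried over from the examples above, and xtrabackup --decompress requires qpress to be installed:

mkdir -p /restore/percona
mc cat myminio/testbackup123/percona-xtrabackup | openssl des3 -d -salt -k "password" | gzip -d | xbstream -x -C /restore/percona
xtrabackup --decompress --target-dir=/restore/percona
xtrabackup --prepare --target-dir=/restore/percona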

To learn more about the SUBNET experience, ask our experts using the live chat at the bottom right of the blog, or email us at hello@min.io.