Blue-Green, Canary, DR and Ransomware - MinIO Deployment and Update Strategies

Historically, system admins brought applications offline when deploying changes and updates, resulting in downtime. Engineers then scrambled to install, update configuration, validate, and give a go/no-go signal. If things didn't go as planned, there could be even more delays before the system was live and serving traffic again, potentially resulting in significant revenue loss.

Now, continuous integration and continuous deployment (CI/CD) pipelines automate the application build, test, and deploy framework. This has materially improved uptime for many environments and both streamlined and accelerated the deployment process. Even with these advancements, however, deploying an application or updating an environment can still cause downtime and other issues.

Even before the cloud, no one liked deployment downtime. With applications hosted in traditional data centers that restricted access to local users, many organizations scheduled deployments when users were less likely to be using the applications, like the middle of the night. With widespread adoption of cloud-based, 24x7 environments that are accessed from all time zones, at every hour of the day, easy-to-find deployment windows are gone. Everyone is aiming for their applications to always be available to all potential users, all the time.

With storage solutions, the complexity compounds due to the nature of JBOD/JBOF. Deployment strategies should take this into consideration and build in redundancy to achieve a zero-data-loss, zero-downtime infrastructure architecture.

In this article, we will explore two common deployment techniques that virtually eliminate downtime: blue-green deployment and canary deployment. We will also look at some of the trade-offs, requirements and advantages of choosing one or the other.

Approach to Deployment

It goes without saying that your application and deployment architecture plays a key role in minimizing or even eliminating deployment downtime. Generally, your environment should meet the following requirements for both the canary and blue-green deployment methods:

  • A deployment pipeline that can build, test, and deploy to specific environments
  • Multiple application nodes or containers distributed behind a load balancer
  • An application that is stateless, allowing any node in the cluster to serve requests at any time

Changes made to your application layer should be non-destructive to your data layer, and vice-versa. The challenge of minimizing deployment-related downtime is greater when the change is at the data layer. The application layer should stagger requests to newly deployed changes to avoid errors that translate into revenue loss.

Downtime is a major threat to business. Statistical research shows that human error causes about 18% of unplanned outages or downtime. If we focus on critical applications alone, the impact of human error rises to another level: it has been estimated to account for about 40% of operational errors and system outages, which in turn make up 55% and 22% of critical application downtime, respectively.


With these requirements in mind, let us dive into both zero-downtime deployment options: blue-green and canary.

Blue-Green Deployment

Blue-green deployment, the more common of the two options we are considering, essentially splits your application environment into two equally-resourced sections, a blue and a green. This enables you to serve the current version of the application on one half of your environment (the blue environment) using your load balancer to direct traffic. You can then deploy your new application version to the other half of your environment (the green environment) without affecting the blue environment.

By using your load balancers to direct traffic, you keep your blue environment running seamlessly for production users while you test and deploy to your green environment. When your deployment and testing are successful, you can switch your load balancer to target your green environment with no perceptible change for your users.
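As a concrete illustration, the cutover can be as small as changing which upstream your load balancer targets. The NGINX configuration fragment below is a hypothetical sketch (the host names and ports are invented); swapping the active server line and reloading NGINX switches production traffic from blue to green with no change visible to users.

```nginx
# Hypothetical nginx.conf fragment for a blue-green cutover.
upstream production {
    server blue.example.internal:9000;     # currently live (blue)
    # server green.example.internal:9000;  # swap in after green passes tests
}

server {
    listen 443 ssl;
    location / {
        proxy_pass http://production;      # clients always address "production"
    }
}
```

Because clients only ever see the `production` upstream, rolling back is the same one-line change in reverse.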

Canary Deployment

Canary deployment works similarly to blue-green deployment, but uses a slightly different method. Instead of another full environment waiting to be switched over once deployment is finished, canary deployments cut over just a small subset of servers or nodes first, before finishing the others.

There are many ways to configure your environment for canary deployments, but the simplest is to set up your environment behind your load balancer as normal, while keeping an additional node or server (or two, depending on the size of your application) as an unused spare.

This spare node or server group is your deployment target for your CI/CD pipeline. Once you build, deploy, and test this node, you add it back into your load balancer for a limited time and a limited group of users. This allows you to make sure changes are successful before repeating the process with the other nodes in your cluster.

MinIO Deployment Strategy

Choosing the right deployment strategy for MinIO is important. Due to the nature of how storage is managed, canary deployment will not work. Here are some of the reasons why canary is not a recommended deployment strategy:

  • MinIO operates on commodity servers with locally attached drives (JBOD/JBOF). All of the servers in a cluster are equal in capability (fully symmetrical architecture). There are no name nodes or metadata servers to set aside.
  • MinIO writes data and metadata together as objects, eliminating the need for a metadata database. In addition, MinIO performs all functions (erasure code, BitRot protection, encryption) as inline, strictly consistent operations. These operations are performed on all the nodes, which pool their drives and resources to serve object storage and retrieval requests. MinIO recommends nodes in multiples of 4, and all nodes are leveraged to create a server pool. There is no concept of additional or spare nodes or drives that can be used for canary upgrades. A canary deployment of an updated MinIO version onto an existing cluster would be disruptive to the existing server pool or pools.
  • Each MinIO cluster is a collection of distributed MinIO servers with one process per node. MinIO runs in user space as a single process and uses lightweight coroutines for high concurrency. Drives are grouped into erasure sets (16 drives per set by default) and objects are placed on these sets using a deterministic hashing algorithm. Since all the drives are grouped and leveraged, there are no additional or spare nodes or drives with which to perform canary upgrades. Again, a canary deployment to update MinIO would be disruptive to erasure coding, hashing algorithms and defined erasure sets.
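To see why deterministic placement leaves no room for spares, consider a toy sketch of hash-based placement. This is not MinIO's actual algorithm (MinIO uses its own internal hashing); the CRC32-based function below simply illustrates the property that every node can compute an object's erasure set independently, so a node sitting outside the defined sets never receives any data.

```python
import zlib

def erasure_set_for(object_key: str, num_sets: int) -> int:
    """Toy illustration of deterministic placement (not MinIO's real
    algorithm): hash the object key and take it modulo the number of
    erasure sets, so every node agrees on placement with no lookup."""
    return zlib.crc32(object_key.encode("utf-8")) % num_sets

# The same key always maps to the same set; a "spare" node that is not
# a member of any erasure set simply never participates in placement.
```

This determinism is exactly what breaks a canary model: there is no traffic-routing layer between the hash and the drives where a partial rollout could be inserted.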

As we can see, there is quite a lot going on under the hood that makes spare nodes dangling in the cluster impractical for canary deployments. The better approach is to run two separate clusters as a blue-green deployment and enable bi-directional replication, a feature provided by MinIO. Bi-directional replication keeps data consistent across the blue and green cluster deployments.
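With MinIO's site replication feature, wiring the two clusters together is a short exercise with the mc client. The aliases, endpoints and credentials below are hypothetical placeholders; one command registers the two deployments as peers, after which replication of buckets, objects and IAM settings is bi-directional.

```shell
# Register an alias for each cluster (endpoints/credentials are placeholders)
mc alias set minio-blue  https://blue.example.com  ACCESS_KEY SECRET_KEY
mc alias set minio-green https://green.example.com ACCESS_KEY SECRET_KEY

# Configure the two deployments as bi-directional replication peers
mc admin replicate add minio-blue minio-green

# Verify replication status across both sites
mc admin replicate info minio-blue
```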

Continuous Replication

The challenge with traditional replication approaches is that they do not scale effectively beyond a few hundred TiB. And yet, everyone needs a replication strategy to support disaster recovery and that strategy needs to span geographies, data centers and clouds.

MinIO’s continuous replication is designed for large scale, cross data center deployments. By leveraging Lambda compute notifications and object metadata, MinIO quickly and efficiently computes the delta. Lambda notifications ensure that changes are propagated immediately as opposed to traditional batch mode. In the event of a failure, continuous replication means that data loss will be minimal - even in the face of highly dynamic datasets.

MinIO Blue-Green Deployment in a Primary Site

To successfully deploy MinIO using a blue-green deployment methodology, we need to establish two sets of racks, preferably with separate and redundant switches, power supplies and other supporting infrastructure. MinIO is a Kubernetes-native high performance object store with an S3-compatible API. The MinIO Kubernetes Operator supports deploying MinIO Tenants onto private and public cloud infrastructures (“Hybrid” Cloud).

The following procedure installs the latest stable version (latest) of the MinIO Operator and MinIO Plugin on Kubernetes infrastructure:

  • The MinIO Operator installs a Custom Resource Definition (CRD) to support describing MinIO tenants as a Kubernetes object. See the MinIO Operator CRD Reference for complete documentation on the MinIO CRD.
  • The MinIO Kubernetes Plugin brings native support for deploying and managing MinIO tenants on a Kubernetes cluster using the kubectl minio command.

Follow these instructions to deploy the MinIO Operator on two separate Kubernetes clusters (MinIO supports multiple flavors of Kubernetes, including Red Hat OpenShift and VMware Tanzu) in a specific site. Once you have deployed the MinIO Operator on both clusters, run the kubectl minio proxy command to temporarily forward traffic from the MinIO Operator Console service to your local machine. You can deploy a new MinIO Tenant from the Operator Dashboard. The next step is to enable bi-directional replication between the blue and green clusters as explained here. At this point we are ready to put a load balancer in front of both clusters.
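For reference, the operator bootstrap on each cluster looks roughly like the sketch below (this assumes the kubectl minio plugin is installed via krew; the namespace and tenant names are illustrative):

```shell
# Install the MinIO plugin and initialize the Operator on each cluster
kubectl krew install minio
kubectl minio init

# Temporarily forward the Operator Console to your local machine
kubectl minio proxy -n minio-operator

# From the Operator Dashboard, create one tenant per cluster
# (e.g. a "blue" tenant on cluster A and a "green" tenant on cluster B)
```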

The load balancer could be any service mesh, but for this blog post we will take advantage of NGINX Service Mesh. NGINX Service Mesh is an infrastructure layer designed to decouple application business logic from complex networking concerns. A mesh is designed to provide fast, reliable, and low-latency network connections for modern application architectures.

Here is a well-documented, step-by-step configuration of NGINX Service Mesh that can be followed to complete the setup. When a new MinIO version is released, your upgrade process will look like this:

  1. Traffic is directed 100% to the blue cluster. A simple kubectl command will make sure that is taken care of.
  2. Start the deployment on the green cluster to upgrade the server to the latest version.
  3. Once basic tests are completed, start the migration process with 5% of all traffic going to the green cluster and 95% of traffic still going to the blue cluster.
  4. Observe the behavior of MinIO to see if there are any alerts or anomalies.
  5. If there are issues, roll back with a simple YAML file and kubectl command.
  6. If everything looks good, continue the above steps with 10%, 25%, 50% and finally 100% of traffic going to the green cluster.
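With NGINX Service Mesh, the staged cutover in the steps above can be expressed as an SMI TrafficSplit resource. The service names and namespace below are hypothetical; adjusting the weights and re-applying the file with kubectl moves traffic from blue to green in the increments described, and reverting the weights rolls it back.

```yaml
apiVersion: split.smi-spec.io/v1alpha3
kind: TrafficSplit
metadata:
  name: minio-split
  namespace: minio
spec:
  service: minio             # the root service applications address
  backends:
    - service: minio-blue    # current version
      weight: 95
    - service: minio-green   # upgraded version under observation
      weight: 5
```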

With this approach your Recovery Time Objective (RTO) and Recovery Point Objective (RPO) are minimized.

To be fair, you will have some downtime during the 5% and 10% traffic cutovers, but this is more than acceptable given the likelihood of a total outage when using alternative approaches.

MinIO Blue-Green Deployment with a Disaster Recovery Site

Disaster Recovery can be achieved by creating another Kubernetes cluster with MinIO deployed in a different site location. Bi-directional site replication can be enabled across the blue and green clusters from the Primary Site to the DR Site cluster.

Site replication configures multiple independent MinIO deployments as a cluster of replicas called peer sites. Site replication assumes the use of either the included MinIO identity provider (IDP) or an external IDP. All configured deployments must use the same IDP, and deployments using an external IDP must use the same configuration across sites. Any MinIO deployment in the site replication configuration can resynchronize damaged replica-eligible data from the peer with the most updated (“latest”) version of that data. Only one site can have data at the time of setup; the other sites must be empty of buckets and objects. After configuring site replication, any data on the first deployment replicates to the other sites. In other words, the starting cluster holding data will be either the blue or the green cluster, whichever we start with. For more information on site replication please refer here.
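Extending site replication to the DR site follows the same pattern as the two-cluster setup; the aliases and endpoint below are again hypothetical placeholders:

```shell
# Add the DR deployment as a third peer alongside blue and green
mc alias set minio-dr https://dr.example.com ACCESS_KEY SECRET_KEY
mc admin replicate add minio-blue minio-green minio-dr

# Confirm that all three sites report as peers
mc admin replicate info minio-blue
```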

Initially, all API calls and data from the applications will flow 100% through the GSLB load balancer to NGINX Service Mesh, and the Service Mesh sends 100% of the traffic to the blue cluster. Please refer to the diagram below.

When planning for disaster recovery, evaluate your plan for these three main categories of disaster:

  • Natural disasters, such as earthquakes or floods
  • Technical failures, such as power failure or network connectivity
  • Human actions, such as inadvertent misconfiguration or unauthorized/outside party access or modification

Each of these potential disasters will also have a geographical impact that can be local, regional, country-wide, continental, or global. Both the nature of the disaster and the geographical impact are important when considering your disaster recovery strategy. For example, you can mitigate the risk of local flooding causing a data center outage by placing racks on different floors, since flooding would be unlikely to affect more than one rack. However, an attack on production data would require you to invoke a disaster recovery strategy that fails over to backup data at another site.

MinIO Deployment with a Disaster Recovery Site and Ransomware Protection

Expanding on the DR Site, we can take the deployment architecture even further to include ransomware protection by creating a third site that receives unidirectional replication from all three clusters: the blue and green clusters at the primary site, and the DR site. With this approach, any data captured on any of these three sites will be sent to the third site for ransomware protection. At any given point one of the sites is active, and only during split traffic will both blue and green be replicating to the third site. If and when there is a ransomware attack, and the Primary and Disaster Recovery sites need to be destroyed, cleaned and protected again, the data can be reverse-replicated back into those sites as needed. Read multi site active-active replication for more details.
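One way to sketch the third, write-once site is bucket-level, one-way replication into an object-locked bucket. The aliases, endpoints and retention period below are illustrative assumptions, not a prescribed configuration:

```shell
# Create a versioned, object-locked bucket on the vault site
mc alias set vault https://vault.example.com ACCESS_KEY SECRET_KEY
mc mb --with-lock vault/protected

# Enforce a default compliance retention so replicated objects are immutable
mc retention set --default COMPLIANCE "30d" vault/protected

# Replicate one-way from the active cluster into the vault bucket
mc replicate add minio-blue/data \
  --remote-bucket "https://ACCESS_KEY:SECRET_KEY@vault.example.com/protected"
```

Because the vault bucket enforces object locking, even a compromised admin credential on the primary or DR sites cannot delete or encrypt the replicated copies before the retention window expires.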

Advantages and Disadvantages of the Blue-Green Deployment Model

Let’s summarize what we have talked about in this post and why we are strongly recommending the blue-green deployment model for MinIO.

Advantages:
  • RTO and RPO can be zero with zero to few errors from the applications’ perspective
  • Upgrades can be validated and traffic can be managed by staggering
  • Image rehydration can be done on 30- or 60-day windows and traffic then switched.
  • Corrupted drive changes and node changes can be performed without great difficulty

Disadvantages:
  • Complexity increases materially
  • CI/CD and Automation becomes extremely important
  • Four clusters means 4X the cost for hardware (two in primary, one in DR, one in ransomware)
  • Both a GSLB load balancer and the NGINX Service Mesh load balancer must be managed.


Blue-green deployment methodology and MinIO replication simplify deployment and management of your large scale object storage infrastructure without any downtime or service interruption.

Achieving this goal requires load balancing and stateless architectures. Deployment automation and consistency also matter when updating and changing your environments, if you want to achieve zero downtime. Leveraging your CI/CD pipeline to handle the automated deployment, testing, and cutover of application environments helps immensely. Maintaining a checklist of activities for each procedure will help reduce human errors.

Customers are responsible for the availability of their applications in the cloud. It is important to define what a disaster is and to have a disaster recovery plan that reflects this definition and the impact that it may have on business outcomes. Create a Recovery Time Objective (RTO) and a Recovery Point Objective (RPO) based on impact analysis and risk assessments and then choose the appropriate architecture to mitigate against disasters.

Ensure that detection of disasters is possible and timely — it is vital to know when objectives are at risk.

Ensure you have a plan and validate the plan with testing. Disaster recovery plans that have not been validated risk not being implemented due to a lack of confidence or failure to meet disaster recovery objectives.

As always, we are here to help. We have commercial licenses that include direct-to-engineer support through SUBNET. We also have our community Slack channel. We do not recommend depending on Slack alone if you are engaging in any of the above tasks, but questions can be asked and answered there.
