Gone [to Prod] in 60 Seconds

We designed MinIO with a plethora of features without sacrificing simplicity of use or the beauty of our Console UI/UX. To learn what enterprises want out of their object stores, we polled hundreds of people across multiple conferences over the past six months. Two of the top three things they identified as key capabilities were ease of install and ease of use and management of MinIO. We often talk about these as the Day One, Day Two and beyond challenges, and no one spends more time thinking about them than MinIO. To prove it, we are going to get you to production in 60-ish seconds. We encourage you to try it yourself and post your videos; we will send a MinIO t-shirt to anyone who comes in under a minute.

Why is this important? CTOs and other industry leaders want their teams spending their time using and learning the product rather than trying to get it set up and running. There is no benefit to debugging the install portion of an application; the fun is in using the application once it’s up and running. And the more arduous the install and setup phase, the more reluctant organizations are to commit to the product, because every subsequent install, such as when expanding globally to other regions, takes more valuable engineering time.

Speaking of global, it’s important to ensure not only that the initial development phase goes smoothly, but also the subsequent phases of going to production. There is no point in getting everything working on a single instance in dev mode only to struggle to set it up in cluster mode, or to scale and expand that cluster later.

In this post we’ll show you how quickly you can get a production-grade MinIO cluster up and running in just a few seconds. Not only that, we’ll also show you how to expand that cluster just as quickly.

Kubernetes Cluster Prerequisites

Before deploying MinIO, make sure you have the required hardware in place (a quick check follows the list).

  • You need a minimum of 4 nodes. These can be VMs or physical machines.
  • You need Kubernetes running on top of them.
  • The disks, CPU and memory must meet MinIO’s minimum requirements.
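
A quick sanity check before moving on: confirm all four nodes are registered and in the Ready state.

kubectl get nodes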

Label the 4 nodes to place them in pool `zero` like so

kubectl label nodes k8s-worker  pool=zero

kubectl label nodes k8s-worker2 pool=zero

kubectl label nodes k8s-worker3 pool=zero

kubectl label nodes k8s-worker4 pool=zero
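
To confirm the label landed on all four nodes, filter by it:

kubectl get nodes -l pool=zero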

Next, clone the MinIO Enterprise operator’s Kustomize configuration. This repository holds the Kubernetes YAML files for creating our 4-node MinIO cluster.

git clone https://github.com/miniohq/aistor-operator.git
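
If you’d like to see what will be created before applying anything, you can render the manifests with kubectl’s built-in Kustomize support (the path matches the apply command below):

kubectl kustomize aistor-operator/examples/kustomization/bases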

Deploy MinIO Cluster

Launching the MinIO cluster is fairly straightforward. You can edit the yaml files for fine tuning settings but here we will go with the sane defaults that have already been set for you.

Go ahead and apply the tenant configuration to launch pool-0

$ kubectl apply -k aistor-operator/examples/kustomization/bases
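
You can watch the operator bring up the 4 pool-0 pods as it provisions the tenant (press Ctrl-C to stop watching):

kubectl get pods -n tenant-lite -w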

Once the pods are up, check the services the operator created for the tenant. The output should look like this:
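
Assuming the kustomization deployed into the tenant-lite namespace used in the later steps, you can reproduce this listing with:

kubectl get svc -n tenant-lite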

NAME                        TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)
minio                       LoadBalancer   10.104.10.9      <pending>     443:31834/TCP
myminio-hl                  ClusterIP      None             <none>        9000/TCP
myminio-log-hl-svc          ClusterIP      None             <none>        5432/TCP
myminio-log-search-api      ClusterIP      10.102.151.239   <none>        8080/TCP
myminio-prometheus-hl-svc   ClusterIP      None             <none>        9090/TCP

That’s it! This is the initial setup that most folks start out with, and it positions you to expand seamlessly later in a production environment.
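
If the LoadBalancer’s EXTERNAL-IP stays <pending> (common on bare metal without a load balancer controller), one way to reach the MinIO service locally is a port-forward against the service shown above:

kubectl port-forward svc/minio -n tenant-lite 8443:443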

Expand MinIO Cluster

Now ideally, MinIO recommends that you capacity-plan ahead with enough hardware and storage that you don’t need to add pools to expand the cluster. But on the off chance that you do need to expand, it’s not difficult and can be done in just a few seconds.

The prerequisites are similar to pool-0. Bring up 4 more nodes with enough resources, make sure Kubernetes is installed on them, and apply the node label. Let’s go ahead and do that now.

Label the 4 new nodes to place them in pool `one` like so

kubectl label nodes k8s-worker5 pool=one

kubectl label nodes k8s-worker6 pool=one

kubectl label nodes k8s-worker7 pool=one

kubectl label nodes k8s-worker8 pool=one
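
The -L flag prints a label’s value as a column, so you can confirm the pool assignment of every node at a glance:

kubectl get nodes -L pool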

Adding a pool is a non-disruptive operation that causes zero cluster downtime.

Edit the tenant-lite config to add pool-1

kubectl edit tenant -n tenant-lite

This opens the tenant’s YAML in your editor. Find the pools section and add the following entry below the existing pool definition.

  - affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
            - key: v1.min.io/tenant
              operator: In
              values:
              - myminio
            - key: v1.min.io/pool
              operator: In
              values:
              - pool-1
          topologyKey: kubernetes.io/hostname
    name: pool-1
    nodeSelector:
      pool: one
    resources: {}
    runtimeClassName: ""
    servers: 4
    volumeClaimTemplate:
      metadata:
        creationTimestamp: null
        name: data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: "2147483648"
        storageClassName: standard
      status: {}
    volumesPerServer: 2

As soon as you save the file, the new pool will begin deploying. With servers: 4 and volumesPerServer: 2, pool-1 adds 8 drives of 2 GiB each (the "2147483648" storage request is 2 GiB in bytes) to the cluster. Verify the rollout by getting a list of pods.

$ kubectl get pods -n tenant-lite -o wide

NAME               READY   STATUS    RESTARTS   AGE   IP            NODE          NOMINATED NODE   READINESS GATES
myminio-pool-0-0   1/1     Running   0          12h   10.244.7.5    k8s-worker    <none>           <none>
myminio-pool-0-1   1/1     Running   0          12h   10.244.5.5    k8s-worker3   <none>           <none>
myminio-pool-0-2   1/1     Running   0          12h   10.244.4.10   k8s-worker2   <none>           <none>
myminio-pool-0-3   1/1     Running   0          12h   10.244.8.13   k8s-worker4   <none>           <none>
myminio-pool-1-0   1/1     Running   0          12h   10.244.3.10   k8s-worker8   <none>           <none>
myminio-pool-1-1   1/1     Running   0          12h   10.244.6.15   k8s-worker6   <none>           <none>
myminio-pool-1-2   1/1     Running   0          12h   10.244.2.7    k8s-worker5   <none>           <none>
myminio-pool-1-3   1/1     Running   0          12h   10.244.1.10   k8s-worker7   <none>           <none>
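
You can also ask the operator for the tenant’s overall state; the exact status columns vary by operator version, but this is the same tenant resource you edited above:

kubectl get tenant -n tenant-lite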

There you have it. Wasn’t that a remarkably easy way to expand?

With a Grain of Salt

Take the above with a grain of salt: to achieve this level of installability and expandability, you do have to have the initial hardware and Kubernetes cluster set up to make things as quick as possible. But how many high-performance, enterprise-class storage systems out there can you set up in production with the same speed as the MinIO Object Store? I mean think about it, go ahead, name one, I’ll wait….

Couldn’t think of any? Yes, there are storage solutions that can come up in dev mode quickly, but they are a pain to get working in anything close to clustered mode on regular bare-metal Linux boxes, let alone with the complexities of Kubernetes added on top.

Here at MinIO we built our Object Store with simplicity and ease of installation and use not as an afterthought, but as the foundation. While we have every enterprise-grade feature imaginable (and then some), ultimately an application has to be used and maintained on a daily basis, so the easier it is to set up, use, maintain and automate, the more valuable it is to the organization.

If you have any questions on how to get MinIO Enterprise Object Store up and running quickly be sure to reach out to us on Slack!