Simplifying Object Storage as a Service with Kubernetes and MinIO’s Operator

Object storage as a service is the hottest concept in storage today. The reason is straightforward: object storage is the storage class of the cloud and the ability to provision it seamlessly to applications or developers makes it immensely valuable to enterprises of any size.

The challenge is that object storage as a service has traditionally been very difficult to deliver: overly complex, hard to tune for performance, and prone to failure at scale. While systems like Kubernetes offer powerful tools for automating the deployment and management of these systems, the overall problem of complexity remains unsolved, as administrators must still invest significant time and effort to deploy even a small-scale object storage resource.

By combining Kubernetes with our new Operator and our Operator Console graphical user interface, MinIO is changing that dynamic in a big way. It should be stated upfront that MinIO has always obsessed over simplicity. It permeates everything we do, every design decision we make, every line of code we write.

Nonetheless, we saw even more opportunity for simplification. To do this, we created the MinIO Operator and the MinIO kubectl plugin to facilitate the deployment and management of MinIO Object Storage on Kubernetes. While the Operator commands are critical for users already proficient with Kubernetes, we also wanted to address a wider audience, so we created a Graphical User Interface for the Operator and incorporated it into our new MinIO Operator Console, enabling anyone in the organization to create, deploy and manage object storage as a service.

Kubernetes is the platform of the Internet. Given its massive adoption, we chose to remain consistent with the Kubernetes way of doing things. This meant not using any specialized tools or services to set up MinIO.

The effect is that the MinIO Operator works on any Kubernetes distribution, be it OpenShift, vSphere 7.0u1, Rancher or stock upstream. Further, MinIO will work on any public cloud provider such as Amazon's EKS (Elastic Kubernetes Service), Google's GKE (Google Kubernetes Engine), Google's Anthos or Azure's AKS (Azure Kubernetes Service).

Pretty much all you need to get started on any distribution of Kubernetes is some storage device that can be presented to Kubernetes either via Local Persistent Volumes or with a CSI Driver.
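For the Local Persistent Volumes route, a minimal PersistentVolume manifest might look like the sketch below. This is illustrative only: the PV name, device path, node name and storage class name are all assumptions you would replace with your own values.

```yaml
# Hypothetical local PersistentVolume backing one MinIO drive.
# Name, path, node hostname and storageClassName are illustrative assumptions.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: minio-pv-0
spec:
  capacity:
    storage: 1Ti
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/disk0
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-node-1
```

One such PV per drive, per node, gives the Operator the pool of volumes it will claim for a tenant.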

Let's start with a review of using MinIO with the kubectl plugin and a kustomize-based approach. You'll need to install the kubectl tool on a computer with network access to the Kubernetes cluster. See Install and Set Up kubectl for installation instructions. You may need to contact your Kubernetes administrator for assistance in configuring your kubectl installation for access to the Kubernetes cluster.

Installation

kubectl plugin

To install the MinIO Operator we can leverage its kubectl plugin, which can be installed via krew:

kubectl krew install minio


After which we can install the Operator by simply running:

kubectl minio init


Installation with kustomize

Alternatively, for anyone who prefers a kustomize-based approach, our repository supports installing specific tags. Of course, you can also use this as the base for your kustomization.yaml file:

kubectl apply -k github.com/minio/operator/\?ref\=v4.0.2


Provisioning Object Storage

The analogy we use to represent a MinIO Object Storage cluster is the Tenant. We did this to communicate that, with the MinIO Operator, one can allocate multiple Tenants within the same Kubernetes cluster. Each tenant, in turn, can have a different capacity (e.g., a small 500GB tenant vs. a 100TB tenant), resources (1000m CPU and 4Gi RAM vs. 4000m CPU and 16Gi RAM) and servers (4 pods vs. 16 pods), as well as separate configurations for Identity Providers, Encryption and versions.

Let's start by creating a small tenant with 16Ti capacity across 4 nodes. We will first create a namespace for the tenant to be installed called `minio-tenant-1` and then place the tenant there using the `kubectl minio tenant create` command.

Pay close attention to the storage class. Here we will use the cluster's default storage class, called standard, but you should use whatever storage class can accommodate 16Ti of capacity (i.e., sixteen 1Ti persistent volumes).

kubectl create ns minio-tenant-1
kubectl minio tenant create minio-tenant-1 \
      --servers 4                          \
      --volumes 16                         \
      --capacity 16Ti                      \
      --namespace minio-tenant-1           \
      --storage-class standard

This command will output the credentials needed to connect to this tenant. MinIO only displays these credentials once, so make sure you copy them to a secure location.

Tenant 'minio-tenant-1' created in 'minio-tenant-1' Namespace
  Username: admin
  Password: dbc978c2-bfbe-41bf-9dc6-699c76bafcd0 

  Note: Copy the credentials to a secure location. MinIO will not display these again
+-------------+------------------------+------------------+--------------+-----------------+
| APPLICATION |      SERVICE NAME      |     NAMESPACE    | SERVICE TYPE | SERVICE PORT(S) |
+-------------+------------------------+------------------+--------------+-----------------+
| MinIO       | minio                  | minio-tenant-1   | ClusterIP    | 443             |
| Console     | minio-tenant-1-console | minio-tenant-1   | ClusterIP    | 9090,9443       |
+-------------+------------------------+------------------+--------------+-----------------+

Usually a tenant takes a few minutes to provision while the MinIO Operator requests TLS certificates for MinIO and the Operator Console via Kubernetes Certificate Signing Requests. You can check the progress by running:

kubectl get tenant -n minio-tenant-1

This will tell you your tenant’s current state:

➜ kubectl get tenants -n minio-tenant-1
NAME             STATE                               AGE
minio-tenant-1   Waiting for MinIO TLS Certificate   19s

After a few minutes the tenant should report an Initialized state, indicating your Object Storage cluster is ready:

➜ kubectl get tenants -n minio-tenant-1      
NAME             STATE         AGE
minio-tenant-1   Initialized   3m21s

That's it! Our Object Storage cluster is up and running, and we can access it via kubectl port-forward. In the case of MinIO:

➜ kubectl port-forward svc/minio 9000:443 -n minio-tenant-1
Forwarding from 127.0.0.1:9000 -> 9000
Forwarding from [::1]:9000 -> 9000

And then go to https://localhost:9000/minio/ in your local browser.

Alternatively, you can go to the MinIO Console:

➜ kubectl port-forward svc/minio-tenant-1-console 9090:9443 -n minio-tenant-1
Forwarding from 127.0.0.1:9090 -> 9443
Forwarding from [::1]:9090 -> 9443

Pretty easy, right?

But now let's stop, rewind and remix to add a tenant using the MinIO Console for Operator (a.k.a. the Operator UI). To access it we can simply run the kubectl minio proxy command, which tells us how to reach the Operator UI:

kubectl minio proxy

As you can see, it tells you to open http://localhost:9090/login in your local browser, and it also shows you the JWT needed to access the Console UI.

Inside the Operator UI we can see the tenant that we provisioned previously using the kubectl plugin.

To add another one, hit Create Tenant. The first screen will ask a few configuration questions:

1) name the tenant

2) select a namespace

3) select a storage class

If you wish to configure an Identity Provider, TLS Certificates, Encryption or Resources for this tenant I invite you to play with the Advanced Mode where these configuration options reside.

On the next screen you will be asked to size your tenant, with a preview of what will happen for the number of servers, drives and erasure coding parity values you select:
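As a rough back-of-the-envelope check of what that sizing preview shows: under MinIO's erasure coding, an EC:N setting reserves roughly N drives' worth of each stripe for parity, so usable capacity is approximately the raw capacity scaled by (total drives - parity) / total drives. The sketch below is an approximation for intuition, not the Operator's exact sizing logic, and it ignores metadata overhead:

```python
def usable_capacity(raw_tib: float, total_drives: int, parity: int) -> float:
    """Approximate usable capacity for an erasure-coded MinIO pool.

    With parity EC:N, roughly (total_drives - parity) / total_drives of
    the raw capacity remains usable for data. This is a simplification
    that ignores metadata and per-object overhead.
    """
    if not 0 < parity < total_drives:
        raise ValueError("parity must be between 1 and total_drives - 1")
    return raw_tib * (total_drives - parity) / total_drives

# A 16Ti tenant striped across 16 drives at EC:4 keeps ~12Ti usable;
# bumping parity to EC:8 halves the usable space to ~8Ti.
print(usable_capacity(16, 16, 4))
print(usable_capacity(16, 16, 8))
```

Higher parity tolerates more drive failures at the cost of usable space, which is exactly the trade-off the sizing screen lets you preview.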

The following screen provides a preview of what's about to be provisioned:

Now click Create. That is it.

Going back to the list of tenants we can see our original cli-provisioned tenant next to the tenant created using the Operator UI. These processes are equivalent. It is only personal preference as to which you select.

Finally, if you are curious about how to provision a MinIO tenant via good old YAML, you can get the definition of a tenant and get familiar with our Custom Resource Definition:

➜ kubectl get tenant bigdata-storage -o yaml

Which returns

apiVersion: minio.min.io/v2
kind: Tenant
metadata:
  name: bigdata-storage
  namespace: default
spec:
  console:
    consoleSecret:
      name: bigdata-storage-console-secret
    image: minio/console:v0.6.0
    replicas: 1
    resources:
      requests:
        memory: 64Mi
  credsSecret:
    name: bigdata-storage-secret
  env:
    - name: MINIO_STORAGE_CLASS_STANDARD
      value: EC:8
  exposeServices:
    console: true
    minio: true
  image: minio/minio:RELEASE.2021-03-01T04-20-55Z
  imagePullSecret: { }
  log:
    audit:
      diskCapacityGB: 10
    image: minio/logsearch:v4.0.0
    resources: { }
  mountPath: /export
  pools:
    - affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: v1.min.io/tenant
                    operator: In
                    values:
                      - bigdata-storage
                  - key: v1.min.io/pool
                    operator: In
                    values:
                      - pool-0
              topologyKey: kubernetes.io/hostname
      name: pool-0
      resources:
        limits:
          memory: 32Gi
        requests:
          memory: 2Gi
      servers: 4
      volumeClaimTemplate:
        metadata:
          name: data
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: "68719476736"
          storageClassName: standard
      volumesPerServer: 4
  prometheus:
    diskCapacityGB: 5
    resources: { }
  requestAutoCert: true

Conclusion

We've gone to great lengths to simplify the deployment and management of MinIO on Kubernetes. It is simple to install the Operator and use it to create tenants, either by command line or by graphical user interface. This, however, is just a subset of the features of MinIO on Kubernetes. Each MinIO Tenant has the full feature set available with bare-metal deployments, so you can migrate your existing MinIO deployments to Kubernetes with full confidence in functionality.

I encourage you to try the MinIO Operator yourself and explore other cool features such as using the Prometheus Metrics and Audit Log, or securing your MinIO Tenant with an external Identity Provider such as LDAP/Active Directory or an OpenID provider.

No matter what approach you take, the ability to provision multi-tenant object storage as a service is now within the skill set of a wide range of IT administrators, developers and architects.

