Locking down MinIO Operator Permissions


While you can deploy MinIO on Kubernetes with a Deployment or StatefulSet, the recommended way to deploy MinIO on Kubernetes is via the official MinIO Operator. Why?

The MinIO Operator simplifies managing MinIO on your Kubernetes cluster, not only during the initial deployment (Day 0 and Day 1) but also during ongoing Day 2 operations. For instance, expanding a cluster by adding Server Pools is as simple as running a few kubectl commands. Please see Expand a MinIO cluster using Pools for additional details.

The MinIO Operator supports deploying MinIO tenants onto a Kubernetes cluster in any cloud, on-prem, at the edge or in a hybrid environment – essentially wherever you can run a Kubernetes cluster. The MinIO Operator installs a Custom Resource Definition (CRD) and a Kubernetes plugin, distributed via krew, that lets you manage MinIO tenants using kubectl minio commands.

Let’s take a look at the architecture in detail. The MinIO Operator is deployed in a dedicated namespace; we’ll show you how this comes in handy later. The namespace contains two pods:

Operator: The operator pod is responsible for the maintenance of tenants such as deploying, managing, modifying and other actions.

Console: The console pod is a graphical interface for performing similar functions as you would with the CLI using the kubectl minio command.

In addition, each tenant deployed by the MinIO Operator needs to be in a separate namespace. The Operator also creates the following three containers within each tenant pod.

Init container: This configures the main MinIO container on start-up; once the MinIO container is up, this container terminates.

MinIO container: This is the container where MinIO runs (similar to a single bare-metal install), and where the tenant ultimately attaches Persistent Volume Claims (PVCs) that bind to the Persistent Volumes (PVs) storing the objects.

Sidecar container: This container monitors various resources in the cluster, such as the tenant's configuration secrets and root credentials; if these change, they are automatically updated. MinIO has also built a separate sidecar called Sidekick, a tiny load balancer attached as a sidecar to each client application process; it eliminates the centralized load balancer bottleneck and DNS failover management. Sidekick automatically avoids sending traffic to failed servers by checking their health via the readiness API and HTTP error returns.
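Putting the three containers together, a tenant pod's spec ends up shaped roughly like the simplified excerpt below. Container names and image tags here are illustrative, not the exact ones the Operator generates:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myminio-pool-0-0
  namespace: tenant-lite
spec:
  initContainers:
    - name: init-config            # illustrative name: prepares config on start-up, then exits
      image: minio/operator-sidecar:latest
  containers:
    - name: minio                  # the MinIO server itself
      image: minio/minio:latest
      volumeMounts:
        - name: data0
          mountPath: /export0      # PVC-backed drive
    - name: sidecar                # watches tenant secrets and root credentials
      image: minio/operator-sidecar:latest
```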

As you can see, there are several moving parts such as containers, namespaces and PVCs required to deploy the MinIO cluster. These moving parts need specific permissions to be able to perform their actions. While we always follow security best practices and design the MinIO Operator to use the least permissions possible, sometimes a MinIO deployment must be further locked down to meet regulatory requirements in fields such as finance and healthcare where models for AI/ML and other sensitive data/IP are stored. 

In this post, we’ll show you how to configure the MinIO Operator with the most restrictive namespace permissions – all the while being able to fully utilize the power and flexibility of the MinIO Operator for day-to-day operations.

How to Lock Down the Operator

As we go through the process of locking down the MinIO Operator, we assume that you are familiar with Kubernetes concepts and procedures. While we might show you some best practices, this blog post is not a replacement for the Kubernetes docs. With that in mind let's get the ball rolling.

We’ll use Kustomize to install the MinIO Operator; be sure to install Kustomize using these instructions beforehand.

Generate the operator.yaml file, concatenating all the resources into a single file:

kustomize build github.com/minio/operator/resources/\?ref\=v5.0.9 > operator.yaml

Open operator.yaml. You will find the following sections related to console-sa-role and console-sa-binding. Remove everything related to these two resources.

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: console-sa-role
rules:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: console-sa-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: console-sa-role
subjects:
  - kind: ServiceAccount
    name: console-sa
    namespace: default

Please note that the above YAML has been truncated because of the number of rules. We show the beginning and end of the YAML; be sure to remove everything between those two sections as well.
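If you prefer to script the removal rather than edit by hand, one rough sketch is to split the file on its `---` document separators and drop any document that mentions the two names. It is shown here against a tiny inline sample rather than the real operator.yaml:

```shell
# Sketch only: drop any YAML document that mentions the console-sa resources.
cat > sample.yaml <<'EOF'
---
kind: ClusterRole
metadata:
  name: console-sa-role
---
kind: ServiceAccount
metadata:
  name: minio-operator
EOF

awk '
  /^---$/ { if (!drop && doc != "") printf "%s", doc; doc = ""; drop = 0; print "---"; next }
  { doc = doc $0 "\n"; if ($0 ~ /console-sa-(role|binding)/) drop = 1 }
  END { if (!drop && doc != "") printf "%s", doc }
' sample.yaml > filtered.yaml

# The ClusterRole document is gone; any stray "---" lines left behind are
# harmless empty YAML documents.
cat filtered.yaml
```

Always diff the filtered output against the original before applying it, since a stray mention of either name elsewhere would remove more than intended.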

We are removing the overly permissive ClusterRole that the console had been granted by default. The downside is that, since the JWT token will be limited to the Operator's ClusterRole, the Console UI will no longer be able to create namespaces or delete volumes. These tasks have to be done manually, outside the automation scope of the Operator.

Once the console-sa-role and console-sa-binding resources are removed from operator.yaml, apply the rest of the resources.

kubectl apply -f operator.yaml

Generally, the next step would be to access the Operator Console by port forwarding with the command below

kubectl minio proxy

But in this case, although we can access the UI, we cannot create any tenants because of the locked-down namespace that contains the console. So how do we deploy the tenant? Let’s use Kustomize again to build a yaml of the resources required to deploy the tenant.

kustomize build github.com/minio/operator/examples/kustomization/tenant-lite\?ref\=v5.0.9 > tenant.yaml
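For reference, the tenant-lite example boils down to a Tenant custom resource. A simplified sketch of its shape is below; the field values are illustrative, though four servers with two volumes each is consistent with the eight drives reported by mc admin info later on:

```yaml
apiVersion: minio.min.io/v2
kind: Tenant
metadata:
  name: myminio
  namespace: tenant-lite
spec:
  pools:
    - servers: 4            # one MinIO pod per server
      volumesPerServer: 2   # 4 x 2 = 8 drives total
      volumeClaimTemplate:
        metadata:
          name: data
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 2Gi
  requestAutoCert: true     # illustrative: the tenant serves TLS on port 443
```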

Once the tenant YAML is built, it can be deployed as follows

kubectl apply -f tenant.yaml

Let's test to make sure we can access the newly created tenant

# mc alias set myminio https://minio.tenant-lite.svc.cluster.local:443 minio minio123
Added `myminio` successfully.

Create a bucket to verify write access works

root@ubuntu:/# mc mb myminio/ajtest
Bucket created successfully `myminio/ajtest`.

Check the status of the MinIO cluster and the erasure sets

root@ubuntu:/# mc admin info myminio
●  myminio-pool-0-0.myminio-hl.tenant-lite.svc.cluster.local:9000
   Uptime: 3 minutes
   Version: 2024-01-09T19:57:37Z
   Network: 4/4 OK
   Drives: 2/2 OK
   Pool: 1

●  myminio-pool-0-1.myminio-hl.tenant-lite.svc.cluster.local:9000
   Uptime: 3 minutes
   Version: 2024-01-09T19:57:37Z
   Network: 4/4 OK
   Drives: 2/2 OK
   Pool: 1

●  myminio-pool-0-2.myminio-hl.tenant-lite.svc.cluster.local:9000
   Uptime: 3 minutes
   Version: 2024-01-09T19:57:37Z
   Network: 4/4 OK
   Drives: 2/2 OK
   Pool: 1

●  myminio-pool-0-3.myminio-hl.tenant-lite.svc.cluster.local:9000
   Uptime: 3 minutes
   Version: 2024-01-09T19:57:37Z
   Network: 4/4 OK
   Drives: 2/2 OK
   Pool: 1

Pools:
   1st, Erasure sets: 1, Drives per erasure set: 8

8 drives online, 0 drives offline

Final Thoughts

We built the MinIO Operator with simplicity and ease of use as guiding principles. We don't bundle in extra functionality that merely increases attack surface, and the default installation is as locked down as possible out of the box, eliminating the risks of provisioning with overly permissive access – the MinIO Operator has only the permissions it needs to do its work.

In some environments, it may be necessary to lock down the MinIO Operator even further. The only component that truly needs cluster-role permissions is the console pod; with those removed, you use a combination of kubectl minio and mc to manage your cluster.

Alternatively, if required by company security standards, you can consider using a Role instead of a ClusterRole and deploying the Operator and tenants within the same namespace. This is not what we would recommend as optimal, but it's a workable compromise if you need to restrict everything to a single namespace.
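As a sketch of that compromise, a namespace-scoped Role and RoleBinding would look roughly like the following. The names are illustrative and the rule list is deliberately abbreviated; the real Operator needs more resources and verbs than shown here:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: minio-operator-role
  namespace: minio-operator      # same namespace as the Operator and tenants
rules:
  - apiGroups: ["minio.min.io"]
    resources: ["tenants"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: [""]
    resources: ["secrets", "services", "pods"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: minio-operator-binding
  namespace: minio-operator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: minio-operator-role
subjects:
  - kind: ServiceAccount
    name: minio-operator
    namespace: minio-operator
```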

The MinIO Kubernetes deployment is primarily designed for use with the Operator. The MinIO Operator not only streamlines the initial deployment of your MinIO cluster, it also aids in upgrading the cluster when a new version is out. Most importantly, the MinIO Operator allows you to deploy multiple MinIO tenants, letting you logically separate data among teams and departments by setting limits on how much space each may use, the number of buckets, and more. It really is a powerful resource.

If you have any questions on the MinIO Kubernetes Operator be sure to reach out to us on Slack!
