Develop for Red Hat OpenShift with CRC and MinIO
MinIO has always been a pioneer in high-performance and interoperable cloud-native object storage that is versatile and agile. MinIO runs on a myriad of platforms such as Kubernetes, AWS, GCP, Azure, bare metal Linux and a host of other environments.
Lately, there has been a trend in the industry to bring data “closer” to home. The result is that organizations now want to keep their data on servers that they own, in their own datacenter or at a colocation provider. The primary reason is the out-of-control cost of the cloud coupled with the current economic climate. For most applications, as long as the workload is well understood, it is possible to achieve the same level of scalability and performance on-prem as in the cloud - at a fraction of the cost.
But this brings up a conundrum: one of the benefits of the cloud is that the infrastructure is more or less managed by the cloud provider. For instance, managed Kubernetes services such as EKS and GKE take care of upgrades, downgrades, adding and removing nodes, and other backend operations for you. On-prem Kubernetes, on the other hand, with perhaps dozens or hundreds of clusters, can add a good chunk of tech debt to your engineering and operations teams. You need to bootstrap your own Kubernetes infrastructure and take ownership of management, maintenance and other operations. This can be quite a challenge. Wouldn’t it be great if something made our lives a little easier in the process?
This is where Red Hat OpenShift is a game changer. It not only gives you the ability to manage your own Kubernetes clusters but also lets you do it on your own on-prem hardware. You must ensure the various components are up to date on a regular basis, but the OpenShift platform, with its DevOps centric yet Developer oriented approach to architecture, makes an excellent choice for on-prem Kubernetes clusters.
The MinIO Operator and Kubernetes plugin are certified for use with OpenShift, making it easy to incorporate MinIO within existing workflows. Our customers frequently run OpenShift in a multi-cloud configuration that leverages on-premise and public cloud resources. Running MinIO on OpenShift enables enterprises to achieve cloud-native elasticity on their hardware or cloud instance of choice, balancing cost, capacity and performance.
By running MinIO on OpenShift, you gain software-defined scalability and automation using Kubernetes orchestration with MinIO providing the object storage. With storage part of the software-defined infrastructure, you can unify the deployment and management of AI/ML, analytics, and other modern data workloads. Instead of building multiple data silos and duplicating data between them, these applications can share the same on-premise MinIO deployment, which aids in the security and resiliency of data. Finally, this approach avoids cloud lock-in. You want to be able to move your data to whatever environment best fits your workloads and business objectives. By deploying MinIO on top of OpenShift in multiple locations/clouds, you can move the data seamlessly using site-to-site replication. This ensures you always have access to the best tools for the job, whether that be on the public cloud, private cloud or a colo.
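As a quick sketch of what that looks like, once two independent MinIO deployments are registered as mc aliases, site replication can be configured with a single command. The alias names minio-dc1 and minio-dc2 below are placeholders for your own sites:
$ mc admin replicate add minio-dc1 minio-dc2
$ mc admin replicate info minio-dc1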
That being said, when you want to develop and have your developers deploy components to OpenShift, setting it up can be prohibitively resource intensive. You need an eight to ten node cluster to get the entire infrastructure up and operational, which might be a detriment to development. Rather than fully deploy OpenShift, wouldn’t it be great if developers could test apps locally on their laptops before going into production?
To achieve this, in this tutorial we’ll leverage Red Hat CodeReady Containers (CRC), which gives developers the ability to test and develop on OpenShift using a local computer. Although it requires some expertise to get OpenShift up and running, this blog post will walk you through deploying MinIO to OpenShift without the overhead of the full-blown OpenShift installer you would rely on in a production environment.
Okay, let's go!
Use a CRC OpenShift cluster
When using CRC, you have a choice of runtimes to deploy the resources, and in this case let’s use the OpenShift Container Platform. Please be sure the following minimum requirements are met on the machine where you are developing.
- 4 CPU cores
- 10 GB of memory
- 50 GB of free storage space
We’ll use the CentOS operating system in the following steps.
Download CRC from Red Hat.
Go to the download location and untar it
$ cd ~/Downloads
$ tar xvf crc-linux-amd64.tar.xz
Create a ~/bin directory and copy the crc executable to that directory
$ mkdir -p ~/bin
$ cp ~/Downloads/crc-linux-*-amd64/crc ~/bin
Of course, don’t forget to add ~/bin to your PATH
$ export PATH=$PATH:$HOME/bin
$ echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc
Run setup to set the basic configurations
$ crc setup
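If your workstation has more resources than the minimums listed above, you can optionally give the instance more before starting it. This is just a sketch using crc config; memory is specified in MiB and disk size in GiB, and the values here are arbitrary examples:
$ crc config set cpus 6
$ crc config set memory 12288
$ crc config set disk-size 60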
Finally, start the CRC instance. Be sure to note the credentials that are output
$ crc start
Configure OpenShift Components
In order to communicate with the cluster, we’ll use the oc CLI instead of kubectl.
Manual Method
Download the binary that suits your operating system from the downloads page.
Select the version to match the OpenShift version you are running in CRC.
Click on Download Now. Once downloaded, untar the archive.
$ tar xvzf <file>
Move the oc binary to ~/bin
$ mv oc ~/bin
Cached Method
In addition to the above, there is a Cached Method that in some ways is simpler than the manual method. This method caches the required authentication credentials along with the oc binary to talk to the CRC cluster without the need to download anything manually.
You can add the cached oc to your PATH using the following command
$ crc oc-env
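This command only prints the required environment setup; to apply it to your current shell, you typically evaluate its output, for example:
$ eval $(crc oc-env)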
Once you pick a method to install the oc CLI, use it to log in to the cluster.
The crc start command should’ve output some credentials; use them here. If you did not save them, run the below command to retrieve them
$ crc console --credentials
Now you can log in as the developer user
$ oc login -u developer https://api.crc.testing:6443
From now on, we can use the oc CLI to interact with CRC. Let’s run a few commands to verify everything is working as expected.
Change the oc context to crc-admin
$ oc config use-context crc-admin
Verify you are a kubeadmin
$ oc whoami
kubeadmin
Check the status of the cluster operators as well
$ oc get co
Okay, now that we have all the foundations set up, let’s go ahead and install MinIO.
Install MinIO
We’ll install MinIO in the same CRC OpenShift cluster as our other resources. You only need three components to get started with this simple installation:
- StatefulSet
- Service
- Route
First, let’s go ahead and create the StatefulSet. Save the below file as statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: minio
  name: minio
  namespace: default
spec:
  podManagementPolicy: OrderedReady
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: minio
  serviceName: minio
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: minio
    spec:
      containers:
        - args:
            - server
            - /data
          env:
            - name: MINIO_KMS_SECRET_KEY
              value: my-minio-key:oyArl7zlPECEduNbB1KXgdzDn2Bdpvvw0l8VO51HQnY=
          image: minio/minio:RELEASE.2023-05-27T05-56-19Z
          imagePullPolicy: IfNotPresent
          name: minio
          ports:
            - containerPort: 9000
              hostPort: 9000
              protocol: TCP
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /data
              name: data
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      terminationGracePeriodSeconds: 30
  updateStrategy:
    rollingUpdate:
      partition: 0
    type: RollingUpdate
  volumeClaimTemplates:
    - apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        creationTimestamp: null
        name: data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
        storageClassName: crc-csi-hostpath-provisioner
        volumeMode: Filesystem
      status:
        phase: Pending
Once it’s saved, apply it with the oc command
$ oc apply -f statefulset.yaml
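Before moving on, it’s worth checking that the pod and its persistent volume claim came up. The label below matches the StatefulSet defined above:
$ oc get pods -l app=minio
$ oc get pvc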
Next let’s create the service.yaml so we can access MinIO deployed via the StatefulSet.
apiVersion: v1
kind: Service
metadata:
  name: minio
  labels:
    app: minio
spec:
  type: NodePort
  ports:
    - port: 9000
      name: minio
      nodePort: 30080
  selector:
    app: minio
Once it’s saved, apply it with the oc command
$ oc apply -f service.yaml
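You can confirm the service and its NodePort with
$ oc get svc minio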
Last but not least, create the route.yaml so that we can access the MinIO service externally using the mc client. A route’s default hostname is built from its name and namespace, so a route named my-route in the default namespace is exposed at my-route-default.apps-crc.testing, which is the URL we use below.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-route
  namespace: default
  labels:
    app: minio
spec:
  to:
    kind: Service
    name: minio
  port:
    targetPort: minio
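Apply it the same way as the other resources, then confirm the hostname OpenShift assigned to the route
$ oc apply -f route.yaml
$ oc get route my-route -n default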
Access it through the mc CLI using the following command
$ mc alias set myminio http://my-route-default.apps-crc.testing minioadmin minioadmin --insecure
Added myminio successfully.
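To quickly verify the deployment end to end, you can create a bucket and copy a file through the route. The bucket and file names here are just examples:
$ mc mb myminio/testbucket --insecure
$ mc cp /etc/hosts myminio/testbucket --insecure
$ mc ls myminio/testbucket --insecure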
Upgrading CRC
As with any software, you want to ensure you are always up to date with the latest and greatest, and that includes CRC. Upgrading CRC is straightforward.
Download the latest version of CRC from Red Hat.
Delete the existing CRC instance, which will cause a loss of data stored on that instance – so be careful
$ crc delete
Untar the downloaded CRC archive using the steps from the install process. Once you have the new crc binary, replace the existing one in ~/bin with it. Verify the version with the command below
$ crc version
Set up and start the new CRC instance
$ crc setup
$ crc start
Flexibility to Develop and Deploy to Red Hat OpenShift
Although Red Hat CRC is pretty cool and comes with a Swiss Army knife of features to get you started quickly in your local environment, it does have some caveats that are worth mentioning.
Needless to say, the CRC cluster is ephemeral in nature. Do not deploy it in production or store any production data on it; use it for testing purposes only. For the best possible performance, go through the full end-to-end production install of an OpenShift cluster on properly planned production hardware.
Upgrading to newer versions of OpenShift Container Platform is currently not supported, and attempting it may cause unforeseeable edge-case issues that are hard to reproduce. Also, OpenShift Container Platform runs inside a local virtual machine instance, so if you need to reach network resources outside of that instance, you may have to do some additional configuration to make that work.
All that being said, our recommendation is still to use CRC so that you can quickly get started with OpenShift and MinIO. Don’t take our word for it though — build it yourself, and join our Slack channel here so we can help you along the way.