There are two forces fundamentally remaking the technology landscape today. One is Kubernetes; the other is high-performance object storage. Together they power (or are shaped by, depending on your perspective) modern, data-rich applications such as AI/ML workloads and application log analytics. Either way, modern applications need Kubernetes and object storage, and Kubernetes and object storage owe their rise in part to those same modern applications.
They are symbiotic and they are tech’s new building blocks.
Kubernetes, in just a couple of years, has fundamentally altered the way we manage our computing, networking and storage infrastructure. It is THE dominant framework for building, packaging and deploying applications, and it is expressly designed for an environment of continuous change. It cemented that position through its ability to abstract the physical infrastructure from the application stack in a way that facilitates collaboration between development, operations and IT. Its meteoric rise is also due in large part to the maturation of the technologies that built the cloud: elasticity, scalability, resilience and self-service configurability via declarative representations and APIs.
Kubernetes, while already dominant, is still growing. Yes, there is a ton (technical term) of deployed technology that is not Kubernetes today. No, Kubernetes does not represent the majority of the tech landscape.
Kubernetes does, however, dominate the majority of new development. New development eats old development far faster than it did even five years ago. Cycles continue to compress as data growth renders older technologies obsolete.
Kubernetes is going to keep coming. While there are lots of businesses built around it, there is not one entity behind it (a la Docker) that needs a business model. This ensures it will be the dominant paradigm for a decade at least.
The second piece of the puzzle is high-performance object storage. The qualifier "high performance" distinguishes it from the traditional, slow, archival object storage that was designed to be a step above tape.
High-performance object storage is THE default storage platform for Kubernetes. Sure, the Container Storage Interface (CSI) lets you get at SAN/NAS, but SAN/NAS is fundamentally in decline due to scalability concerns and an outdated API (POSIX).
Need proof? There is no CSI driver for object storage. It doesn't need one: applications address object storage directly over the network through its API, with no volume to mount. Kubernetes' storage sibling is modern, high-performance object storage. We have detailed the reasons at length, but here is a quick summary:
- Kubernetes and modern object storage let operators manage storage through the Kubernetes interface, with Kubernetes handling everything from provisioning to volume placement.
- Modern object storage like MinIO is multi-tenant by nature. Multi-tenancy allows multiple customers to use a single instance of an application, and when implemented correctly it can reduce operational overhead, decrease costs and reduce complexity, especially at scale, provided it delivers strict resource isolation. If Kubernetes isn't managing the underlying infrastructure, then it is not truly cloud native. This disqualifies appliance vendors that merely bolt on a CSI or Operator integration.
True multi-tenancy isn't possible unless the storage system is extremely lightweight and can be packaged with the application stack. If the storage system consumes too many resources or exposes too many APIs, it won't be possible to pack many tenants onto the same infrastructure.
- One of Kubernetes’s advantages is that it has proved itself at scale. Kubernetes can also be used to manage storage scaling, but only if the underlying storage system integrates with Kubernetes and hands off provisioning and decommissioning capabilities.
- One of the core tenets of Kubernetes and cloud native systems in general is to manage as much as possible through automation. For a storage system to be truly cloud native, it must integrate with Kubernetes through APIs and allow for dynamic, API-driven orchestration.
- HTTPS/RESTful APIs are the basic method of communication between applications in the Kubernetes world. For example, Istio and Envoy manage service discovery and routing based on RESTful API endpoints. Modern object storage is built on a RESTful API (S3) from the ground up. Traditional SAN/NAS systems don't fit this model.
- Modern object storage is designed to deliver end-to-end encryption: on the wire and at rest. It also comes with advanced identity and policy management at object-level granularity. This is a major departure from traditional systems that relied on the operating system kernel to enforce protection, an approach that is overly complex, hard to automate and, by extension, more prone to failure.
- Finally, and perhaps most importantly, for an object storage solution to be cloud native it has to run entirely in user space with no kernel dependencies. This is not how most object storage systems have been built, particularly the hardware appliances. Nonetheless, if you want to containerize your storage and deploy it on any Kubernetes cluster, you have to abide by these constraints. By definition, solutions that require kernel patching or specialized hardware won't be cloud native.
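The provisioning point above can be sketched concretely. Below is a minimal, illustrative pair of Kubernetes manifests: a StorageClass that defers volume binding so the scheduler handles placement, and a PersistentVolumeClaim a storage pod might issue. The names (`local-fast`, `data-0`), the provisioner and the sizes are hypothetical, not taken from any particular deployment:

```yaml
# A StorageClass delegates provisioning to a driver. Binding is deferred
# until a pod is scheduled, so Kubernetes decides volume placement.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-fast                        # illustrative name
provisioner: kubernetes.io/no-provisioner # e.g. locally attached NVMe
volumeBindingMode: WaitForFirstConsumer
---
# A storage server pod simply claims a volume against that class;
# Kubernetes handles provisioning, binding and eventual reclamation.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-0
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-fast
  resources:
    requests:
      storage: 1Ti
```

The operator's entire interaction with storage here is declarative YAML through the Kubernetes API, which is exactly the point of the first bullet.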
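To make the RESTful point concrete: every S3 operation, on MinIO or AWS alike, is an ordinary HTTP request plus a set of signed headers, computable with nothing but a standard library. The sketch below builds AWS Signature V4 headers for a GET of a single object. The host, bucket and key are placeholders, and a real client would also URI-encode the path and handle query strings; this is an illustration of the protocol shape, not a production signer.

```python
import hashlib
import hmac
from datetime import datetime, timezone

def sign_s3_get(host, bucket, key, access_key, secret_key,
                region="us-east-1", now=None):
    """Build AWS Signature V4 headers for a plain HTTP GET of one object."""
    now = now or datetime.now(timezone.utc)
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    date_stamp = now.strftime("%Y%m%d")
    payload_hash = hashlib.sha256(b"").hexdigest()  # empty body for GET

    # 1. Canonical request: method, path, query, headers, signed headers, payload.
    #    (Path-style addressing; assumes the key needs no URI encoding.)
    canonical_headers = (f"host:{host}\n"
                         f"x-amz-content-sha256:{payload_hash}\n"
                         f"x-amz-date:{amz_date}\n")
    signed_headers = "host;x-amz-content-sha256;x-amz-date"
    canonical_request = "\n".join([
        "GET", f"/{bucket}/{key}", "",
        canonical_headers, signed_headers, payload_hash])

    # 2. The string to sign binds the request hash to a date/region/service scope.
    scope = f"{date_stamp}/{region}/s3/aws4_request"
    string_to_sign = "\n".join([
        "AWS4-HMAC-SHA256", amz_date, scope,
        hashlib.sha256(canonical_request.encode()).hexdigest()])

    # 3. Derive the signing key via an HMAC chain, then sign.
    def _hmac(k, msg):
        return hmac.new(k, msg.encode(), hashlib.sha256).digest()
    k = _hmac(("AWS4" + secret_key).encode(), date_stamp)
    for part in (region, "s3", "aws4_request"):
        k = _hmac(k, part)
    signature = hmac.new(k, string_to_sign.encode(),
                         hashlib.sha256).hexdigest()

    return {
        "x-amz-date": amz_date,
        "x-amz-content-sha256": payload_hash,
        "Authorization": (f"AWS4-HMAC-SHA256 Credential={access_key}/{scope}, "
                          f"SignedHeaders={signed_headers}, "
                          f"Signature={signature}"),
    }
```

Attach these headers to `GET https://host/bucket/key` and you have a complete S3 read: pure HTTP, no kernel module, no block protocol, which is why this API composes so naturally with service meshes and Kubernetes networking.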
In case that wasn’t clear, you can’t containerize an appliance. That means you cannot orchestrate an appliance. That means you cannot adopt Kubernetes if you keep buying appliances.
When VMware is turning the ENTIRE ship around and racing to adopt Kubernetes - that is a signal. Appliance vendors that built their business on VMware are looking at a major disruption.
If you run an appliance today, you need to start the move to software-defined storage. That means commodity hardware. No need to fret: you will save money in the process and, if you select your software vendor carefully, you will retain the operational efficiencies you have become accustomed to.
If you don’t, your competitiveness will be impaired and your organization will suffer. Kubernetes will kill the appliance. It is just the nature of technology.
Software eats everything.
To learn more about how we price MinIO and how we support multi-petabyte deployments visit our pricing page.