Successful Strategies for the Hybrid Cloud

The Hybrid Cloud is a hot term these days. It should be. Ultimately it will represent the vast majority of enterprise cloud architectures (Gartner puts the figure at more than 90%).

The public cloud will continue to grow. The private cloud will continue to grow. The edge will continue to grow. All fueled by the data that continues to grow.

While the hybrid cloud has many implications for the enterprise, the simplest truth is this: you need your storage to work everywhere - public, private, edge.

Customers, and developers in particular, want the same experience whether in the public cloud or the private cloud. They don’t care about the “origin” of the solution (private or public) - it just has to work seamlessly in the other place.

For some, this is an aha moment: the realization that the workload, the economics, the performance and the security requirements should define the appropriate cloud.

For us, it is the natural order of things. We didn’t set out to be a leader in the hybrid cloud; indeed, we targeted the private cloud as our source of distinction. But our customers had other ideas.

It began as a curiosity. We saw a steady but growing number of MinIO instances in places like AWS, GCP and Azure - not to mention all of the other clouds. Then, over the course of a few quarters, that number exploded.

It wasn’t entirely clear why these customers were running object on object or object on EBS, but it wasn’t a novelty; it was pervasive and persistent. As we dug into it, developers had a similar story.

There was consistency in MinIO. In the interface, in the performance, in the API calls. They valued that consistency more than they minded the incremental cost.

It made sense and we responded by continuing to develop features and capabilities that enabled it to grow - to the point where we are likely the most deployed object store in the hybrid cloud.

Still, the Hybrid Cloud’s “it” moment has some risk. Expectations can become inflated, resulting in disappointment. Gartner cleverly articulated this phenomenon, and it has played out across a number of different technologies (take AI, for example).

To help customers through this hype cycle, and to compress the highs and lows, we have developed a handy guide to what’s really required to deliver the hybrid cloud. It is based on our experience delivering the hybrid cloud to tens of thousands of customers and community members.

Software Defined

To run in someone else’s cloud (the public cloud) you need to run on someone else’s hardware. Ideally, all of their hardware. To run on your client’s private cloud you need to run on their hardware. Ideally, any of their hardware. If you require your own boxes, you are not software defined. If you require the customer to select from three or four tightly defined boxes, any deviation from which requires the involvement of professional services, you are not software defined.

Software abstracts the backend physical storage. Software defines the user experience. If you are not software defined you don’t have a legitimate hybrid cloud solution.


Kubernetes-Native

The hybrid cloud is Kubernetes-native. This is completely consistent with the first point around software-defined but goes deeper than that requirement. Kubernetes is as much a philosophy as it is a technology. There are those that are philosophically aligned (microservices, S3 API, containerization) and there are those that are not. If your company pre-dates containers, odds are you are not Kubernetes-native. This is one of the reasons we have so much respect for VMware. They are not Kubernetes-native, but they are all-in on Kubernetes. They are going to be a major player in the space because they are philosophically aligned. Others will not.
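As a rough sketch of what Kubernetes-native means in practice, a software-defined object store deploys like any other stateful containerized workload. The names, replica count, storage size and server arguments below are illustrative, not an official manifest:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: minio
spec:
  serviceName: minio
  replicas: 4
  selector:
    matchLabels:
      app: minio
  template:
    metadata:
      labels:
        app: minio
    spec:
      containers:
        - name: minio
          image: minio/minio
          # Illustrative distributed-mode invocation across the pod set.
          args: ["server", "http://minio-{0...3}.minio.default.svc.cluster.local/data"]
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 100Gi
```

No special hardware, no appliance: the orchestrator schedules the storage the same way it schedules the applications that consume it.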

Simple, Strictly Consistent User and API Experiences

To operate in a hybrid world you have to be strictly consistent in your user experience. By strictly, we mean exactly, but we also mean comprehensively. API calls need to do the same thing no matter where they are. In the case of the cloud this means S3 calls.

We want to be totally upfront that this is not entirely possible, despite the claims of various companies (ourselves included, in moments of exuberance). Some aspects of how the public cloud operates have no relevance on the private cloud, and vice versa. Because we are private-cloud focused, there are elements (very rare, and always for a reason) where we are not strictly consistent with the S3 API. Still, while some interpretation is unavoidable, the behavior should not differ depending on your cloud.

You must be consistent in your developer experience. Doing this correctly reduces developer friction, speeds IT reviews and delivers application interoperability.
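The point can be sketched in a few lines of Python. This is an illustration, not a real SDK: `S3Config` and `put_object` here are stand-ins for a client such as boto3 or the MinIO SDK. The application code is identical on every cloud; only the endpoint and credentials in the config change.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class S3Config:
    """Everything cloud-specific lives in the config, not the code."""
    endpoint: str
    access_key: str
    secret_key: str


def put_object(cfg: S3Config, bucket: str, key: str, data: bytes) -> dict:
    # In real code this would issue an S3 PutObject request via an SDK;
    # stubbed here to show that the call shape is the same everywhere.
    return {"endpoint": cfg.endpoint, "bucket": bucket,
            "key": key, "size": len(data)}


# Hypothetical endpoints: one public cloud, one private cloud.
aws = S3Config("https://s3.amazonaws.com", "ACCESS", "SECRET")
onprem = S3Config("https://minio.example.internal:9000", "ACCESS", "SECRET")

# Identical application code against both clouds.
public_result = put_object(aws, "logs", "day1.json", b"hello")
private_result = put_object(onprem, "logs", "day1.json", b"hello")
```

Swapping clouds is a configuration change, not a code change; that is the property a strictly consistent API experience buys you.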

The only way to do that is with a relentless focus on simplicity. Simplicity is hard. It takes work, discipline and above all commitment. But simplicity scales, both for automation and for humans. Some big-name appliance vendors have invested in their user experiences with great results; it won’t help them in the hybrid world until they shed the hardware and go all-in on software.


Lightweight

The term Hybrid Cloud is generally applied to public and private clouds, but edge clouds are part of the equation too. To run at the edge the software must be extremely lightweight. This allows it to be packaged with the application stack and to thrive at the edge.


High Performance

We don’t include high performance simply because we are the world’s fastest object store; we include it because it expands the pool of applications that you can pair with object storage. Object storage is the storage class of the cloud. This is well documented and we have written on it as well. AWS S3 pioneered performance and in doing so attracted hundreds of applications to the storage medium. Need we remind anyone that Snowflake runs on AWS? If you are not performant you cannot run Spark, Presto, TensorFlow or any of the other AI/ML and big data applications that have come to define the enterprise landscape. Even the “secondary” storage use cases demand performance, including Splunk, Veeam, Teradata, Commvault and others.

It may seem out of place, but performance matters to the hybrid cloud because it creates consistency at scale.


The purpose of this list is not to pick and choose. The list is to be taken as a whole; every element needs to be met. Fall short on one, and you are not going to be a legitimate player in the hybrid space.

Some will see this as self-serving. We would challenge that by saying the public cloud players, particularly AWS, could make a really strong play here, but they would have to abandon the concept inherent in Outposts. The other public players could too, but they would also have to make some changes, and there seems to be reluctance there.

Nonetheless, the hybrid cloud is a challenge worthy of our time and our investment because it is here to stay. It will survive inflated expectations because if anything, it may prove more important than we project today.
