Two Things Can Be True at the Same Time
There is an interesting report out from McKinsey on the impending impact of AI on an enterprise’s cloud investments.
Early in the piece, McKinsey states: “While the possible impact varies by sector, adopting cloud represents an opportunity for the average company to increase profitability by 20 to 30 percent.”
To many, this would read as a clarion call to put everything in the public cloud - but there is significant nuance in the next sentence:
“Many digital-native companies are already taking full advantage of this opportunity. Nearly one-third of the EBITDA value gain over the past decade in the S&P 500 has come from just eight digital-native companies that utilized cloud-like infrastructure.”
Cloud-like infrastructure.
Not “because they used a public cloud”, but because they adopted the cloud operating model. The report is excellent and we recommend it highly.
There is another truth we see time and time again: by repatriating workloads from the cloud, companies save on average 60%. This is what we see consistently from the teams that have published their numbers, such as 37Signals, X.com, Prerender, and Ahrefs. It is also what one of the biggest enterprise security companies (and our customer) saved (but didn't publish).
So the question becomes: how do you increase profitability both by going to the cloud AND by leaving it?
The answer lies in the adoption of the cloud operating model. Adopting the cloud operating model changes the way you think about infrastructure, the developer experience, and end-to-end technical efficiency (from the data team to IT). Training, tuning, and deploying generative AI models requires proximity to real-time business processes and data. The McKinsey report also predicts that over the next decade close to half of all data will continue to be generated on-premises. Depending on factors such as data privacy and point-of-sale systems, an organization should consider its on-premises vs. public cloud data architecture carefully.
The cloud operating model delivers infrastructure as code. That means smart software and dumb hardware. Google (TPU), AWS (Graviton), and Azure (FPGA) all have their own silicon, but these are designed for general-purpose workloads. Software is what makes them sing. That is the model going forward: inexpensive yet powerful commodity hardware that is frankly disposable and reusable. That is why you don’t find appliances in the public cloud or in the cloud operating model.
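The “smart software, dumb hardware” idea is the pattern behind infrastructure-as-code tools: desired state is data, and generic software reconciles whatever commodity machines are present toward it. Here is a minimal sketch in Python; the node names, fields, and functions are illustrative, not any real provisioning API:

```python
# Toy sketch of the infrastructure-as-code pattern: declare desired state,
# then let software compute the actions that move the fleet toward it.
# All names here are hypothetical, for illustration only.

desired = {
    "node-1": {"role": "storage", "disks": 8},
    "node-2": {"role": "storage", "disks": 8},
    "node-3": {"role": "compute", "disks": 2},
}

def reconcile(desired_state, actual_state):
    """Return the actions needed to move actual state to desired state."""
    actions = []
    for node, spec in desired_state.items():
        if actual_state.get(node) != spec:
            actions.append(("configure", node, spec))
    for node in actual_state:
        if node not in desired_state:
            # Hardware is disposable: drift gets decommissioned, not babied.
            actions.append(("decommission", node))
    return actions

actual = {
    "node-1": {"role": "storage", "disks": 8},
    "node-4": {"role": "compute", "disks": 2},
}
for action in reconcile(desired, actual):
    print(action)
```

The intelligence lives entirely in the reconcile loop; the hardware just has to show up, which is why an appliance (smart hardware, opaque software) has no place in this model.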
The cloud operating model means sharing and reusing preconfigured tooling and application patterns across developer disciplines. This approach includes a unified consumption layer with self-service for developers and a standardized tech stack to support speed, agility, and security.
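A unified consumption layer with self-service can be sketched as a catalog of preconfigured patterns that developers draw from, with the standardized stack (including security defaults) baked in. The catalog entries and fields below are hypothetical:

```python
# Toy sketch of a self-service catalog of preconfigured application patterns.
# Pattern names and fields are illustrative, not a real platform API.

CATALOG = {
    "web-service": {"runtime": "container", "replicas": 3, "tls": True},
    "batch-job": {"runtime": "container", "replicas": 1, "tls": False},
}

def provision(pattern, team, **overrides):
    """Developers self-serve from the standardized catalog; defaults like
    TLS come preconfigured rather than being reinvented per team."""
    if pattern not in CATALOG:
        raise KeyError(f"unknown pattern: {pattern}")
    return {**CATALOG[pattern], **overrides, "owner": team}

print(provision("web-service", team="payments", replicas=5))
```

The point is that speed, agility, and security come from reuse: teams override what is specific to them and inherit everything else.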
The cloud operating model also enhances technical efficiency through managed services (databases, key-value stores, security). It requires automation and standardization of IT processes, such as deployment, scaling, and management. It facilitates CI/CD practices, enabling frequent and automated code deployments. Each of these (and others) makes for a more efficient organization.
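The automation and CI/CD point above can be sketched as a toy pipeline where every commit flows through the same standardized stages. The stage functions and artifact names are hypothetical stand-ins for real tooling:

```python
# Toy sketch of a standardized, automated deployment pipeline (CI/CD).
# Stage names, artifact format, and environments are illustrative only.

def build(commit):
    """Produce a deployable artifact from a commit."""
    return {"commit": commit, "artifact": f"app-{commit}.tar.gz"}

def run_tests(artifact):
    # In a real pipeline this runs the test suite; here it always passes.
    return True

def deploy(artifact, environment):
    return f"deployed {artifact['artifact']} to {environment}"

def pipeline(commit, environments=("staging", "production")):
    """Every commit flows through the same stages: build, test, deploy."""
    artifact = build(commit)
    if not run_tests(artifact):
        return ["tests failed; deployment halted"]
    return [deploy(artifact, env) for env in environments]

print(pipeline("a1b2c3"))
```

Because the stages are code, they are repeatable and auditable, which is what makes frequent deployments safe rather than heroic.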
The net of it is that two things can be true at the same time. You can “go to the cloud” and become more profitable AND you can “repatriate” and be more profitable. The common denominator is the model. If you want to talk about which workloads belong where, hit us up at hello@min.io and we can share our thinking. You will find us remarkably honest in our advice. If you belong in the public cloud, we have no problem saying that, because if you are successful, at some point you will want to come back out. It is all about what you optimize for.