The Rise of Iceberg: Transforming Data Architectures

Open table formats like Apache Iceberg, Apache Hudi, and Delta Lake have become the de facto standard for how query engines read and write data lake tables. However, the recent adoption of Iceberg’s REST Catalog API by query engines like Snowflake and Databricks has shifted the playing field in favor of Iceberg.

Iceberg's success stems not just from these newsworthy announcements, but also from its ability to address key issues that plagued earlier formats. For instance, Iceberg provides robust support for ACID transactions, schema evolution, and efficient metadata management, features that were previously difficult to achieve at data lake scale. Its rise in a field thick with admirable contestants is akin to how Kubernetes emerged as the dominant container orchestration platform over Docker Swarm, pushing the boundaries of what is possible with containerized applications.
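
To make the schema evolution point concrete, here is a minimal sketch using the PyIceberg library against an Iceberg REST catalog. The catalog endpoint, warehouse location, and table name are hypothetical placeholders, not references to any real deployment.

```python
# Minimal sketch: Iceberg schema evolution with PyIceberg.
# The catalog URI, warehouse path and table name are hypothetical.
from pyiceberg.catalog import load_catalog
from pyiceberg.types import StringType

catalog = load_catalog(
    "rest",
    uri="http://localhost:8181",   # assumed REST catalog endpoint
    warehouse="s3://warehouse/",   # assumed object storage location
)

table = catalog.load_table("analytics.events")

# Adding a column is a metadata-only change: no data files are rewritten,
# and the update is committed atomically as a new version of the table metadata.
with table.update_schema() as update:
    update.add_column("country_code", StringType())
```

Because every change, whether a schema update or a data commit, lands as new table metadata, readers continue to see a consistent snapshot while writers commit, which is what makes these guarantees workable at scale.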

The Importance of Storage

Like a bowling ball thrown onto a waterbed, Iceberg’s impact has generated an equivalent shift in other areas of the market. By identifying a clear winner among the open table formats, the market has, like it or not, also elevated the importance of storage. Now, more than ever, if a storage solution cannot support these open table formats, it risks obsolescence in modern data architectures. Appliances, storage not built for the cloud, and systems that are underperforming or operationally complex have no place in this new hierarchy.

Only performant, scalable, cloud-native storage can keep up with the innovation being driven by the growing adoption of open table formats in the modern datalake.

The Commoditization of Query Engines

In this new era, it's not that query engines have become less prevalent, but rather that they have become more commoditized. This commoditization liberates users from being confined to SQL, Python, or any specific query engine, enabling them to pick and choose engines based on features, performance and use case, perhaps even running more than one engine against the same data for different purposes. As a result, we can expect a proliferation of compute options on top of shared data stores, diminishing the dominance of expensive proprietary compute solutions.
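
As a rough illustration of that flexibility, the sketch below, again using PyIceberg with the same hypothetical catalog and table as above, hands one Iceberg table to two different engines: pandas for a quick dataframe workflow and DuckDB for ad hoc SQL. The column name in the SQL query is likewise a placeholder.

```python
# Sketch: two engines reading the same Iceberg table (all names are hypothetical).
from pyiceberg.catalog import load_catalog

catalog = load_catalog("rest", uri="http://localhost:8181")  # assumed endpoint
table = catalog.load_table("analytics.events")

# Engine 1: materialize a scan as a pandas DataFrame for exploratory analysis.
df = table.scan().to_pandas()

# Engine 2: register the same scan with DuckDB and query it with SQL.
con = table.scan().to_duckdb(table_name="events")
rows = con.execute(
    "SELECT country_code, count(*) FROM events GROUP BY country_code"
).fetchall()
```

Either engine could be swapped for Spark, Trino, Dremio or anything else that speaks Iceberg, without copying the data or changing how it is stored.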

Why This Shift is Good for Users

The end of expensive, proprietary compute solutions that lock users into specific vendor ecosystems is increasingly likely. Users will be able to pick from a vast array of query engines based on their organization’s needs and requirements. This, in turn, will force innovation in the compute layer as vendors compete on new features and capabilities.

More options in the compute layer mean better choices and more competitive pricing for users. The major vendors will find it challenging to maintain high margins on compute, leading to cost reductions and greater innovation. Disaggregation, more often than not, leads to cost savings.

Why This Shift is Good for AI

As data lakes expand, driven by AI's growing data demands, scalable storage becomes crucial. Organizations focused on AI need to manage petabytes of raw data, necessitating robust and scalable storage systems. Iceberg’s architecture supports this need, accommodating the massive structured and unstructured data required for advanced AI applications. With Retrieval-Augmented Generation (RAG) for LLMs becoming more prevalent, the ability to cross-reference vast, diverse datasets is vital for building context and generating insights in AI-driven Q&A systems.

The Rise of Iceberg Means the Rise of Storage

Threading through all of this greedy data gobbling will be the need for performant, scalable and highly available storage. This is the brave new world that Iceberg is ushering in: a world where object storage is primary and query engines are commoditized, a world that brings more flexibility and cost-efficiency to users and opens new possibilities for AI applications. Let us know if you have any questions as you build your modern Iceberg datalake with MinIO, at hello@min.io or on our Slack channel.