Databases on Object Storage - the New Normal

When you think about object storage workloads, databases are not the first thing that comes to mind. That is changing rapidly, however, driven by two forces: the availability of true, high-performance object storage and the explosive growth of data and, perhaps more impactfully, its associated metadata.

Because of these two forces, almost every major database vendor now supports S3-compatible endpoints. Further, for many organizations and most workloads, this is becoming the default architecture, whether in the cloud or on-prem.

Let’s explore the concepts briefly.

Performance

The storage performance requirements associated with databases have inverted in the past few years. Databases previously demanded high IOPS, a function of the need to make lots of small changes across the network. This was well suited to SAN and NAS architectures, and databases became their bread and butter. The problem is that IOPS are not particularly scalable - at least not economically.

Databases no longer mutate data across the network in 4KB chunks. Instead, they stream objects (specifically table segments) in MB-sized extents to client-side memory and mutate them locally. Local memory IOPS in combination with a 100GbE network makes this a throughput problem - not an IOPS problem.
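
To make that concrete, here is a minimal sketch of the access pattern using boto3 against an S3-compatible endpoint. The endpoint, credentials, bucket, key and extent size are placeholder assumptions for illustration, not the layout of any particular database engine.

```python
import boto3

# Any S3-compatible endpoint works here; the MinIO URL and credentials are
# illustrative placeholders.
s3 = boto3.client(
    "s3",
    endpoint_url="https://minio.example.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Stream one MB-sized table segment into client-side memory with a ranged GET,
# then let the query engine scan and mutate it locally.
resp = s3.get_object(
    Bucket="warehouse",                          # hypothetical bucket
    Key="tables/orders/segment-000042.bin",      # hypothetical table segment
    Range="bytes=0-1048575",                     # first 1 MiB extent
)
extent = resp["Body"].read()                     # held and mutated in local memory
print(f"fetched {len(extent)} bytes in a single, throughput-bound request")
```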

Object storage is throughput driven, not IOPS or latency driven. This is true even today in the era of high performance object storage and NVMe drives. The new database model is ideal for object storage because the extents are immutable by nature. Since every change is automatically versioned, object storage can provide continuous data protection without the need for snapshots.  
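
The continuous data protection piece is just bucket versioning. A hedged sketch with boto3 (the endpoint, bucket and prefix are assumptions) might look like this:

```python
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://minio.example.com",     # assumed MinIO endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Turn on versioning once; every subsequent PUT of an extent becomes a new,
# immutable version rather than an in-place overwrite.
s3.put_bucket_versioning(
    Bucket="warehouse",
    VersioningConfiguration={"Status": "Enabled"},
)

# Older versions remain retrievable by VersionId - no snapshot machinery needed.
versions = s3.list_object_versions(Bucket="warehouse", Prefix="tables/orders/")
for v in versions.get("Versions", []):
    print(v["Key"], v["VersionId"], v["IsLatest"])
```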

In the case of MinIO, we are not just the world’s fastest object store in terms of throughput; MinIO also excels at small-object performance, where table segments range from 256KB to 2MB.

The reason for this is important. MinIO does not use a metadata database; it relies on deterministic hashing to look up objects. Object storage implementations that do depend on a metadata database quickly get overwhelmed when you store these table segments as small objects.

For an object store to host a database, it needs to deliver exceptional throughput alongside acceptable latency. The better it can do on those metrics, the greater the percentage of workloads that will migrate to object storage. This should make sense. If an object store can deliver hardware-choking throughput and acceptable latency, it can run 80% or more of the database’s requirements. If it can’t, it will only be good for 20-30% of those workloads - effectively backups.

MinIO delivers against these new requirements and, as a result, is the leading object store for this increasingly prevalent architecture.

If this sounds familiar to you, it should. It is the narrative of disaggregation. The database vendors have effectively chosen to disaggregate storage and compute - claiming the compute side for themselves and offloading the storage to high-performance object storage. They are focused on distributed, high-performance query processing. By doing so, they focus on features and functions and leave the heavy lifting of storage to companies like MinIO. The result is that they can now lay claim to ever larger swaths of data because of the scalability attributes of object storage.

Let's turn our attention there for a moment.


Scalability

Most file and block systems were not designed for scale. As a result, when you push past 100TB, tradeoffs start to be made. When you get to 1PB, you are in rarefied air for these systems.

Object storage, on the other hand, just starts hitting its sweet spot at 1PB. The sky is the limit from there. This makes object storage the ideal complement for databases with ambitions to deliver against giant application workloads that cover large components of an organization's data.

The model is simple given the throughput capabilities of fast object storage. A small portion of data is kept “in memory” for ultra-fast processing while the vast majority of it sits in a really, really warm tier - available using the standard S3 calls that have come to define the modern application ecosystem.
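
As a rough illustration of that split, the sketch below keeps recently read segments in a local dictionary while everything else is fetched from the warm tier with a standard S3 GET. The cache policy, bucket and keys are purely illustrative assumptions.

```python
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://minio.example.com",     # assumed warm-tier endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

hot_cache: dict[str, bytes] = {}   # the small "in memory" portion

def read_segment(bucket: str, key: str) -> bytes:
    """Serve from memory when possible, otherwise pull from the warm tier."""
    if key in hot_cache:
        return hot_cache[key]
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    hot_cache[key] = body          # promote for subsequent ultra-fast reads
    return body

segment = read_segment("warehouse", "tables/orders/segment-000042.bin")
```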

This is even more potent when support for S3 Select is available. Only a handful of vendors support this predicate pushdown functionality, and none of the others are capable of delivering it with high performance. MinIO is - and as a result, just the data you need can be pulled from PBs of data. This is why MinIO is so popular with the database vendor community.
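
To show what predicate pushdown looks like in practice, here is a hedged sketch using boto3's select_object_content against a CSV object. The bucket, key and column names are assumptions for illustration and not Altinity's actual benchmark code.

```python
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://minio.example.com",     # assumed MinIO endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Push the filter and projection down to the object store: only matching rows
# and columns ever cross the network.
resp = s3.select_object_content(
    Bucket="warehouse",
    Key="ontime/2019.csv.gz",                     # hypothetical object
    ExpressionType="SQL",
    Expression=(
        "SELECT s.Carrier, s.DepDelay FROM S3Object s "
        "WHERE CAST(s.DepDelay AS FLOAT) > 60"
    ),
    InputSerialization={"CSV": {"FileHeaderInfo": "USE"}, "CompressionType": "GZIP"},
    OutputSerialization={"CSV": {}},
)

for event in resp["Payload"]:
    if "Records" in event:
        print(event["Records"]["Payload"].decode(), end="")
```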

Altinity/Clickhouse

A great example can be found in our work with Altinity, who specializes in the blazing fast, feature-rich Clickhouse data warehouse. Altinity has a superb tutorial on integrating the two here. More interesting, however, is their work comparing MinIO and AWS on the OnTime and NYC Taxi datasets.

The OnTime dataset contains almost two hundred million rows of airline flight data. Altinity selected it for performance benchmarks because it contains 109 columns.

The benchmarks take advantage of S3 Select to optimize data ingestion and therefore speed up queries.

Altinity also performed benchmarks on the NYC Taxi Dataset - perhaps the most widely used benchmarking dataset. The dataset is 1.1 billion records, 51 columns and 500 GB in size in uncompressed CSV format. Here again, the speeds reported in their post are simply remarkable for “object storage”.

These speeds would put Clickhouse/MinIO in the top 15 of Mark Litwintschik’s list of fast databases (and fast hardware to be fair).

Again - we encourage you to take in the posts in their entirety.

Snowflake

Snowflake is one of the biggest workloads on Amazon S3 and remains one of the fastest growing. It is built on object storage - like every other modern system. Snowflake certainly had the choice to build on EFS or EBS, but they didn’t. It wasn’t just the economics. It was the scale.

Other Examples

Don’t just take our word for it. Take your favorite database. Like MongoDB. Or MariaDB. Or CockroachDB. Or Teradata. The list is long and distinguished and Google can get you there quickly...

Conclusion

The S3 API is the de facto standard for storage these days, relegating POSIX to “irreversible decline” status. As a result, almost every database now supports the S3 API - but only a tiny handful of those “S3 compatible object stores” can deliver the combination of performance and scale and, more importantly, performance at scale required to support the range of use cases a modern enterprise wants.

With MinIO you get all three, which is why we are the object store of choice for modern databases and data warehouses - whether on-prem or in the public cloud, where MinIO can be found everywhere.

The result is an effective separation of storage and compute with best of breed for both elements. We encourage you to try it out for yourself. You can download MinIO here or track down a tutorial on your favorite database blog. Just type in their name and MinIO - someone, somewhere has likely done the work already.
