MinIO benchmarks with COSBench

MinIO is an AWS S3 compatible object storage server that can be deployed on a variety of hardware and software platforms. A consistent S3 API combined with this freedom of deployment makes MinIO an ideal cloud storage platform.

This very versatility makes MinIO performance tricky to gauge, since several factors determine overall performance: the storage drives in use, available network bandwidth, RAM, and processing power. These details differ from one deployment to the next, so no single performance number holds true for everyone.

To keep things straightforward, we chose the fastest NVMe storage drives available from cloud providers to test MinIO server performance. Here are the details of the configuration used for benchmarking:

Hardware configuration

Here is the NVMe disk I/O performance using 10 GiB files (measured using the Linux dd command):

Disk I/O Performance
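
For reference, the measurement can be reproduced with dd invocations along these lines. This is a sketch: the mount point /mnt/nvme is a placeholder for the drive under test, and the oflag=direct/iflag=direct flags (GNU dd) bypass the page cache so the drive itself is measured.

    # Sequential write: stream a 10 GiB file of zeros to the NVMe mount
    dd if=/dev/zero of=/mnt/nvme/testfile bs=1M count=10240 oflag=direct

    # Sequential read: stream the same 10 GiB file back out
    dd if=/mnt/nvme/testfile of=/dev/null bs=1M iflag=direct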

MinIO single node deployment

We tested with object sizes of 256 KiB, 1 MiB, 5 MiB, 10 MiB and 32 MiB, each with parallel thread counts ranging from 128 to 2048.
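
For context, a single node deployment of this kind is started by pointing the MinIO server at a storage path. A minimal sketch, assuming the NVMe drive is mounted at the placeholder path /mnt/nvme/data:

    # Start MinIO in single node (file system) mode; it serves the
    # S3 API on port 9000 by default
    minio server /mnt/nvme/data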

Write operations

Single node deployment write bandwidth peaks at around 2.79 GiB/sec with 1024 parallel threads writing 32 MiB objects.

Write bandwidth for file system deployment

Write throughput peaks at 4875 ops/sec with 128 threads writing 256 KiB objects.

Write throughput for file system deployment
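
As a sanity check, bandwidth and throughput are tied together by object size: 4875 ops/sec at 256 KiB per object works out to roughly 1.19 GiB/sec, well below the 2.79 GiB/sec reached with 32 MiB objects, which suggests small-object workloads are bound by operation rate rather than raw bandwidth. The conversion:

    # ops/sec x object size in KiB, converted to GiB/sec
    awk 'BEGIN { printf "%.2f GiB/sec\n", 4875 * 256 / 1024^2 }'    # 1.19 GiB/sec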

Read operations

Read operations were benchmarked with a similar range of parallel workers and object sizes. MinIO single node deployment read bandwidth peaks at around 2.51 GiB/sec with 512 parallel threads reading 32 MiB objects.

Read bandwidth for file system deployment

MinIO single node deployment read throughput peaks at around 9544 ops/sec with 512 to 2048 parallel threads reading 256 KiB objects.

Read throughput for file system deployment

MinIO distributed erasure code deployment

Next, let's take a look at MinIO in distributed erasure code mode. The setup includes 10 server instances and 10 client machines. Object sizes are 256 KiB, 1 MiB, 5 MiB, 10 MiB, 32 MiB and 64 MiB, each with parallel thread counts ranging from 128 to 8192.
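
For context, a distributed erasure code deployment is started by running the same command on every node, listing all server endpoints. A minimal sketch, assuming placeholder hostnames server1 through server10, the placeholder drive path /mnt/nvme/data, and a MinIO release that supports the {1...10} expansion shorthand:

    # Run on each of the 10 nodes; MinIO pools the endpoints into a
    # single erasure coded deployment
    minio server http://server{1...10}/mnt/nvme/data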

Write operations

The distributed erasure code setup's write bandwidth peaks at around 8.8 GiB/sec with 64 MiB objects written by 2048 parallel threads.

Write bandwidth for distributed erasure

Write throughput peaks at around 1200 ops/sec for 256 KiB objects written by 8192 parallel threads.

Write throughput for distributed erasure

Read operations

For read operations, bandwidth peaks at 8.17 GiB/sec with 10 MiB objects read by 8192 parallel threads.

Read bandwidth for distributed erasure

Read throughput peaks at 1556 ops/sec with 256 KiB objects read by 128 parallel threads. The same operation with 8192 threads reaches 1552 ops/sec.

Read throughput for distributed erasure

The COSBench configuration files used for benchmarking are available in our benchmarks repository.
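
To illustrate the shape of such a configuration, here is a minimal sketch of an S3 write workload, submitted with the cli.sh helper that ships with COSBench releases. The endpoint, credentials, worker count, and object parameters are placeholders rather than the exact values used in these runs:

    # Write a minimal COSBench S3 workload definition (all values are placeholders)
    cat > minio-workload.xml <<'EOF'
    <?xml version="1.0" encoding="UTF-8" ?>
    <workload name="minio-sample" description="sample MinIO S3 benchmark">
      <storage type="s3" config="accesskey=ACCESS;secretkey=SECRET;endpoint=http://server1:9000" />
      <workflow>
        <workstage name="init">
          <work type="init" workers="1" config="cprefix=bench;containers=r(1,1)" />
        </workstage>
        <workstage name="write">
          <work name="write" workers="128" runtime="300">
            <operation type="write" ratio="100"
                       config="cprefix=bench;containers=c(1);objects=u(1,10000);sizes=c(256)KB" />
          </work>
        </workstage>
        <workstage name="cleanup">
          <work type="cleanup" workers="1" config="cprefix=bench;containers=r(1,1);objects=r(1,10000)" />
        </workstage>
        <workstage name="dispose">
          <work type="dispose" workers="1" config="cprefix=bench;containers=r(1,1)" />
        </workstage>
      </workflow>
    </workload>
    EOF

    # Submit the workload to the COSBench controller
    sh cli.sh submit minio-workload.xml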


While you’re at it, help us understand your use case and how we can help you better! Fill out our best of MinIO deployment form (it takes less than a minute) for a chance to be featured on the MinIO website and showcase your MinIO private cloud design to the MinIO community.