Turbocharged Storage: MinIO, KIOXIA, and AMD team up to take on AI

MinIO, the leader in high-performance AI storage, has once again raised the bar in the AI infrastructure industry with its groundbreaking MinIO AIStor platform. Leveraging next-generation AMD hardware, KIOXIA NVMe™ SSDs, and cutting-edge software optimizations, MinIO AIStor delivers unmatched performance, scalability, and efficiency for AI-driven and other data-intensive workloads. Today, we are excited to share benchmark results that demonstrate MinIO AIStor’s exceptional performance capabilities.
In my last blog in this series, I talked about how AMD and MinIO are a perfect pair for tackling modern AI/ML workloads. I covered AI/ML processes like model training, data preprocessing, and inference, and dug into real-world use cases in healthcare, autonomous vehicles, and financial services.
While talking about it is great, our next goal was to prove that AMD and MinIO can not only talk the talk but also walk the walk. So, in this blog, you’ll see benchmarks covering multiple object sizes and request types on AMD’s latest 5th-generation chips and KIOXIA PCIe® Gen5 NVMe SSDs, and you’ll see how overall performance improves over the same tests run against MinIO AIStor on AMD’s 4th-generation CPUs with the same KIOXIA drives.
About the Test
Before we jump into the numbers, let’s take a look at what both companies bring to the table and what the hardware configuration was for each test.
AMD
AMD has made significant strides in the AI and ML space with its latest generation of AMD EPYC™ processors and AMD Instinct™ MI300 Series accelerators. AMD’s advancements in chip design, such as AI-optimized, scalable architectures, focus on delivering high performance, energy efficiency, and cost-effectiveness: all critical factors for providing the computational power needed to train larger models, run real-time inference, and handle vast amounts of unstructured data in high-performance AI storage systems like MinIO AIStor.
MinIO
The cloud model has shown object storage to be the most efficient and scalable way to store unstructured and semi-structured data. MinIO AIStor is a high-performance, software-defined, object-native storage system designed to meet the demands of AI/ML and advanced analytics workloads. It was built with high throughput, low latency, and massive scalability in mind, all key ingredients for these data-intensive workloads. With longstanding features like high-performance parallel I/O, exabyte-scale capacity, strong data resiliency and integrity, and full S3 compatibility, along with newer AI-centric features like PromptObject and AIHub, enterprises can unlock new levels of efficiency and capability for their AI-driven applications.
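To make that S3 compatibility and parallel I/O concrete, here is a minimal sketch of issuing GET requests in parallel against any S3-compatible endpoint, AIStor included. The bucket name, keys, and the boto3-style client passed in are illustrative placeholders, not details from this post.

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_all(s3, bucket, keys, workers=8):
    """Issue GETs in parallel with any boto3-style S3 client.

    An object store serves each request independently, so aggregate
    throughput scales with concurrency until the network or drives
    saturate. Results come back in the same order as `keys`.
    """
    def fetch(key):
        return s3.get_object(Bucket=bucket, Key=key)["Body"].read()

    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fetch, keys))
```

Because MinIO speaks the S3 API, the same client code works unchanged whether it points at AWS S3 or an AIStor cluster; only the endpoint URL and credentials differ.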
Kioxia
Kioxia Corporation is a world leader in memory solutions, dedicated to the development, production, and sale of flash memory and solid-state drives (SSDs). In April 2017, its predecessor Toshiba Memory was spun off from Toshiba Corporation, the company that invented NAND flash memory in 1987. Kioxia is committed to uplifting the world with “memory” by offering products, services, and systems that create choice for customers and memory-based value for society. Kioxia's innovative 3D flash memory technology, BiCS FLASH™, is shaping the future of storage in high-density applications, including advanced smartphones, PCs, SSDs, automotive, and data centers.
Cluster Architecture
Generation 4 CPUs
Generation 5 CPUs
The Results
AMD 4th Gen CPU
The first tests we ran established a baseline by measuring what AIStor could do on AMD EPYC™ 9534 processors. Even on the older chips, the performance was impressive.
AMD 5th Gen CPU
Next, we swapped out the 4th-generation CPUs for the 5th-generation AMD EPYC 9555P processor and re-ran the same tests. Again, the results were top-notch. We saw a 1.6 GiB/s (8.7%) increase in throughput on PUT requests at 100 MB object size and a 13% increase at 1 MB object size. On GET requests, where the 4th-generation chips were already essentially saturating the network, we still saw a roughly 0.5 GiB/s increase. Keep in mind that because AIStor is truly software-defined, it is limited only by network and drive speeds, which is why the GET improvement is smaller: the network was already the bottleneck.
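As a quick back-of-envelope check on the 100 MB PUT figures above, the stated 1.6 GiB/s gain and 8.7% relative improvement together imply the absolute throughput of each generation. These absolute numbers are derived estimates, not figures reported separately in this post.

```python
# Derive approximate absolute PUT throughput from the reported
# gain (1.6 GiB/s) and relative improvement (8.7%).
gain_gib_s = 1.6
gain_frac = 0.087

baseline = gain_gib_s / gain_frac   # 4th-gen EPYC, approx. 18.4 GiB/s
improved = baseline + gain_gib_s    # 5th-gen EPYC, approx. 20.0 GiB/s
print(f"baseline {baseline:.1f} GiB/s -> improved {improved:.1f} GiB/s")
```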
The rest of the results were also positive, even as you get into the smaller object sizes. Note that no particular optimizations were made for any of these workloads; if you were working with only very small objects, tuning your application for that size would likely improve the returns. Even so, we saw great objects/s numbers with this configuration on just a four-node cluster.
Conclusion
The results of our benchmark tests confirm that the collaboration between MinIO, Kioxia, and AMD delivers a powerful, scalable, and high-performance solution for AI-driven workloads. They also showed that even within an already optimized environment, you can still find significant throughput improvement by running AIStor on the latest AMD processors. These advancements are evidence that all three companies are committed to driving innovation in the AI/ML space, empowering industries to handle increasingly complex, data-intensive tasks with efficiency and reliability. As AI and machine learning continue to evolve, MinIO AIStor, KIOXIA NVMe SSDs, and AMD hardware will remain a key combination powering enterprises as they look to stay at the forefront.
PCIe is a registered trademark of PCI-SIG.
NVMe is a registered or unregistered trademark of NVM Express, Inc. in the United States and other countries.
Other company names, product names, and service names may be trademarks of third-party companies.