Working with Small Objects in AI/ML workloads

As object storage becomes the dominant storage for cloud-native workloads, developers are turning to it to satisfy more and more use cases. This is a function of the attributes of modern object storage - performance, scalability, security, resilience and RESTful APIs tailor-made for Kubernetes. In particular, AIStor is the embodiment of these attributes and can support a variety of tasks in a variety of locations - on-premises, at the edge, or in a private, public or hybrid cloud.

Early object storage platforms were designed and built for archiving large objects, frequently as targets for backup jobs. Although excellent for high-bandwidth access to large files, these systems struggled (and continue to struggle) with workloads involving operations on many small files. In particular, access to each file’s data requires first accessing the metadata server (for mapping information and other settings) and then accessing physical storage. With large files, the latency introduced by one-time-per-file metadata access is almost negligible compared to the time needed to load the full file. With many small files, however, metadata server access can effectively double the latency of data access and become a bottleneck for the overall object storage system.
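
To make that concrete, here is a back-of-envelope sketch in Python with assumed round numbers (a 5 ms metadata lookup and 1 GiB/s of bandwidth - illustrative figures, not AIStor measurements):

# Assumed round numbers for illustration - not AIStor measurements.
metadata_lookup_s = 0.005        # one metadata-server round trip: 5 ms
bandwidth_bps = 1 * 1024**3      # storage/network bandwidth: 1 GiB/s

for label, size in [("10 GiB object", 10 * 1024**3), ("4 KiB object", 4 * 1024)]:
    transfer_s = size / bandwidth_bps
    share = metadata_lookup_s / (metadata_lookup_s + transfer_s)
    print(f"{label}: metadata lookup is {share:.1%} of total access time")

Under these assumptions, the lookup is roughly 0.05% of the total time for the large object but nearly all of it for the small one.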

Not every object storage system is capable of extreme performance and resiliency across a variety of object sizes and access patterns. Many small objects push traditional object storage systems to their limits with demands for low-latency read and write operations. Providing thousands of concurrent object operations in a manner that is strictly consistent, performance-optimized and efficient in its use of physical storage is a challenging problem to solve. The problem is compounded by taxing the system with serving metadata for more and more copies of files as they are replicated. Systems designed and optimized for large objects aren’t capable of satisfying these requirements and provide a sub-optimal experience when forced into these workloads.

Workloads that rely on large amounts of unstructured data, such as AI/ML/DL, illustrate the challenges for object storage. For example, an ML workload might look for anomalies in sensor data, inspecting millions or billions of small log files. That’s a lot of metadata and file data access calls, and this isn’t even a complex workload, yet the demand placed on many object storage systems can overwhelm metadata servers and cluster networks, making it impossible to leverage the results of the workload in real time.

Unlike other object storage solutions, AIStor doesn’t rely on an external metadata database. Metadata databases can become unresponsive when faced with many concurrent queries and operations across enormous numbers of objects. Removing this dependency allows MinIO to work with large numbers of small objects much faster.

MinIO continues to extend its leadership on the small object front, adding several features to deliver greater performance and scalability for small object storage and retrieval. AIStor includes optimized small object storage with inline metadata/data and the ability to upload and auto-extract .tar files right out of the box. Combining metadata and small object data greatly improves performance because no latency is introduced going back and forth between metadata and data. Auto-extracting .tar files makes working with many small objects much more efficient: simply archive small data files together, upload the archive, and the files are available for application workloads.

First, let’s take a look at some of the difficulties inherent in storing and retrieving large numbers of small objects, and then we can dig into how MinIO optimizes these operations and our new features for working with .tar and .zip files in the MinIO client and server.

Big Challenges in Small Objects

Working with large numbers of small objects instead of small numbers of large objects places different demands on an object storage system. Typically, storage administrators have had to design and tune storage systems based on anticipated usage and object size, for example, adjusting properties for block, chunk or cache size to match typical read/write patterns.

In addition, small object workloads are more heavily affected by metadata I/O than are large object workloads. AIStor alleviates much of this burden by removing the dependency on an external metadata database. MinIO stores metadata and data directly on disk to provide greater performance and scalability.

Traditionally, each object is stored within AIStor as:

bucket/
   object_path/
      object-name/
         xl.meta  <<-- Metadata
         data-uuid/
            part.1  <<-- Object content
            part.2 (up to 10000 parts for multipart objects)

This means that in order to read data, at least two files need to be opened. To write data, at least two files and a folder need to be written. This overhead, negligible when working with a few large files, starts to add up when dealing with many parts of many objects.

More Efficient Small Object Storage and Retrieval

AIStor stores the data of small objects inline with their metadata in the xl.meta file. A few factors determine when this is done, but generally objects smaller than 128KiB are likely to be stored inline with the metadata. This reduces the IOPS needed both to read and write these objects.
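
As a rough illustration of the savings, here is a sketch with assumed per-object operation counts taken from the layout above (per drive, and ignoring erasure coding, which repeats the layout across drives):

# Rough per-drive operation counts from the layout shown earlier. Assumed
# counts for illustration: 3 filesystem entries per object without inlining
# (xl.meta, the data-uuid/ directory and part.1) versus 1 with inlining.
n_objects = 1_000_000
without_inlining = n_objects * 3   # xl.meta + data-uuid/ + part.1 per object
with_inlining = n_objects * 1      # one xl.meta holding metadata and data
print(f"writes without inlining: {without_inlining:,}")
print(f"writes with inlining:    {with_inlining:,}")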

This isn’t enabled for all object sizes because it would impact the mutation speed of an object. Mutations occur, for example, when new tags are added or other properties change. So, to keep update times reasonable and avoid duplicating or rewriting many megabytes of data, inlining is only applied to small objects.

From the client, user and application perspective, this happens transparently. Everything is managed by AIStor, so the only thing needed to start optimizing small object storage is to upgrade MinIO Server.
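
For illustration, here is a minimal sketch using the Python minio SDK; the endpoint, credentials, bucket and object names are placeholders. Note that nothing in the code is inlining-specific - the optimization is entirely server-side:

from io import BytesIO

from minio import Minio
from minio.commonconfig import Tags

# Placeholder endpoint and credentials - substitute your own deployment.
client = Minio("aistor.example.net", access_key="ACCESS_KEY", secret_key="SECRET_KEY")

if not client.bucket_exists("mybucket"):
    client.make_bucket("mybucket")

# A 4 KiB object: well under the ~128KiB threshold, so the server can keep
# its data inline with the metadata. The client-side call is just an
# ordinary upload.
payload = b"x" * 4096
client.put_object("mybucket", "logs/sensor-0001.log", BytesIO(payload), len(payload))

# A later mutation, such as adding a tag, rewrites xl.meta; with inline data
# that rewrite stays small, which is why inlining is limited to small objects.
tags = Tags.new_object_tags()
tags["source"] = "sensor-fleet"
client.set_object_tags("mybucket", "logs/sensor-0001.log", tags)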

Auto-Extraction of .tar Files

Alongside optimized small object storage, AIStor also includes the ability to auto-extract .tar files after upload.

It is not easy to take many small unstructured data files and put them on object storage to be accessed by applications and users. In many workflows and environments, this can be the most time-consuming part of the process. Think about the case of millions of sensor logs required for ML analysis, or another common case of thousands of small Microsoft Excel or Word documents from a NAS migration. If you upload each file individually, you incur significant network overhead as you set up and tear down a multitude of connections while placing thousands of PutObject API calls. A common solution is to tar all of the files together into one large file or tarball, upload it, and then extract all of the files.
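
For example, here is a minimal Python sketch of that pattern; the sensor-logs directory and file layout are hypothetical:

import tarfile
from pathlib import Path

# "sensor-logs/" is a hypothetical local directory of many small log files.
# Use "w:gz" or "w:bz2" for the compressed variants tarfile supports natively.
with tarfile.open("sensor-logs.tar", "w") as tar:
    for path in sorted(Path("sensor-logs").rglob("*.log")):
        tar.add(path, arcname=path.relative_to("sensor-logs"))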

With this capability, users no longer need to start an upload and then come back later to do the extraction. Scripts that upload .tar files can be simplified to upload and auto-extract. Objects are immediately available to applications and users without a separate extraction step, making operations on object storage more efficient and more timely. Revisiting the sensor data example, .tar file auto-extraction makes real-time anomaly detection possible by exposing unstructured log data to workloads more quickly.

Small Object Support in Action

Here’s an example of how auto-extract works. Simply download and install the latest versions of AIStor and the MinIO Enterprise Client. You can even try it out using our play environment by just downloading mc.

Use any .tar file, optionally compressed with Zstandard (recommended), lz4, gzip or bzip2.

mc mb play/mybucket
mc cp <path-to-archive>.tar play/mybucket --disable-multipart --attr "X-Amz-Meta-Snowball-Auto-Extract=true"
mc ls play/mybucket

That’s it! You’re experimenting with MinIO’s simple, transparent and powerful features for working with small objects. The equivalent API call is PutObjectExtract.
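
For application code, the same upload can be expressed with an SDK. Here is a hedged sketch using the Python minio SDK, assuming it passes the snowball header through as user metadata the way mc --attr does; the endpoint and credentials are placeholders:

from minio import Minio

# Placeholder endpoint and credentials - substitute your own deployment.
client = Minio("aistor.example.net", access_key="ACCESS_KEY", secret_key="SECRET_KEY")

# Upload the tarball with the snowball auto-extract header set, mirroring the
# mc command above. A generous part_size keeps small archives in a single PUT
# (the SDK analogue of --disable-multipart); raise it for larger archives.
client.fput_object(
    "mybucket",
    "sensor-logs.tar",
    "sensor-logs.tar",
    metadata={"X-Amz-Meta-Snowball-Auto-Extract": "true"},
    part_size=64 * 1024 * 1024,
)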

If you have any specific questions, drop us a note on hello@min.io or join the conversation on Slack. We are here to help.