Microblink: Repatriating Compute and Storage with MinIO

Microblink is an AI company specializing in image detection. They got their start in the identity space with products like BlinkID, BlinkID Verify, and BlinkCard. Most recently, their image detection capabilities have led to products that can process other types of images. For example, product detection can be performed on receipts, whereby product descriptions on a receipt are used to

Read more...

Open Source or Closed? The AI Dilemma

This post first appeared on The New Stack on July 29th, 2024. Artificial Intelligence is in the middle of a perfect storm in the software industry, and now Mark Zuckerberg is calling for open-source AI. Three powerful perspectives are colliding on how to control AI: 1. All AI should be open-source for sharing and transparency. 2. Keep AI closed-source and

Read more...

Build a Distributed Embedding Subsystem with MinIO, Langchain, and Ray Data

An embedding subsystem is one of four subsystems needed to implement Retrieval Augmented Generation. It turns your custom corpus into a database of vectors that can be searched for semantic meaning. The other subsystems are the data pipeline for creating your custom corpus, the retriever for querying the vector database to add more context to a user query, and finally,

Read more...
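
To make the embedding subsystem concrete, here is a minimal sketch of what such a step could look like, assuming a MinIO bucket named custom-corpus, local default credentials, and a Hugging Face sentence-transformer wrapped by LangChain; this is an illustration, not the article's own code.

```python
import ray
from minio import Minio
from langchain_huggingface import HuggingFaceEmbeddings

# Connect to a local MinIO instance (endpoint and credentials are placeholders).
client = Minio("localhost:9000", access_key="minioadmin",
               secret_key="minioadmin", secure=False)

# Pull every object in the corpus bucket into memory as plain text.
docs = []
for obj in client.list_objects("custom-corpus", recursive=True):
    body = client.get_object("custom-corpus", obj.object_name)
    docs.append({"name": obj.object_name, "text": body.read().decode("utf-8")})
    body.close()

embedder = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")

def embed_batch(batch):
    # Ray Data hands each worker a batch of rows; embed the text column in bulk.
    batch["embedding"] = embedder.embed_documents(list(batch["text"]))
    return batch

# Ray Data distributes the embedding step across the cluster's workers.
vectors = ray.data.from_items(docs).map_batches(embed_batch).take_all()
```

In a real deployment the resulting vectors would be written to a vector database for the retriever to query rather than collected on the driver.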

Data-Centric AI with Snorkel and MinIO

With all the talk in the industry today regarding large language models with their encoders, decoders, multi-headed attention layers, and billions (soon trillions) of parameters, it is tempting to believe that good AI is the result of model design only. Unfortunately, this is not the case. Good AI requires more than a well-designed model. It also requires properly constructed training

Read more...
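
As a taste of what "properly constructed training data" can mean in practice, here is a small, generic Snorkel sketch: weak labeling functions vote on unlabeled examples and a label model combines those noisy votes into training labels. The texts, labels, and heuristics below are invented for illustration and are not taken from the article.

```python
import pandas as pd
from snorkel.labeling import labeling_function, PandasLFApplier
from snorkel.labeling.model import LabelModel

SPAM, HAM, ABSTAIN = 1, 0, -1

@labeling_function()
def lf_mentions_offer(x):
    # Weak heuristic: promotional wording suggests spam.
    return SPAM if "offer" in x.text.lower() else ABSTAIN

@labeling_function()
def lf_short_message(x):
    # Weak heuristic: very short messages tend to be legitimate.
    return HAM if len(x.text.split()) < 5 else ABSTAIN

df = pd.DataFrame({"text": [
    "Limited time offer, click now!",
    "See you at lunch",
    "Exclusive offer just for you",
]})

# Apply every labeling function to every row, producing a label matrix.
L_train = PandasLFApplier([lf_mentions_offer, lf_short_message]).apply(df)

# The label model weighs the noisy, overlapping votes into probabilistic labels.
label_model = LabelModel(cardinality=2, verbose=False)
label_model.fit(L_train)
df["prob_spam"] = label_model.predict_proba(L_train)[:, SPAM]
```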

The Architect’s Guide to Machine Learning Operations (MLOps)

MLOps, short for Machine Learning Operations, is a set of practices and tools aimed at addressing the specific needs of engineers building models and moving them into production. Some organizations start off with a few homegrown tools that version datasets after each experiment and checkpoint models after every epoch of training. On the other hand, many organizations have chosen to

Read more...
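
For readers unfamiliar with the "homegrown" starting point the excerpt describes, a per-epoch checkpoint loop is often nothing more elaborate than the following sketch; the model, data loader, and paths are placeholders, and this is not code from the guide itself.

```python
import os
import torch

def train(model, loader, optimizer, loss_fn, epochs, checkpoint_dir="checkpoints"):
    os.makedirs(checkpoint_dir, exist_ok=True)
    for epoch in range(epochs):
        for features, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(features), labels)
            loss.backward()
            optimizer.step()
        # Persist enough state to resume or audit this exact point in training.
        torch.save(
            {"epoch": epoch,
             "model_state": model.state_dict(),
             "optimizer_state": optimizer.state_dict()},
            os.path.join(checkpoint_dir, f"epoch_{epoch}.pt"),
        )
```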

The Architect’s Guide to the GenAI Tech Stack - Ten Tools

This post first appeared on The New Stack on June 3rd, 2024. I previously wrote about the modern data lake reference architecture, addressing the challenges in every enterprise — more data, aging Hadoop tooling (specifically HDFS) and greater demands for RESTful APIs (S3) and performance — but I want to fill in some gaps.  The modern data lake, sometimes referred to as

Read more...

Setting Up A Development Machine with MLRun and MinIO

MLOps is to machine learning what DevOps is to traditional software development. Both are sets of practices and principles aimed at improving collaboration between engineering teams (the "Dev" or "ML") and IT operations ("Ops") teams. The goal is to streamline the development lifecycle, from planning and development to deployment and operations, using automation. One of the primary benefits of

Read more...

Improve RAG Performance with Open-Parse Intelligent Chunking

If you are implementing a generative AI solution using Large Language Models (LLMs), you should consider a strategy that uses Retrieval-Augmented Generation (RAG) to build contextually aware prompts for your LLM. An important process that occurs in the preproduction pipeline of a RAG-enabled LLM is the chunking of document text so that only the most relevant sections of a document

Read more...
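
At its simplest, the chunking step the article builds on looks roughly like this sketch, based on open-parse's documented DocumentParser (the PDF path is a placeholder): each returned node is a semantically coherent chunk rather than a fixed-size window of characters.

```python
import openparse

# Parse the document into layout-aware nodes (headings, paragraphs, tables).
parser = openparse.DocumentParser()
parsed = parser.parse("my_document.pdf")

# Each node's text becomes one chunk to embed and index for retrieval.
chunks = [node.text for node in parsed.nodes]
print(f"Extracted {len(chunks)} chunks")
```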

The Architect’s Guide: A Modern Datalake Reference Architecture

An abbreviated version of this post appeared on The New Stack on March 26th, 2024. Businesses aiming to maximize their data assets are adopting scalable, flexible, and unified data storage and analytics approaches. This trend is driven by enterprise architects tasked with crafting infrastructures that align with evolving business demands. A Modern Datalake architecture addresses this need by integrating the

Read more...

Architect’s Guide to a Reference Architecture for an AI/ML Datalake

An abbreviated version of this post appeared on The New Stack on March 19th, 2024. In enterprise artificial intelligence, there are two main types of models: discriminative and generative. Discriminative models are used to classify or predict data, while generative models are used to create new data. Even though Generative AI has dominated the news of late, organizations are still

Read more...

MinIO Enterprise Cache: A Distributed DRAM Cache for Ultra-Performance

As the computing world has evolved and the price of DRAM has plummeted, we find that server configurations often come with 500GB or more of DRAM. When you are dealing with larger deployments, even those with ultra-dense NVMe drives, the number of servers multiplied by the DRAM on those servers can quickly add up – often to several TBs. That DRAM

Read more...
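
The arithmetic behind that claim is simple; with illustrative numbers (not figures from the post), even a modest cluster yields a multi-terabyte pool of DRAM for caching.

```python
# Illustrative only: aggregate DRAM available across a deployment.
servers = 16
dram_per_server_gb = 512
total_gb = servers * dram_per_server_gb          # 8,192 GB
print(f"{total_gb} GB ≈ {total_gb / 1024:.0f} TB of potential distributed cache")
```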

The Strengths, Weaknesses and Dangers of LLMs

Much has been said lately about the wonders of Large Language Models (LLMs). Most of these accolades are deserved. Ask ChatGPT to describe the General Theory of Relativity and you will get a very good (and accurate) answer. However, at the end of the day ChatGPT is still a computer program (as are all other LLMs) that is blindly executing

Read more...

Distributed Training and Experiment Tracking with Ray Train, MLflow, and MinIO

Over the past few months, I have written about a number of different technologies (Ray Data, Ray Train, and MLflow). I thought it would make sense to pull them all together and deliver an easy-to-understand recipe for distributed data preprocessing and distributed training using a production-ready MLOps tool for tracking and model serving. This post integrates the code I presented

Read more...
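
The shape of that recipe can be sketched as follows, with placeholder endpoints, credentials, and a stand-in training loop rather than the article's actual code: Ray Train fans the loop out across workers, MLflow records the run, and MinIO backs MLflow's artifact store through its S3 API (via MLFLOW_S3_ENDPOINT_URL).

```python
import os
import mlflow
from ray.train import ScalingConfig
from ray.train.torch import TorchTrainer

# Point MLflow's S3 artifact store at MinIO (placeholder endpoints and keys).
os.environ["MLFLOW_S3_ENDPOINT_URL"] = "http://localhost:9000"
os.environ["AWS_ACCESS_KEY_ID"] = "minioadmin"
os.environ["AWS_SECRET_ACCESS_KEY"] = "minioadmin"
mlflow.set_tracking_uri("http://localhost:5000")

def train_loop_per_worker(config):
    import ray.train
    for epoch in range(config["epochs"]):
        loss = 1.0 / (epoch + 1)          # stand-in for a real training step
        ray.train.report({"loss": loss})  # surface metrics to the driver

trainer = TorchTrainer(
    train_loop_per_worker,
    train_loop_config={"epochs": 5},
    scaling_config=ScalingConfig(num_workers=2),
)

with mlflow.start_run(run_name="ray-train-sketch"):
    result = trainer.fit()
    mlflow.log_metric("final_loss", result.metrics["loss"])
```

In a real run the training loop would wrap an actual PyTorch model and log checkpoints as MLflow artifacts, which is where MinIO's object storage comes into play.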

Distributed Data Processing with Ray Data and MinIO

Distributed data processing is a key component of an efficient end-to-end distributed machine-learning training pipeline. This is true if you are building a basic neural network for statistical predictions where distributed training could mean each experiment runs in 10 minutes vs. an hour. It is also true if you are training or fine-tuning a Large Language Model (LLM) where

Read more...
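
A minimal sketch of that pattern, assuming a local MinIO endpoint, placeholder credentials, and a Parquet dataset with a text column: Ray Data reads directly from MinIO over the S3 API and applies the preprocessing step in parallel across the cluster.

```python
import pyarrow.fs
import ray

# Treat MinIO as an S3 endpoint (endpoint and credentials are placeholders).
minio_fs = pyarrow.fs.S3FileSystem(
    access_key="minioadmin",
    secret_key="minioadmin",
    endpoint_override="localhost:9000",
    scheme="http",
)

# Lazily read the Parquet objects in the bucket as a distributed dataset.
ds = ray.data.read_parquet("training-data/reviews", filesystem=minio_fs)

def normalize(batch):
    # Example transform: lowercase a text column before feature extraction.
    batch["text"] = [t.lower() for t in batch["text"]]
    return batch

# The transform runs in parallel, and the results land back in MinIO.
ds.map_batches(normalize).write_parquet("training-data/reviews-clean",
                                        filesystem=minio_fs)
```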