Improve RAG Performance with Open-Parse Intelligent Chunking

If you are implementing a generative AI solution using Large Language Models (LLMs), you should consider a strategy that uses Retrieval-Augmented Generation (RAG) to build contextually aware prompts for your LLM. An important process that occurs in the preproduction pipeline of a RAG-enabled LLM is the chunking of document text so that only the most relevant sections of a document

Read more...
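The chunking idea in that post can be sketched in a few lines. This is a minimal, hypothetical illustration of token-budgeted chunking in plain Python, not Open-Parse's actual API: paragraphs are packed greedily into chunks that stay under a word budget, so a retriever can later return only the most relevant chunks.

```python
# Minimal illustration of document chunking for RAG (hypothetical helper,
# NOT the Open-Parse API): split text on blank lines, then greedily pack
# paragraphs into chunks that stay under a word budget.
def chunk_text(text: str, max_words: int = 50) -> list[str]:
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks: list[str] = []
    current: list[str] = []
    count = 0
    for para in paragraphs:
        words = len(para.split())
        # Start a new chunk if this paragraph would exceed the budget.
        if current and count + words > max_words:
            chunks.append("\n\n".join(current))
            current, count = [], 0
        current.append(para)
        count += words
    if current:
        chunks.append("\n\n".join(current))
    return chunks

doc = (
    "First paragraph about topic A.\n\n"
    "Second paragraph about topic B.\n\n"
    "Third paragraph."
)
for chunk in chunk_text(doc, max_words=8):
    print(repr(chunk))
```

Intelligent chunkers like Open-Parse go further by respecting document structure (headings, tables, layout) rather than a flat word count, but the retrieval goal is the same.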

Architect’s Guide to a Reference Architecture for an AI/ML Datalake

An abbreviated version of this post appeared on The New Stack on March 19th, 2024. In enterprise artificial intelligence, there are two main types of models: discriminative and generative. Discriminative models are used to classify or predict data, while generative models are used to create new data. Even though Generative AI has dominated the news of late, organizations are still

Read more...

MinIO Enterprise Cache: A Distributed DRAM Cache for Ultra-Performance

As the computing world has evolved and the price of DRAM has plummeted, we find that server configurations often come with 500GB or more of DRAM. When you are dealing with larger deployments, even those with ultra-dense NVMe drives, the number of servers multiplied by the DRAM on those servers can quickly add up – often to several TBs. That DRAM

Read more...
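The capacity math behind that teaser is simple multiplication. A back-of-the-envelope sketch, with hypothetical cluster numbers, shows how per-server DRAM pools into a multi-TB cache:

```python
# Back-of-the-envelope DRAM aggregation for a hypothetical deployment:
# spare DRAM per server, summed across the cluster.
servers = 16                  # hypothetical number of servers
dram_per_server_gb = 512      # ~500GB+ of DRAM per server, as in the post
total_tb = servers * dram_per_server_gb / 1024
print(f"{total_tb:.1f} TB of pooled DRAM")
```

Even a modest 16-node cluster yields several terabytes of DRAM, which is the pool a distributed cache can put to work.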