What is Retrieval-Augmented Generation?

Retrieval-augmented generation (RAG) is a way of pairing a large language model with an external knowledge source so the model can look things up before it answers. Instead of relying only on what was baked into its training data, the model first retrieves relevant documents or records from a company wiki, database, data lake, or search index, then uses that retrieved context to craft a response.

This makes answers more factual, more specific to your domain, and easier to keep up to date.

Put simply: classic LLMs “answer from memory”; RAG systems “answer from memory plus your data.” Across vendors, you’ll see RAG described as a technique for improving the accuracy and reliability of generative models by feeding them results from an authoritative external knowledge base at query time instead of trusting the model’s static training data alone.

Most RAG pipelines follow the same basic loop (a minimal code sketch follows the list):

  1. Indexing your data – Content from PDFs, HTML pages, tickets, call notes, and databases is split into chunks, embedded into vectors, and stored in a vector store or hybrid search index.
  2. Retrieval – When a user asks a question, a retriever finds the most relevant chunks using semantic search, keyword search, or a hybrid of both.
  3. Augmentation – The retrieved snippets are added to the user’s prompt as context, usually with simple instructions like “answer using only this context and say if the answer is not present.”
  4. Generation – The LLM reads the augmented prompt and generates an answer that is grounded in the retrieved content, often with citations or references back to sources.
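
The loop above can be sketched end to end in a few dozen lines of Python. This is a minimal illustration rather than a production design: it assumes the sentence-transformers library for embeddings, keeps the "vector store" as an in-memory list, and leaves the LLM call as a stub (llm_generate), since any chat-completion client could slot in there. Chunk sizes and model names are arbitrary example choices.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Any embedding model works here; this public checkpoint is just an example choice.
embedder = SentenceTransformer("all-MiniLM-L6-v2")

def embed(text):
    return embedder.encode(text)  # 1-D numpy vector for a single string

# --- 1. Indexing: split documents into chunks, embed them, and store the vectors.
#     The "vector store" below is an in-memory list purely for illustration.
def chunk(text, size=500, overlap=50):
    return [text[i:i + size] for i in range(0, len(text), size - overlap)]

index = []  # entries: {"doc_id": ..., "text": ..., "vec": ...}

def index_document(doc_id, text):
    for piece in chunk(text):
        index.append({"doc_id": doc_id, "text": piece, "vec": embed(piece)})

# --- 2. Retrieval: rank stored chunks by cosine similarity to the query embedding.
def retrieve(query, k=4):
    q = embed(query)
    def cosine(v):
        return float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
    ranked = sorted(index, key=lambda e: cosine(e["vec"]), reverse=True)
    return [e["text"] for e in ranked[:k]]

# --- 3 + 4. Augmentation and generation: pack retrieved chunks into the prompt
#     and ask the model to answer only from that context.
def llm_generate(prompt):
    raise NotImplementedError  # stand-in: call whatever LLM client you use

def answer(query):
    context = "\n\n".join(retrieve(query))
    prompt = (
        "Answer using only the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return llm_generate(prompt)
```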

Because the knowledge base lives outside the model, you can update or replace data without retraining. New policies, product docs, or contracts are indexed, and the next query immediately benefits. This is a big reason RAG has become a default pattern for enterprise chatbots, knowledge assistants, and internal search tools.
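
To make that concrete with the toy index from the sketch above, refreshing a changed document is purely an indexing operation (the helper names are the same illustrative ones, not a real API):

```python
def upsert_document(doc_id, text):
    # Drop whatever was previously indexed for this document, then re-index it.
    # Only the index changes; the model itself is untouched, so the very next
    # query can already draw on the updated content.
    global index
    index = [e for e in index if e["doc_id"] != doc_id]
    index_document(doc_id, text)
```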

Common use cases for RAG include:

  1. Customer and employee support – Answer questions from product manuals, policies, or runbooks instead of handing users a long list of search results.
  2. Document Q&A and summarization – Let users ask questions over long documents, collections of contracts, or technical specs and get short, sourced answers.
  3. Analytics “copilots” – Combine warehouse or lakehouse data with metric definitions and business glossaries so analysts can ask, “Why did revenue dip in Q3 for region X?” and get a grounded explanation.

Good RAG systems pay attention to a few details that are easy to overlook:

  1. Chunking and indexing strategy – How you split and embed content determines whether retrieval pulls the right passage or a noisy mix.
  2. Retrieval quality and re-ranking – It’s often worth adding a second step that re-scores retrieved chunks using a more precise model before passing them to the LLM (see the re-ranking sketch after this list).
  3. Guardrails and refusal behavior – The model should be encouraged to say “I don’t know” when the retrieved context doesn’t contain an answer, instead of hallucinating.
  4. Evaluation – Teams need labeled question–answer pairs and policies so they can measure factual accuracy, retrieval coverage, and the rate of unsupported claims.
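
As one concrete version of the re-ranking step mentioned above, a cross-encoder can re-score the retriever's candidates before they reach the LLM. The sketch below assumes the sentence-transformers library and reuses retrieve() from the earlier pipeline sketch; the checkpoint name is just a commonly used public model.

```python
from sentence_transformers import CrossEncoder

# A cross-encoder scores each (query, passage) pair jointly. That is slower than
# vector search but usually more precise, so it is applied only to the top candidates.
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

def retrieve_and_rerank(query, k_retrieve=20, k_final=4):
    candidates = retrieve(query, k=k_retrieve)            # cheap first-pass retrieval
    scores = reranker.predict([(query, c) for c in candidates])
    ranked = sorted(zip(scores, candidates), key=lambda pair: pair[0], reverse=True)
    return [c for _, c in ranked[:k_final]]
```

The trade-off is latency for precision: the retriever cheaply narrows a large corpus to a few dozen chunks, and the re-ranker spends more compute deciding which of those actually answer the question.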

RAG is increasingly multimodal. Instead of only searching text, newer stacks can retrieve tables, charts, images, and even video transcripts, then let the model answer questions that require combining those signals (“What changed in the Q4 board report slides compared to Q3?”).

For data teams, RAG does not remove the need for high-quality data curation and labeling; it moves the problem. You still need:

  1. clean document and data catalogs,
  2. sensible schemas and metadata for what each source contains, and
  3. labeled evaluation sets that reflect the real questions users ask (a minimal scoring sketch follows below).
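
As a rough picture of what such a labeled set makes possible, the sketch below reuses retrieve() and answer() from the earlier pipeline sketch to score retrieval coverage and refusal behavior. The data format and the string-match checks are illustrative assumptions; real evaluation setups typically use larger sets and stricter judging.

```python
# Each labeled example pairs a real user question with the answer span it should be
# grounded in; answer=None marks out-of-scope questions the system should refuse.
qa_set = [
    {"question": "What is the refund window?", "answer": "30 days"},
    {"question": "Who won the 2010 World Cup?", "answer": None},
]

def evaluate(qa_set, k=4):
    in_scope = [ex for ex in qa_set if ex["answer"] is not None]
    out_of_scope = [ex for ex in qa_set if ex["answer"] is None]

    # Retrieval coverage (crude proxy): does the expected span show up in any retrieved chunk?
    covered = sum(
        ex["answer"].lower() in " ".join(retrieve(ex["question"], k=k)).lower()
        for ex in in_scope
    )

    # Refusal behavior (crude string match): did the model admit it doesn't know?
    refused = sum(
        "don't know" in answer(ex["question"]).lower()
        for ex in out_of_scope
    )

    print(f"retrieval coverage: {covered}/{len(in_scope)}")
    print(f"correct refusals:   {refused}/{len(out_of_scope)}")
```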

Platforms like Taskmonk sit underneath these efforts by helping teams curate and label the documents, passages, and answers used to train and evaluate RAG systems—so you can see, per domain or use case, where retrieval is strong, where grounding fails, and where more data or review is needed.