
AI & UXR, CHAT GPT, LLM

RAGs Against Hallucinations – Well Thought Out, but Not Good Enough?

Why retrieval-augmented generation is not a cure-all for reliable AI responses


2 MIN READ

May 20, 2025

In the world of artificial intelligence (AI), hallucinations – i.e. the invention of plausible-sounding but incorrect information – are a well-known problem with large language models (LLMs). Retrieval-augmented generation (RAG) has been presented as a solution to minimise this problem by incorporating external, reliable data sources into the response generation process. But is RAG really the silver bullet for hallucinations?


What is RAG anyway – and why is it relevant? 

RAG combines an LLM with an information retrieval system. Instead of relying solely on the knowledge stored in the model, the system searches external databases for relevant information and uses it to generate more accurate responses. For UX experts, this means that AI can access current and specific information, which is particularly advantageous in dynamic application areas.
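The retrieval step described above can be sketched in a few lines. The following is a deliberately minimal toy, not a production system: a crude word-overlap score stands in for real vector search, the corpus is three hard-coded sentences, and the actual LLM call is left out – only the grounded prompt is assembled. All function names here are illustrative assumptions.

```python
import re

def tokens(text: str) -> set[str]:
    """Lowercase word tokens, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def score(query: str, doc: str) -> float:
    """Crude relevance score: fraction of query words found in the document."""
    q = tokens(query)
    return len(q & tokens(doc)) / len(q) if q else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the top-k documents by relevance score (the 'R' in RAG)."""
    return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Ground the model by prepending the retrieved passages to the question."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only these sources:\n{context}\n\nQuestion: {query}"

corpus = [
    "RAG combines an LLM with an information retrieval system.",
    "Hallucinations are plausible-sounding but incorrect statements.",
    "UX research studies how people use products.",
]
query = "What does RAG combine?"
prompt = build_prompt(query, retrieve(query, corpus))
print(prompt)
```

In a real system, the overlap score would be replaced by embedding similarity over a vector index, but the overall shape – retrieve, then generate from the retrieved context – is the same.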

 

What RAGs do really well

  • Integration of current information: RAG enables access to data created after the LLM was trained.

  • Reduction of hallucinations in factual questions: Access to reliable sources reduces false claims.

  • Transparency through source references: Answers can be provided with direct references to the sources used, which increases traceability.

  • Application in companies: RAG can help provide specific information efficiently, especially in internal knowledge systems.


But: Why RAGs are often overrated 

Despite the advantages, there are also challenges:

  • Quality of the retrieved data: If the system retrieves irrelevant or outdated information, this can lead to incorrect answers.

  • Misinterpretation by the LLM: Even with correct data, the model may misunderstand or weigh it incorrectly.

  • Technical effort: Implementing and maintaining an effective RAG system requires significant resources.

  • Real-world example: A Stanford University study found that specialised legal AI tools using RAG still hallucinate in 17–33% of cases.


Typical misconceptions about RAGs

  • "With RAG, AI never hallucinates.": This is a fallacy. RAG reduces hallucinations, but does not eliminate them completely.

  • "External data = true answers": The quality and relevance of the data are crucial.

  • "RAG = better LLMs": Without careful implementation and monitoring, RAG can actually worsen performance.


What a good RAG system must deliver

  • High-quality data sources: The data used must be current, relevant and reliable.

  • Effective search algorithms: The system must be able to find the most relevant information efficiently.

  • Contextual adaptation: The LLM should correctly interpret the retrieved data and place it in the right context.

  • Continuous monitoring: Regular reviews and adjustments are necessary to ensure quality.
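One simple safeguard that follows from these requirements is a relevance threshold: if nothing sufficiently relevant is retrieved, the system should say so rather than let the model improvise. The sketch below illustrates the idea with the same toy word-overlap score as before; the threshold value, function names and fallback message are all assumptions for this example, not an established API.

```python
import re

def tokens(text: str) -> set[str]:
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def best_match(query: str, corpus: list[str]) -> tuple[str, float]:
    """Return the highest-scoring document and its word-overlap score."""
    q = tokens(query)
    scored = [(d, len(q & tokens(d)) / len(q) if q else 0.0) for d in corpus]
    return max(scored, key=lambda pair: pair[1])

def answer(query: str, corpus: list[str], threshold: float = 0.3) -> str:
    doc, s = best_match(query, corpus)
    if s < threshold:
        # Low retrieval confidence: refuse rather than risk a hallucination.
        return "No reliable source found - please consult a human expert."
    return f"Based on: {doc}"

corpus = ["RAG retrieves external documents before generating an answer."]
print(answer("How does RAG retrieve documents?", corpus))   # grounded answer
print(answer("What is the capital of France?", corpus))     # falls back
```

A refusal like this is often the more user-centred outcome: a visible "I don't know" is easier to design around than a confident wrong answer.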


Conclusion: RAG – a building block, not a silver bullet

RAG offers promising approaches for improving the response accuracy of LLMs, but it is not a panacea. For UX professionals, this means that RAG can be a valuable tool, but it should be used with caution and in combination with other methods to develop truly reliable and user-centred AI applications.


AUTHOR

Tara Bosenick

Tara has been active as a UX specialist since 1999 and has helped to establish and shape the industry in Germany on the agency side. She specialises in the development of new UX methods, the quantification of UX and the introduction of UX in companies.


At the same time, she has always been interested in developing a corporate culture in her companies that is as ‘cool’ as possible, in which fun, performance, team spirit and customer success are interlinked. She has therefore been supporting managers and companies on the path to more New Work / agility and a better employee experience for several years.


She is one of the leading voices in the UX, CX and Employee Experience industry.
