
AI & UXR, CHAT GPT, LLM

RAGs Against Hallucinations – Well Thought Out, but Not Good Enough?

Why retrieval-augmented generation is not a cure-all for reliable AI responses


2 MIN

May 20, 2025

In the world of artificial intelligence (AI), hallucinations – i.e. the invention of plausible-sounding but incorrect information – are a well-known problem with large language models (LLMs). Retrieval-augmented generation (RAG) has been presented as a solution to minimise this problem by incorporating external, reliable data sources into the response generation process. But is RAG really the silver bullet for hallucinations?


What is RAG anyway – and why is it relevant? 

RAG combines an LLM with an information retrieval system. Instead of relying solely on the knowledge stored in the model, the system searches external databases for relevant information and uses it to generate more accurate responses. For UX experts, this means that AI can access current and specific information, which is particularly advantageous in dynamic application areas.
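The flow described above can be sketched in a few lines: a retriever selects the passages most relevant to the query, and a prompt grounds the model in those passages before generation. This is a minimal illustration under stated assumptions, not a production setup – retrieval here is naive word overlap rather than a real search index, and the resulting prompt would then be passed to whatever LLM you use.

```python
import re

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query (toy retriever)."""
    q_words = set(re.findall(r"\w+", query.lower()))
    return sorted(
        documents,
        key=lambda d: len(q_words & set(re.findall(r"\w+", d.lower()))),
        reverse=True,
    )[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Ground the model: instruct it to answer only from retrieved passages."""
    sources = "\n".join(f"- {c}" for c in context)
    return f"Answer using ONLY these sources:\n{sources}\n\nQuestion: {query}"

docs = [
    "RAG combines retrieval with generation.",
    "LLMs are trained on a fixed snapshot of data.",
    "UX research benefits from current, specific information.",
]
prompt = build_prompt("What is RAG?", retrieve("What is RAG?", docs))
# `prompt` now carries the top-matching passages; pass it to your LLM of choice.
```

In a real system the toy retriever would be replaced by an embedding or BM25 search, but the structure – retrieve, then ground the prompt – stays the same.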

 

What RAGs do really well

  • Integration of current information: RAG enables access to data created after the LLM's training cutoff.

  • Reduction of hallucinations in factual questions: Access to reliable sources reduces false claims.

  • Transparency through source references: Answers can be provided with direct references to the sources used, which increases traceability.

  • Application in companies: RAG can help provide specific information efficiently, especially in internal knowledge systems.


But: Why RAGs are often overrated 

Despite the advantages, there are also challenges:

  • Quality of the retrieved data: If the system retrieves irrelevant or outdated information, this can lead to incorrect answers.

  • Misinterpretation by the LLM: Even with correct data, the model may misunderstand or weigh it incorrectly.

  • Technical effort: Implementing and maintaining an effective RAG system requires significant resources.

  • Real-world example: A study by Stanford University's RegLab found that specialised legal AI tools using RAG still hallucinate in 17–33% of cases.


Typical misconceptions about RAGs

  • "With RAG, AI never hallucinates.": This is a fallacy. RAG reduces hallucinations, but does not eliminate them completely.

  • "External data = true answers": The quality and relevance of the data are crucial.

  • "RAG = better LLMs": Without careful implementation and monitoring, RAG can actually worsen performance.


What a good RAG system must deliver

  • High-quality data sources: The data used must be current, relevant and reliable.

  • Effective search algorithms: The system must be able to find the most relevant information efficiently.

  • Contextual adaptation: The LLM should correctly interpret the retrieved data and place it in the right context.

  • Continuous monitoring: Regular reviews and adjustments are necessary to ensure quality.
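The monitoring point above can be approximated even with a crude groundedness check: flag answers whose sentences share little vocabulary with the retrieved sources. The sketch below is purely illustrative – the tokenisation, word-length cutoff, and sentence splitting are simplifying assumptions, not a validated metric.

```python
import re

def content_words(text: str) -> set[str]:
    """Lowercase alphabetic words longer than three characters."""
    return {w for w in re.findall(r"[a-z]+", text.lower()) if len(w) > 3}

def grounding_score(answer: str, sources: list[str]) -> float:
    """Fraction of answer sentences sharing content words with any source."""
    src_vocab = set().union(*(content_words(s) for s in sources)) if sources else set()
    sentences = [s for s in re.split(r"[.!?]+", answer) if s.strip()]
    if not sentences:
        return 0.0
    grounded = sum(1 for s in sentences if content_words(s) & src_vocab)
    return grounded / len(sentences)

sources = ["The study covered 200 participants in Germany."]
answer = "The study had 200 participants. They all loved pizza."
print(grounding_score(answer, sources))  # → 0.5: one of two sentences is grounded
```

A low score does not prove a hallucination, but in continuous monitoring it is a cheap signal for which answers deserve human review.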


Conclusion: RAG – a building block, not a silver bullet

RAG offers promising approaches for improving the response accuracy of LLMs, but it is not a panacea. For UX professionals, this means that RAG can be a valuable tool, but it should be used with caution and in combination with other methods to develop truly reliable and user-centred AI applications.


AUTHOR

Tara Bosenick

Tara has been active as a UX specialist since 1999 and has helped to establish and shape the industry in Germany on the agency side. She specialises in the development of new UX methods, the quantification of UX and the introduction of UX in companies.


At the same time, she has always been interested in developing a corporate culture in her companies that is as ‘cool’ as possible, in which fun, performance, team spirit and customer success are interlinked. She has therefore been supporting managers and companies on the path to more New Work / agility and a better employee experience for several years.


She is one of the leading voices in the UX, CX and Employee Experience industry.
