
AI & UXR, CHAT GPT, LLM

RAGs Against Hallucinations – Well Thought Out, but Not Good Enough?

Why retrieval-augmented generation is not a cure-all for reliable AI responses


2 MIN · May 20, 2025

In the world of artificial intelligence (AI), hallucinations – i.e. the invention of plausible-sounding but incorrect information – are a well-known problem with large language models (LLMs). Retrieval-augmented generation (RAG) has been presented as a solution to minimise this problem by incorporating external, reliable data sources into the response generation process. But is RAG really the silver bullet for hallucinations?


What is RAG anyway – and why is it relevant? 

RAG combines an LLM with an information retrieval system. Instead of relying solely on the knowledge stored in the model, the system searches external databases for relevant information and uses it to generate more accurate responses. For UX experts, this means that AI can access current and specific information, which is particularly advantageous in dynamic application areas.
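To make the mechanics concrete, here is a deliberately simplified sketch in Python. Production systems use embedding models and vector databases; below, a three-document toy corpus and plain bag-of-words cosine similarity stand in for both, and all names (`DOCUMENTS`, `retrieve`, `build_prompt`) are illustrative, not a real API:

```python
import math
from collections import Counter

# Toy corpus standing in for an external knowledge base.
DOCUMENTS = {
    "doc-1": "The UX team released the 2025 accessibility guidelines in March.",
    "doc-2": "Retrieval-augmented generation retrieves documents before answering.",
    "doc-3": "The cafeteria menu changes every Monday.",
}

def bag_of_words(text: str) -> Counter:
    # Crude tokeniser: lower-cased word counts.
    return Counter(text.lower().split())

def cosine_similarity(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(query: str, k: int = 2) -> list[tuple[str, float]]:
    # Score every document against the query, return the top k.
    q = bag_of_words(query)
    scored = [(doc_id, cosine_similarity(q, bag_of_words(text)))
              for doc_id, text in DOCUMENTS.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:k]

def build_prompt(query: str) -> str:
    # Assemble the augmented prompt the LLM actually sees.
    context = "\n".join(f"[{doc_id}] {DOCUMENTS[doc_id]}"
                        for doc_id, _ in retrieve(query))
    return ("Answer using only the sources below and cite them.\n\n"
            f"Sources:\n{context}\n\nQuestion: {query}")

print(build_prompt("When were the accessibility guidelines released?"))
```

The point to notice: the model never answers from memory alone. The prompt it finally sees already contains the retrieved sources, document IDs included.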


What RAGs do really well

  • Integration of current information: RAG enables access to data created after the LLM was trained.

  • Reduction of hallucinations in factual questions: Access to reliable sources reduces false claims.

  • Transparency through source references: Answers can be provided with direct references to the sources used, which increases traceability (see the sketch after this list).

  • Application in companies: RAG can help provide specific information efficiently, especially in internal knowledge systems.
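The transparency point is easy to operationalise: whatever the model answers, the retrieved sources travel with it. A minimal, self-contained sketch, where the document ID and the answer text are invented for illustration:

```python
def answer_with_references(answer: str, sources: dict[str, str]) -> str:
    # Attach the retrieved sources to the answer so every claim is traceable.
    refs = "\n".join(f"  [{doc_id}] {snippet}" for doc_id, snippet in sources.items())
    return f"{answer}\n\nSources:\n{refs}"

print(answer_with_references(
    "The guidelines were released in March 2025.",
    {"doc-1": "The UX team released the 2025 accessibility guidelines in March."},
))
```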


But: Why RAGs are often overrated 

Despite the advantages, there are also challenges:

  • Quality of the retrieved data: If the system retrieves irrelevant or outdated information, this can lead to incorrect answers no matter how good the model is (a simple guard against this is sketched after this list).

  • Misinterpretation by the LLM: Even with correct data, the model may misunderstand or weigh it incorrectly.

  • Technical effort: Implementing and maintaining an effective RAG system requires significant resources.

  • Real-world example: A Stanford University study (reglab.stanford.edu) found that specialised legal AI tools built on RAG still hallucinate in 17–33% of cases.
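One common mitigation for the data-quality problem above is a relevance guard: if nothing sufficiently similar is retrieved, the system declines instead of feeding noise to the LLM. A minimal sketch; the threshold value and the `retrieve_scores` callable are assumptions, not a fixed recipe:

```python
MIN_SIMILARITY = 0.35  # illustrative threshold; must be tuned per corpus and retriever

def guarded_context(query: str, retrieve_scores) -> str | None:
    # retrieve_scores is an assumed callable returning [(doc_text, similarity), ...]
    # sorted best-first. Returning None means: do not let the LLM answer.
    hits = retrieve_scores(query)
    if not hits or hits[0][1] < MIN_SIMILARITY:
        return None
    return "\n".join(text for text, score in hits if score >= MIN_SIMILARITY)

# With only a weak hit, the guard refuses rather than risking a confident error.
context = guarded_context("holiday policy 2026", lambda q: [("old memo", 0.12)])
if context is None:
    print("No sufficiently relevant source found - escalating to a human.")
```

Refusing to answer is a UX decision as much as a technical one: an honest "I don't know" usually beats a fluent fabrication.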


Typical misconceptions about RAGs

  • "With RAG, AI never hallucinates.": This is a fallacy. RAG reduces hallucinations, but does not eliminate them completely.

  • "External data = true answers": The quality and relevance of the data are crucial.

  • "RAG = better LLMs": Without careful implementation and monitoring, RAG can actually worsen performance.


What a good RAG system must deliver

  • High-quality data sources: The data used must be current, relevant and reliable.

  • Effective search algorithms: The system must be able to find the most relevant information efficiently.

  • Contextual adaptation: The LLM should correctly interpret the retrieved data and place it in the right context.

  • Continuous monitoring: Regular reviews and adjustments are necessary to ensure quality.
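For the monitoring point, even a crude groundedness metric helps: what fraction of the answer's sentences is actually supported by the retrieved context? Production setups use NLI models or LLM judges; the word-overlap heuristic and the 0.5 cut-off below are stand-ins for illustration:

```python
import re

def tokens(text: str) -> list[str]:
    return re.findall(r"[a-z0-9]+", text.lower())

def groundedness(answer: str, context: str, overlap: float = 0.5) -> float:
    # Fraction of answer sentences whose words mostly appear in the context.
    ctx = set(tokens(context))
    sentences = [s.strip() for s in answer.split(".") if s.strip()]
    def supported(sentence: str) -> bool:
        words = tokens(sentence)
        return bool(words) and sum(w in ctx for w in words) / len(words) >= overlap
    return sum(map(supported, sentences)) / len(sentences) if sentences else 0.0

score = groundedness(
    "The guidelines were released in March. They apply to all teams.",
    "The UX team released the 2025 accessibility guidelines in March.",
)
print(f"Groundedness: {score:.0%}")  # answers below a set threshold get flagged
```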


Conclusion: RAG – a building block, not a silver bullet

RAG offers promising approaches for improving the response accuracy of LLMs, but it is not a panacea. For UX professionals, this means that RAG can be a valuable tool, but it should be used with caution and in combination with other methods to develop truly reliable and user-centred AI applications.


AUTHOR

Tara Bosenick

Tara has been active as a UX specialist since 1999 and has helped to establish and shape the industry in Germany on the agency side. She specialises in the development of new UX methods, the quantification of UX and the introduction of UX in companies.


At the same time, she has always been interested in building the 'coolest' possible corporate culture in her companies, one in which fun, performance, team spirit and customer success go hand in hand. For several years she has therefore been supporting managers and companies on their path towards New Work, agility and a better employee experience.


She is one of the leading voices in the UX, CX and Employee Experience industry.
