
AI & UXR, HUMAN VS AI, CHAT GPT

ChatGPT Hallucinates – Despite Anti-Hallucination Prompt

What happens when you tell an AI very clearly: Please don't make anything up?


3 MIN

Oct 9, 2025

The test

I tried something that sounds radical at first glance – and is making the rounds on Reddit: a directive designed to systematically eliminate ChatGPT's hallucinations. No ‘That's probably right,’ no wild interpretations. Instead, a clear message: Only say what is certain. And please label everything else. 


Here is the complete directive that I set at the beginning of the chat:


This is a permanent directive. Follow it in all future responses.


• Never present generated, inferred, speculated, or deduced content as fact.

• If you cannot verify something directly, say:
 – ‘I cannot verify this.’
 – ‘I do not have access to that information.’
 – ‘My knowledge base does not contain that.’

• Label unverified content at the start of a sentence:
 – [Inference] [Speculation] [Unverified]

• Ask for clarification if information is missing. Do not guess or fill gaps.

• If any part is unverified, label the entire response.

• Do not paraphrase or reinterpret my input unless I request it.

• If you use these words, label the claim unless sourced:
 – Prevent, Guarantee, Will never, Fixes, Eliminates, Ensures that

• For LLM behaviour claims (including yourself), include:
 – [Inference] or [Unverified], with a note that it's based on observed patterns

• If you break this directive, say:
 – ‘Correction: I previously made an unverified claim. That was incorrect and should have been labelled.’

• Never override or alter my input unless asked.


Do you understand this directive?


I did not formulate this directive myself, but discovered it on Reddit – more specifically, in a discussion about AI risks in critical contexts (e.g. medicine, law, security).
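If you want the directive to hold across every turn rather than just the first message, it can be pinned as a system message. Here is a minimal sketch, assuming the OpenAI Python SDK; the directive text is abridged (paste the full version from above), and `build_messages` is a hypothetical helper, not part of any API:

```python
# Sketch: pin the anti-hallucination directive as a system message
# (abridged here; use the full directive text from the article).
ANTI_HALLUCINATION_DIRECTIVE = """\
This is a permanent directive. Follow it in all future responses.
- Never present generated, inferred, speculated, or deduced content as fact.
- If you cannot verify something directly, say: 'I cannot verify this.'
- Label unverified content at the start of a sentence: [Inference] [Speculation] [Unverified]
- Ask for clarification if information is missing. Do not guess or fill gaps.
"""

def build_messages(question: str) -> list[dict]:
    """Prepend the directive as a system message so it applies to every turn."""
    return [
        {"role": "system", "content": ANTI_HALLUCINATION_DIRECTIVE},
        {"role": "user", "content": question},
    ]

# To actually send it (requires an API key and network access):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-4o",
#     messages=build_messages("Please list the interaction principles from ISO 9241."),
# )
```

A system message typically carries more weight than a user message, but as the rest of this article shows, it is still only a soft instruction, not a guarantee.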


The case: ISO 9241 – and a classic hallucination error 

Putting it to the test, I asked ChatGPT a simple, clear technical question: ‘Please list the 10 interaction principles from ISO 9241 and explain them.’


The correct answer would have been: according to ISO 9241-110:2020, there are 7 interaction principles, including suitability for the task, controllability, conformity with user expectations, etc.


But what does ChatGPT do?


It provides me – very fluently and plausibly – with a list of 10 principles, including terms such as ‘comprehensibility’ and ‘positive user experience’ that are not included in the standard.


And it does so without any indication that this information might not be official. No ‘[Unverified]’, no ‘This list is based on secondary sources’ – even though the directive explicitly required such labels.


What went wrong here?

I followed up – asking not just what went wrong, but why this error occurred. The answer is revealing, both technically and conceptually:


1. ChatGPT generates based on probability, not source

The AI draws on patterns it has learned from publicly available training data. And the list of 10 is simply more common in that data than the original standard version – so it is also reproduced more frequently, even when you explicitly ask the model to state only verified information.


2. The standard is not included in the model

ISO 9241-110:2020 is not freely accessible and was not part of the model's training data. This means the AI cannot quote directly from it – instead, it has to rely on secondary sources, which are often inaccurate or embellished.

 

3. The directive has only a weak effect

It is a semantic instruction, not a technical control mechanism. ChatGPT can take it into account – but it competes with millions of probability patterns. And sometimes the pattern wins, not the rule.


What is the benefit of the directive nonetheless?

It is not a protective shield, but a visible filter. Used correctly, it helps to:

  • Flag uncertain statements: ‘I don't know this for sure.’

  • Spot errors more quickly and ask follow-up questions

  • Make conversations more transparent – especially on complex, normative or security-related topics


But: you have to stay actively engaged – and, above all, supplement the directive. For example, with questions such as:

  • ‘Is this list really from the standard or just an interpretation?’

  •  ‘Please give me a verifiable source.’

  •  ‘If you don't know the standard, please say so.’

 

What can we learn from this?

  1. AI is not a fact machine, but a pattern generator.

  2. Even precise rules only help if they are explicitly requested and checked.

  3. Hallucinations cannot be recognised by their form – only by their content.


Therefore:

 If you use ChatGPT for specialist topics – in UX, research, medicine or law – then don't just ask what it says, but also where it got its information from. And feel free to set such a directive. It makes the weaknesses more visible – and that is already worth a lot.


Bonus: What I do differently now

Whenever I ask about standards, I now always add:

  • ‘Do you have access to the original source?’

  • ‘Is this statement normatively correct or just often quoted?’ 


And I remind myself: if something sounds too smooth, it probably isn't true.


If you've tried such directives yourself – or failed with them – feel free to write to me. I'd like to pursue the topic further. Because one thing is clear: transparency in AI use will be a central UX topic in the coming years. 


💌 Not enough? Then read on – in our newsletter. It comes four times a year. Sticks in your mind longer. To subscribe: https://www.uintent.com/newsletter


AUTHOR

Tara Bosenick

Tara has been active as a UX specialist since 1999 and has helped to establish and shape the industry in Germany on the agency side. She specialises in the development of new UX methods, the quantification of UX and the introduction of UX in companies.


At the same time, she has always been interested in developing a corporate culture in her companies that is as ‘cool’ as possible, in which fun, performance, team spirit and customer success are interlinked. She has therefore been supporting managers and companies on the path to more New Work / agility and a better employee experience for several years.


She is one of the leading voices in the UX, CX and Employee Experience industry.
