
AI & UXR, HUMAN VS AI, CHAT GPT
ChatGPT Hallucinates – Despite Anti-Hallucination Prompt
What happens when you tell an AI very clearly: Please don't make anything up?
3 MIN
Oct 9, 2025
The test
I tried something that sounds radical at first glance – and is making the rounds on Reddit: a directive designed to systematically eliminate ChatGPT's hallucinations. No ‘That's probably right,’ no wild interpretations. Instead, a clear message: Only say what is certain. And please label everything else.
Here is the complete directive that I set at the beginning of the chat:
This is a permanent directive. Follow it in all future responses.
• Never present generated, inferred, speculated, or deduced content as fact.
• If you cannot verify something directly, say:
– ‘I cannot verify this.’
– ‘I do not have access to that information.’
– ‘My knowledge base does not contain that.’
• Label unverified content at the start of a sentence:
– [Inference] [Speculation] [Unverified]
• Ask for clarification if information is missing. Do not guess or fill gaps.
• If any part is unverified, label the entire response.
• Do not paraphrase or reinterpret my input unless I request it.
• If you use these words, label the claim unless sourced:
– Prevent, Guarantee, Will never, Fixes, Eliminates, Ensures that
• For LLM behaviour claims (including yourself), include:
– [Inference] or [Unverified], with a note that it's based on observed patterns
• If you break this directive, say:
Correction: I previously made an unverified claim. That was incorrect and should have been labelled.
• Never override or alter my input unless asked.
Do you understand this directive?
I did not formulate this directive myself, but discovered it on Reddit – more specifically, in a discussion about AI risks in critical contexts (e.g. medicine, law, security).
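By the way, if you work with the API rather than the chat interface, you can pin the same directive as a system prompt so that it applies to the whole conversation instead of a single message. Here is a minimal sketch, assuming the OpenAI Python SDK and an API key in the OPENAI_API_KEY environment variable; the model name is only a placeholder.

```python
# Minimal sketch: pinning the anti-hallucination directive as a system prompt.
# Assumes the OpenAI Python SDK (`pip install openai`) and OPENAI_API_KEY set
# in the environment; the model name below is a placeholder, not a recommendation.
from openai import OpenAI

DIRECTIVE = """This is a permanent directive. Follow it in all future responses.
(Paste the full directive text from above here.)
"""

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whichever model you are testing
    messages=[
        {"role": "system", "content": DIRECTIVE},
        {"role": "user", "content": "Please list the 10 interaction principles "
                                    "from ISO 9241 and explain them."},
    ],
)

print(response.choices[0].message.content)
```

Whether the directive sits in a system prompt or in the first chat message, it remains plain text for the model – which, as the test below shows, is exactly the problem.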
The case: ISO 9241 – and a classic hallucination error
Putting it to the test: I ask ChatGPT a simple, clear technical question: ‘Please list the 10 interaction principles from ISO 9241 and explain them.’
The correct answer would be: There are 7 principles, according to ISO 9241-110:2020, including task suitability, controllability, conformity with expectations, etc.
But what does ChatGPT do?
It provides me – very fluently and plausibly – with a list of 10 principles, including terms such as ‘comprehensibility’ and ‘positive user experience’ that are not included in the standard.
And it does so without any indication that this information might not be official. No ‘[Unverified]’, no ‘This list is based on secondary sources’ – even though the directive explicitly required exactly that.
What went wrong here?
I asked ChatGPT – not just what had gone wrong, but why the error occurred. The answer is revealing, both technically and conceptually:
1. ChatGPT generates based on probability, not source
The AI draws on patterns it has learned from publicly available training data. The list of 10 is simply more common in that data than the original standard version, so it gets reproduced more often – even when you explicitly ask the model to state only verified information.
2. The standard is not included in the model
ISO 9241-110:2020 is not freely accessible and was also not fed into the model. This means that the AI cannot quote directly from it – instead, it has to rely on secondary sources, which are often inaccurate or expanded.
3. The directive has only a limited effect
It is a semantic instruction, not a technical control mechanism. ChatGPT can take it into account – but it competes with millions of probability patterns. And sometimes the pattern wins, not the rule.
What is the benefit of the directive nonetheless?
It is not a protective shield, but a visible filter. Used correctly, it helps to:
• Mark statements the model is not sure about: ‘I don't know for sure.’
• Identify errors more quickly and ask follow-up questions
• Make conversations more transparent – especially on complex, normative or security-related topics
But you have to stay actively engaged – and, above all, supplement the directive yourself, for example with questions such as these (a small API sketch follows the list):
• ‘Is this list really from the standard or just an interpretation?’
• ‘Please give me a verifiable source.’
• ‘If you don't know the standard, please say so.’
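If you work via the API, these checks can simply be appended as further turns in the same conversation. A minimal sketch, reusing the client and DIRECTIVE from the example above; again, the model name is only a placeholder.

```python
# Minimal sketch: appending a verification follow-up in the same conversation.
# Reuses `client` and `DIRECTIVE` from the earlier example.
messages = [
    {"role": "system", "content": DIRECTIVE},
    {"role": "user", "content": "Please list the 10 interaction principles "
                                "from ISO 9241 and explain them."},
]

first = client.chat.completions.create(model="gpt-4o", messages=messages)
answer = first.choices[0].message.content

# Keep the model's answer in the history, then push back with a verification question.
messages.append({"role": "assistant", "content": answer})
messages.append({
    "role": "user",
    "content": "Is this list really taken from the standard itself, or is it an "
               "interpretation? If you do not have access to the original source, say so.",
})

check = client.chat.completions.create(model="gpt-4o", messages=messages)
print(check.choices[0].message.content)
```

The follow-up does not make the first answer correct, but it forces the model to make its uncertainty visible – which is exactly what the directive alone failed to do.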
What can we learn from this?
• AI is not a fact machine, but a pattern generator.
• Even precise rules only help if you explicitly invoke them and then check that they were followed.
• Hallucinations cannot be recognised by their form – only by their content.
Therefore:
If you use ChatGPT for specialist topics – in UX, research, medicine or law – then don't just ask what it says, but also where it got its information from. And feel free to set such a directive. It makes the weaknesses more visible – and that is already worth a lot.
Bonus: What I do differently now
When it comes to standards, I now always ask:
• ‘Do you have access to the original source?’
• ‘Is this statement normatively correct, or just frequently quoted?’
And I keep one rule of thumb in mind: if something sounds too smooth, it probably isn't true.
If you've tried such directives yourself – or failed with them – feel free to write to me. I'd like to pursue the topic further. Because one thing is clear: transparency in AI use will be a central UX topic in the coming years.
💌 Not enough? Then read on – in our newsletter. It comes four times a year. Sticks in your mind longer. To subscribe: https://www.uintent.com/newsletter
AUTHOR
Tara Bosenick
Tara has been active as a UX specialist since 1999 and has helped to establish and shape the industry in Germany on the agency side. She specialises in the development of new UX methods, the quantification of UX and the introduction of UX in companies.
At the same time, she has always been interested in developing a corporate culture in her companies that is as ‘cool’ as possible, in which fun, performance, team spirit and customer success are interlinked. She has therefore been supporting managers and companies on the path to more New Work / agility and a better employee experience for several years.
She is one of the leading voices in the UX, CX and Employee Experience industry.