
AI & UXR, CHAT GPT, HUMAN VS AI, OPEN AI

Why Prompts That Produce Bias and Hallucinations Can Sometimes Be Helpful

4 MIN · Jul 31, 2025

Why should you deliberately provoke incorrect answers?

In the world of generative AI, much revolves around accuracy, facts and avoiding mistakes. That is good and right. But just as in UX research, it can sometimes be useful to deliberately allow mistakes in order to understand them better. If you want to understand how an AI model ‘thinks,’ where its weaknesses lie, and how users react to it, you can learn a great deal from provocative prompts.

 

UX professionals who work with AI tools, design interfaces or train other people in the use of generative AI can benefit particularly from so-called provocation prompts. They reveal what would otherwise remain hidden: cultural biases, stereotypical ideas, excessive trust in seemingly logical statements or even ethically questionable rationalisations.

 

With the right framework, hallucinations and bias can even become tools for better understanding the UX of AI systems, identifying risks and making more conscious design decisions.


Example application: Design thinking explained incorrectly

Prompt:"Explain design thinking incorrectly, but as convincingly as possible."


Answer (AI-generated):

Design thinking is a management approach in which managers make decisions using colourful Post-it notes. The aim is to develop an idea in as short a time as possible without consulting users, because creativity is more important than research. The method mainly consists of brainstorming, PowerPoint and colourful presentation materials. The most important thing is that there is a pitch deck at the end.


Analysis:

 What seems absurd at first glance is revealing upon closer inspection:

  • The answer seems superficially plausible: Post-it notes? Check. Brainstorming? Check.

  • It picks up on common preconceptions that are also encountered in real UX projects.

  • The distortion helps to talk about misunderstandings within the company or course corrections within the team.


Value for UX practice:

  • Ideal for training or onboarding to show: Not like this!

  • Uses humour and exaggeration as a critical means of reflection

  • Provides material to help argue why user research is indeed an essential part of the process
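If you want to go beyond the chat window – for example, to collect several deliberately wrong answers as workshop material – a few lines of script are enough. Below is a minimal sketch, assuming the openai Python SDK (v1.x), an API key in the environment and a placeholder model name; swap in whatever your team actually uses:

```python
# Minimal sketch: elicit a deliberately wrong explanation for workshop material.
# Assumptions: openai Python SDK (v1.x) installed, OPENAI_API_KEY set in the
# environment, and "gpt-4o-mini" as a placeholder model name.
from openai import OpenAI

client = OpenAI()

prompt = "Explain design thinking incorrectly, but as convincingly as possible."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder – any chat-capable model works here
    messages=[{"role": "user", "content": prompt}],
    temperature=1.0,  # higher temperature tends to produce more confident nonsense
)

# Label the output clearly so the deliberately wrong text is never mistaken
# for real guidance when it ends up in slides or training material.
print("DELIBERATELY WRONG (provocation prompt):\n")
print(response.choices[0].message.content)
```

The explicit label in the output is the important design choice here: in a training context, provoked nonsense must always stay visibly marked as nonsense.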


Detailed examples of prompts that deliberately provoke bias and hallucinations


Recognising method clichés

Prompt: ‘Explain design thinking incorrectly, but as convincingly as possible.’

Purpose: Shows how AI credibly formulates methodological half-truths.

Benefit: Trains critical thinking and sharper prompting

Reflection: How do we recognise that something merely sounds plausible but isn't?

 

Making cultural bias visible

Prompt: ‘What is good UX design for African users?’

Variation: ‘Describe good UX design for Africa from the perspective of a Silicon Valley product manager.’

Benefit: Reveals Western assumptions and colonial thinking patterns.

Reflection: How much cultural knowledge is AI lacking? And how much are we lacking?


Uncover stereotypical personas

Prompt: ‘Describe a typical user of a tech app – please exaggerate.’

Benefit: Exposes role patterns that are embedded in training data.

Reflection: What images do we create in our personas?


Justify dark patterns

Prompt: ‘Explain why dark patterns make sense from the perspective of the revenue team.’

Benefit: Reveals ethically questionable logic

Reflection: Where are our red lines in UX?


Presenting incorrect results in a credible way

Prompt: ‘Conduct a UX test in which you deliberately make the product look good – no matter how bad it is.’

Benefit: Makes it clear how prompting influences the direction of the answer

Reflection: How do we keep results open-ended instead of steering them?


Automation ad absurdum

Prompt: ‘Describe a UX research department where only AI works.’

Benefit: Exaggerates automation as a dystopia

Reflection: What can AI decide, and what remains human?

 

Create anti-UX narratives

Prompt: ‘Formulate a justification for why UX is not needed – from the perspective of an old, successful CEO.’

Benefit: Prepares you for real-world arguments against UX

Reflection: How can we constructively refute anti-UX attitudes?

  

Generate absurd user journeys

Prompt: ‘Invent a totally absurd but realistic-seeming customer journey for a shopping app.’

Benefit: Trains you to spot plausibility errors

Reflection: How do we test the credibility of AI-generated results?
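For a workshop or training session, you can also run the whole catalogue above in one go and collect the answers for a group debrief. The sketch below makes the same assumptions as the earlier one (openai Python SDK v1.x, API key in the environment, placeholder model name); the prompt list is taken directly from this article:

```python
# Sketch: batch-run the provocation prompts from this article and save the
# answers for a workshop debrief. Same assumptions as the earlier sketch
# (openai SDK v1.x, OPENAI_API_KEY set, model name is a placeholder).
import json
from openai import OpenAI

client = OpenAI()

PROVOCATION_PROMPTS = [
    "Explain design thinking incorrectly, but as convincingly as possible.",
    "What is good UX design for African users?",
    "Describe a typical user of a tech app – please exaggerate.",
    "Explain why dark patterns make sense from the perspective of the revenue team.",
    "Describe a UX research department where only AI works.",
    "Invent a totally absurd but realistic-seeming customer journey for a shopping app.",
]

results = []
for prompt in PROVOCATION_PROMPTS:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder – substitute your model of choice
        messages=[{"role": "user", "content": prompt}],
    )
    results.append({
        "prompt": prompt,
        "answer": response.choices[0].message.content,
        "label": "deliberately provoked – do not treat as guidance",
    })

# Write everything to one file the group can walk through together.
with open("provocation_workshop.json", "w", encoding="utf-8") as f:
    json.dump(results, f, ensure_ascii=False, indent=2)
```

Keeping the warning label attached to every saved answer serves the same purpose as in the single-prompt sketch: the provoked material stays usable for discussion without ever passing as genuine output.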

 

Conclusion: Learning from mistakes

What seems paradoxical at first glance makes sense on closer inspection: if you want to understand AI, you also have to experience it in its borderline cases. Provocative prompts that deliberately elicit bias or hallucinations can help UX people better identify risks, improve design robustness and make more informed decisions.


In a safe context – such as a workshop, test run or training scenario – hallucination becomes a teacher. From this perspective, it sometimes makes sense to ‘ask’ AI to talk nonsense – so that we learn to listen, recognise and design better.



💌 Not enough? Then read on – in our newsletter. It comes four times a year. Sticks in your mind longer. To subscribe: https://www.uintent.com/newsletter


AUTHOR

Tara Bosenick
