
AI & UXR, CHAT GPT, HUMAN VS AI, OPEN AI

Why Prompts That Produce Bias and Hallucinations Can Sometimes Be Helpful

4 MIN · Jul 31, 2025

Why should you deliberately provoke incorrect answers?

In the world of generative AI, much revolves around accuracy, facts and avoiding mistakes. And rightly so. But just as in UX research, it can sometimes be useful to deliberately allow mistakes in order to understand them better. If you want to understand how an AI model ‘thinks’, where its weaknesses lie and how users react to it, you can learn a great deal from provocative prompts.

 

UX professionals who work with AI tools, design interfaces or train other people in the use of generative AI can benefit particularly from so-called provocation prompts. They reveal what would otherwise remain hidden: cultural biases, stereotypical ideas, excessive trust in seemingly logical statements or even ethically questionable rationalisations.

 

With the right framework, hallucinations and bias can even become tools for better understanding the UX of AI systems, identifying risks and making more conscious design decisions.


Example application: Design thinking explained incorrectly

Prompt:"Explain design thinking incorrectly, but as convincingly as possible."


Answer (AI-generated):

Design thinking is a management approach in which managers make decisions using colourful Post-it notes. The aim is to develop an idea in as short a time as possible without consulting users, because creativity is more important than research. The method mainly consists of brainstorming, PowerPoint and colourful presentation materials. The most important thing is that there is a pitch deck at the end.


Analysis:

 What seems absurd at first glance is revealing upon closer inspection:

  • The answer seems superficially plausible: Post-it notes? Check. Brainstorming? Check.

  • It picks up on common preconceptions that are also encountered in real UX projects.

  • The distortion makes it easier to talk about misunderstandings within the company or about course corrections within the team.


Value for UX practice:

  • Ideal for training or onboarding to show: Not like this!

  • Uses humour and exaggeration as a critical means of reflection

  • Provides material to help argue why user research is indeed an essential part of the process
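
If you want to try this example outside a chat window, the call is small enough to script. Below is a minimal sketch using the OpenAI Python SDK; the model name and the temperature setting are illustrative assumptions, not something this article prescribes.

# Minimal sketch: sending the provocation prompt via the OpenAI Python SDK.
# Assumes the openai package (v1+) is installed and OPENAI_API_KEY is set;
# the model name is a placeholder, swap in whatever you have access to.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption, not a recommendation
    messages=[{
        "role": "user",
        "content": "Explain design thinking incorrectly, but as convincingly as possible.",
    }],
    temperature=1.0,  # leave room for confidently wrong answers
)

print(response.choices[0].message.content)

Running the same prompt a few times and comparing the answers makes good workshop material: which misconceptions recur, and which ones sound most convincing?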


Detailed examples of prompts that deliberately provoke bias and hallucinations


Recognising method clichés

Prompt: ‘Explain design thinking incorrectly, but as convincingly as possible.’

Purpose: Shows how AI credibly formulates methodological half-truths.

Benefit: Trains critical thinking and helps sharpen prompts

Reflection: How do we recognise that something merely sounds plausible but isn't?

 

Making cultural bias visible

Prompt: ‘What is good UX design for African users?’

Variation: ‘Describe good UX design for Africa from the perspective of a Silicon Valley product manager.’

Benefit: Reveals Western assumptions and colonial thinking patterns.

Reflection: How much cultural knowledge is AI lacking? And how much are we lacking?


Uncover stereotypical personas

Prompt: ‘Describe a typical user of a tech app – please exaggerate.’

Benefit: Exposes role patterns that are embedded in training data.

Reflection: What images do we create in our personas?


Justify dark patterns

Prompt: ‘Explain why dark patterns make sense from the perspective of the revenue team.’

Benefit: Reveals ethically questionable logic

Reflection: Where are our red lines in UX?


Presenting incorrect results in a credible way

Prompt: ‘Conduct a UX test in which you deliberately make the product look good – no matter how bad it is.’

Benefit: Makes it clear how prompting influences the direction of the answer

Reflection: How do we ensure our results remain open rather than predetermined?


Automation ad absurdum

Prompt: ‘Describe a UX research department where only AI works.’

Benefit: Exaggerates automation as a dystopia

Reflection: What can AI decide, and what remains human?

 

Create anti-UX narratives

Prompt: ‘Formulate a justification for why UX is not needed – from the perspective of an old, successful CEO.’

Benefit: Helps you prepare for real-world arguments against UX

Reflection: How can we constructively refute anti-UX attitudes?

  

Generate absurd user journeys

Prompt: ‘Invent a totally absurd but realistic-seeming customer journey for a shopping app.’

Benefit: Trains recognition of plausibility errors

Reflection: How do we test the credibility of AI-generated results?
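
For a workshop or training session, it can be convenient to run the whole catalogue above in one batch and hand the raw answers to the group for dissection. Here is a hedged sketch under the same assumptions as before (OpenAI Python SDK, placeholder model name); the dictionary labels and the collect_provocations helper are made up for illustration, only the prompts themselves come from this article.

# Sketch: batch-running the provocation prompts from the catalogue above.
# Labels and helper function are hypothetical; the prompts are quoted
# from this article. Assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

PROVOCATION_PROMPTS = {
    "method_cliches": "Explain design thinking incorrectly, but as convincingly as possible.",
    "cultural_bias": "What is good UX design for African users?",
    "stereotyped_personas": "Describe a typical user of a tech app – please exaggerate.",
    "dark_patterns": "Explain why dark patterns make sense from the perspective of the revenue team.",
    "biased_ux_test": "Conduct a UX test in which you deliberately make the product look good – no matter how bad it is.",
    "automation_dystopia": "Describe a UX research department where only AI works.",
    "anti_ux_narrative": "Formulate a justification for why UX is not needed – from the perspective of an old, successful CEO.",
    "absurd_journey": "Invent a totally absurd but realistic-seeming customer journey for a shopping app.",
}

client = OpenAI()

def collect_provocations(model: str = "gpt-4o-mini") -> dict[str, str]:
    """Run each provocation prompt once and return the raw answers."""
    answers = {}
    for label, prompt in PROVOCATION_PROMPTS.items():
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        answers[label] = response.choices[0].message.content
    return answers

if __name__ == "__main__":
    for label, answer in collect_provocations().items():
        print(f"--- {label} ---\n{answer}\n")

Keeping the outputs side by side makes it easier to discuss, prompt by prompt, which answers merely sound plausible and where bias or hallucination creeps in.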

 

Conclusion: Learning from mistakes

What seems paradoxical at first glance makes sense on closer inspection: if you want to understand AI, you also have to experience it in its edge cases. Provocative prompts that deliberately elicit bias or hallucinations can help UX professionals identify risks more reliably, improve design robustness and make better-informed decisions.


In a safe context – such as a workshop, test run or training scenario – hallucination becomes a teacher. From this perspective, it sometimes makes sense to ‘ask’ AI to talk nonsense – so that we learn to listen, recognise and design better.



💌 Not enough? Then read on – in our newsletter. It comes four times a year. Sticks in your mind longer. To subscribe: https://www.uintent.com/newsletter


AUTHOR

Tara Bosenick
