
AI & UXR, HUMAN VS AI, LLM, TRENDS, UX METHODS, PERSONAS

Artificial Users, Real Insights? How Generative Agents Could Change the Field of UX

3 MIN | Jun 26, 2025

What if we could simulate users – believably, with nuance, and at any time?

What if we could conduct interviews without recruiting participants?

  

What if we could get feedback on prototypes before anyone had even seen them?

These questions sound like science fiction – but this is exactly the direction in which current research on generative agents is heading. It is a trend that is also becoming increasingly important in the world of UX and user research.

  

What is Generative Agent-Based Modelling (GABM)?

Generative Agent-Based Modelling, or GABM for short, combines two developments:

  1. Agent-based modelling (ABM) – these are simulations in which individual units (agents) follow simple rules and generate complex phenomena through their interactions. This is familiar, for example, from urban planning, traffic models or epidemic research.

  2. Generative AI (LLMs) – e.g. ChatGPT, Claude or Gemini – these are systems that can generate text, answer questions or respond in a surprisingly human-like way.

  

In classic ABMs, modellers define the decision-making logic. In GABM, however, a large language model makes these decisions based on language, context and personality. This makes the agent generative: it does not decide according to fixed rules, but rather in a situational, context-sensitive and sometimes surprisingly human way.
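
To make the mechanism concrete, here is a minimal sketch of such a decision step in Python. It assumes the OpenAI Python client; the persona, the situation, the options and the model name are purely illustrative choices, not part of any specific tool mentioned in this article.

```python
# Minimal sketch of a generative agent decision step.
# Assumes the OpenAI Python client; persona, situation and model are illustrative.
from openai import OpenAI

client = OpenAI()

def agent_decides(persona: str, situation: str, options: list[str]) -> str:
    """Ask the model to choose one of the options, in character."""
    prompt = (
        f"You are the following person: {persona}\n"
        f"Situation: {situation}\n"
        f"Options: {', '.join(options)}\n"
        "Answer with exactly one of the options and nothing else."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model works here
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()

decision = agent_decides(
    persona="42-year-old office worker, risk-averse, values routine",
    situation="This morning's company message reports rising infection numbers.",
    options=["go to the office", "work from home"],
)
print(decision)
```

The decision logic lives entirely in the prompt and the persona, not in hand-coded rules – which is exactly what distinguishes a generative agent from a classic ABM agent.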

  

Research meets reality

Two scientific papers are currently setting the tone:

  • 🧪 A team at Virginia Tech (Ghaffarzadegan et al., 2024) shows how GPT can be used to model social systems – e.g. how social norms spread in organisations. The LLM becomes part of the model's decision-making process: each morning, agents read a ‘message’ about the current pandemic status and then decide via ChatGPT whether to go to work or stay at home.

  • 🧑‍🤝‍🧑 A Stanford team (Park et al., 2024) interviewed over 1,000 real people, created generative agents from their responses, and then tested how well these agents could predict the attitudes, values and behaviours of the originals. The result: impressive matches, in some cases on par with how consistently the real participants themselves answered the same questions two weeks later.


And what does this have to do with UX?

Quite a lot, because what these studies show is this:

Today, we can create realistically simulated people – with psychological profiles, backstories and behavioural logic – and place them in a wide variety of situations.

 

In the UX world, these include:

  • Prototype testing

  • Persona validation

  • Early hypothesis formation

  • Simulated interviews

  • Concept reactions


Initial tools and approaches in the UX context

Here is a selection of tools and methods that are already available or in development:

  

Generative personas with LLMs

These tools create individual, believable users – often based on target group segments, interview data or prompt building blocks:

  • UXPin Persona AI: Generates dynamic personas with goals, fears and behaviours.

  • ChatGPT with prompt templates: Structured prompts allow you to generate personas at the touch of a button (‘Create a 35-year-old mother with little technical affinity who uses a banking app’). A minimal prompt sketch follows at the end of this section.

  • Fictive Kin's Synthetic Users: Open-source approach to generating AI-based test subjects.


🟢 Useful for: Brainstorming, team alignment, workshop simulations

🔴 Limitations: Often generic, dependent on prompting, no real data
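
As a rough idea of what such a prompt template can look like, here is a minimal sketch in Python, again assuming the OpenAI client; the field list, the target group and the model name are illustrative assumptions, not the output format of any of the tools above.

```python
# Minimal sketch of persona generation via a prompt template.
# Assumes the OpenAI Python client; fields and model are illustrative.
import json
from openai import OpenAI

client = OpenAI()

PERSONA_TEMPLATE = (
    "Create a persona as JSON with the keys name, age, occupation, goals, "
    "fears, tech_affinity and a short backstory.\n"
    "Target group: {target_group}\n"
    "Context of use: {context}"
)

def generate_persona(target_group: str, context: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat model with JSON output will do
        response_format={"type": "json_object"},
        messages=[{
            "role": "user",
            "content": PERSONA_TEMPLATE.format(target_group=target_group, context=context),
        }],
    )
    return json.loads(response.choices[0].message.content)

persona = generate_persona(
    target_group="35-year-old mother with little technical affinity",
    context="uses a banking app for everyday payments",
)
print(persona["goals"])
```

The point of the template is repeatability: the same structure can be filled with different target group segments, which makes the resulting personas easier to compare in a workshop.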



Simulated usability tests

Some experimental tools attempt to simulate user behaviour directly – e.g. how a user navigates through an app or reacts to UI elements:

  • Figma plug-ins with GPT: An agent ‘clicks’ through screens and comments on comprehensibility. A minimal walkthrough sketch follows at the end of this section.

  • SimulAItor (research): GPT-based agents analyse prototypes and suggest improvements.

  • Forethought AI: Combines historical UX data with predictive LLMs.


🟢 Useful for: Early design feedback, comprehensibility testing

🔴 Limitations: No real cognitive processes or motor interactions
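
A minimal sketch of such a simulated walkthrough might look like the following. The screens are described as plain text because the agent never sees real pixels or performs real interactions – which is precisely the limitation noted above. Client library, persona and screen descriptions are illustrative assumptions.

```python
# Minimal sketch of a simulated walkthrough: a persona-conditioned LLM
# comments on each screen's comprehensibility. Screens are plain-text stand-ins.
from openai import OpenAI

client = OpenAI()

persona = "68-year-old first-time user of online banking, cautious, reads everything"

screens = {
    "Login": "Fields: email, password. Button: 'Authenticate'. Link: 'SSO'.",
    "Dashboard": "Tiles: 'Portfolio delta', 'Quick wire', 'Statements'.",
}

for name, description in screens.items():
    prompt = (
        f"You are this user: {persona}\n"
        f"You see the screen '{name}': {description}\n"
        "Say what you would do next and name anything you do not understand."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {name} ---")
    print(response.choices[0].message.content)
```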

  

AI as an interview partner

LLMs can be prompted (or fine-tuned) to behave like users – answering interview questions, responding spontaneously and handling follow-ups. A minimal interview-loop sketch follows at the end of this section.

  • AI-based interview ‘bots’: Replicate realistic conversation situations.

  • AI coaching for UX researchers: Interview training with simulated target groups.

  • OpenPrompt projects: Collection of prompts for specific target group behaviour.


🟢 Useful for: Interview training, guideline validation, quick response scenarios

🔴 Limitations: Emotional depth and situational contextualisation still limited
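
For illustration, here is a minimal sketch of such an interview loop, in which the growing message history serves as the agent's memory so that follow-up questions can refer back to earlier answers. The persona, the questions and the model name are illustrative assumptions.

```python
# Minimal sketch of an interview simulation with conversation memory.
# Assumes the OpenAI Python client; persona and questions are illustrative.
from openai import OpenAI

client = OpenAI()

persona = ("You are Lena, 29, a nurse on rotating shifts, who uses her phone "
           "mostly one-handed during breaks. Answer briefly and stay in character.")

questions = [
    "How do you currently keep track of your shift schedule?",
    "What annoys you most about the app you use today?",
    "You mentioned something just now - can you tell me more about that?",
]

messages = [{"role": "system", "content": persona}]
for question in questions:
    messages.append({"role": "user", "content": question})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=messages,
    )
    answer = response.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    print(f"Q: {question}\nA: {answer}\n")
```

Because every answer is appended to the history, the third (follow-up) question actually has something to refer back to – useful for testing whether an interview guide flows logically.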


Behaviour simulation with agent networks

As in GABM, entire groups are simulated here rather than individual users – e.g. to observe network dynamics, opinion shifts or feature adoption.

  • Market diffusion models with GPT agents (e.g. in Python, NetLogo or AnyLogic): How does a new feature spread among a user base? A minimal simulation sketch follows at the end of this section.

  • AI simulations for service design: Simulated customers react to different touchpoints.


🟢 Useful for: Strategic UX decisions, feature rollout planning

🔴 Limitations: High complexity, requires good modelling skills
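
The sketch below shows the basic shape of such a diffusion simulation in plain Python. To keep it self-contained and runnable it uses a simple peer-influence rule instead of an LLM call; in a GABM setup the decide() function would query a language model, as in the decision sketch near the top of this article. All numbers are illustrative.

```python
# Minimal sketch of feature diffusion over a small social network.
# The decision rule is a deliberate simplification; in a GABM setup
# decide() would call an LLM with the agent's persona and context.
import random

random.seed(42)

N_AGENTS = 50
N_ROUNDS = 10
BASE_ADOPTION = 0.15  # illustrative base probability, boosted by peer influence

# Each agent knows a handful of random peers (excluding themselves).
peers = {
    i: random.sample([j for j in range(N_AGENTS) if j != i], k=4)
    for i in range(N_AGENTS)
}
adopted = {i: False for i in range(N_AGENTS)}
adopted[0] = True  # seed user who already uses the new feature

def decide(agent: int) -> bool:
    """Adopt if enough peers have adopted - a stand-in for an LLM-based decision."""
    peer_share = sum(adopted[p] for p in peers[agent]) / len(peers[agent])
    return random.random() < BASE_ADOPTION + 0.5 * peer_share

for round_no in range(1, N_ROUNDS + 1):
    for agent in range(N_AGENTS):
        if not adopted[agent]:
            adopted[agent] = decide(agent)
    print(f"Round {round_no}: {sum(adopted.values())}/{N_AGENTS} adopted")
```

Swapping the rule-based decide() for a persona-conditioned LLM call is exactly where the modelling effort (and the cost) of a real GABM setup comes in.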


Interim conclusion: Not a competitor, but a complement

Will generative AI replace UX research? No.

Will it expand it, accelerate it, perhaps even make it more accessible? Yes, absolutely.

These tools are not a substitute for real users, but they are a valuable building block:

  • For phases when real interviews are not possible.

  • For early exploration when there are no concrete hypotheses yet.

  • For team discussions when you want to get closer to the user profile.


Outlook: What is already possible – and what is coming (soon)

You can already do the following today:

  • Brainstorm with generative personas

  • Test interview questions on simulated users

  • Get initial reactions to prototypes from synthetic ‘users’

 

Coming soon:

  • Automated UX tests with simulated users

  • ‘Living’ personas that evolve

  • Virtual focus groups with different agent types


Conclusion: Now is the right time to get started

For us UX people, it's the perfect time to take a playful and critical look at this topic.

If you try it out a little today, you'll understand tomorrow what this technology can do – and what it can't. And above all: what it can do for you in your own context. 


Because in the end, it's always about understanding people better, improving decisions – and finding the right balance between empiricism, empathy and efficiency.




AUTHOR

Tara Bosenick

Tara has been active as a UX specialist since 1999 and has helped to establish and shape the industry in Germany on the agency side. She specialises in the development of new UX methods, the quantification of UX and the introduction of UX in companies.


At the same time, she has always been interested in developing a corporate culture in her companies that is as ‘cool’ as possible, in which fun, performance, team spirit and customer success are interlinked. She has therefore been supporting managers and companies on the path to more New Work / agility and a better employee experience for several years.


She is one of the leading voices in the UX, CX and Employee Experience industry.
