
AI & UXR, LLM, HUMAN VS AI, OPEN AI

How a Transformer Thinks – And Why It Hallucinates

5 MIN

Jul 3, 2025

Why UX people should understand how large language models really work


Large language models such as ChatGPT, Claude and Gemini have long been more than just toys for tech enthusiasts. They write texts, answer questions, analyse feedback, generate ideas – and often do so with remarkable confidence. Many of us already use them every day in our UX practice. And rightly so.


But as exciting as these tools are, it's important to realise that what we're experiencing is not a form of genuine understanding. And the mistakes these systems make are not operational accidents. They are a structural component of the technology.

 

If you know how a transformer works, you understand why large language models hallucinate.

And if you understand that, you can make better UX decisions – for tools, for processes and for users.


What a transformer actually is (and what it isn't) 

The ‘transformer’ is the basic architecture on which almost all of today's LLMs are built – from GPT-4 to Gemini, from Mistral to LLaMA. It was introduced in 2017 by a Google team (‘Attention Is All You Need’) and has largely replaced older architectures such as LSTMs and RNNs.

 

The basic idea is easy to explain:

A transformer is a model that predicts language. Token by token. It calculates which token (this can be a word, part of a word or a punctuation mark) is most likely to follow next, based on the context so far.


An example:

You enter: ‘UX researchers observe how users...’

The model then calculates: What is likely to follow? ‘...interact’, ‘...encounter difficulties’, ‘...use the app’?


The transformer selects the most likely continuation. And then the next one. And the next one. Until an entire paragraph has been created.
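
If you like to see things in code, here is a deliberately tiny Python sketch of that idea. The probabilities are invented for illustration – a real model scores tens of thousands of possible tokens at every step.

```python
# Toy illustration of next-token prediction (all numbers are invented).
context = "UX researchers observe how users"

# Hypothetical probabilities the model might assign to possible continuations
next_token_probs = {
    " interact": 0.41,
    " encounter": 0.27,
    " use": 0.19,
    " xylophone": 0.0001,  # grammatically odd, so very unlikely
}

# The simplest strategy: pick the most probable token and append it to the context
next_token = max(next_token_probs, key=next_token_probs.get)
context += next_token
print(context)  # "UX researchers observe how users interact"
```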


The crucial point here is: the transformer does not ‘know’ what is true. It knows no facts, no meaning, no world. It only ‘knows’ what sounds probable from a linguistic point of view – because similar word sequences occurred often in its training data. And this is precisely where all the problems originate.


A look inside: How a transformer works

To understand where hallucinations come from, it's worth having a quick look at the technical side of things (we promise: no maths, no equations – just a basic understanding of the structure):


1. Tokenisation

The text entered is broken down into small building blocks called tokens. These can be words, syllables or parts of words (e.g. ‘UX’ + ‘ research’ + ‘ers’).
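
If you want to see real tokens, the optional snippet below uses OpenAI's open-source tiktoken library – any other tokenizer would do, and the exact boundaries depend on the encoding used.

```python
# Optional illustration with OpenAI's open-source tokenizer (pip install tiktoken).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
token_ids = enc.encode("UX researchers observe how users interact")

print(token_ids)                              # one integer per token
print([enc.decode([t]) for t in token_ids])   # the text pieces behind those integers
```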


2. Embedding

Each of these tokens is translated into a numerical vector – a kind of meaning code. This makes language mathematically calculable.
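
As a toy sketch (invented sizes and random numbers – real models learn these vectors during training):

```python
import numpy as np

# Toy embedding table: every token ID maps to a vector of numbers.
# Real models use vocabularies of ~50,000+ tokens and hundreds to thousands of dimensions.
vocab_size, embedding_dim = 50_000, 768
rng = np.random.default_rng(0)
embedding_table = rng.normal(size=(vocab_size, embedding_dim))

token_ids = [1045, 2293, 887]        # made-up IDs from the tokenisation step
token_vectors = embedding_table[token_ids]
print(token_vectors.shape)           # (3, 768): one vector per token
```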


3. Positional encoding

Because a transformer processes all tokens simultaneously (‘parallelises’), it needs additional information: where in the sentence is each token located? This positional information is added so that the model can take word order into account.
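
For the curious, here is the sinusoidal variant from the original 2017 paper as a small sketch; many newer models use learned or rotary position embeddings instead.

```python
import numpy as np

# Sinusoidal positional encoding as in the original 2017 paper.
def positional_encoding(seq_len: int, dim: int) -> np.ndarray:
    positions = np.arange(seq_len)[:, None]        # 0, 1, 2, ... one row per token
    div = 10_000 ** (np.arange(0, dim, 2) / dim)   # a different wavelength per dimension pair
    pe = np.zeros((seq_len, dim))
    pe[:, 0::2] = np.sin(positions / div)
    pe[:, 1::2] = np.cos(positions / div)
    return pe

# The position information is simply added on top of the token vectors.
token_vectors = np.zeros((3, 768))                 # stand-in for the embedding step
token_vectors = token_vectors + positional_encoding(3, 768)
```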


4. Self-attention

The heart of the transformer: Each token ‘pays attention’ to all other tokens in the context and weighs up which of them are important for its meaning. This creates semantic relationships – e.g. who ‘she’ is in the sentence or what an adjective refers to.
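
A minimal single-head sketch of that calculation (deliberately simplified: random weights, no masking, no output projection):

```python
import numpy as np

# Minimal single-head self-attention.
def self_attention(x, w_q, w_k, w_v):
    q, k, v = x @ w_q, x @ w_k, x @ w_v            # queries, keys, values per token
    scores = q @ k.T / np.sqrt(k.shape[-1])        # how strongly each token attends to each other token
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)   # softmax: rows sum to 1
    return weights @ v                             # each token becomes a weighted mix of all tokens

dim = 64
rng = np.random.default_rng(0)
x = rng.normal(size=(3, dim))                      # three token vectors
w_q, w_k, w_v = (rng.normal(size=(dim, dim)) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)      # (3, 64): one updated vector per token
```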


5. Multilayer processing

This does not happen just once, but many times over – with dozens to hundreds of layers. This creates complex patterns of probabilities that map linguistic structures with astonishing precision.
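
Conceptually it is just a loop – each ‘layer’ in the sketch below is a stand-in for a real block of self-attention, feed-forward network and normalisation:

```python
import numpy as np

# Each "layer" here is only a stand-in; the point is the stacking.
def make_layer(dim, seed):
    rng = np.random.default_rng(seed)
    w = rng.normal(size=(dim, dim)) * 0.01
    return lambda x: x + x @ w                          # residual update, as in real transformers

layers = [make_layer(64, seed) for seed in range(12)]   # real models stack dozens of such blocks

x = np.random.default_rng(0).normal(size=(3, 64))       # three token vectors
for layer in layers:
    x = layer(x)                                        # every layer refines every token's vector
```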


6. Autoregressive generation

The model outputs a token, attaches it to the context, calculates the next token based on the new context, and so on. Step by step, the text grows.
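
As a sketch – `predict_next_token` is a stand-in for the whole model described above; here it simply returns canned words so that the loop itself is visible:

```python
from itertools import cycle

# Stand-in for the model: just cycles through canned words.
canned = cycle([" interact", " with", " the", " prototype", "."])

def predict_next_token(context: str) -> str:
    return next(canned)        # a real model would compute probabilities over all tokens here

context = "UX researchers observe how users"
for _ in range(5):
    context += predict_next_token(context)   # each output becomes part of the next input
print(context)                 # "UX researchers observe how users interact with the prototype."
```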


The result often looks like a train of thought – but in reality it is a recursive probability forecast. Linguistically brilliant. Not necessarily correct in terms of content.


Why transformers hallucinate – structurally, not randomly

Now it gets exciting: the way transformers work inevitably introduces certain sources of error. And if you ignore this, you run the risk of using LLMs incorrectly – in interfaces, in research, in product development.


1. Lack of world knowledge (no grounding)

Transformer models have no connection to the real world. They don't know that Paris is in France or that UX doesn't mean ‘user xylophone.’ They only know the probability with which certain word sequences occur.

This means that they can generate completely fabricated statements – as long as they sound linguistically plausible.

 

2. Lack of fact checking

A transformer does not check content. It has no internal control system, no logical validation, no semantic plausibility check.

If the training material often stated, ‘AI was developed by Alan Turing in 1983,’ the model might consider this a valid statement – even if it is historically incorrect.

 

3. Chain reaction through autoregressive writing

 A small error at the beginning (e.g., a made-up name or a false fact) automatically leads to further errors. This is because each new token is based on the previous context.

This is called compounding errors – a slip-up turns into a whole hallucination.
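
A simplified back-of-the-envelope illustration (the numbers are invented and the sketch ignores the feedback effect described above, but it shows why longer outputs are riskier):

```python
# Invented numbers: even if each individual token is "safe" with 99 % probability,
# the chance that a long answer contains no slip-up at all shrinks quickly.
p_token_ok = 0.99
for length in (50, 200, 500):
    print(length, round(p_token_ok ** length, 2))   # ~0.61, ~0.13, ~0.01
```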


4. Same linguistic feel – regardless of truthfulness

A transformer speaks in the same style – regardless of whether it is guessing, hallucinating or stating a fact.

The model has been trained to sound fluent and coherent – not to make the truth transparent.

This is dangerous: the linguistic quality suggests certainty – where there is none.

  

5. Quality and bias of training data

LLMs learn from huge text corpora – internet forums, websites, books, PDFs. These contain a wealth of knowledge – but also many errors, fiction, opinions, satire and polemics.

The model cannot distinguish whether a sentence comes from Wikipedia or Reddit.

 

What occurs often enough shapes the model, regardless of whether it is true or false.


Why this should matter to UX professionals

As UX professionals, we often build systems in which AI plays a role – whether for content creation, feedback analysis or user support. And the following applies to LLMs in particular:

 

Just because a system sounds good doesn't mean it's good.

 

When integrating LLMs into UX work, we must:

  • Understand where the strengths come from (e.g. language flow, pattern recognition, context sensitivity)

  • Recognise where the weaknesses come from (e.g. hallucinations, fact blindness, bias)

  • Take targeted measures to catch errors (e.g. through RAG, feedback loops, fact checking – see the sketch after this list)

  • Build UX systems that make uncertainty visible (e.g. confidence scores, source references, indications of AI origin)

  • Design prompts and interfaces to guide LLMs in a meaningful way – and not overwhelm them
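
To make the RAG idea from the list above concrete, here is a minimal, heavily simplified sketch. All names are hypothetical placeholders, not any specific library's API.

```python
# Hypothetical stand-in for a real model call (OpenAI, Anthropic, a local model, ...).
def call_llm(prompt: str) -> str:
    return "[model answer based only on the prompt above]"

def answer_with_sources(question: str, knowledge_base: list[str]) -> str:
    # 1. Retrieve the passages most relevant to the question (here: naive keyword overlap).
    overlap = lambda doc: len(set(question.lower().split()) & set(doc.lower().split()))
    top_docs = sorted(knowledge_base, key=overlap, reverse=True)[:3]

    # 2. Ground the prompt in those passages and ask the model to admit uncertainty.
    prompt = (
        "Answer using only the sources below. If they do not contain the answer, "
        "say that you do not know.\n\nSources:\n" + "\n".join(top_docs)
        + "\n\nQuestion: " + question
    )
    return call_llm(prompt)

docs = [
    "Our diary study ran for 14 days with 12 participants.",
    "The onboarding flow was redesigned in Q2.",
]
print(answer_with_sources("How long did the diary study run?", docs))
```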


Conclusion: Understanding is better than amazement

Transformer models are a technological masterpiece. But they are not truth tellers. They generate language spaces, not world models. They make predictions, not statements.


Anyone working with LLMs as a UX professional – whether for tools, processes or content – should understand the inner workings of these models. Not in detail, but in terms of their overall structure.


Because only those who understand how a transformer thinks can prevent it from hallucinating – especially when it matters most.




AUTHOR

Tara Bosenick

Tara has been active as a UX specialist since 1999 and has helped to establish and shape the industry in Germany on the agency side. She specialises in the development of new UX methods, the quantification of UX and the introduction of UX in companies.


At the same time, she has always been interested in developing a corporate culture in her companies that is as ‘cool’ as possible, in which fun, performance, team spirit and customer success are interlinked. She has therefore been supporting managers and companies on the path to more New Work / agility and a better employee experience for several years.


She is one of the leading voices in the UX, CX and Employee Experience industry.
