
AI & UXR, CHAT GPT, LLM, OPEN AI

Five Types of AI Systems – And What They Do for Us

4 MIN · Jun 5, 2025

Why this article?

Anyone dealing with AI today – whether as a UX professional, product manager or strategist – quickly realizes that there is no such thing as ‘the’ artificial intelligence. Instead, we encounter different systems with completely different capabilities – and, more importantly, with completely different benefits for us as users.


Terms such as GenAI, RAG and agents are only of limited help here. This is because they usually describe technical concepts rather than user experiences. What matters is:

  • What do I get as a user?

  • What can I hand over – and what do I still have to do myself?

  • Where does the system help me – and where are the risks?


In this article, I present five basic types of AI systems. All five are found – explicitly or implicitly – in modern tools. They differ in function, benefits and typical UX scenarios. And they all have their limitations. It's time to bring some order to the chaos.


  1. GenAI – When creativity happens at the touch of a button

 The classic among AI systems: Generative AI (GenAI for short). You enter a prompt and get text, images, code or ideas in return. Usually without any waiting time, often quite impressive.


What is it?

 A trained language model that generates new content based on probabilities. It doesn't remember – it invents. And it does it damn well.


What do you get?

🟢 Ideas, first drafts, creative suggestions, visual variations – all fast, all with a prompt.


UX examples:

  • Generate invitation texts for usability tests

  • Write down 10 ideas for user feedback questions

  • Sketch initial personas based on a few keywords

  • Output wireframe description as HTML
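
To make the first of these examples concrete, here is a minimal sketch of what such a request can look like as a single model call. It assumes the OpenAI Python SDK and an API key in the environment; the model name and prompt wording are purely illustrative, and any comparable LLM API follows the same pattern.

```python
# Minimal sketch: generating invitation-text drafts with one LLM call.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY in the environment;
# the model name and prompt are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Draft three short, friendly invitation emails for a remote usability test "
    "of a medical dashboard. Audience: nurses. Duration: 45 minutes. "
    "Include a placeholder for the scheduling link."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",           # any capable chat model works here
    messages=[{"role": "user", "content": prompt}],
    temperature=0.8,               # higher temperature = more varied drafts
)

print(response.choices[0].message.content)
```

The whole ‘system’ is a single probabilistic call: no memory, no sources – which is exactly why the drafts still need a human check.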


Benefits:

  • Starting point for creative or conceptual work

  • Time savings for repeatable tasks (e.g., text drafts)

  • Idea generator in stagnating processes


Limitations:

  • Hallucinations: GenAI invents facts – even if it sounds plausible, it is often wrong

  • Lack of context: Without a precise prompt, there is little useful output

  • Style risk: Many texts sound the same – ‘AI speak’ rears its head

  

Conclusion: Great for creative rough drafts – dangerous if used without checking.


  2. RAG – When AI knows what we know

Retrieval-Augmented Generation (RAG for short) is an attempt to make GenAI smarter: The AI first searches specifically for relevant information – and then formulates an answer based on that. Sounds simple, but it's powerful.


What is it?

A combination of vector search (e.g. in PDFs, databases or Miro boards) and an LLM that uses this content instead of inventing it.


What do you get?

🟢 Context-based, well-founded answers – with direct references to your data, documents and reports.


UX examples:

  • ‘What UX criteria apply to our MedTech products?’ → Quote from internal standard

  • ‘What was said about the onboarding flow in the last interviews?’ → Text excerpts with sources

  • ‘Which KPIs do we use for NPS tracking?’ → Extracted from workshop documentation
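
The retrieve-then-generate pattern behind these examples can be sketched in a few lines: embed some internal snippets, pick the ones closest to the question, and let the model answer only from them. The documents, model names and the in-memory search below are stand-ins – a real setup would use a vector database, proper chunking and access rights – and the sketch assumes the OpenAI SDK plus numpy.

```python
# Minimal RAG sketch: retrieve the most relevant internal snippets, then answer
# only from them. Documents, model names and the in-memory search are stand-ins
# for a real vector store.
import numpy as np
from openai import OpenAI

client = OpenAI()

documents = [
    "UX standard MedTech v2: primary actions must be reachable within two taps.",
    "Interview round May: 6 of 8 users found the onboarding flow too long.",
    "KPI handbook: NPS is tracked quarterly per product line.",
]

def embed(texts: list[str]) -> np.ndarray:
    res = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in res.data])

doc_vectors = embed(documents)
question = "What was said about the onboarding flow in the last interviews?"
q_vector = embed([question])[0]

# Cosine similarity between the question and every document; keep the top two.
scores = doc_vectors @ q_vector / (
    np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q_vector)
)
context = "\n- ".join(documents[i] for i in scores.argsort()[::-1][:2])

answer = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "Answer only from the provided context and quote it. "
                    "If the context is insufficient, say so."},
        {"role": "user", "content": f"Context:\n- {context}\n\nQuestion: {question}"},
    ],
)
print(answer.choices[0].message.content)
```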


Benefits:

  • Saves time searching for information

  • Leads to more reliable answers

  • Supports internal knowledge sharing


Limitations:

  • If the data is poor, even RAG won't help

  • Relevance logic often remains opaque

  • High data maintenance effort (structure, timeliness, rights)


Conclusion: Finally makes large amounts of data usable – as long as it is well-prepared.

  

  3. MCP – When AI can think in multiple stages

Modular Conversational Pipelines (MCP) are orchestrated systems in which a task is broken down into sub-steps that are processed one after another. Suddenly, the AI can analyse, structure and weigh things up – not just respond.


What is it?

 A chain of specialized AI modules or prompts – e.g. ‘Summarize → Cluster → Prioritize → Visualize’.


What do you get?

🟢 Logically structured results for complex, multi-layered tasks – including intermediate steps


UX examples:

  • Automatically code and cluster open interview responses

  • Harmonize multilingual user feedback

  • Systematically evaluate use cases according to impact and feasibility

  • Create benchmark comparisons based on UX metrics
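
A minimal pipeline sketch along the lines of ‘Summarize → Cluster → Prioritize’: each step is a separate, narrowly instructed model call whose output feeds the next. The step prompts, the model and the input file name are assumptions, not a fixed recipe – the point is that every intermediate result stays inspectable.

```python
# Minimal pipeline sketch: each step is a narrow, specialized prompt whose output
# feeds the next step. Prompts, model and the input file are illustrative.
from openai import OpenAI

client = OpenAI()

def step(instruction: str, payload: str) -> str:
    res = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": instruction},
            {"role": "user", "content": payload},
        ],
        temperature=0,  # keep intermediate steps deterministic and auditable
    )
    return res.choices[0].message.content

raw_feedback = open("interview_answers.txt", encoding="utf-8").read()  # hypothetical export

summary    = step("Summarize each answer in one sentence, keep participant IDs.", raw_feedback)
clusters   = step("Group these summaries into themes and name each theme.", summary)
priorities = step("Rank the themes by frequency and severity.", clusters)

# Inspecting the intermediate artifacts is the whole point of the pipeline.
for label, text in [("SUMMARY", summary), ("CLUSTERS", clusters), ("PRIORITIES", priorities)]:
    print(f"--- {label} ---\n{text}\n")
```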


Benefits:

  • Saves time on recurring evaluations

  • Reduces errors through standardization

  • Makes complex data sets tangible

  

Limitations:

  • Errors in the first step can carry through

  • Often unclear why a particular cluster was named that way

  • Requires good process definition and prompt strategy


Conclusion: Ideal for structured thought processes – provided you control the intermediate steps.


  4. Function Calling – When AI uses specific tools

An often overlooked but extremely practical category: tool-enhanced prompts, also known as function calling. Here, the AI calls specific, predefined functions or APIs – combining natural language with concrete logic.


What is it?

 The AI passes tasks to an external module – e.g. a calculator, a weather API, an Excel plugin or a calendar system.


What do you get?

🟢 Answers based on current, accurate data or functions – no guessing, no hallucinations


UX examples:

  • ‘How many test subjects did we have in March?’ → API call to CRM

  • ‘Calculate the average of these UX scores’ → Excel function in the background

  • ‘Show me open appointment slots next week’ → Access to calendar data

  • ‘Give me live user numbers from Google Analytics’ → Direct integration
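
Here is a minimal sketch of this pattern using the OpenAI tools interface: the model decides to call a predefined function, our code executes it, and the verified result goes back into the conversation. The CRM lookup (count_test_participants) and its data are hypothetical placeholders.

```python
# Minimal function-calling sketch: the model picks a predefined tool, our code
# runs it, and the verified result is handed back to the model.
# count_test_participants is a hypothetical stand-in for a real CRM query.
import json
from openai import OpenAI

client = OpenAI()

def count_test_participants(month: str) -> int:
    return {"2025-03": 42}.get(month, 0)  # placeholder for a real CRM/API call

tools = [{
    "type": "function",
    "function": {
        "name": "count_test_participants",
        "description": "Return the number of usability-test participants for a month (YYYY-MM).",
        "parameters": {
            "type": "object",
            "properties": {"month": {"type": "string"}},
            "required": ["month"],
        },
    },
}]

messages = [{"role": "user", "content": "How many test participants did we have in March 2025?"}]
first = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools)

tool_call = first.choices[0].message.tool_calls[0]
args = json.loads(tool_call.function.arguments)
result = count_test_participants(**args)

messages += [first.choices[0].message,
             {"role": "tool", "tool_call_id": tool_call.id, "content": str(result)}]
final = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools)
print(final.choices[0].message.content)
```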


Benefits:

  • Connects language-based AI with real functionality

  • Provides precise, data-based answers

  • Enables automation of standard processes


Limitations:

  • Must be explicitly connected and defined – it doesn’t run by itself

  • Only works if permissions, formats and APIs are cleanly configured

  • Little leeway – no creative input from the AI


Conclusion: Perfect when accuracy is key – but no substitute for thinking or interpreting.


  5. Agents – When AI really does something for us

The supreme discipline: AI agents. They act independently, execute steps, use tools, ask questions – and often deliver a result, not just an answer.


What is it?

A (semi-)autonomous assistant that plans tasks, makes decisions and uses tools – without every action having to be prompted individually.


What do you get?

🟢 Results, not suggestions. The AI takes over – and gets back to you when it needs something.


UX examples:

  • Creation of a complete slide deck summary based on a study

  • Consolidation of user feedback + metrics + logs into a journey map

  • Planning a UX workshop with suggested methods, duration, agenda, participant logic

  • Deriving recommendations for action from qualitative and quantitative data
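
As a bare-bones sketch, under the same assumptions as the function-calling example: the model plans, requests tools, sees their results and continues until it considers the task done. Both tools are hypothetical stubs, and a production agent would add logging, stricter limits and human checkpoints.

```python
# Bare-bones agent loop sketch: the model plans, calls tools, sees the results,
# and continues until it no longer requests a tool. All tools are hypothetical.
import json
from openai import OpenAI

client = OpenAI()

def load_study_findings() -> str:
    return "Key finding: onboarding step 3 caused 60% of drop-offs."      # placeholder

def create_slide(title: str, bullet_points: list[str]) -> str:
    return f"Slide '{title}' created with {len(bullet_points)} bullets."  # placeholder

TOOLS_IMPL = {"load_study_findings": load_study_findings, "create_slide": create_slide}
TOOLS_SPEC = [
    {"type": "function", "function": {
        "name": "load_study_findings",
        "description": "Load the findings of the latest UX study.",
        "parameters": {"type": "object", "properties": {}}}},
    {"type": "function", "function": {
        "name": "create_slide",
        "description": "Create one summary slide.",
        "parameters": {"type": "object", "properties": {
            "title": {"type": "string"},
            "bullet_points": {"type": "array", "items": {"type": "string"}}},
            "required": ["title", "bullet_points"]}}},
]

messages = [{"role": "user", "content": "Summarize the latest study as a short slide deck."}]
for _ in range(5):  # hard step limit instead of blind trust
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=TOOLS_SPEC)
    msg = reply.choices[0].message
    messages.append(msg)
    if not msg.tool_calls:
        break  # the agent considers the task done
    for call in msg.tool_calls:
        result = TOOLS_IMPL[call.function.name](**json.loads(call.function.arguments or "{}"))
        messages.append({"role": "tool", "tool_call_id": call.id, "content": str(result)})

print(msg.content)  # the final answer / description of the deliverable
```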


Benefits:

  • Relief from routines and multistep tasks

  • Better focus on analysis and strategy

  • Interaction feels more like working with a project partner


Limitations:

  • High loss of control – what is actually happening?

  • Decisions are often not transparent or explainable

  • Potential for errors due to ‘incorrect’ assumptions or poor tool execution


Conclusion: Extremely powerful – but to be used with caution. Responsibility remains with humans.




| System | What does it do? | What do I get? | User role | Typical limitations | In one word |
|---|---|---|---|---|---|
| GenAI | Generates content | Inspiration, text, images | Prompt writer | Hallucinations, missing context | Creativity |
| RAG | Searches for and uses knowledge | Well-founded answers | Questioner | Incorrect sources, maintenance effort | Knowledge |
| MCP | Maps a process | Structured results | Process controller | Error chains, lack of transparency | Structure |
| Function Call | Executes a function | Exact, data-based answers | Task setter | Only as good as the tool/API behind it | Access |
| Agent | Solves tasks autonomously | Results or output | Task delegator | Loss of control, potential for errors | Assistance |

Conclusion: What fits when?

It's not about which system is ‘best.’ It's about what fits which problem.

  • Need ideas? → GenAI

  • Looking for internally distributed knowledge? → RAG

  • Want to evaluate data systematically? → MCP

  • Need exact figures or real-time data? → Function Calling

  • Want to hand over an entire task? → Agent

 

💡 The real magic often happens when you combine things:

A RAG system retrieves content, an MCP structures it, an agent prepares it – and function calls ensure that the data is correct.
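
As a toy, self-contained sketch of that combination – every function below is a stub standing in for one of the patterns above, with purely hypothetical names and data:

```python
# Toy end-to-end sketch of the combination; every function is a stub for one pattern.
def retrieve(question: str) -> list[str]:             # RAG: fetch grounded snippets
    return ["Interview May: onboarding step 3 caused most drop-offs."]

def pipeline(snippets: list[str]) -> str:              # MCP: summarize -> cluster -> prioritize
    return "Top theme: onboarding friction (high priority)."

def count_participants(month: str) -> int:             # function call: exact, verified number
    return 42

def agent_build_report(question: str) -> str:          # agent: orchestrates steps into a result
    structured = pipeline(retrieve(question))
    n = count_participants("2025-03")
    return f"{structured}\nVerified sample size: {n} participants (March 2025)."

print(agent_build_report("What should we fix first in onboarding?"))
```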


And that's exactly why it's worth taking a differentiated view. Not only because there are technological differences – but because it enables us as users to make better decisions:

→ Which AI do I use for what?

→ And where am I better off staying in the driver's seat? 


AUTHOR

Tara Bosenick

Tara has been active as a UX specialist since 1999 and has helped to establish and shape the industry in Germany on the agency side. She specialises in the development of new UX methods, the quantification of UX and the introduction of UX in companies.


At the same time, she has always been interested in developing a corporate culture in her companies that is as ‘cool’ as possible, in which fun, performance, team spirit and customer success are interlinked. She has therefore been supporting managers and companies on the path to more New Work / agility and a better employee experience for several years.


She is one of the leading voices in the UX, CX and Employee Experience industry.
