
AI & UXR, AI WRITING, CHAT GPT, HUMAN VS AI

When the Text Is Too Smooth: How to Make AI Language More Human

4 MIN

Sep 4, 2025

Artificial intelligence can write. And quite well, at that. At least at first glance. It forms perfect sentences, knows countless text types, and delivers outlines, headings, and even teasers at lightning speed. But anyone who works with AI-generated texts on a regular basis quickly notices that these texts have their own voice. And that voice sounds... artificial. Not always, but often.

It is polite, smooth, diplomatic. It loves introductions with ‘In the age of...’ and likes to end with ‘In summary, it can be said...’. It is not wrong – just empty. And that is a problem. Especially when it comes to UX, trust, attitude and comprehensibility. So it's high time to take a look at the typical characteristics of AI-generated language – and consider how to counter them with good prompts and UX savvy.


What AI texts reveal: patterns that stand out

Whether it's a LinkedIn post, blog article or onboarding text, AI language has recognisable characteristics. Many of these result from the statistical principle on which large language models are based: they calculate the most probable continuation of a sentence – not the smartest, boldest or most human.


One typical feature, for example, is overly structured language. Paragraphs often follow a template: introduction, three-step argument, conclusion. It sounds well thought out – and it is – but it lacks organic movement, imperfection, and what happens between the lines. Added to this are repetitions: the same statement appears several times, only with slightly different wording. In terms of content, it quickly becomes apparent that much of it remains generic. There is a lack of depth. There is a lack of genuine perspective. Often there are no new ideas, just a collection of things that have already been said a thousand times before.


Another telltale sign is how sources are handled. Language models sometimes cite sources that do not exist – so-called hallucinations. Or they provide evidence that sounds formally correct but does not prove anything. Factual errors also creep in, especially when it comes to specialist topics. And let's not forget: overly correct language. Grammar and punctuation are flawless – and thus sometimes suspiciously polished. Hardly anyone writes like that in real life.


What is almost always missing are emotions, personal experiences or attitude. A human might write: ‘That really annoyed me at the time.’ An AI is more likely to write: ‘This challenge requires an appropriate solution.’ It sounds professional – but it doesn't feel alive.


How detection tools work – and where their limits lie

AI detection tools such as GPTZero, Originality.ai and Turnitin analyse texts for typical characteristics of machine language. Two values are central to this: perplexity, a measure of how predictable the words are, and burstiness, i.e. how much sentence lengths and forms of expression vary.


An AI text is usually very ‘predictable’ and rhythmically uniform. This can be measured. Typical phrases such as ‘not least’, ‘plays a central role’ or ‘should be considered’ also count as indicators. Nevertheless, these tools do not deliver proof, only probabilities. And they can be wrong – both with very good human writing and with atypical AI output.
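
To make burstiness a little more concrete: here is a minimal Python sketch that approximates it as the spread of sentence lengths. This is not how GPTZero, Originality.ai or Turnitin actually compute their scores – their methods are proprietary – and the function names and example snippets are purely illustrative.

```python
import re
import statistics


def sentence_lengths(text: str) -> list[int]:
    """Split text into rough sentences and count the words in each."""
    # Naive split on ., ! and ? - good enough for a quick illustration.
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    return [len(s.split()) for s in sentences]


def burstiness_score(text: str) -> float:
    """Standard deviation of sentence lengths: low values point to the
    uniform rhythm that is typical of AI-generated prose."""
    lengths = sentence_lengths(text)
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)


ai_ish = ("In the age of digital transformation, communication is essential. "
          "It plays a central role in every organisation. "
          "It should therefore be considered carefully.")
human_ish = ("That really annoyed me at the time. Why? Because nobody had "
             "bothered to ask a single user what they actually needed.")

print(f"AI-ish text:    {burstiness_score(ai_ish):.1f}")
print(f"Human-ish text: {burstiness_score(human_ish):.1f}")
```

Run on these two snippets, the uniform ‘AI-ish’ text scores low, while the human-sounding one, with its one-word question and long follow-up, scores noticeably higher. Real detectors combine far more signals, but the principle is the same.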


What really helps: good prompting with attitude

The simplest and most effective method of making AI texts more human begins with the prompt. Not technically, but stylistically. If you simply tell ChatGPT, ‘Write a text about UX and health,’ you get exactly that: a text about UX and health. If, on the other hand, you say, ‘I'm a UX designer and I'm currently writing a blog article about the challenges of digital health communication. My style is clear, slightly critical, but friendly – no marketing phrases, preferably with personal observations,’ then the result will look very different.


It also helps to narrow the scope. Something like this: ‘Just give me the three most important aspects. No filler sentences. No sweeping statements. Focus on depth, not breadth.’ This automatically reduces the likelihood of the AI getting lost in clichés, repetitions or list logic.


The style can also be explicitly controlled: ‘Speak in my tone of voice – factual, but not sterile. Avoid typical AI phrases. Make the text come alive, even if it's not perfect.’ You can even say: ‘Avoid bullet points and structure it more like a conversation.’


And: you can refine at any time. Have a first draft generated – and then ask for variations, rephrasing, more specificity, different perspectives. AI is not a substitute for your own language, but a tool that works better the more clearly you formulate what you want.
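
To show what such a style-rich prompt plus a refinement round can look like outside the chat window, here is a minimal sketch using the OpenAI Python SDK. The model name, the prompt wording and the refinement request are assumptions for the sake of the example, not part of the original article.

```python
# pip install openai  (expects an OPENAI_API_KEY in your environment)
from openai import OpenAI

client = OpenAI()

# The stylistic instructions from above live in the system message:
# persona, tone, and explicit bans on typical AI phrasing.
style_instructions = (
    "I'm a UX designer writing a blog article about the challenges of "
    "digital health communication. My style is clear, slightly critical, "
    "but friendly - no marketing phrases, preferably with personal "
    "observations. Avoid typical AI phrases and bullet points; structure "
    "the text more like a conversation."
)
task = ("Give me only the three most important aspects. No filler "
        "sentences. No sweeping statements. Focus on depth, not breadth.")

# First draft.
draft = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system", "content": style_instructions},
        {"role": "user", "content": task},
    ],
)
first_version = draft.choices[0].message.content

# Refinement round: keep the conversation and ask for more specificity.
revision = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": style_instructions},
        {"role": "user", "content": task},
        {"role": "assistant", "content": first_version},
        {"role": "user", "content": "Make the second aspect more specific "
                                     "and add one personal observation."},
    ],
)
print(revision.choices[0].message.content)
```

The specific API matters less than the pattern: the stylistic instructions sit in a system message that stays constant, while the follow-up turns ask for variations and more specificity instead of accepting the first draft.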


Why this is crucial for UX

Language is not an accessory. Language is an experience. And in UX, language is often the first – and sometimes the only – interface that users experience. When texts sound generic, not only does trust suffer, but so does usability. People now recognise when a text comes from AI. And they are becoming increasingly sceptical, especially in sensitive contexts: health, finance, education.


At the same time, more and more UX teams are using AI to create texts: for prototypes, test materials, interface text, even research scripts. If these texts sound like they came from a machine, it distorts the feedback. The impact of a product is not independent of the tone of voice – on the contrary: tonality helps determine whether a user feels confident, understood or taken seriously.



Conclusion: Imperfection is a strength

Those who use AI to write texts do not necessarily have to produce perfect language. On the contrary: often, human imperfection is exactly what makes a text good. A little attitude, a little edge, a little change of rhythm – all of this brings language to life. And builds trust in communication.


Good texts are not created despite AI – but through good control of AI. Those who know the typical patterns and consciously work against them will end up with texts that are not only correct, but also effective. And that is exactly what makes for good UX.


💌 Not enough? Then read on – in our newsletter. It comes four times a year. Sticks in your mind longer. To subscribe: https://www.uintent.com/newsletter


 




AUTHOR

Tara Bosenick

Tara has been active as a UX specialist since 1999 and has helped to establish and shape the industry in Germany on the agency side. She specialises in the development of new UX methods, the quantification of UX and the introduction of UX in companies.


At the same time, she has always been interested in developing a corporate culture in her companies that is as ‘cool’ as possible, in which fun, performance, team spirit and customer success are interlinked. For several years, she has therefore been supporting managers and companies on their path towards more New Work/agility and a better employee experience.


She is one of the leading voices in the UX, CX and Employee Experience industry.
