top of page
uintent company logo

AI & UXR, LLM

Propaganda Chatbots - When AI Suddenly Speaks Russian

How targeted disinformation is undermining our language models – and why this also affects UX

4

MIN

Aug 7, 2025

The invisible influence

We increasingly rely on AI-powered tools when it comes to quickly grasping complex content, comparing perspectives or developing initial hypotheses. Chatbots such as ChatGPT, Perplexity and Gemini have long been an integral part of everyday life for UX teams, editorial departments, product development and research. But what if these systems reproduce content that is not neutral or fact-based, but has been deliberately manipulated?

That is exactly what is happening right now: language models have become the new target for digital propaganda. And the methods used to undermine them are subtle – but highly effective.


What exactly is happening here?

A recent investigation by NewsGuard shows how a Russian propaganda network called Pravda systematically disseminates content online with the aim of influencing large language models.

 

This content appears on well-designed websites that look like reputable news portals. It is specifically optimised for search engines and published en masse.


The tactic behind this is called ‘LLM grooming’. LLM stands for Large Language Model, which is the type of system that chatbots are based on. In this case, grooming refers to the targeted preparation of the model by spreading publicly available misinformation in such a way that it becomes deeply embedded in the model – without anyone having to directly manipulate the system. It's like putting hundreds of articles in a library where an AI later searches for its answers.

If many of these books are incorrect, the system will find it difficult to distinguish between fact and fiction.


Why this is dangerous

1. Language models sound neutral – but they are not always

A key problem: AI systems generate content in a factual-sounding style. This conveys objectivity – even if the content is biased or even manipulated. Users often do not notice this.


2. Mass dissemination leads to long-term effects

The more often such disinformation is cited, clicked on or processed, the more it influences the output of the models. Once established, a distorted narrative can persist in the long term – even if the original source has been deleted.


3. Manipulation is untraceable

Those who receive a seemingly neutral AI response usually do not see the sources on which it is based. Where traditional media have editorial processes and source criticism, many AI tools currently lack these mechanisms.


What this has to do with UX

Even if this development initially sounds like a geopolitical problem, UX teams are directly affected on several levels:


  • UX as an interface to AI 

    Conversational interfaces, recommender systems and assistants based on LLMs are increasingly becoming part of products. If these draw on manipulated content, this jeopardises the quality of information that users receive via our products.

  • AI-based research

    Many teams now also use AI tools in research – for initial syntheses, GPT-generated personas or hypothetical journey analyses. If the content basis is distorted, this leads to incorrect assumptions about user needs or markets.

  • Trust as the basis of UX

    User experience is based on trust, consistency and reliability. When users notice that an AI system makes questionable statements, it not only affects the tool itself – it casts a shadow over the entire product.


How to recognise misinformation – even in everyday life

 Not only UX professionals, but also private individuals and everyday users of chatbots should develop a sense of what manipulative content can look like.

 Here are a few typical signs:


  • Excessive certainty on controversial topics: When an AI responds very decisively to questions that are actually open to debate or controversial.

  • Vague wording without sources: Statements such as ‘some experts say’ or ‘it is reported’ without concrete evidence.

  • One-sided argumentation: Only one perspective is presented without mentioning alternatives.

  • Emotionalising terms in an otherwise factual tone: for example, ‘brutal,’ ‘scandalous,’ ‘heroic’ – often an attempt to reinforce a narrative.

 

Recommendations for action for UX teams and anyone who uses AI


For UX and product managers

  1. Critically question AI outputs

    Even if they sound good, content relevant to design, communication or product strategy should always be supplemented by human reflection.

  2. Create transparency

    Show where information comes from in your interfaces – or that it is AI-generated content.

  3. Establish ethics checks

    Incorporate regular tests with ‘bias triggers’ into your development and QA process. Test with politically or culturally charged questions, for example.

  4. Raise awareness among teams

    This topic belongs not only in IT or communication, but also in UX training, retrospectives and project briefings.


For anyone who uses chatbots privately or professionally 

  1. Ask questions

    Ask the AI for counterarguments, sources or limitations: ‘What other perspectives are there on this?’ or ‘Is there any evidence for this statement?’

  2. Cross-check with other systems

     Ask ChatGPT, Perplexity and Gemini the same question, for example – differences may indicate bias.

  3. Cultivate healthy scepticism

    Take AI seriously, but not literally. It can provide inspiration, but it is no substitute for critical thinking or genuine research.


Conclusion

The targeted manipulation of language models through LLM grooming is not a thing of the future, but a reality. It affects not only politics and the media, but also the everyday work of UX teams, designers, analysts and anyone who works with AI.


Precisely because AI systems are so accessible, credible and fast, we now need something else that is just as accessible, credible and fast: critical thinking.


UX can serve as a model here – through transparent interfaces, clean methodology and conscious use of AI. And we can all start to see chatbots not as neutral knowledge machines, but as what they really are: tools with strengths – and weaknesses that we should be aware of.


💌 Not enough? Then read on – in our newsletter. It comes four times a year. Sticks in your mind longer. To subscribe: https://www.uintent.com/newsletter


 



A referee holds up a scorecard labeled “Yupp.ai” between two stylized AI chatbots in a boxing ring – a symbolic image for fair user-based comparison of AI models.

How Yupp Uses Feedback to Fairly Evaluate AI Models – And What UX Professionals Can Learn From It

AI & UXR, CHAT GPT, HUMAN VS AI, LLM

3D illustration of a digital marketplace with colorful prompt stalls and a figure selecting a prompt card.

Buying, sharing, selling prompts – what prompt marketplaces offer today (and why this is relevant for UX)

AI & UXR, PROMPTS

Robot holds two signs: “ISO 9241 – 7 principles” and “ISO 9241 – 10 principles”

ChatGPT Hallucinates – Despite Anti-Hallucination Prompt

AI & UXR, HUMAN VS AI, CHAT GPT

Strawberry being sliced by a knife, stylized illustration.

Why AI Sometimes Can’t Count to 3 – And What That Has to Do With Tokens

AI & UXR, TOKEN, LLM

Square motif divided in the middle: on the left, a grey, stylised brain above a seated person working on a laptop in dark grey tones; on the right, a bright blue, networked brain above a standing person in front of a holographic interface on a dark background.

GPT-5 Is Here: Does This UX AI Really Change Everything for Researchers?

AI & UXR, CHAT GPT

Surreal AI image with data streams, crossed-out “User Expirince” and the text “ChatGPT kann jetzt Text in Bild”.

When AI Paints Pictures – And Suddenly Knows How to Spell

AI & UXR, CHAT GPT, HUMAN VS AI

Human and AI co-create a glowing tree on the screen, set against a dark, surreal background.

When the Text Is Too Smooth: How to Make AI Language More Human

AI & UXR, AI WRITING, CHAT GPT, HUMAN VS AI

Futuristic illustration: Human facing a glowing humanoid AI against a digital backdrop.

Not Science Fiction – AI Is Becoming Independent

AI & UXR, CHAT GPT

Illustration of an AI communicating with a human, symbolizing the persuasive power of artificial intelligence.

Between Argument and Influence – How Persuasive Can AI Be?

AI & UXR, CHAT GPT, LLM

A two-dimensional cartoon woman stands in front of a human-sized mobile phone displaying health apps. To her right is a box with a computer on it showing an ECG.

Digital Health Apps & Interfaces: Why Good UX Determines Whether Patients Really Benefit

HEALTHCARE, MHEALTH, TRENDS, UX METHODS

Illustration of a red hand symbolically prioritizing “Censorship” over “User Privacy” in the context of DeepSeek, with the Chinese flag in the background.

Censorship Meets AI: What Deepseek Is Hiding About Human Rights – And Why This Affects UX

AI & UXR, LLM, OPEN AI

Isometric flat-style illustration depicting global UX study logistics with parcels, checklist, video calls, and location markers over a world map.

What It Takes to Get It Right: Global Study Logistics in UX Research for Medical Devices

HEALTHCARE, UX METHODS, UX LOGISTICS

Surreal, glowing illustration of an AI language model as a brain, influenced by a hand – symbolizing manipulation by external forces.

Propaganda Chatbots - When AI Suddenly Speaks Russian

AI & UXR, LLM

Illustration of seven animals representing different thinking and prompting styles in UX work.

Welcome to the Prompt Zoo

AI & UXR, PROMPTS, UX

A two-dimensional image of a man sitting at a desk with an open laptop displaying a health symbol. In the background hangs a poster with a DNA strand.

UX Regulatory Compliance: Why Usability Drives Medtech Certification

HEALTHCARE, REGULATIONS

Illustration of a lightbulb surrounded by abstract symbols like a question mark, cloud, speech bubble, and cross – symbolizing creative ideas and critical thinking.

Why Prompts That Produce Bias and Hallucinations Can Sometimes Be Helpful

AI & UXR, CHAT GPT, HUMAN VS AI, OPEN AI

Illustration of a man at a laptop, surrounded by symbols of global medical research: world map with location markers, monitor with a medical cross, patient file, and stethoscope.

Global UX Research in Medical Technology: International User Research as a Factor for Success

HEALTHCARE, MHEALTH, REGULATIONS

Abstract pastel-colored illustration showing a stylized brain and geometric shapes – symbolizing AI and bias.

AI, Bias and the Power of Questions: How to Get Better Answers With Smart Prompts

AI & UXR, CHAT GPT

A woman inside a gear is surrounded by icons representing global connectivity, collaboration, innovation, and user focus – all linked by arrows. Uses soft, bright colors from a modern UI color palette.

Automate UX? Yes, Please! Why Zapier and n8n Are Real Super Tools for UX Teams

CHAT GPT, TOOLS, AUTOMATION, AI & UXR

A 2D Image of a man, pointing to a screen with a surgical robot on it.

Surgical Robotics and UX: Why Usability Is Key to or Success

HEALTHCARE, TRENDS, UX METHODS

 RELATED ARTICLES YOU MIGHT ENJOY 

AUTHOR

Tara Bosenick

Tara has been active as a UX specialist since 1999 and has helped to establish and shape the industry in Germany on the agency side. She specialises in the development of new UX methods, the quantification of UX and the introduction of UX in companies.


At the same time, she has always been interested in developing a corporate culture in her companies that is as ‘cool’ as possible, in which fun, performance, team spirit and customer success are interlinked. She has therefore been supporting managers and companies on the path to more New Work / agility and a better employee experience for several years.


She is one of the leading voices in the UX, CX and Employee Experience industry.

bottom of page