
UX, UX QUALITY, UX METHODS

Why UX Research Is Losing Credibility – And How We Can Regain It

5 MIN

Oct 22, 2025

For years, we have been preaching the importance of user-centred product development. UX research has become a buzzword, and every company wants to make ‘data-driven’ decisions. Design teams swear by user journeys, product owners quote user feedback, and every product presentation features colourful charts with research insights.


But behind the beautiful presentations and well-founded recommendations lies an uncomfortable truth: the quality of our research is systematically deteriorating.

While demand for UX research is exploding, paradoxically, decision-makers' confidence in our results is declining. Projects are being implemented differently despite research recommendations. Budgets are being cut. Timelines are being shortened. The CEO is playing his boss card and ignoring all user tests: ‘I know what our customers want.’


The industry is facing a credibility crisis – and we have only ourselves to blame.


The toolbox is shrinking dramatically

UX research should encompass a diverse range of methods, from ethnographic studies to statistical analyses. Instead, we are seeing a dramatic narrowing down to a few familiar techniques. Usability tests and qualitative interviews dominate – not because they are always the right choice, but because they are the only ones that many researchers are proficient in.


When was the last time you conducted a contextual inquiry? Ran a diary study over several weeks? Used card sorting for information architecture? Quantitative methods are completely neglected, even though they would be indispensable for many research questions.


A typical example: an e-commerce company wants to understand why users abandon their shopping carts at checkout. Instead of a quantitative analysis of the abandonment points combined with qualitative interviews, only a usability test with five users is conducted. The result? Superficial insights that miss the real problem.
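What would the better approach look like in practice? Here is a minimal sketch of the quantitative half – a simple funnel analysis over analytics counts. The step names and numbers are invented for illustration:

# A minimal sketch of a checkout funnel analysis (all numbers invented).
FUNNEL = ["cart", "address", "payment", "confirmation"]

# How many users reached each step, from hypothetical analytics data:
reached = {
    "cart": 1000,
    "address": 720,
    "payment": 410,
    "confirmation": 370,
}

for prev, step in zip(FUNNEL, FUNNEL[1:]):
    drop = 1 - reached[step] / reached[prev]
    print(f"{prev} -> {step}: {drop:.0%} drop-off")

# The biggest drop (address -> payment here) tells you WHERE users leave;
# qualitative interviews then tell you WHY.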


This methodological impoverishment leads to a dangerous tunnel vision: we only answer the questions that our limited methods can answer – instead of the questions that really matter to the business. Quantitative validation? ‘We don't need it.’ Triangulation of different methods? ‘Too time-consuming.’ Mixed-methods approaches? ‘I'm not familiar with them.’


The Nielsen myth and its fatal consequences

‘Five users are enough’ – Jakob Nielsen's rule of thumb from the 1990s still haunts our industry like a zombie. Taken completely out of context, it is treated as a universal truth for every type of research. Nielsen was referring to iterative usability tests for identifying interface problems, not to fundamental user research or market validation.
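The mathematics behind the rule makes its limits obvious. Nielsen and Landauer modelled the probability of detecting a problem with n users as 1 − (1 − p)^n, where p is the chance that a single user encounters it. A quick sketch – the 31% average comes from Nielsen's own data; the 5% case is our illustration:

# Probability of detecting a problem at least once with n test users,
# if each user hits it with probability p (Nielsen/Landauer model).
def detection_rate(p, n):
    return 1 - (1 - p) ** n

print(detection_rate(0.31, 5))  # ~0.84: with Nielsen's average p of 31%,
                                # five users find most such problems
print(detection_rate(0.05, 5))  # ~0.23: a problem affecting only 5% of
                                # users is found less than a quarter of the time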


But this pseudo-scientific approach gives us a false sense of security. We test five carefully selected users and believe we have the truth about all target groups. Statistical significance? ‘Overrated.’ Representative samples? ‘Too expensive.’ Confidence intervals? ‘What's that?’
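How thin the ice is can be made precise. Suppose four of five test users complete a task – what does that say about all users? A Wilson confidence interval (a standard formula for proportions from small samples; the numbers are illustrative) gives a blunt answer:

import math

# 95% Wilson confidence interval for a proportion (standard formula).
def wilson_ci(successes, n, z=1.96):
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - margin, centre + margin

low, high = wilson_ci(4, 5)
print(f"{low:.0%} to {high:.0%}")  # roughly 38% to 96%

# "4 out of 5 users succeeded" is statistically consistent with anything
# from about 38% to about 96% of all users succeeding.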


A concrete example: a fintech start-up tests its new investment app with five tech-savvy millennials from Berlin. Based on this ‘research,’ the app is launched for the entire German-speaking market – and completely flops with older target groups. Surprise? Hardly.


The result is insights that are built on alarmingly thin ice – and yet are sold as a sound basis for multi-million-euro product decisions.


The observational blindness of the digital generation

Efficiency is important. Scaling is necessary. But our drive for automation and remote research is costing us the most valuable thing we have: the ability to make nuanced observations.

Online tools only capture explicit statements. Unmoderated tests only provide superficial data. Screen recordings show clicks, but not the emotions behind them. Chatbots collect feedback, but don't understand the frustration in the voice.


Yet the crucial things happen between the lines: the micro-gesture of confusion when a button isn't where expected. The pause before the answer that betrays uncertainty. The moment when the user unconsciously searches for an alternative. The nervous laugh when something doesn't work. The body language that says, ‘I would never buy that.’


If we only listen to what is said, we as UXers have already lost. The deepest insights come from observation – a skill we are systematically unlearning as we fall in love with digital tools.


Distorted realities and self-made filter bubbles

Recruitment is expensive, time-consuming and complex. So many companies resort to tempting shortcuts: their own employees, existing customer pools, online platforms with questionable quality standards. The problem? These samples are systematically distorted.


Employee bias: A software company tests its new CRM software with its own sales staff. How can people who use the product every day and whose salary depends on its success be expected to evaluate it neutrally? They know all the workarounds, overlook fundamental usability issues and rate features more positively than external users would.


Loyalty bias: An e-commerce portal recruits testers exclusively from its newsletter distribution list. These users are already emotionally invested, use the portal regularly and are significantly more tolerant of problems than new customers.


The designer-tests-own-interface bias: It gets even worse when designers test their own interfaces. Ownership bias, sunk cost fallacy and cognitive dissonance all come into play at the same time. Unconsciously, they ‘help’ the test subjects, steer conversations in positive directions and interpret criticism as ‘the user didn't understand’ rather than as valid feedback.


We create our own filter bubble and call it research.


AI hallucinations as a new quality trap

The latest threat to research quality comes from an unexpected corner: artificial intelligence. AI tools promise faster evaluations, automated insights and scalable analyses. But they also bring new risks.

AI can convincingly ‘recognise’ false patterns that do not even exist. Sentiment analyses interpret irony as a positive evaluation. Automated clustering algorithms find user groups that only exist in the computer. Transcription AI invents quotes that were never said.


The insidious thing is that these hallucinations often seem more convincing than real data because they deliver exactly what we want to hear. Confirmation bias meets algorithmic bias – an explosive mixture for anyone who does not understand the limits of technology.


The briefing disaster: research without a goal

Many research projects fail at the briefing stage, before the first question is even asked. ‘We want to know how users find our product’ is not a research assignment – it is a wish.

There are no specific research questions. Hypotheses are not formulated. Success metrics remain vague. The result? Research becomes occupational therapy that produces data but does not provide any usable insights.


A classic scenario: the marketing team commissions a ‘user study’ for the new website. Three weeks later, a 50-page report is presented, confirming that ‘users prefer intuitive navigation’. Money burned, time wasted, zero insights gained.


Interviewing is not just interviewing

‘Anyone can conduct interviews’ – a dangerous myth in our industry. There is a world of difference between a casual conversation and a professional interview – a difference many researchers underestimate.

Leading questions produce the desired answers: ‘Don't you think this feature is great?’ Forced-choice questions flatten nuance: ‘Would you buy this product or not?’ And a failure to ask follow-up questions leaves important insights undiscovered.


The result is pseudo-data that is more dangerous than no data at all – because it conveys a false sense of security. Bad interviews still provide answers, just the wrong ones.


The HiPPO trap: when hierarchy overrides research

The classic ‘Highest Paid Person's Opinion’ (HiPPO) effect is perhaps the most frustrating quality problem of all. A team invests weeks in careful research, delivers well-founded recommendations – and the CEO still decides differently because he has a ‘feeling’.


Even more insidious: research is misused as an alibi. The decision has already been made, but ‘research’ is still allowed to continue in order to appear scientific. Results are cherry-picked, and uncomfortable insights are swept under the carpet.


The timing is manipulated: research is scheduled so late that only confirmatory results are ‘helpful’. Critical findings come at a time when changes are ‘unfortunately no longer possible’.


The downward spiral of quality

These problems reinforce each other and create a downward spiral in research quality. Poor methodology delivers weak insights. Weak insights lead to poor product decisions. Poor decisions undermine confidence in UX research. Lower confidence leads to less budget and time for proper research.

The consequence: research is degraded to a ritual that is done because it ‘belongs’. Insights disappear into PowerPoint graveyards. User insights are not collected in systematic repositories, but are lost from project to project.


The tragedy: the worse our research becomes, the more decision-makers justify their gut-feeling decisions. ‘Research doesn't bring anything new anyway’ becomes a self-fulfilling prophecy when we actually only provide superficial confirmations instead of surprising insights.


Other systematic quality pitfalls

Survivorship bias: We only talk to current users, never to those who have left. Yet this is often where the most valuable insights about product weaknesses lie.


Timing problems: Research takes place too late, when decisions have already been made. Or too early, when it is not yet clear what needs to be researched.


Lack of intercoder reliability: Qualitative data is evaluated by only one person, without validation by other researchers, and subjective interpretations are sold as objective findings. (A simple check, Cohen's kappa, is sketched after this list.)


Lack of baseline measurements: Improvements cannot be measured because the initial state was not documented. Was the conversion rate really worse before the redesign?


Time-pressure-induced shortcuts: ‘We only have two weeks’ leads to methodological compromises that render the results worthless. Instead of a proper study, there is a ‘quick & dirty’ test that helps no one.
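For the intercoder problem above there is a simple, long-established remedy: have two researchers code the same material independently and compute Cohen's kappa, which corrects raw agreement for chance. A minimal sketch with invented labels:

from collections import Counter

# Cohen's kappa: agreement between two coders, corrected for chance.
def cohen_kappa(coder_a, coder_b):
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Two coders labelling the same ten interview quotes (invented data):
a = ["pain", "pain", "praise", "neutral", "pain",
     "praise", "pain", "neutral", "pain", "praise"]
b = ["pain", "neutral", "praise", "neutral", "pain",
     "praise", "pain", "pain", "pain", "praise"]

print(round(cohen_kappa(a, b), 2))  # ~0.68 here; values below ~0.6 are often
                                    # read as too subjective to trust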


The way back to credibility

The solution does not lie in more tools, faster methods or cheaper alternatives. It lies in fundamental quality improvement and a return to technical excellence:


Radically expand your methodological skills: Invest in further training. Learn quantitative methods. Understand which method is appropriate for which question. A UX researcher should be able to run a statistical significance test just as naturally as a qualitative interview (a sketch of such a test follows at the end of this list).


Enforce strict objectivity: No self-tests. No courtesy research designed to please stakeholders. No testing of external products with your own employees. External perspectives are invaluable – even if they are more expensive.


Strengthen a culture of observation: Not everything has to be scaled. Sometimes the more time-consuming but insightful approach is needed. In-person research may be more expensive, but it provides insights that no online tool can capture.


Establish triangulation as standard: One method is never enough. Validate quantitative data with qualitative insights. Confirm individual opinions with broader surveys. Combine different data sources.

Define clear research questions: Pin down precisely what you want to know – before you start researching. Formulate testable hypotheses. Set success metrics.


Develop a long-term research strategy: Build on previous findings. Create an organisational research memory. Document not only results, but also methods and limitations.


Use AI critically: Use AI as a tool, not as a substitute for human analysis. Validate automated insights through manual review. Understand the limitations of the technology.


Professionalise research communication: Prepare insights in an action-oriented manner. Speak the language of business. Quantify problems: Don't say ‘users are frustrated’, say ‘68% drop out at this point’.
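To illustrate the statistical fluency called for in the first point – and why baseline measurements matter – here is a sketch of a two-proportion z-test comparing conversion before and after a redesign. All numbers are invented:

import math

# Two-proportion z-test: did conversion really change after the redesign?
def two_proportion_z(success1, n1, success2, n2):
    p1, p2 = success1 / n1, success2 / n2
    pooled = (success1 + success2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided
    return z, p_value

# Before the redesign: 312 of 4,000 visitors converted (7.8%).
# After the redesign:  378 of 4,200 visitors converted (9.0%).
z, p = two_proportion_z(312, 4000, 378, 4200)
print(f"z = {z:.2f}, p = {p:.3f}")  # z ~ 1.96, p ~ 0.05: a lift that looks
                                    # obvious in a chart is in fact borderline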


The decision is ours

UX research is at a crossroads. Either we accept the erosion of quality and become suppliers of pseudo-insights that no one takes seriously. Or we reflect on technical excellence and fight for the credibility of our discipline.


The choice is up to each and every one of us. In every project where we decide whether to take the easy or the right path. In every methodological decision where we have to choose between ‘fast’ and ‘well-founded’. In every sample where we decide between ‘available’ and ‘representative’.


Because one thing is clear: an industry that does not take its own standards seriously cannot expect others to take it seriously. If we continue to sell methodological shortcuts as ‘agile research’ and present superficial insights as ‘data-driven decisions,’ we will permanently lose our credibility.

The question is not whether we can afford quality. The question is whether we can afford a loss of quality.


Can we really afford poor research?




AUTHOR

Tara Bosenick

Tara has been active as a UX specialist since 1999 and has helped to establish and shape the industry in Germany on the agency side. She specialises in the development of new UX methods, the quantification of UX and the introduction of UX in companies.


At the same time, she has always been interested in developing a corporate culture in her companies that is as ‘cool’ as possible, in which fun, performance, team spirit and customer success are interlinked. She has therefore been supporting managers and companies on the path to more New Work / agility and a better employee experience for several years.


She is one of the leading voices in the UX, CX and Employee Experience industry.
