
HEALTHCARE, TRENDS, UX METHODS
AI Diagnostics in Transition: Between Technological Precision and Human Trust
4 MIN
Jul 1, 2025
Artificial intelligence in diagnostics: from technological promise to clinical responsibility
Artificial intelligence is transforming medical diagnostics – especially in radiology. According to JAMA Network, more than 900 AI-supported medical devices had been approved by the FDA as of August 2024, over 75% of them for radiology applications (Source: MedTechDive). In 2024, the number of approved AI medical devices exceeded 1,000 for the first time, with radiology dominating at 758 devices (Source: HealthImagine).
However, new studies reveal a two-sided reality:
Microsoft's AI diagnostic tool achieved 85.5% accuracy in a benchmark study of 304 complex clinical cases, compared to 20% for human specialists who were not allowed to use any aids (Source: Business Insider).
Nevertheless, trust remains a key issue: according to a study by UArizona, around half of those surveyed would not be willing to entrust themselves to an AI rather than a human doctor (Source: University of Arizona Health Science).
A comprehensive analysis in npj Health Systems shows that trust only develops when AI tools communicate in a comprehensible manner and are regularly validated (Source: npj Health Systems).
These findings make it clear that technological progress alone is not enough. The success and widespread use of AI in diagnostics depend crucially on whether professionals and patients develop understanding of, control over and trust in the technology.
UX research in AI diagnostics: How to build trust in medical AI systems
UX research is becoming the central mediator between technical feasibility and practical use. We show ways to bridge the gap between AI and humans:
Transparency & explainability
Research shows that explainable AI (XAI) promotes trust – especially when results are presented with comprehensible reasons.
→ Possible methods: Explainability workshops, cognitive walkthroughs with simulated AI results.
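To make this concrete: below is a minimal, purely illustrative Python sketch of a "result plus comprehensible reasons" presentation, the kind of artefact such workshops and walkthroughs put in front of users. The model, feature names and data are invented stand-ins, not any real diagnostic device.

# Illustrative sketch only: pairing an AI finding with human-readable reasons.
# Model, feature names and data are hypothetical stand-ins, not a real device.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["lesion_size_mm", "contrast_uptake", "margin_irregularity"]
X = rng.normal(size=(200, 3))
y = (X @ np.array([1.5, 1.0, 0.8]) + rng.normal(size=200) > 0).astype(int)
model = LogisticRegression().fit(X, y)

def explain(case):
    # Show the prediction together with per-feature contributions
    # (coefficient x value), the simplest linear-model attribution.
    prob = model.predict_proba(case.reshape(1, -1))[0, 1]
    contributions = model.coef_[0] * case
    print(f"Suspicious finding: {prob:.0%} confidence. Main reasons:")
    for i in np.argsort(-np.abs(contributions)):
        print(f"  {features[i]}: contribution {contributions[i]:+.2f}")

explain(X[0])

In a cognitive walkthrough, researchers would probe whether reasons presented like this actually help clinicians judge when – and when not – to rely on the system.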
Responsibility & decision-making dynamics
When AI diagnoses are available, doctors may tend to accept recommendations uncritically – a pattern known as automation bias.
→ Possible methods: Shadowing in everyday clinical practice, mental model interviews, usability tests with critical scenarios.
Trust & validation over time
Trust in AI does not grow automatically – it can rise or fall abruptly after mistakes. Longitudinal studies with feedback loops show which forms of information presentation stabilise trust.
→ Possible methods: Longitudinal user studies, trust measurement surveys.
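As a minimal sketch of what analysing such trust measurements could look like – the repeated ratings, the 7-point scale and the column names below are assumptions for illustration only:

# Illustrative sketch only: tracking a trust rating across study waves.
# The 7-point scale and column names are assumed for the example.
import pandas as pd

ratings = pd.DataFrame({
    "participant": [1, 2, 3, 1, 2, 3, 1, 2, 3],
    "wave":        [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "trust_1to7":  [5, 6, 4, 5, 6, 5, 3, 4, 2],  # wave 3: after a visible AI error
})

by_wave = ratings.groupby("wave")["trust_1to7"].agg(["mean", "std"])
print(by_wave)

# Flag abrupt shifts between consecutive waves – the pattern described above.
delta = by_wave["mean"].diff()
print("Abrupt trust shifts:", delta[delta.abs() > 1].round(2).to_dict())

Even this simple aggregation makes the "rise or fall abruptly" pattern visible wave by wave – exactly what the feedback loops in longitudinal studies are designed to catch.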
Risk analysis & regulatory compliance
UX weaknesses are considered potential safety risks in regulated environments.
→ Possible methods: Use error analysis, heuristic expert evaluation, UX risk register for inclusion in MDR/FDA dossiers.
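As an illustration, here is one possible way to structure a single entry in such a UX risk register – the field names and the severity-times-probability scoring below are assumptions inspired by common risk matrices, not a prescribed MDR/FDA format:

# Illustrative sketch only: one possible structure for a UX risk register entry.
# Field names and scoring are assumptions, not a prescribed regulatory format.
from dataclasses import dataclass

@dataclass
class UXRiskEntry:
    finding: str        # observed use error or UX weakness
    hazard: str         # potential harm it could lead to
    severity: int       # e.g. 1 (negligible) to 5 (catastrophic)
    probability: int    # e.g. 1 (rare) to 5 (frequent)
    mitigation: str     # design or labelling change
    evidence: str       # study or session the finding came from

    @property
    def risk_score(self) -> int:
        # Simple severity x probability index, as in common risk matrices.
        return self.severity * self.probability

entry = UXRiskEntry(
    finding="Radiologist overlooked the AI confidence indicator",
    hazard="Uncritical acceptance of a false-positive finding",
    severity=4, probability=3,
    mitigation="Show confidence and rationale in the primary reading view",
    evidence="Usability test, critical-scenario session",
)
print(entry.risk_score)  # 12 -> prioritise the mitigation

Because each entry links a concrete observation to a hazard, a score and a mitigation, such a register can be traced directly into the risk documentation that MDR/FDA dossiers require.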
UX research thus identifies potential acceptance issues at an early stage, prevents costly readjustments and contributes directly to clinical safety and regulatory approval.
Why UX expertise is crucial for successful AI diagnostics – and how uintent can help
uintent has exactly the expertise and methodology to anchor AI diagnostics in a sustainable, user-centred way:
Deep understanding of regulated environments: UX research tailored to the MDR and documented in accordance with FDA requirements.
A versatile toolkit of methods: From explainability workshops to longitudinal studies.
Global scalability: Comparable findings from radiologist studies in Europe, Asia and North America.
User experience is therefore not an add-on – but a strategic prerequisite for effective, trustworthy AI in diagnostics. As a purely research-oriented unit, uintent is the partner that implements this approach – not as a decorative touch, but as a decisive factor for acceptance, safety and efficiency.
💌 Not enough? Then read on – in our newsletter. It comes four times a year. Sticks in your mind longer. To subscribe: https://www.uintent.com/newsletter