
AUTOMATION, AUTOMOTIVE UX, AUTONOMOUS DRIVING, GAMIFICATION, TRENDS

The Final Hurdle: How Unsafe Automation Undermines Trust in ADAS

5 MIN | Dec 18, 2025

Having examined the distraction risks of touchscreens and the trust dilemmas of voice control, we now turn to the final stage of HMI evolution: advanced driver assistance systems (ADAS) and automated driving. With Levels 2 and 3, drivers gradually relinquish control. But this ‘handover problem’ leads to a dangerous phenomenon: ‘trust miscalibration’.


In our presentation ‘Touch, Trust and Transformation’ at UXMC 2025, we explained that the biggest safety issue is not the technology itself, but human trust in it.


The promise vs. the reality: trust in AI

The promise of automated driving is clear: fewer accidents, less stress and more efficient use of driving time. Statistics show that up to 90 per cent of all accidents are due to human error. This is where technology can help. (Source: https://www.spiegel.de/auto/autonomes-fahren-us-studie-sieht-weniger-unfallgefahr-als-bei-menschlichen-fahrern-a-f9b71de2-3fa7-47d0-9bd1-a85fe645bcdd).


However, users remain deeply sceptical, especially in Germany:

  • Safety concerns: Despite a high willingness to test the technology, safety concerns persist, particularly with regard to hacker attacks while driving. (Source: Detecon study, ‘Autonomous driving: High willingness to test, but safety concerns’)

  • Ethics and liability: The question of liability in the event of an unavoidable accident, and the programming of algorithms that must make decisions about life (the moral dilemma), remain unresolved societal challenges that undermine trust. (Source: ‘Autonomous driving and digital ethics: Who decides?’ – State Agency for Civic Education Baden-Württemberg)

  • Trust issue: Another fundamental trust issue repeatedly emerges in our user tests: Many drivers doubt that the vehicle really detects everything relevant in its surroundings. This mistrust can be addressed by having the system visualise what it ‘sees’, i.e. displaying detected vehicles, pedestrians or obstacles in real time. Some manufacturers, such as Tesla, are already implementing this approach, even in vehicles without fully autonomous driving functions.
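As an illustration, the ‘what the car sees’ view is essentially a filtered slice of the perception output. Below is a minimal Python sketch of that idea; the class, field names and thresholds are hypothetical, not taken from any manufacturer's actual stack:

```python
from dataclasses import dataclass


@dataclass
class Detection:
    kind: str          # e.g. "vehicle", "pedestrian", "obstacle"
    distance_m: float  # distance from the ego vehicle in metres
    confidence: float  # perception confidence, 0..1


def detections_for_display(detections, min_confidence=0.5, max_range_m=80.0):
    """Filter raw perception output down to what the cluster display
    shows the driver -- the 'what the car sees' view.

    Hypothetical thresholds: only show objects the system is reasonably
    sure about and that are close enough to matter to the driver.
    """
    return [
        d for d in detections
        if d.confidence >= min_confidence and d.distance_m <= max_range_m
    ]


# Example: a low-confidence pedestrian and a far-away obstacle are hidden.
raw = [
    Detection("vehicle", 30.0, 0.9),
    Detection("pedestrian", 12.0, 0.4),
    Detection("obstacle", 120.0, 0.8),
]
print([d.kind for d in detections_for_display(raw)])
```

In a real vehicle, the trade-off embodied in those thresholds matters for trust: showing too little feeds the suspicion that the car misses things, while showing noisy low-confidence detections makes the system look erratic.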


The phenomenon: trust calibration

The greatest risk in semi-automated vehicles (levels 2 and 3) is the so-called trust calibration problem. Ideally, the driver's trust should be appropriate – that is, only as high as the actual system performance justifies.


Reality shows two dangerous deviations, both forms of miscalibration:

  • Overconfidence (overtrust): The driver trusts the system too much (e.g. in traffic jam assist) and is mentally absent. When the system suddenly requests a takeover, the driver is unable to react quickly and safely enough, and cognitive load increases dramatically at the moment of handover. In our real-world traffic studies, we observed that drivers in stressful situations sometimes needed more than 10 seconds to be ready to resume manual control, significantly more than would be expected from simulator experiments.

  • Distrust: The driver trusts the system too little and intervenes unnecessarily in the control system. This disrupts the system's function and also leads to frustration and potentially dangerous manoeuvres.
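The calibration idea behind both deviations can be expressed as a toy model: trust is appropriate only when it tracks the system's actual capability. A minimal Python sketch, where the 0..1 scale and the tolerance band are illustrative assumptions rather than measured quantities:

```python
def classify_trust(driver_trust: float, system_capability: float,
                   tolerance: float = 0.1) -> str:
    """Classify trust calibration on a hypothetical 0..1 scale.

    Toy model: trust is 'calibrated' when it stays within a tolerance
    band around the system's actual capability; everything above the
    band is overtrust, everything below is distrust.
    """
    gap = driver_trust - system_capability
    if gap > tolerance:
        return "overtrust"   # driver relies on more than the system delivers
    if gap < -tolerance:
        return "distrust"    # driver intervenes although the system would cope
    return "calibrated"


# A traffic-jam assist whose real capability we rate at 0.6:
print(classify_trust(0.9, 0.6))   # mentally absent driver -> overtrust
print(classify_trust(0.3, 0.6))   # unnecessary interventions -> distrust
print(classify_trust(0.65, 0.6))  # appropriate reliance -> calibrated
```

The point of the sketch is the asymmetry it makes visible: both deviations are errors of the same kind (a gap between trust and capability), which is why HMI design has to work on the gap itself rather than simply maximising trust.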


Research findings confirm that transparency, competence and reliability are the keys to building trust in autonomous vehicles and increasing willingness to use them. (Source: ‘Blind trust in cars? – Factors influencing trust in autonomous vehicles’ – University of Trier)


The design response: Driver monitoring and adaptive HMI

To avoid overconfidence and maintain the driver's situational awareness, manufacturers are relying on driver monitoring systems (DMS). These systems use cameras to detect the driver's gaze and head tilt in order to identify fatigue or distraction.
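Conceptually, the core of such a system is an eyes-off-road timer that resets whenever the gaze returns to the road. A minimal Python sketch of that timer logic; the sampling rate and the 3.5-second threshold are illustrative assumptions, not the values any specific regulation or product uses:

```python
def distraction_warning(gaze_on_road, off_road_limit_s=3.5,
                        sample_period_s=0.1):
    """Return True once the driver's gaze has been continuously off
    the road for longer than the limit.

    gaze_on_road: sequence of booleans from a hypothetical DMS camera
    pipeline, True = gaze on road for that sample.
    """
    limit_samples = round(off_road_limit_s / sample_period_s)
    off_count = 0
    for on_road in gaze_on_road:
        # Any glance back at the road resets the distraction timer.
        off_count = 0 if on_road else off_count + 1
        if off_count >= limit_samples:
            return True
    return False
```

A production ADDW pipeline would of course fuse head pose, eye state and vehicle speed and vary its thresholds accordingly; the sketch only shows why brief mirror or instrument glances do not trigger a warning while sustained distraction does.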


  • Mandatory regulation: With the General Safety Regulation (GSR), the EU will require all new registrations from July 2026 to have a driver distraction detection system (ADDW). These systems must be ‘default on’, i.e. they are always active unless the driver consciously deactivates them. (Source: In-Cabin Sensing Systems – ÖAMTC)

  • Adaptive HMI: Research approaches aim to adapt the presentation of information to the driver's situational awareness. This allows the complexity of the information to be reduced or increased when awareness is low in order to bring the driver back into the takeover loop. (Source: FAT publication series 392 | VDA)
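The adaptive idea can be sketched as a simple mapping from an estimated situational-awareness score to a presentation strategy. The bands and labels below are hypothetical, chosen only to illustrate the mechanism:

```python
def hmi_detail_level(awareness: float) -> str:
    """Map estimated situational awareness (hypothetical 0..1 score
    from a driver monitoring system) to a presentation strategy.

    Illustrative bands: at low awareness the HMI strips secondary
    content and foregrounds salient takeover cues; at high awareness
    it can show the full status display.
    """
    if awareness < 0.3:
        return "minimal"   # salient takeover cues only
    if awareness < 0.7:
        return "reduced"   # core driving information, simplified visuals
    return "full"          # complete system status display


print(hmi_detail_level(0.2))  # drowsy or distracted driver
print(hmi_detail_level(0.9))  # attentive driver
```

In practice, the interesting design question is not the mapping itself but how the awareness estimate is produced and how abrupt transitions between levels are smoothed so the display does not feel erratic.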


Cultural differences: Acceptance of being monitored

The acceptance of DMS systems raises a new HMI trust crisis: the feeling of being monitored in one's own car.

The way forward: Appropriate trust calibration

Automotive HMI design must move away from the ‘Perfect Automation Schema’ (PAS) – the cognitive belief that automated systems must be perfect – in order to create realistic trust. (Source: ‘PAS – The Perfect Automation Schema: Influencing Trust’ – scip AG)


The solution lies in calibrating trust: aligning the driver's expectations with what the system can actually deliver.

In addition, manufacturers and researchers are working on further approaches to strengthen situational awareness and increase the acceptance of monitoring systems: from entertainment concepts that integrate traffic events into the field of vision, to haptic cues in the seat to prevent motion sickness, to gamification approaches that set positive incentives instead of prohibitions. We will address these topics in an upcoming article.


Only when the HMI actively works to neither overburden nor underburden the driver's trust can the final hurdle to safe automated driving be overcome.


💌 Not enough? Then read on – in our newsletter. It comes four times a year. Sticks in your mind longer. To subscribe: https://www.uintent.com/newsletter

In their presentation ‘Touch, Trust and Transformation’ at UXMC 2025, Jan Panhoff and Maffee Peng Hui Wan presented in-depth insights and research findings showing how cultural differences measurably influence trust in touch systems and voice assistants.




AUTHOR

Jan Panhoff

started working as a UX professional in 2004 after completing his M.Sc. in Digital Media. For 10 years he supported eBay as an embedded UX consultant. His focus at uintent is on automotive and innovation research.

Moreover, he is one of uintent's representatives in the UX Alliance, a global network of leading UX research and design companies.
