
AUTOMATION, AUTOMOTIVE UX, AUTONOMOUS DRIVING, GAMIFICATION, TRENDS
The Final Hurdle: How Unsafe Automation Undermines Trust in ADAS
5 min read | Dec 18, 2025
Having examined the distraction risks of touchscreens and the trust dilemmas of voice control, we now turn to the final stage of HMI evolution: advanced driver assistance systems (ADAS) and automated driving. At Levels 2 and 3, drivers gradually relinquish control but must remain ready to take it back at any moment. This "handover problem" gives rise to a dangerous phenomenon: "trust miscalibration".
In our presentation ‘Touch, Trust and Transformation’ at UXMC 2025, we explained that the biggest safety issue is not the technology itself, but human trust in it.
The promise vs. the reality: trust in AI
The promise of automated driving is clear: fewer accidents, less stress and more efficient use of driving time. Statistics show that up to 90 per cent of all accidents are due to human error. This is where technology can help. (Source: https://www.spiegel.de/auto/autonomes-fahren-us-studie-sieht-weniger-unfallgefahr-als-bei-menschlichen-fahrern-a-f9b71de2-3fa7-47d0-9bd1-a85fe645bcdd).
However, users remain deeply sceptical, especially in Germany:
Safety concerns: Despite a high willingness to try the technology, respondents voice safety concerns, particularly about hacker attacks while driving. (Source: Study by Detecon: Autonomous driving: High willingness to test, but safety concerns)
Ethics and liability: The question of liability in an unavoidable accident, and the programming of algorithms that must make decisions about life (the moral dilemma), remain unresolved societal challenges that undermine trust. (Source: Autonomous driving and digital ethics: Who decides? - State Agency for Civic Education Baden-Württemberg)
Trust issue: Another fundamental trust issue repeatedly emerges in our user tests: many drivers doubt that the vehicle really detects everything relevant in its surroundings. This mistrust can be addressed by having the system visualise what it "sees", i.e. by displaying detected vehicles, pedestrians or obstacles in real time. Some manufacturers, such as Tesla, already implement this approach, even in vehicles without fully autonomous driving functions.
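The visualisation idea can be sketched as a simple mapping from perception output to display labels. This is a toy model with an invented object format, not any manufacturer's actual implementation:

```python
# Toy sketch: turn perception detections into display labels so the
# driver can see what the system "sees". All names are illustrative.
from dataclasses import dataclass

@dataclass
class Detection:
    kind: str          # e.g. "vehicle", "pedestrian", "obstacle"
    distance_m: float  # distance ahead of the vehicle

def hud_labels(detections: list[Detection], max_range_m: float = 80.0) -> list[str]:
    """Keep only detections within display range, nearest first."""
    visible = [d for d in detections if d.distance_m <= max_range_m]
    visible.sort(key=lambda d: d.distance_m)
    return [f"{d.kind} @ {d.distance_m:.0f} m" for d in visible]
```

The point of such a display is not aesthetics but trust: by surfacing what the sensors picked up, the driver can verify that the relevant objects were actually detected.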
The phenomenon: trust calibration
The greatest risk in semi-automated vehicles (levels 2 and 3) is the so-called trust calibration problem. Ideally, the driver's trust should be appropriate – that is, only as high as the actual system performance justifies.
Reality shows two dangerous deviations, both forms of miscalibration:
Overconfidence (overtrust): The driver trusts the system too much (e.g. in traffic jam assist) and is mentally absent. When the system suddenly requests a takeover, the driver is unable to react quickly and safely enough. Cognitive load increases dramatically at the moment of handover. In our real-world traffic studies, we observed that drivers in stressful situations sometimes needed more than 10 seconds to be ready to resume manual control, significantly more than simulator experiments would suggest.
Distrust: The driver trusts the system too little and intervenes unnecessarily in the control system. This disrupts the system's function and also leads to frustration and potentially dangerous manoeuvres.
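The two deviations above can be sketched as a comparison between the driver's subjective trust and the system's actual capability. This is a toy model of the concept, not a measurement instrument from our studies:

```python
def classify_trust(driver_trust: float, system_capability: float,
                   tolerance: float = 0.1) -> str:
    """Classify trust calibration on a 0..1 scale (toy model).

    driver_trust: how much the driver relies on the automation.
    system_capability: how reliable the system actually is.
    tolerance: band within which trust counts as appropriate.
    """
    gap = driver_trust - system_capability
    if gap > tolerance:
        return "overtrust"   # reliance exceeds what the system can deliver
    if gap < -tolerance:
        return "distrust"    # driver intervenes although the system would cope
    return "calibrated"      # trust matches actual performance
```

Calibrated trust is simply the narrow band in the middle; HMI design aims to keep drivers inside it.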
Research findings confirm that transparency, competence and reliability are the keys to building trust in autonomous vehicles and increasing willingness to use them. (Source: Blind trust in cars? – Factors influencing trust in autonomous vehicles – University of Trier)
The design response: Driver monitoring and adaptive HMI
To avoid overconfidence and maintain the driver's situational awareness, manufacturers are relying on driver monitoring systems (DMS). These systems use cameras to detect the driver's gaze and head tilt in order to identify fatigue or distraction.
Mandatory regulation: With the General Safety Regulation (GSR), the EU will require all new registrations from July 2026 to have a driver distraction detection system (ADDW, Advanced Driver Distraction Warning). These systems must be 'default on', i.e. they are always active unless the driver consciously deactivates them. (Source: In-Cabin Sensing Systems – ÖAMTC)
Adaptive HMI: Research approaches aim to adapt the presentation of information to the driver's situational awareness. When awareness is low, the amount and complexity of the displayed information can be adjusted to bring the driver back into the takeover loop. (Source: FAT publication series 392 | VDA)
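The combination of distraction monitoring and adaptive HMI can be illustrated with a toy attention monitor that accumulates eyes-off-road time and escalates its response. The thresholds and state names below are purely illustrative; the actual GSR requirements are speed-dependent and more nuanced:

```python
class AttentionMonitor:
    """Toy ADDW-style monitor: escalate as eyes-off-road time accumulates.

    Thresholds are invented for illustration only.
    """
    WARN_AFTER_S = 3.0      # issue a distraction warning
    SIMPLIFY_AFTER_S = 6.0  # adaptive HMI: reduce on-screen complexity

    def __init__(self) -> None:
        self.eyes_off_road_s = 0.0

    def update(self, dt: float, eyes_on_road: bool) -> str:
        """Advance the monitor by dt seconds and return the HMI action."""
        if eyes_on_road:
            self.eyes_off_road_s = 0.0  # attention regained, reset
            return "ok"
        self.eyes_off_road_s += dt
        if self.eyes_off_road_s >= self.SIMPLIFY_AFTER_S:
            return "simplify_hmi"
        if self.eyes_off_road_s >= self.WARN_AFTER_S:
            return "warn"
        return "ok"
```

The escalation from warning to simplified display mirrors the adaptive-HMI idea: rather than just alarming the driver, the system also changes what it shows.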
Cultural differences: Acceptance of being monitored
The acceptance of DMS systems raises a new HMI trust crisis: the feeling of being monitored in one's own car.
Scepticism in Germany: In an Allianz study, the majority of respondents expressed scepticism about electronic monitoring of drivers. Only 39 per cent agreed to camera or infrared scanning of the eyes and face, even if the technology only detects distraction anonymously. (Source: Modern means of communication distract drivers too much | springerprofessional.de)
'Patronising' feeling: Too frequent or unfounded warnings from an adaptive HMI or DMS can make users feel patronised by the technical system. This reduces acceptance and can lead to the system being deliberately deactivated. (Source: Qucosa - Monarch: Adaptive Human Machine Interfaces in a Vehicle Cockpit: Indication, Impacts and Implications)
The way forward: Appropriate trust calibration
Automotive HMI design must move away from the 'Perfect Automation Schema' (PAS) – the cognitive belief that automated systems must be perfect – in order to create realistic trust. (Source: PAS – The Perfect Automation Schema: Influencing Trust – scip AG)
The solution lies in calibrating trust:
System transparency: The system must clearly communicate what it can and cannot do (its limitations).
System language: The language used to talk about the systems must be precise in order to promote an appropriate attitude of trust. (Source: Trust in robots and how it can be influenced by linguistic framing – OAPEN Library)
In addition, manufacturers and researchers are working on further approaches to strengthen situational awareness and increase the acceptance of monitoring systems: from entertainment concepts that integrate traffic events into the field of vision, to haptic cues in the seat to prevent motion sickness, to gamification approaches that set positive incentives instead of prohibitions. We will address these topics in an upcoming article.
Only when the HMI actively works to neither overburden nor underburden the driver's trust can the final hurdle to safe automated driving be overcome.
💌 Not enough? Then read on – in our newsletter. It comes four times a year. Sticks in your mind longer. To subscribe: https://www.uintent.com/newsletter
In their presentation 'Touch, Trust and Transformation' at UXMC 2025, Jan Panhoff and Maffee Peng Hui Wan presented the research findings and insights that show how cultural differences measurably influence trust in touch systems and voice assistants.
AUTHOR
uintent
We are an international, employee-owned UX and usability research agency based in Hamburg & Munich, founded in 2018. We have UX Labs directly on site at our offices and have a network of partners to rent labs elsewhere as well. Our team consists of industry pioneers with more than 20 years of experience as well as young professionals. We have conducted research on every continent and have seen thousands of interfaces - from early, non-functional paper prototypes to production models. We draw on a wide variety of qualitative and quantitative research methods. Because we value quality, confidentiality, availability and integrity of information, we are ISO 9001 certified and TISAX®️ participants.
We are part of ReSight Global where we are united with our sister companies in the USA, UK, India, China, and Japan. Moreover, we are a member of the UXalliance, a global network of partner agencies.