
AUTOMOTIVE UX, VOICE ASSISTANTS

The Passenger Who Always Listens: Why We Are Reluctant to Trust Our Cars When They Talk

4 MIN

Dec 12, 2025

Does this sound familiar? The children are arguing in the back seat, it's pouring with rain outside, and you shout in exasperation: ‘Hey car, play something calming!’ And the system? It understands ‘Navigate to Burger King’ and turns the air conditioning up to maximum cold. Voice control, designed as an elegant solution to the distraction dilemma of touchscreens, has in many cases turned out to be a source of new frustration and, above all, new crises of confidence.


In our presentation ‘Touch, Trust and Transformation’ at UXMC 2025, we highlighted how HMI development is progressing from haptics to touch to voice control. The central question remains: do we trust the system when it matters? With voice control, a new, existential dilemma is added to usability issues: the fear of eavesdropping.


From command to free dialogue: the promise of voice control

After touchscreens only exacerbated the distraction problem, voice control stepped in to bridge the gap: hands on the wheel, eyes on the road.


The latest generation of systems, powered by large language models (LLMs), promises natural dialogue: the driver simply says ‘I'm cold’ and the car adjusts the temperature. Supplier Continental, for example, emphasises that it is moving away from memorised commands and instead wants the driver to be "on first-name terms" with the car. (Source: Voice control is trending: on first-name terms with your car – Autohaus)


The maturity dilemma: when LLMs promise more than they deliver

However, the reality of current LLM integrations in production vehicles reveals a fundamental problem: many of these systems are not yet mature. They simulate understanding and control over vehicle functions without actually possessing them. A typical scenario: the driver asks the system to open the window. The LLM politely confirms that it will do so – but the window remains closed. The system simply has no interface to the vehicle control system. The frustrated driver insists, the LLM apologises and ‘corrects’ its mistake, without changing the result.
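
The root cause is an integration gap rather than a language problem: the model can generate a confident confirmation without any binding to the vehicle's actuators. The following is a minimal sketch of the missing piece, assuming a hypothetical vehicle-control interface; the names (VehicleBus, handle_intent, open_window) are illustrative, not any manufacturer's actual API. The design point is that the assistant should confirm only what was actually executed, and say plainly when it has no control over a function.

```python
# Illustrative sketch only: class and function names are hypothetical,
# not a real manufacturer API.
from dataclasses import dataclass


@dataclass
class VehicleBus:
    """Stand-in for the vehicle control interface (e.g. a CAN/SOME-IP gateway)."""
    connected: bool = True

    def open_window(self, position: str) -> bool:
        """Return True only if the actuation was actually dispatched."""
        if not self.connected:
            return False  # no binding to the actuators, so nothing happens
        # ... send the real actuation message here ...
        return True


def handle_intent(intent: dict, bus: VehicleBus) -> str:
    """Map a recognised intent to a vehicle action and confirm only what was executed."""
    if intent.get("name") == "open_window":
        done = bus.open_window(intent.get("position", "driver"))
        return "Opening the window." if done else "I can't control the windows in this car."
    return "Sorry, I can't do that yet."


# The failure described above corresponds to a missing binding:
print(handle_intent({"name": "open_window"}, VehicleBus(connected=False)))
```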


The generational trauma

This premature market launch of immature technology repeats a pattern that caused lasting damage 20 years ago. German motorists experienced the first voice control systems at that time and learned: "It doesn't work well." This early impression continues to have an impact today and shapes the sceptical attitude towards voice control among large sections of the population. In China, on the other hand, many users first encountered voice control in its more mature form, so their basic level of trust is correspondingly much higher. The current immature LLM integration risks frustrating a new generation of users in the long term.


To make matters worse, unlike classic voice control, which was clearly limited to vehicle functions, LLMs suggest universal competence. Users can no longer assess which commands work and which do not – every interaction becomes trial and error.


Hard facts: distraction through misinterpretation

As elegant as the dialogues appear in marketing, acceptance quickly dwindles when the technology fails in everyday use. Acoustic complexity (engine noise, music, wind) and linguistic variability (dialects, accents) rapidly increase the error rate.

  • Comparison of operating times (Aalen University): A pilot study by Aalen University investigated the influence of infotainment systems on car accidents in a realistic test setup. The test subjects had to perform tasks at a constant speed. The results clearly showed that the median time taken to perform the tasks was fastest with a rotary control (seven seconds), followed by the touchscreen (nine seconds). Voice control and other forms of interaction often took longer, which prolonged the distraction. (Source: NEWS Aalen University conducts study on the risks of infotainment applications in cars - Aalen University)


Every error, every misinterpretation that forces the driver to repeat the command or ultimately touch the screen leads to frustration and prolonged distraction.


The cloud trap: when the network becomes a weak point

An often overlooked problem further exacerbates the situation: most LLM-based voice systems do not run (in full) locally in the vehicle, but are cloud-based in order to enable better recognition quality and more complex interactions. This means that even a system that works in theory will fail on German country roads without a 5G or LTE connection. For the user, this creates a double lack of transparency: they cannot tell whether the system does not understand them, whether it is not working technically, or whether there is simply no network connection. The unreliability becomes a self-fulfilling prophecy.
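
At the interaction level, one mitigation is to separate these failure modes and name them to the user: no network, cloud timeout, or genuine non-understanding, ideally with a small on-device fallback for basic commands. The sketch below illustrates that idea under stated assumptions; the function names (has_connectivity, local_nlu, cloud_nlu) and the tiny command list are invented for illustration, not a specific vendor's stack.

```python
# Illustrative sketch: all names here are hypothetical, not a vendor API.
import socket


def has_connectivity(host: str = "8.8.8.8", port: int = 53, timeout: float = 1.0) -> bool:
    """Cheap reachability probe; a production system would read the modem's link state instead."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


# Tiny on-device fallback grammar, standing in for an embedded recogniser.
LOCAL_COMMANDS = {"warmer": "hvac_up", "colder": "hvac_down", "call home": "phone_call"}


def local_nlu(utterance: str) -> str:
    return LOCAL_COMMANDS.get(utterance.strip().lower(), "not_understood")


def cloud_nlu(utterance: str) -> str:
    raise TimeoutError("cloud service unreachable")  # placeholder for the real cloud call


def understand(utterance: str) -> tuple[str, str]:
    """Return (intent, status message) so the HMI can say WHY something failed."""
    if not has_connectivity():
        return local_nlu(utterance), "No network right now, so only on-board commands are available."
    try:
        return cloud_nlu(utterance), ""
    except TimeoutError:
        return local_nlu(utterance), "The voice service is unreachable; basic commands still work."


print(understand("colder"))
```

The point of the status message is transparency: the user learns whether the system failed to hear them, failed technically, or simply has no connection, instead of being left to guess.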


The next level: emotional intelligence in the car

To make interaction more human, car manufacturers are integrating so-called "emotion engines". These AI systems analyse the driver's emotions based on intonation, word choice and facial expressions.

  • Frustration reduction: Studies by the German Aerospace Centre (DLR) have tested the effectiveness of voice interventions in reducing frustration in the car. The results were clear: frustration-reducing voice interventions – for example, in the case of HMI problems – led to significantly lower frustration levels among the test subjects. Targeted communication can therefore not only control functions but also actively contribute to road safety, since reducing stress helps prevent aggressive driving behaviour. (Source: Improving the in-car user experience through voice-based frustration reduction – DLR)

  • The cultural balancing act: However, the acceptance of such systems is proving to be an extreme balancing act and is heavily dependent on cultural and situational factors. Studies show clear differences between Germany and China, for example: while young men in China – who, for cultural reasons, often have little opportunity for emotional exchange – appreciate emotional dialogue with the vehicle, this only applies when they are travelling alone. As soon as passengers are present, the same function is perceived as unpleasant. The challenge for manufacturers is to develop systems that are context-sensitive enough to recognise social situations and respond accordingly.

  • Proactive assistance: The Honda NeuV concept car, for example, featured an ‘Emotion Engine’ that attempted to recognise the driver's mood in order to suggest the appropriate driving mode or music. (Source: When the car listens to your every word | springerprofessional.de)


The new core conflict: data protection as a cost-benefit calculation

The most profound crisis of confidence in voice control is not about usability, but about data protection. A system that recognises the driver's mood and reacts context-sensitively must be constantly active – it listens.


But is data protection really the deal-breaker it is often portrayed as?

The blanket assumption that ‘Germans want data protection’ may be too simplistic. Reality shows that for many users, data protection is a question of weighing up the costs and benefits. 35 million Germans use Payback – despite comprehensive data collection. Studies on digital payment services show that if the value is sufficient, for example cashback offers, many users are willing to put their data protection concerns aside. So the question is not whether users are willing to share data, but under what conditions and with what transparent added value.


Despite this pragmatic attitude, expectations regarding security remain high. A study by Veritas showed that 86 per cent of consumers expect appropriate measures to protect their data, and almost three-quarters would consider switching car brands in the event of a data leak. (Source: Non-transparent handling of private data: Car manufacturers risk losing the trust of their customers – silicon.de)


In a forthcoming article, we will examine how the actual attitude of German users towards data protection differs from the socially expected response – and what this means for manufacturers.


The voice as a biometric key (voiceprints)

Conversely, voice control can also contribute to solving security problems. Biometrics in cars is becoming increasingly important, with the voice serving as a "voiceprint" or acoustic fingerprint.

  • Secure authentication: Voice verification provides a unique acoustic fingerprint for each user and enables precise identification of the driver. This protects against unauthorised access and serves to personalise the experience. Future systems will rely on multi-layered authentication, for example allowing payment functions or engine start-up only after biometric approval (a minimal sketch of such gating follows after this list). (Source: Voice assistants in cars learn with AI – AI Training Centre)

  • Market growth: The biometrics market in the automotive sector is growing rapidly. Companies are integrating facial recognition, iris scanning and voice recognition into their high-end models to personalise the driving experience (seat and mirror settings) and increase safety. (Source: Biometrics in the automotive market – size and share analysis – Mordor Intelligence)
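
As a rough illustration of how such gating could work, here is a minimal sketch that assumes a speaker-verification model returning a similarity score between a live utterance and an enrolled voiceprint; the action names, thresholds and similarity measure are invented for the example, not taken from any shipping system.

```python
# Illustrative sketch: action names, thresholds and the similarity measure are
# assumptions, not a description of any production voiceprint system.
SENSITIVE_ACTIONS = {"start_engine": 0.95, "authorise_payment": 0.99}  # stricter for payments


def speaker_similarity(probe: list[float], enrolled: list[float]) -> float:
    """Cosine similarity between voiceprint embeddings, as a stand-in for a real verifier."""
    dot = sum(a * b for a, b in zip(probe, enrolled))
    norm = (sum(a * a for a in probe) ** 0.5) * (sum(b * b for b in enrolled) ** 0.5)
    return dot / norm if norm else 0.0


def gate_action(action: str, score: float) -> str:
    """Allow a sensitive action only above its biometric confidence threshold."""
    threshold = SENSITIVE_ACTIONS.get(action)
    if threshold is None:
        return "allowed"  # non-sensitive actions pass through without gating
    return "allowed" if score >= threshold else "second factor required"


print(gate_action("authorise_payment", 0.97))  # -> second factor required
```

The design point is that the threshold can scale with the sensitivity of the action, and that falling below it degrades to a second factor rather than a silent refusal.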


Trust vs. well-being: cultural differences in the HMI experience

The cultural differences we already saw with haptic controls continue with voice control:

A study on culture-specific design requirements for autonomous vehicles concluded that in Germany, drivers' trust in their vehicle is strongly influenced by the user interface. In the US, on the other hand, the user interface primarily influences the well-being and comfort of users. (Source: Innovative UI/UX design drives trends in the automotive sector – Softeq)


In markets such as Germany, voice control must therefore primarily gain trust through high reliability, precision and seamless data protection.


The way forward: intelligent design instead of dogmatism

The future lies in the intelligent fusion of touch, haptics and voice – always with the premise of minimising the cognitive load on the driver.


UX designers must weigh up which functions require tactile certainty (buttons), which require flexibility (touch/AI assistance) and which call for safe, distraction-free input (voice).


The key to acceptance is answering the most fundamental question: "Can I trust this system when it matters?" This trust can only be established if the interface consistently reduces the cognitive load on the driver instead of increasing it.


💌 Not enough? Then read on – in our newsletter. It comes four times a year. Sticks in your mind longer. To subscribe: https://www.uintent.com/newsletter

In their presentation ‘Touch, Trust and Transformation’ at UXMC 2025, Jan Panhoff and Maffee Peng Hui Wan presented the insights and research findings that show how cultural differences measurably influence trust in touch systems and voice assistants. How can car manufacturers regain customer trust in voice control despite the data protection risks, and what role does emotion recognition play in this?






AUTHOR

Jan Panhoff

started working as a UX professional in 2004 after completing his M.Sc. in Digital Media. For 10 years he supported eBay as an embedded UX consultant. His focus at uintent is on automotive and innovation research.

Moreover, he is one of uintent's representatives in the UX Alliance, a global network of leading UX research and design companies.
