Automotive, Autonomous Driving, Voice Assistance

Why developing car interfaces the way you do it right now might not work for autonomous cars

Connected features still only play a minor role in most users’ lists of requirements. The way car HMI systems are developed and features are prioritized – for example, the shift from touch to voice interaction – will change with self-driving cars.

The advent of self-driving cars poses significant challenges, not only to the engineers who must produce a technical system that satisfies safety and legal requirements, but also to the designers of user-facing features and interfaces. In our projects dealing with requirements for user interfaces in self-driving cars, it has become apparent that the separation of system functions into distinct areas will become less and less useful for the car’s users. This is mostly due to the shift from display-driven to situational interaction and the dominance of voice interaction.

The classical way of developing car HMI systems

The first challenge we frequently encounter in our daily work with automotive development teams is overcoming compartmentalized thinking and planning of features and information: at many OEMs, the design of infotainment system elements is still siloed – one team takes care of entertainment, another of car and system settings, another of navigation, and so on. In addition, communication and alignment between these teams appears to be limited to the design-framework level, ensuring consistency of use but not aiming to maintain or develop cross-domain functionality. For some contracts, we are explicitly asked to focus solely on solutions serving one of these functional areas and to actively disregard any results that are cross-functional.

Currently, this division of development is mirrored in most car human-machine interfaces (HMIs). This benefits users by making the system easy to learn and the most relevant functions quick to find without much distraction. However, we can already see the downside of this way of developing interfaces in the connected functions provided in the cars we drive today. As soon as a feature is difficult to attribute to one of the traditional system areas, it ends up in a section named “connected services,” “additional services” or similar – despite its functional nature having nothing in common with the other features in this category. A typical sign of this divided feature landscape is an online weather or restaurant-rating app stranded in such a limbo category.

In many instances, these sections also consist of literal lists of apps and functions, making it extremely cumbersome for users to locate a feature: they must scan a lengthy list for anything relevant, often without knowing whether the function they are looking for is actually provided in their car or what the OEM has decided to call it. So far, this traditional way of developing car infotainment systems has been of minor relevance to the way users experience their cars. Connected features still only play a minor role in most users’ lists of requirements and are overshadowed by the need for quick access to standard navigation and media functions. However, this is all going to change with the self-driving car.

Voice interaction as the go-to modality for self-driving cars

Most of our participants, when asked how they imagine interacting with their future autonomous vehicle, expect that they’ll be able to interact with their car at least at the same level as they currently do with Alexa, Siri or Google Home. Paradoxically, while being driven autonomously, users appear to be even less willing to navigate on-screen menus. Their Alexa does not have a screen, and they can order a barrel of fish oil and switch off their living-room lights just fine – why should they have to rely on outmoded touch/dial interaction in their fancy car of the future? This shift in perception also appears to be based on the fact that while they had to drive their cars manually, on-screen interaction promised (and, to be honest, mostly delivered) the level of precise control necessary to limit distraction from the road to an absolute minimum.

Now, in their self-driving cars, users appear to feel that this level of control is no longer required. They seem to expect that they can simply tell their car what to do and, in the unlikely event of a missed command, fix the problem without any personal risk – after all, they no longer have to focus on the road. Visual displays can still be used for entertainment or information purposes, such as showing a movie or displaying general route or specific environmental information. However, after setting the destination and starting the autonomous trip, users don’t want to rely on touch/dial interaction anymore.

So if voice interaction becomes the go-to modality for interacting with your car (after the initial setup of the route), the compartmentalized structure of current systems loses its purpose – which, foremost, was to let you quickly build a mental model of the system’s function structure and access it with minimum distraction from your main task of driving. At the same time, we see in user tests that cross-sectional functionality is increasingly expected (in part thanks to Alexa and co.).

So what to do?

Our recommendations are:

  1. Drop compartmentalized development of infotainment systems, as the systems it produces do not match users’ future requirements.
  2. Be open to cross-sectional functionality that is accessible via voice control – e.g. “Play an audio book that will be finished when I reach my destination.”
  3. Allow users to quickly set up their autonomous route and then be done with on-screen interaction for the whole trip.
  4. We can’t stress this enough: in autonomous cars, voice control will become the most-used modality. If you still rely on the sectional model of current car HMIs by then, make sure you have processes in place to provide seamless interaction across development teams and the functions they develop.
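To make recommendation 2 concrete: fulfilling a request like “play an audio book that will be finished when I reach my destination” requires combining data from two traditionally separate domains – the navigation system’s remaining trip time and the entertainment system’s media catalog. The following is a minimal sketch of that cross-domain lookup; the `Audiobook` type, the `pick_audiobook` function and the tolerance value are illustrative assumptions, not any OEM’s actual API.

```python
from dataclasses import dataclass


@dataclass
class Audiobook:
    title: str
    minutes: int  # total playback length


def pick_audiobook(catalog, trip_minutes, tolerance=10):
    """Return the audiobook whose length best matches the remaining
    trip time (within `tolerance` minutes), or None if nothing fits.

    `trip_minutes` would come from the navigation domain; `catalog`
    from the entertainment domain - the point is that one voice
    intent spans both.
    """
    candidates = [b for b in catalog
                  if abs(b.minutes - trip_minutes) <= tolerance]
    if not candidates:
        return None
    return min(candidates, key=lambda b: abs(b.minutes - trip_minutes))


# Hypothetical catalog and a 95-minute remaining trip:
catalog = [
    Audiobook("Short Stories", 60),
    Audiobook("Novella", 90),
    Audiobook("Epic", 480),
]
print(pick_audiobook(catalog, 95).title)  # → Novella
```

In a compartmentalized system, neither team owns this query; in a voice-first system, it is exactly the kind of request users will make.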

Author

Jan Panhoff

Jan started working as a UX professional in 2004 after completing his M.Sc. in Digital Media. For 10 years he supported eBay as an embedded UX consultant. His focus at uintent is on automotive and innovation research.
