THE CHALLENGE
HMI development originally far removed from users’ real needs
Our client’s HMI development had originally been driven by their engineers’ assumptions about what users would need from the HMI and how they would use it, rather than by actual, validated user needs. Regular user tests were conducted to evaluate new developments, but they focused primarily on quantitative task-performance metrics (task completion rate, time on task) and captured little information beyond them. In addition, testing had been limited almost exclusively to participants from the client’s own company and culture. Overall, our client recognized that, while these results satisfied the need for solid quantitative backing of development decisions, their HMIs could not keep up with the quality delivered by competitors, and test results did not translate accurately to real users.
THE APPROACH
Solid quantitative framework, qualitative feedback, and well-respected metrics
Because the demand for a more accurate representation of how users interact with the HMI initially came from departments outside of those responsible for HMI development, any new approach had to be accurate, useful, and convincing to those who were initially opposed to changing the testing approach.
To achieve this goal, we took a page from our experience with summative medical usability testing and devised a strict methodological protocol that can be repeated in any country, with any UX-focused agency the client might want to work with, and still yield comparable data. We used well-established metrics to measure the quality of each car’s HMI and to describe its user experience, including task completion rate, time on task, NPS, ASQ, and SUS. Relying on these respected and well-known metrics made it much easier to convince development teams of their validity.
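To illustrate how standardized such metrics are, the sketch below scores a single SUS questionnaire. The scoring rule (odd items contribute score − 1, even items contribute 5 − score, and the sum is scaled by 2.5) is the standard published SUS formula; the function name and the sample responses are our own, for illustration only.

```python
def sus_score(responses: list[int]) -> float:
    """Score one 10-item SUS questionnaire (items rated 1-5).

    Standard SUS rule: odd-numbered items contribute (score - 1),
    even-numbered items contribute (5 - score); the sum is scaled
    by 2.5 to yield a 0-100 score.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten responses on a 1-5 scale")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # items 1, 3, 5, ... sit at even indices
        for i, r in enumerate(responses)
    )
    return total * 2.5

# Hypothetical example: one participant's answers after one HMI task battery.
print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))  # -> 85.0
```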


The methodology also takes into account that the HMI experience is strongly affected by driving conditions. To identify issues that cannot be caught in a static test setup, we included a smaller number of dynamic tests, which gave us a more rounded picture of the overall experience. Initially these dynamic sessions were conducted in real traffic, but we eventually switched to a simulated, standardized mental load (e.g. pedal tracking).
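To make the idea of a standardized mental load concrete: in a pedal-tracking task, the participant follows a moving target with the pedal while operating the HMI, and the tracking error quantifies how much attention the HMI drew away. The sketch below shows one plausible way to score such a task; the signal shapes and function names are our own assumptions, not the client’s actual implementation.

```python
import math

def rms_tracking_error(target: list[float], pedal: list[float]) -> float:
    """Root-mean-square deviation between target and pedal position.

    Higher values suggest the HMI task drew more attention away from
    the tracking task (illustrative scoring only).
    """
    if len(target) != len(pedal):
        raise ValueError("signals must be sampled at the same rate")
    return math.sqrt(
        sum((t - p) ** 2 for t, p in zip(target, pedal)) / len(target)
    )

# Hypothetical 1 Hz samples: a sinusoidal target and a lagging pedal trace.
target = [0.5 + 0.3 * math.sin(0.4 * i) for i in range(60)]
pedal = [0.5 + 0.3 * math.sin(0.4 * (i - 2)) for i in range(60)]
print(f"RMS tracking error: {rms_tracking_error(target, pedal):.3f}")
```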
The methodology was statistically validated in pre-tests in Japan, the USA, and Germany before being established as a regular, recurring research project at our client.
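We cannot disclose the exact statistical procedure used in those pre-tests; as a plausible sketch, checking that a metric behaves comparably across test sites could look like the comparison below (the scores are invented sample data).

```python
from scipy import stats

# Hypothetical SUS scores from pre-tests in two countries; the goal is to
# check whether the metric behaves comparably across sites (invented data).
sus_germany = [72.5, 80.0, 67.5, 85.0, 77.5, 70.0, 82.5, 75.0]
sus_japan = [70.0, 77.5, 72.5, 82.5, 75.0, 67.5, 80.0, 72.5]

t_stat, p_value = stats.ttest_ind(sus_germany, sus_japan)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # a high p suggests no site effect
```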
THE OUTCOME
Fast and convenient access to research data
So far, the testing methodology has been used to test more than 50 prototypes and production cars, ranging from vehicles as small as a Fiat 500 to a Volvo XC90 or a Mercedes-Benz S-Class. After each test, all data, including task path videos, videos documenting each individual uncovered issue, anonymized participant profiles, and all quantitative data, is uploaded into a unified database on the client’s premises. The database feeds a web interface that automatically puts the most recent test into context with existing test data and visualizes it, so developers can quickly understand the results without digging through the underlying raw data.
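We cannot show the client’s actual database schema, but the following sketch illustrates the kind of contextualization the web interface performs: the latest test’s score is ranked against all previous tests. The table, column names, and sample rows are invented.

```python
import sqlite3

# Invented minimal schema standing in for the client's unified database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tests (car TEXT, sus REAL, tested_on TEXT)")
conn.executemany(
    "INSERT INTO tests VALUES (?, ?, ?)",
    [
        ("Car A", 68.0, "2019-03-01"),
        ("Car B", 74.5, "2019-06-12"),
        ("Car C", 81.0, "2019-09-30"),
        ("Car D", 77.0, "2020-01-20"),
    ],
)

def percentile_of_latest(conn: sqlite3.Connection) -> float:
    """Rank the most recent test's SUS score against all tests in the database."""
    latest, = conn.execute(
        "SELECT sus FROM tests ORDER BY tested_on DESC LIMIT 1"
    ).fetchone()
    below, total = conn.execute(
        "SELECT SUM(sus < ?), COUNT(*) FROM tests", (latest,)
    ).fetchone()
    return 100.0 * below / total

print(f"Latest test scores above {percentile_of_latest(conn):.0f}% of all tests")
```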
Thanks to the large number of metrics collected, it is now much easier to determine the underlying causes of performance issues in specific domains of a tested car’s interface. Developers can also prioritize interface updates based on performance indicators rather than on isolated self-assessments of an issue’s impact.
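As an illustration of such indicator-driven prioritization, an issue might be ranked by how often it occurs and how much it degrades task performance. The weighting below is our own assumption for the sketch, not the client’s actual formula.

```python
from dataclasses import dataclass

@dataclass
class Issue:
    name: str
    failure_rate: float   # share of participants who failed the affected task
    time_overrun: float   # mean extra time on task, normalized to 0..1

def priority(issue: Issue) -> float:
    """Illustrative priority score; the weights are assumptions."""
    return 0.7 * issue.failure_rate + 0.3 * issue.time_overrun

# Hypothetical issues uncovered in one test, ranked by measured impact.
issues = [
    Issue("Hidden climate menu", failure_rate=0.4, time_overrun=0.6),
    Issue("Ambiguous icon label", failure_rate=0.1, time_overrun=0.2),
]
for issue in sorted(issues, key=priority, reverse=True):
    print(f"{priority(issue):.2f}  {issue.name}")
```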
The data delivered was precise enough to reveal that, in one car model, the system’s processor was running significantly slower than the OEM’s specification.
THE PROJECT LEAD

Jan Panhoff
