• benchmarking

Continuous car infotainment interface testing

Ongoing benchmarking of human-machine interfaces (HMI).
Feb 03, 2019

THE CHALLENGE

Our client wanted to continuously measure how their prototypes and market models performed on key tasks compared with competitor cars. The project required gathering both quantitative and qualitative data during sessions, so that decisions to address issues uncovered in their own cars would rest on a solid quantitative basis.

THE APPROACH

We combined qualitative methods with quantitative measurement of multiple UX aspects, using well-established questionnaires across representative tasks. We have frequently used this approach in other benchmarking studies. In addition, we designed the test protocol to include both static and dynamic tests, so that participants would evaluate the system under a mental load similar to what they would experience while driving the car. Finally, results were delivered in a visual, online dashboard that gave stakeholders access to all qualitative and quantitative information.

THE OUTCOME

Several years and more than 50 tested cars and prototypes later, the results in the benchmark database are a main driver of our client’s interface development processes. Quick, structured access to a combination of qualitative and quantitative findings has helped them develop a better understanding of the connections between specific interface design decisions and performance, and to adjust their development priorities accordingly.

HMI development originally far removed from users’ real needs

Our client’s HMI development had originally been driven by their engineers’ assumptions about what users would need from their HMI and how they would use it, rather than by actual, validated user needs. Regular user tests were conducted to evaluate developments, but these focused primarily on quantitative metrics that captured nothing beyond task performance (task completion rate, time on task). In addition, testing had been limited almost exclusively to participants from the client’s own company and culture. Overall, our client recognized that, while their results satisfied the need for solid quantitative backing of development decisions, their HMIs could not keep up with the quality delivered by competitors and test results did not translate accurately to real users.

Solid quantitative framework, qualitative feedback, and well-respected metrics

As the demand for a more accurate representation of user interaction with the HMI initially came from departments outside those responsible for HMI development, any new approach had to be accurate, useful, and convincing to those who were initially opposed to changing the testing approach.

To achieve this goal, we took a page from our experience with summative medical testing procedures and devised a strict methodological approach that can be repeated in any country, with any UX-focused agency the client might want to work with, and still yield comparable data. We used well-established metrics, including task completion rate, time on task, NPS, ASQ, SUS, and others, to measure the quality of each car’s HMI and describe its user experience. Relying on these respected and well-known metrics made it much easier to convince development teams of the results’ validity.
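To make the questionnaire side concrete, here is a minimal sketch (in Python) of how a standardized score such as the SUS is computed from raw responses. The scoring rule is the published SUS procedure; the participant data shown is hypothetical.

```python
from statistics import mean

def sus_score(responses: list[int]) -> float:
    """Compute the System Usability Scale score (0-100) from one
    participant's ten responses on a 1-5 Likert scale. Odd-numbered
    items are positively worded, even-numbered items negatively
    worded, per the standard SUS scoring procedure."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses on a 1-5 scale")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even i = odd-numbered item
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5

# Hypothetical session data: one response list per participant.
participants = [
    [4, 2, 5, 1, 4, 2, 5, 2, 4, 1],
    [3, 3, 4, 2, 4, 3, 4, 2, 3, 2],
]
print(f"Mean SUS: {mean(sus_score(p) for p in participants):.1f}")
```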

The methodology also takes into account that the HMI experience is strongly affected by driving conditions. To identify issues that static testing cannot catch, we therefore included a smaller number of dynamic tests in the setup, which gave us a more rounded picture of the overall experience. Initially, these dynamic sessions were conducted in real traffic, but we eventually switched to a simulated, standardized mental load (e.g., a pedal-tracking task).
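As an illustration of the idea, the sketch below simulates a pedal-tracking block: the participant follows a slowly moving target with the accelerator pedal while operating the HMI, and the resulting tracking error indicates how much attention the HMI task draws away. The signal parameters and the sensor stub are hypothetical, not the client’s actual setup.

```python
import math
import random

def target_position(t: float) -> float:
    """Smoothly varying target (0-1) the participant must follow with
    the accelerator pedal; a sum of slow sine waves is a common choice
    for pursuit-tracking tasks."""
    return 0.5 + 0.25 * math.sin(0.4 * t) + 0.15 * math.sin(0.9 * t + 1.3)

def read_pedal_position() -> float:
    """Stub for the real pedal sensor; random values keep the sketch
    runnable standalone."""
    return random.random()

def rms_tracking_error(duration_s: float = 60.0, rate_hz: float = 10.0) -> float:
    """Sample target and pedal over one block and return the RMS
    tracking error: the more the HMI task distracts, the worse the
    participant tracks the target."""
    squared_errors = []
    for i in range(int(duration_s * rate_hz)):
        t = i / rate_hz
        squared_errors.append((read_pedal_position() - target_position(t)) ** 2)
    return math.sqrt(sum(squared_errors) / len(squared_errors))

print(f"RMS tracking error: {rms_tracking_error():.3f}")
```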

The methodology was statistically validated with pre-tests in Japan, the USA, and Germany, before being established as a regular and recurring research project at our client.

“We supported a global automotive OEM by setting up standardized tracking of their HMI UX performance, as well as benchmarking the HMI against competitor systems.”

Fast and convenient access to research data

To date, the testing methodology has been used to evaluate more than 50 prototypes and cars, ranging from vehicles as small as a Fiat 500 to a Volvo XC90 or a Mercedes-Benz S-Class. After each test, all data, including task path videos, videos documenting each uncovered issue, anonymized participant profiles, and all quantitative measurements, is uploaded to a unified database on the client’s premises. The database feeds a web interface that automatically puts data from the most recent test into context with existing test data and visualizes it, allowing developers to quickly understand the results without wading through the underlying raw data.
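As a rough sketch of the kind of contextualization the dashboard performs, the snippet below compares a newly tested car’s time on task, per task, against the rest of the benchmark database. The column names and example values are assumptions for illustration, not the client’s actual data model.

```python
import pandas as pd

# Hypothetical benchmark data: one row per (car, task); the real
# database stores many more metrics and artifacts per session.
db = pd.DataFrame([
    ("Car A",       "enter destination", 48.0),
    ("Car B",       "enter destination", 61.5),
    ("Car C",       "enter destination", 39.2),
    ("Prototype X", "enter destination", 44.1),
    ("Car A",       "pair phone",        30.0),
    ("Car B",       "pair phone",        25.4),
    ("Prototype X", "pair phone",        21.7),
], columns=["car", "task", "time_on_task_s"])

def contextualize(db: pd.DataFrame, car: str) -> pd.DataFrame:
    """Put one car's per-task time on task into context: the median of
    all other cars, and the share of other cars this car beats."""
    rows = []
    for task, group in db.groupby("task"):
        own = group.loc[group["car"] == car, "time_on_task_s"]
        others = group.loc[group["car"] != car, "time_on_task_s"]
        if own.empty or others.empty:
            continue
        rows.append({
            "task": task,
            "time_on_task_s": own.iloc[0],
            "benchmark_median_s": others.median(),
            "faster_than_pct": (others > own.iloc[0]).mean() * 100,
        })
    return pd.DataFrame(rows)

print(contextualize(db, "Prototype X"))
```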

Thanks to the large number of metrics collected, it is now much easier to determine the underlying causes of performance issues in specific domains of the tested cars’ interfaces. Developers can also prioritize interface updates based on performance indicators rather than isolated self-assessments of issue impact.
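One simple way to turn such indicators into a priority ranking, shown here purely as a hypothetical illustration rather than the client’s actual model, is to weight each issue by how many participants it affected and how much task time it cost:

```python
def priority_score(share_affected: float, extra_time_s: float,
                   safety_relevant: bool) -> float:
    """Hypothetical priority index: share of participants affected
    (0-1) times the extra time on task the issue caused, doubled for
    issues on safety-relevant tasks."""
    score = share_affected * extra_time_s
    return 2 * score if safety_relevant else score

issues = [
    ("media: search function hard to find",        priority_score(0.8, 12.0, False)),
    ("navigation: destination entry while moving", priority_score(0.4, 25.0, True)),
]
for name, score in sorted(issues, key=lambda x: -x[1]):
    print(f"{score:6.1f}  {name}")
```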

The delivered data was precise enough to reveal that in one car model, the system’s processor ran significantly slower than specified by the OEM.

THE PROJECT LEAD

Jan Panhoff

Jan started working as a UX professional in 2004 after finishing his M.Sc. in Digital Media. For more than 10 years, he supported eBay as an embedded UX consultant. Jan’s main focus at uintent is automotive and innovation research.
