How we approached the project
Our client, an established food retail brand, enjoys high credibility among its customers. This trust is an asset when a streamlined checkout reduces users' perceived control over the process.
Even though checkouts are among the most thoroughly researched and well-understood functionalities, we decided to use a larger sample than usability testing alone would require. While n=5 participants can identify approximately 80% of all usability issues in an application, we chose an n=10 sample to evaluate the participants' subjective assessment of perceived control and security. Subjective assessments require a larger sample to reveal clear tendencies.
The stimulus itself was assembled quickly in a prototyping tool, without visual design, imagery, etc. Users could complete the core tasks, and it supported both desktop and mobile resolutions. We strongly recommend against using complex stimuli to test fundamental hypotheses like these: a lightweight prototype keeps the cost-benefit ratio in balance and allows quick changes during testing.
The client’s initial plan was to draw participants from a customer pool, recruited from internal lists of customers who had opted in to research activities. We challenged this approach for several reasons:
- Participants from customer lists are often less talkative than those who proactively sign up with recruitment agencies to take part in market research; the information gathered through the think-aloud process is key to success.
- Participants from customer lists are statistically less reliable, with a significantly higher no-show rate than participants sourced through recruitment agencies (30% vs. 10%). This drives up cost and time for both the client and our team.
- To evaluate whether perception of the revised checkout process was influenced by trust in the brand, we needed to add the perspective of potential customers who had not yet built that trust.
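To see why the no-show gap is a real cost factor, a quick back-of-the-envelope calculation (illustrative numbers only) shows how many participants must be booked to fill ten sessions at each rate:

```python
import math

def invites_needed(target_sessions, no_show_rate):
    """Participants to book so that, on average, enough show up
    to fill the target number of sessions."""
    return math.ceil(target_sessions / (1 - no_show_rate))

print(invites_needed(10, 0.30))  # customer-list pool: 15 bookings
print(invites_needed(10, 0.10))  # agency-recruited pool: 12 bookings
```

Three extra bookings per round of ten sessions means extra incentives, scheduling overhead, and idle moderator slots, which is exactly the cost and time factor noted above.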
How we collected data
The interview approach was based on the ISO 9241-210 definition of user experience, which covers expectations toward a product, its actual usage, and the processing of the experience made during usage. The core of each session concentrated on solving tasks in different settings:
- Logged out as a new client
- Logged in as an existing client
- A standard checkout
- A super-fast checkout
- All of the above in both desktop and mobile settings
Particularly in agile and iterative development processes, it is important to make the most of the field phase instead of waiting days for a final report. We therefore encouraged our client’s development team to be present on all field days. We structured the session schedule so that major findings could be discussed with the moderator between sessions, and so that tweaks could be made to the prototype to see whether alternative solutions or wording worked better. After each day, we summarized the findings and solutions in a workshop so the development team could act on them immediately and progress development. The final delivery was a bullet-point summary of the main findings and recommendations.
For interfaces in early development, we strongly encourage this hands-on approach rather than restricting testing to a strictly scientific protocol that merely ensures comparability across sessions. We prefer to judge our methods by the value of the outcome for our client.