OPENAI, CHATGPT, RESEARCH, LLM

Leveraging ChatGPT and AI-services in User Research

Co-Pilot, secretary, sparring partner – how to make the most of AI: how can we as user researchers leverage the current enormous advancements in AI – especially ChatGPT? How will they affect our business? Our first thoughts.

08 MIN

The field of artificial intelligence has been evolving at the speed of light over the last months, weeks, and even days. In the coming months and years, we will inevitably have to adjust to these changes, which will impact the way we think, search, produce, express ourselves, and, of course, work.

As user researchers, we are no exception. The recent developments seem to have the potential to affect most of our routine work in the very near future: planning research projects, conducting desk research, preparing and carrying out interviews, focus groups, usability tests and workshops, analyzing data, creating reports, keeping track of insights, and collaborating with management to share our findings, to name a few. So, why not be proactive and try it out now?

In this article, I want to explore how we as user researchers can leverage these advancements in AI – especially ChatGPT – and what they mean for the future of our business. I am eager to share my thoughts and hope to have a fun and informative conversation about the exciting changes happening in the world.

ChatGPT as Michelangelo's marble

Before diving into detail, let’s briefly look at what ChatGPT does. To put it simply, this so-called generative AI generates the most probable sequence of words to follow the input it is given. It has been trained on a huuuuuuuuge amount of word sequences and “knows from experience” very well which word would follow another – so well that it sometimes seems to us to have natural intelligence.

This makes ChatGPT reasonably reliable for established, common-sense facts, but also predictable. If you ask a general question like "tell me about XX" or "write a LinkedIn post about YY," it will most likely provide a classical, or even clichéd, answer. To get meaningful and valuable responses, it is important to narrow the context by giving a clear instruction (a prompt) with enough information, so that the response you want becomes the most probable answer for the AI.

So do not think of ChatGPT as a new search engine or a random text generator. It is more like a piece of marble that needs to be shaped into a desired form – just as Michelangelo chiseled away at a block of marble to create a masterpiece, you need to ask the right questions to shape the answers you want from ChatGPT. To make the most of this technology, we must know exactly what we want to achieve and be able to express our requirements clearly.

That being said, let me share some practical ways I see to leverage ChatGPT in user research.

Planning research

Tools: Generative AI such as ChatGPT and (new) Bing

Give the AI of your choice a detailed description of your situation. A brief description of the project idea, the project phase, the budget situation, and the hypotheses to test are a good point to start. It can then suggest methods that meet your requirements; chat with it to refine the idea.

When you need an overview of the suggestions, it can even summarize the information in table form.

ChatGPT can also behave as a person. Tell it to act as, for example, a conservative, less tech-savvy manager, then ask for feedback on your project proposal to prepare for possible opposition from management.
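
If you prefer to script this instead of clicking through the chat UI, here is a minimal sketch of the same flow using the OpenAI Python client – the model name, the project brief, and the personas are just invented placeholders, not a recommendation:

```python
# A minimal sketch: letting a chat model suggest research methods for a project
# brief and summarize them as a table. Assumes the official OpenAI Python client
# (`pip install openai`) with an API key in OPENAI_API_KEY; all project details
# below are invented placeholders.
from openai import OpenAI

client = OpenAI()

project_brief = """
Project idea: companion app for a connected coffee machine
Phase: early concept, no prototype yet
Budget: roughly one week of research effort
Hypothesis to test: owners want to start brewing remotely on their commute
"""

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumption – use whichever model you have access to
    messages=[
        # Swap this system message for "Act as a conservative, less tech-savvy
        # manager reviewing this proposal" to rehearse possible objections.
        {"role": "system", "content": "You are an experienced user researcher."},
        {"role": "user", "content": (
            "Given the project brief below, suggest three research methods that fit "
            "the budget and summarize them in a table with columns for effort and "
            f"expected outcome.\n{project_brief}"
        )},
    ],
)
print(response.choices[0].message.content)
```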

Desktop research

Tools: AI-Assisted search tool such as (new) Bing, You.com, Lexii.ai, Perplexity

  • These tools can provide you with a good overview of a topic, including reference links.
  • If you search for academic papers, Scispace is of great help, as you can chat with an AI that behaves like the paper – just ask it for a summary of the paper, its practical impact, the research gap it fills…
  • Syntheticusers (not yet public) claims that its AI simulates user segments so that we can check the desirability of product ideas – sustainability and usability should follow. Depending on the quality, this tool could actually spare us some early user research for concept validation.

Creating interview scripts / survey questions

Tools: Generative AI such as ChatGPT and easy-peasy.ai

  • Give it a good prompt with enough information – for example, why you are conducting the research, what you want to find out, and information about the target groups. Then refine the draft by chatting with the service (a small sketch follows after this list).
  • If you have lots of background info, easy-peasy.ai has a custom text generator to which you can give the prompt and the background information separately.
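
If you want to keep such prompts reusable across studies, a simple template helps to make sure the instruction, the research goal, and the target-group information always travel together. A minimal sketch in plain Python – all study details below are invented placeholders:

```python
# A minimal sketch of a reusable prompt template for interview-script generation.
# The study details below are invented placeholders – replace them with your own.
INSTRUCTION = (
    "Draft a 45-minute semi-structured interview guide with an intro, "
    "warm-up questions, main questions grouped by topic, and a wrap-up."
)

BACKGROUND = """
Why we run this study: understand how nurses document wound care today
What we want to find out: pain points in the current documentation workflow
Target group: ward nurses in German hospitals, 5+ years of experience
"""

prompt = f"{INSTRUCTION}\n\nBackground information:\n{BACKGROUND}"
print(prompt)  # paste into ChatGPT, or send it via the API as in the earlier sketch
```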

Interview analysis

Tools: Auto transcription services + custom text generator such as easy-peasy.ai

  • Get the transcript of a session using your favorite transcription service, then give your generator the transcript and your analysis requirements (e.g., which method it should use, your hypotheses, the required form of the outcomes…). Refine the outcome in chat (a small sketch follows after this list).
  • For example, it can (of course) simply summarize the interview, answer your research questions, draw three important insights, or even suggest an analysis schema if you are not sure how to proceed.
  • Here again, easy-peasy.ai makes things easy, as you can give it the prompt and the background info separately.
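
As a rough sketch of how this workflow could look outside a web UI – again assuming the OpenAI Python client, with an invented file name and made-up analysis requirements:

```python
# A minimal sketch: summarizing an interview transcript against explicit analysis
# requirements. Assumes the OpenAI Python client and a transcript short enough to
# fit into the model's context window; file name and requirements are placeholders.
from openai import OpenAI

client = OpenAI()

with open("interview_p03_transcript.txt", encoding="utf-8") as f:
    transcript = f.read()

requirements = (
    "Use thematic analysis. Our hypothesis: users abandon onboarding because of "
    "unclear consent screens. Deliver: a short summary, three key insights with "
    "supporting quotes, and an answer to the research question."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumption – pick the model you have access to
    messages=[
        {"role": "system", "content": "You are a user researcher analyzing an interview."},
        {"role": "user", "content": f"{requirements}\n\nTranscript:\n{transcript}"},
    ],
)
print(response.choices[0].message.content)
```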

Reporting

Tools: Generative AI such as ChatGPT

This is probably the most “classical” use of generative AI. Give it bullet points of your insights, specify the audience and practical aspects such as length and format, and you get a draft of your report.

Here again, you can make ChatGPT behave as a person with specific traits – say, your critical and detail-oriented colleague – and have it check for possible mistakes in your text or holes in your logic.
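
For the script-minded, here is a minimal sketch of both steps in one go – drafting the report from bullet points and then letting the model review its own draft as that critical colleague. It assumes the OpenAI Python client; the insights, audience, and format are invented placeholders:

```python
# A minimal sketch: turning bullet-point insights into a report draft and then
# letting the model review its own draft in the role of a critical colleague.
# Assumes the OpenAI Python client; insights, audience, and format are placeholders.
from openai import OpenAI

client = OpenAI()

insights = """
- 7 of 9 participants missed the battery warning during setup
- Participants trusted the paper manual more than the in-app help
- Error messages were read aloud but not acted upon
"""

messages = [
    {"role": "user", "content": (
        "Write a one-page summary report of these usability-test insights for a "
        "non-technical product-management audience, with a short intro, findings "
        f"as bullet points, and three recommendations:\n{insights}"
    )},
]
draft = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
report = draft.choices[0].message.content

# Second turn: keep the history and switch the model into a reviewer role.
messages.append({"role": "assistant", "content": report})
messages.append({"role": "user", "content": (
    "Now act as a critical, detail-oriented colleague. List any logical gaps, "
    "unsupported claims, or unclear wording in the report above."
)})
review = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)

print(report)
print(review.choices[0].message.content)
```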

Insight management

Tools: behave-as-document AIs, such as Filechat.io and humata.ai

The idea is very simple: you feed the AI all your research outcomes (reports, customer journeys, personas…), tell it to behave as a persona, and ask what it thinks about your new product idea. In reality, it might not be that simple yet, as those tools only let us talk to one file at a time. Also, the AIs might require tons of reports before they behave properly.

But I personally find this aspect the most exciting! So far, to maintain an overview of research results, you needed a research repository, which requires tons of manual work: transferring all the old reports into one place, setting up a taxonomy, tagging all the findings accordingly, and encouraging others to stick to it when creating new reports, to name a few. For all these tasks we simply didn’t have time, and most of the time a research repository fails to flourish. But with generative AIs, which seem to be capable of “understanding” the organic relationships within information, we might finally be able to spare ourselves all that manual care and enjoy a single source of truth!
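
To make the idea a little more concrete, here is a toy sketch of what such a repository could look like under the hood: research snippets are embedded once, the most similar snippets are retrieved for a question, and a chat model answers from them. This is a simplified assumption about the general approach, not how Filechat.io or humata.ai actually work, and all snippets are invented:

```python
# A toy sketch of "chatting with your research repository": embed report snippets
# once, retrieve the most similar ones for a question, and let a chat model answer
# from them. A simplified illustration under stated assumptions, not how the tools
# named in the article are implemented; snippets and the question are invented.
from openai import OpenAI

client = OpenAI()

snippets = [
    "Persona 'busy caregiver': administers injections daily, fears dosing errors.",
    "Usability test 2022: 4 of 8 users ignored the on-device display during use.",
    "Journey map: refill ordering is the most frustrating step for chronic patients.",
]

def embed(texts):
    """Return one embedding vector per input text."""
    result = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [item.embedding for item in result.data]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm

snippet_vectors = embed(snippets)

question = "Would our caregivers accept a purely app-based dosing log?"
question_vector = embed([question])[0]

# Pick the two snippets most similar to the question and use them as context.
ranked = sorted(zip(snippets, snippet_vectors),
                key=lambda pair: cosine(question_vector, pair[1]), reverse=True)
context = "\n".join(text for text, _ in ranked[:2])

answer = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Answer as the persona described in the research "
                                      "snippets, and say so when the snippets are not enough."},
        {"role": "user", "content": f"Research snippets:\n{context}\n\nQuestion: {question}"},
    ],
)
print(answer.choices[0].message.content)
```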

Caution!

Please note that the situation changes day by day, so these services might no longer be available or suitable when you try them out. Also, always keep an eye on data privacy issues: check twice whether your data may be fed into an AI at all, and make sure to sign an NDA if you buy services.

What we are still good at

As a researcher in a UX research agency, I find it both exciting and frightening to see how much of our work can be compressed using AIs. Still, I think there is some space left for human work.

Conducting interviews, especially fine-tuning the procedure based on participants' real-time reactions, would be one example. A plan and a script can be generated using AIs, but explorative interviews must still be executed by humans – something ChatGPT still can't do. I tried to make it conduct an interview with me, without success (it generated an interview script with “me” and “you”). AIs capable of sentiment analysis have long been out there, so it is probably only a question of time until half-automated interview bots are built. But still, I think and hope that early user research needs moments of serendipity that cannot be covered by a probabilistic approach.

Another example would be the testing of medical devices and field observations. Given that (as of today) AIs still cannot shadow an excavator operator for eight hours a day or use an autoinjector as if they were the caregiver of a person with diabetes, there will still be a need for testing with real humans as observers and/or testers. The execution of summative tests, where we already have to behave like a machine, might be automatable. But again, root-cause probing must be done by humans. Where humans and physical objects are the core of the work, there should still be enough room for us human researchers.

Let's talk!

Share your thoughts and the results of your AI experiments with us! Let’s keep riding the wave and carve masterpieces of AI-supported user research together.

Author

Iris K.

Iris has three years of experience as a user researcher. Her expertise lies in qualitative methods and establishing ResearchOps, especially insight management. At uintent, she is responsible for usability tests for the automotive industry, medical products, and various digital products.
