How can a song parodying the dance moves of ravers help you involve your team in your research planning?
I’ve been part of the research team at NHS Digital COVID-19 testing since the beginning of May 2020. The UR testing team has conducted over 231 rounds of research over that time. I’ve primarily worked on lateral flow test self-report services, part of which is the Report a COVID-19 Test Result service for mass asymptomatic testing.
It’s really important to us that we help teams understand the importance of meeting user needs, and how they can use research insights to drive service improvements. One of the ways I like to do this is to engage the team in pre-research planning discussions.
Before we start conducting a round of user research, we will create a research plan and discussion guide template to help shape and guide the research. This used to be something I would do myself in isolation, after having discussed the requirements with the team. However, I’ve found it works much better to create the research plan together.
One of the ways I did this most recently was to create a Miro template that I decided to affectionately name “Big Fish, Little Fish, Cardboard Box”. You may argue that this is a tenuous link at best, but it made my little heart happy, and small moments of happiness at work are not to be derided.
Why create your research plan with the product team?
Research plans often feel like stand-alone artefacts for the use of the user researcher or research team only. They tell us what we’re doing, how we’re going to do it, and what we’re going to ask the participants. Why would we need to consult with anyone else? However, this underestimates their value.
If you engage with the product team when planning the research you can:
- Ensure both they, and you, understand the aims of the proposed research and what the intended outcome is (aka: what impact will conducting this research have on your product or service design?)
- Manage expectations — what can you reasonably get done in the time frame? What sorts of insights can you expect to generate?
- Agree on the best methodology for the research
- Identify “already knowns” and “unknowns”
- Discuss limitations to the research
I’ve been in playbacks where someone has said “this isn’t what I thought we were doing” or “why weren’t the participants asked X?”. By starting your planning with one of these sessions, you can avoid these situations.
Instructions for Big Fish, Little Fish, Cardboard Box
First up, kudos (and probably royalties from somewhere) go to Nik Martin, who originally coined the catchy tune for which this Miro template is named.
I appreciate it doesn’t fit exactly — but you get the gist.
1. Cardboard Box
This is where you ask people to generate what you already know. What have we already established about our user’s behaviours and actions? What do our analytics show? What about previous rounds of research? What do we have evidence for?
2. Little Fish
Now you generate the gaps. What questions are still unanswered? What do we have little or no evidence for? You will probably have a fair few of these little fish questions, which is fine.
3. Big Fish
Ask your team to generate one sticky with what they consider to be “the most crucial question” that needs answering through this research. This is the “big fish”: the main thing we need to come away from the research with answers, or at least hypotheses, about. If they try to use more than one post-it, you’re gonna need a bigger boat (or, what I would do: make them throw one back into the sea for another time).
4. Assumptions and Hypotheses
Now, admittedly this doesn’t fit neatly into the Big Fish, Little Fish metaphor. However, I’ve included it because it’s important to capture the assumptions (the things that are accepted as true, or at least plausible) the team holds, so you can determine whether you can correctly draw conclusions from the results of your analysis. It also allows you to create hypotheses off the back of this exercise that you can then go on to test. Hypothesis-driven design, anyone?
If you try using this method and it works well, I’d love to know!