Rewriting and prioritizing user research questions
Your stakeholders have 99 questions, how to prioritize them ain't one
👋🏻 Hi, this is Nikki with a 🔒subscriber-only 🔒 article from User Research Academy. In every article, I cover in-depth topics on how to conduct user research, grow in your career, and fall in love with the craft of user research again.
I remember a time when stakeholders started to get excited about user research. It was an interesting switch for me — I went from constantly checking in to identify user research projects within teams to colleagues coming to me with research project ideas in hand.
It👏🏻was👏🏻awesome👏🏻
I felt like research was exploding. I felt like I finally had a say. I felt like I finally had power.
But “with great power comes great responsibility” (Source: Uncle Ben + others).
And I quickly realized that these research projects, as exciting as they were, were starting to overwhelm me. It wasn’t necessarily the number of projects (that would come later) but rather the number of questions people had within each one.
The first time I encountered this was at my job at a social media management company. One of my stakeholders had a concept in mind that they wanted to test. We had heard several times in previous research that the analytics on our platform were not meeting users’ expectations and needs. In fact, they fell short in several key areas.
Some of the key pain points highlighted in previous research included:
We did not provide sufficient engagement analytics for our clients, which kept them from making data-driven decisions
Many clients were asking account managers for manual reports because the platform didn’t provide enough metrics and data for them to compare
Our existing metrics lacked context and weren’t useful or reliable enough for clients to base decisions on
These were some pretty big flaws, leaving analytics an underutilized part of the platform and, ultimately, creating more work for both customers and our account managers.
So, with that in mind, my stakeholder came to me with a concept based on this previous research. I was thrilled. Not only had they listened to previous research, but they had used it as a jumping-off point for a concept! Hurrah!
And then I looked at the list of questions this stakeholder had that they wanted answered within the research project:
Do people understand the concept?
Do people like the concept?
Do people perceive our recommendations as trustworthy?
What types of comparison timelines do people prefer when it comes to analytics?
What kind of engagement metrics are most important to see?
How do people perceive the difference between engagement and interaction metrics?
Can people use the concept?
Would they like to use the concept to try it out?
Are people annoyed when they have to open a new window to compare data?
Is it clear how people navigate through the concept to get a monthly report?
😱😱😱😱😱😱😱
Not only were these a whole lotta questions, but many of them were the kind that qualitative user research just isn’t built to answer. There was no way we could answer the majority of these in a 60-minute concept test, let alone all of them.
I went back to the stakeholder, terrified that I would disappoint them. I had just gotten the research ball rolling, and the last thing I wanted to do was say no to a research project or tell them that I couldn’t answer these types of questions.
Since I was still early in my career, I had a tough time rewriting and narrowing down the scope of the questions. We went into the concept test with way too many yes/no and preference questions to answer.
This was one of the first projects that had landed on my desk from a stakeholder, and the results were a bit disappointing. Because the small sample sizes in qualitative user research aren’t ideal for answering yes/no questions (all the “do” and “are” questions), the findings didn’t have much impact.
Saying, “8 out of 12 people understood the concept,” was not powerful.
Similarly, saying “7 out of 12 people liked the concept” did not tell us anything. Many of the stares I got during my report said, “So what?” or “What now?”
I was gutted (my new British slang). That wasn’t the first or the last time I received a research project with a slew of questions that were either impossible to answer with user research or way too broad in scope.
Over time, I developed a mechanism to help me prioritize and rewrite research questions, which I’m going to share with you.