Write impactful user research insights
Empower and encourage your team to make the best decisions possible
👋 Hey, Nikki here! Welcome to this month’s ✨ free article ✨ of User Research Academy. Three times a month, I share an article with super concrete tips and examples on user research methods, approaches, careers, or situations.
If you want to see everything I post, subscribe below!
User research is a support system. With that support, we help our teams:
Mitigate risky decisions
Highlight the most important pain points and unmet needs
Narrow the scope of possible solutions for a problem or unmet need
Make more user-centric decisions
Generate empathy and curiosity toward users
If you think about your user research as a product, those are the goals you try to achieve with your research studies. You are attempting to help teams make less risky, more user-centric decisions and also alleviate the pain point of trying to create meaningful products without a user’s perspective.
Our research is meant to boost our teams, empower them, and enable them to make the best decisions they can, given the information in front of them. This is the crux of user research and, often, one of the most important parts of our job.
Earlier in my career, I struggled so much with writing insights. I spent more hours Googling what insights were than writing them (and trust me, I spent many, many hours writing insights). They were an enigma, something that was meant to be magical, motivating, realistic, relevant, and concise.
It seemed nothing I wrote could come close to what everyone called an “actionable insight” (I hate the word actionable, by the way, because it is such a vague word that I tripped over it for years). Yet I also couldn’t find any concrete examples of insights, since most are kept locked away as confidential. The only real examples I found were ones I didn’t want to replicate. And while it’s helpful to know what not to do, that doesn’t fully guide you toward best practices.
Similar to my first personas and journey maps, my insights fell flat. They didn’t inspire great action and help teams make better decisions. They kind of just relayed the facts of the situation with subjective, vague language.
And, repeatedly, I was disappointed in my work. I felt like I wasn’t fulfilling the potential of my role or doing what user researchers are meant to do. After some time, I decided to dive deeper into researching user research insights and to create something that felt good to me and that helped my teams in all the ways I strived to.
What is a user research insight?
Because it’s more interesting and fun, let’s start with defining all the things that a user research insight isn’t. There are a lot of terms floating out there that seem to get lumped together or used interchangeably with the word insight. Let’s take a closer look at these words and what they mean, independent of the word “insight.”
An observation. An observation, on its own, is not an insight because it cannot tell us why a person is acting in that way. It is simply something you observed happen without additional context surrounding it.
Quantitative data trends. Data trends tell you a lot about what actions users are taking on a product and can also highlight important trends in behaviors, as well as metrics. However, quantitative data doesn’t help explain why something is happening.
A fact. When we simply state a fact, such as “users have a lot to juggle at their jobs” or “participant one has poor eyesight,” we aren’t doing our projects any justice. Facts are often well-known and lack context, and that context is hugely important to insights.
A bug. Something wrong with the product isn’t an insight, but rather a bug that needs to be fixed. A bug is very product-centric, which is different from insights.
A finding. If you have information that will solve something today but won't have a significant impact in the future, that is most likely a finding, not an insight. A finding typically doesn’t have a big consequence (we’ll define that term later) and is more on the shallow side. You typically have a lot of findings in evaluative research, such as usability tests.
A preference or wish. When a participant says, "I would love this feature..." you can't use this as an insight. Dig deeper into why they want the particular feature to understand the outcome they desire. This outcome is the underlying motivation and is much more valuable (and closer to an insight) than a feature wish.
An opinion. Opinions are trickier than the above. When a participant expresses their opinion on something, that isn’t necessarily an insight. If a participant says, “Apple products are much better than Microsoft products,” that doesn’t really tell us much, does it? Similar to preferences and wishes, we need to dig deeper to expose the root of this opinion for it to get into the realm of an insight.
To demonstrate this a bit more clearly, let’s take a look at some of the insights I’ve written in the past that are less than ideal and break them down into these categories. This was when I was working at a B2B hospitality company, and we were also exploring residential properties as potential customers. Here is a screenshot from a report I wrote way back when:
Yes, I titled these as “insights.” Feel free to laugh — they make me laugh too. Or, if these look a lot like your insights, know that you aren’t alone! Writing insights is super hard work, and it takes a lot of practice. So, let’s rip my insights apart.
Example one
“Filtering by type is good (recurring, maintenance, appliances, etc) because they have a long list of requests.”
Now, the team obviously had more context than everyone reading this right now; however, this is distinctly not an insight. It is an example of an opinion and a fact. We have several problems with the statement above:
It gives absolutely no context surrounding the filtering, when people use it, why they use it, or any problems with the filtering
It uses subjective language like “good” — what does “good” mean to our users?
There is little understanding about what is in that “long list of requests” or what that might mean to users
It is extremely product- and feature-centered, rather than user-centered. It talks more about the feature than the people using it
Example two
“There is no immediate need for recurrent tasks, but good if allocated as ‘future’ tasks.”
Again, this is not at all an insight, and rather a fact, as it lacks:
Any sort of context surrounding what a recurrent or future task means to users and how this kind of task fits into their lives
Understanding of why and how people currently use these tasks and any pain points or unmet needs behind the concepts
User-centricity and, again, just mentions the feature of tasks rather than anything about the user
Clarity about the concept or what “good” means
Example three
“Quick search is very important for easy access to resident profiles.”
Again, this is a fact and a bit of an opinion. For such an important feature for our users, I say nothing meaningful about it in this statement. In fact, I’ve used the most subjective and vague language possible by calling it “important” and leaning on the phrase “easy access.” There is absolutely no clarity or context that answers:
Why it is important for users to have easy access to resident profiles
What easy access means to them
How they currently use it
What problems or unmet needs might surround the concept of a quick search
What resident profiles are and why they are important
Let’s do what we constantly tell our teams to do and look at our research as a product and our stakeholders as users of the product.
If my team were looking at these three statements above, what meaningful action would they be able to take? How am I mitigating their risk? How am I narrowing the scope of solutions or helping them make their decisions more user-centric?
How have these “insights” helped them?
Spoiler: they haven’t.
If I were tasked with improving or creating a product based on this information, I would feel super lost. And, believe it or not, my teams still felt lost after my research projects, which, again, made me feel like I wasn’t properly doing my job.
While user research isn’t a magical answer to all our problems, it should still give my teams the support they need to mitigate risk and make more confident decisions.
What makes a good user research insight?
Let’s start with a definition of a user research insight:
An insight is a nugget of truth about human behavior that pushes us to challenge our preconceived notions about how people act or perceive the world. It reveals the underlying motivations behind behavior and helps us understand what happened, why it happened, and what the potential consequence is of not addressing it.
There are a million different ways we could define this, so please keep in mind that this is my definition and feel free to tweak and redefine it to your context!
We have a few different components in this definition that we can break down further:
A truth about human behavior
Pushes us to challenge our preconceived notions
Reveals underlying motivations
Helps us understand what and why it happened
Highlights potential consequences
If we revisit the examples from above, they are sorely lacking in all of these components. In fact, I don’t think any of them contains even one part of this definition of an insight.
Looking at this list of things included in an insight might seem scary because, well, there’s a lot in there. When I first created this definition and included these components, I scared myself. How was I ever going to write something that ticked all these boxes?
Before I go down that rabbit hole of fear and anxiety, let me quickly talk about how I identified these components: user research on my colleagues. I went back to the basics and looked at my insights through the frame of reference of my colleagues, asking them:
What kind of information would they need to know to make their jobs easier?
What could I include in my insights that would make them “actionable?”
What was missing from my current insights?
What would make my colleagues feel more confident about their decision-making?
I conducted lots of stakeholder interviews on this topic and even went outside my organization to build a broader understanding. During the interviews, I also had stakeholders share with me what they categorized as helpful versus unhelpful insights (to the best of their ability, as some couldn’t share this kind of information) and explain why.
By having these deep conversations with the stakeholders who use user research insights (e.g., designers, product managers, developers), I learned so much about what information directly impacts them. Synthesizing the results, I was able to create a definition built around the components participants mentioned most frequently.
Once I had the definition, alongside the examples, I went ahead and started writing insights very differently.
How to write an impactful user research insight
Before we dive into the actual craft of writing user research insights, let’s talk through some prework that may be helpful for you to do. Because environments, contexts, and stakeholder needs can vary so much between organizations, I highly recommend doing this prework as it will set you up for success.
Prework
Interviewing your stakeholders
Similar to what I did above, I would highly recommend interviewing your stakeholders about insights in general, as well as previous insights you’ve sent to them. I recommend doing this with at least five different stakeholders, if you can, up to about 10 — I found I hit diminishing returns around 10 stakeholders.
As I mentioned, during these stakeholder interviews, you can ask them questions on how they think about insights, how they define good insights, and what “actionable” or “empowering” insights mean to them.
I would also encourage you to go through previous reports with them and have them highlight the insights that were helpful and explain why, as well as the insights that were unhelpful and why they fell flat.
To be honest, getting stakeholders’ feedback on my insights was sometimes a little tough, and I had to take a huge step back not to take the feedback personally. I recommend getting in a really positive headspace and remembering that this is about you improving and helping to support your teams even better.
Using this information, I’d recommend pulling some themes of what good insights mean to your team and using that to create a definition and model.
Setting up a satisfaction survey
Another really important aspect of getting these insights right is to iterate and improve upon them over time. There’s typically not a one-size-fits-all approach to writing insights; different teams have different needs. I remember one team that loved the visuals I created, while another thrived on more story-based information and a third needed highlight clips.
And sometimes those needs change over time. Maybe you get new team members or priorities shift. Regardless, it is essential to track this information over time, not only to improve but also to have concrete data on how satisfied your stakeholders are with your insights.
For this, I put my user research hat back on and treated my stakeholders as users. I typically used surveys to track impact over time and to get continuous feedback from users, so why not do that with my stakeholders as well? I set up a satisfaction survey that I sent to each stakeholder after every project, asking about different aspects of the project and how I could improve.
The survey results were incredibly helpful in understanding what was working, what was lacking, and how I could iterate on my current process. If you are going to dive super deep into your insight-writing craft, I recommend setting up a satisfaction survey based on your insights that you can send after each project. You can include questions like:
How clear or confusing are the research insights?
5-point scale, 1 = very clear, 5 = very confusing
How satisfied or dissatisfied are you with the research insights?
5-point scale, 1 = very satisfied, 5 = very dissatisfied
How actionable or unactionable do you find the research insights?
5-point scale, 1 = very actionable, 5 = not at all actionable
How do you feel about the insights from the research?
Open field
What can we improve when it comes to research insights?
Open field
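If you want to track these scores across projects without much overhead, even a tiny script will do. Below is a minimal sketch in Python, assuming you export the closed-question responses as a list of records; the field names and example numbers are purely illustrative, not from any specific survey tool.

```python
from statistics import mean

# Hypothetical export of closed-question responses for one project,
# using the 5-point scales above (1 = most positive, 5 = most negative).
responses = [
    {"clarity": 2, "satisfaction": 1, "actionability": 2},
    {"clarity": 1, "satisfaction": 2, "actionability": 3},
    {"clarity": 2, "satisfaction": 2, "actionability": 2},
]

def summarize(responses):
    """Average each closed question so projects can be compared over time."""
    questions = responses[0].keys()
    return {q: round(mean(r[q] for r in responses), 2) for q in questions}

print(summarize(responses))
# {'clarity': 1.67, 'satisfaction': 1.67, 'actionability': 2.33}
```

Lower numbers are better on these scales, so if a project’s averages creep upward, that’s a signal to dig into the open-field answers and iterate.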
Step-by-step guide on writing user research insights
Now that we’ve covered the prework, let’s dive into how to structure and write these beautiful user research insights. Again, this is my process, so pick and choose the parts that are most applicable and helpful for you, and tweak anything necessary!
Identifying insights
First, it’s important to identify insights. As we saw above, a lot of things are not insights. So, how do we identify when we have a beautifully rich insight versus something like a finding or preference?
I generally look at four different aspects of the data to identify insights and ask questions surrounding those aspects:
A discovery about human behavior, and the underlying motivations behind that behavior. Does what you found give you a new understanding of attitudes, pain points, needs, or the context of users (inside and outside your product/service)?
Information that challenges what we believe about users and how they exist in the world. Does what you found negate or change the way you have viewed users in the past?
Knowledge that reveals fundamental principles that drive us toward seeing users in a new way. Does what you found help you understand the user's mental models on how the world should work?
Surprising information that makes you say, “Wow, that is so interesting, I had NO idea!” (Think Owen Wilson’s “wow”). Does what you found surprise you? Was it unexpected?
When you’ve ticked one (or multiple) of these boxes, you have uncovered an insight! This data is deep and reveals something profound about users (not necessarily new, but sometimes new) that can help your team serve them better.
How to write an insight
Now let’s dive into actually writing these wonderful nuggets of information. I generally think of insights as including three major components:
A key learning. The key learning may be an unexpected attitude, behavior, need, motivation, mental model, or pain point. It’s the thing that made you say, “Wow, that’s interesting,” and is the major thing you have learned from that piece of data.
The why. The why describes the motivation or the “point” behind the attitude, behavior, need, motivation, mental model, or pain point. It’s the answer the user gave you when you dug deeper during your interview into why they were feeling a certain way or operating with a particular mental model.
The consequence. This is the bit that is left out most frequently from insights and the part that is, to me, the most actionable. What does this particular insight lead to, or what impact does it have on your product/service? Explain what will happen if you don't act on this insight.
If you’re having trouble filling out this information, it might indicate to you that you have a finding rather than an insight. By trying to write insights, we can sometimes discover the data is actually too shallow for a full insight, but it can still make for a great finding — remember that findings aren’t bad at all.
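One trick that helps me keep these three components honest is to template them before wordsmithing the final paragraph. Here’s a minimal sketch of what that template could look like in Python; the structure and field names are purely my own illustration, not a standard format. If you can’t fill in the why or the consequence, you’re probably holding a finding.

```python
from dataclasses import dataclass, field

@dataclass
class DraftInsight:
    """A draft insight: all three components should be filled in before reporting."""
    key_learning: str   # the unexpected attitude, behavior, need, or pain point
    why: str            # the underlying motivation behind it
    consequence: str    # what happens if the team doesn't act on it
    supporting_quotes: list[str] = field(default_factory=list)  # raw evidence

    def is_complete(self) -> bool:
        # Missing components suggest the data is a finding, not an insight.
        return all(part.strip() for part in (self.key_learning, self.why, self.consequence))
```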
Since insight writing can feel like some mysterious, veiled process, let’s take a look at some concrete examples to illustrate how to build these from raw data.
Example one
When I was working at a travel company, we were doing generative research on how people planned their trips and on the concept of purchasing “package trips” (where you buy the transport, hotel, and activities all in one), which we were interested in exploring as a feature.
I saw a major theme come through in the pain points around the concept of “package trips,” where participants were hugely concerned about what would happen if something went wrong with these trips. I looked through the data and found various quotes like:
“I’m just not sure what happens if something goes wrong with the package - how do I know who to reach out to and fix it? It’s still just as stressful…” -P2
“Will the company charge a bunch if something goes wrong? And do I even reach out to them? What if they tell me to contact the other company directly?” -P3
“What if a flight gets canceled? Who is going to figure that out for me, and what will they charge? It makes me want just to plan my own trip.” -P5
There were quite a few more people who expressed similar concerns. I had assumed people loved package deals because they were cheaper and easier. I hadn’t thought much about the consequences or problems associated with them. As I looked through the data, I found this fell into the following categories:
Surprising information
Knowledge that reveals fundamental principles that drive us toward seeing users in a new way
Information that challenges what we believe about users and how they exist in the world
So, I went through my process of building the insight using the three components I mentioned above:
The key learning (what I learned from the data):
People are concerned about the policies and consequences of a package deal getting changed or canceled, as they have no idea who would be responsible for helping them and if they’d get charged.
The why (why this is happening):
Many people get stressed about things going wrong on a trip because of painful past experiences. With package deals, a lot is out of their control: many different components are bundled together with no clear indication of who is “responsible.”
The consequence (what happens if we don’t act on it):
People might not use a package deal service because the possibility of something going wrong and not having control over the situation might cause more stress than actually booking their travel.
The insight ended up looking like this:
People are concerned about the policies and consequences of a package deal, as the process is out of their control. Since many different components are put together in a package, there is no clear indication of who is “responsible” if something goes wrong. This can lead to more stress than booking their own travel, causing people to choose not to use a travel package service, especially from a third-party provider. We should consider other avenues for helping users with their pain points and unmet needs rather than a package deal service.
Within this insight, I clearly pointed out what was happening, gave context, and then highlighted the consequences if we were to move forward without heeding this insight, and what we should do instead.
Example two
I’m using the travel company example again because, well, the company went under, so I am able to share much more detail 😁
For this project, we were looking at the process our users went through when booking a ticket on our platform so that we could make meaningful improvements on the most painful experiences.
Again, while synthesizing my data, I found a major pain point. Prices for travel often change, and people can get quite frustrated by the price volatility and by trying to compare prices to get the best deal. Here are some of the quotes from this pain point:
“I have about fifty tabs open from various platforms comparing all these different prices at once. It’s so frustrating, and it feels like if I close one tab by mistake and open it again, the price somehow changes. I just don’t get it.” - P5
“I noticed the prices for travel change, but I honestly have no idea when they will change and why. It makes me not know when to book a ticket, and also I feel like I have to open so many different tabs to make sure I’m getting the best price. It gives me a lack of trust like, just tell me the best time to book a ticket and what others are charging.” - P8
“The variability in prices is frustrating. I wish there was some sort of alert when prices were going up or an easy way to compare them without having to go through every website. It makes me trust the platform less, like all you care about is taking my money.” - P12
Again, I went through my process of building the insight using the three components:
The key learning (what I learned from the data):
Many people experience a high degree of stress when trying to buy a ticket at the best price and often get frustrated because the prices seem to be extremely volatile, changing in ways they can’t understand.
The why (why this is happening):
Because of the constant price changes, people are opening multiple tabs or comparing many different platforms when looking for the best price. The entire process can be quite time-consuming and frustrating because there is a lack of transparency on when/why the prices for a particular trip will change.
The consequence (what happens if we don’t act on it):
Since our prices constantly change, and we give little indication of this, people might have a hard time trusting our platform, thinking we are trying to charge more than competitors. This could stop them from purchasing from us and could impact our retention rates and customer lifetime value if they decide to purchase elsewhere.
The insight ended up looking like this:
Many people experience a high degree of stress when trying to buy a ticket at the best price and often get frustrated because the prices seem to be extremely volatile, causing them to open multiple tabs or compare many different platforms simultaneously. The entire process and lack of transparency can be quite time-consuming and frustrating. Since our prices constantly change, and we give little indication of this, people might have a hard time trusting our platform, thinking we are trying to charge more than competitors. This could stop them from purchasing from us, impacting our retention rates and customer lifetime value if they decide to purchase elsewhere.
Example of a finding versus insight
Because it can sometimes be tough to distinguish between insights and findings (the line is fine), I wanted to give a quick example of a finding and how it could be transformed into an insight.
We sent quite a few discount codes to our customers for particular train tickets, and I found that people hugely struggled to find the discount code area, which meant they were really frustrated when they went through the whole process of finding a trip and then couldn’t use their code.
Here is that information as a finding:
5/10 people struggled to find the discount code area in the checkout form when purchasing their train tickets.
Why is this a finding versus an insight? It is much shallower than an insight, as it gives no context, no why, and no consequence; it only states what happened and how often.
If we wanted to turn this into an insight, we would need deeper information. Now, if you were running a usability test and didn’t get that deeper information, it is fine for this to stay as a finding. One change I would make, however, is adding a potential consequence if you can.
However, if you did have additional information, we could turn this into an insight:
5/10 people struggled to find the discount code area when purchasing their train tickets. It was incredibly frustrating for them to get all the way through the process that we triggered through the discount code email, only for them not to be able to find where to put the code. They spent some time clicking around to find it, but ultimately got annoyed and, since they couldn't apply the code we sent for the sale, they dropped off the website. The average order value during the sale was $50 (with the discount applied). Looking at this population, instead of making $500, we only made $250. We lost 50% of our revenue from this issue.
Within this, we have what happened, the context behind it, and a very solid consequence!
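As a side note, the revenue figure in that consequence is simple arithmetic, but it’s worth sanity-checking numbers like this before they go into a report. A quick sketch using the values from the example above (10 participants, $50 average order value, 5 drop-offs):

```python
participants = 10
drop_offs = 5            # people who couldn't find the discount code area
avg_order_value = 50     # USD, with the discount applied

potential_revenue = participants * avg_order_value              # $500
actual_revenue = (participants - drop_offs) * avg_order_value   # $250
lost_share = (potential_revenue - actual_revenue) / potential_revenue

print(f"Lost {lost_share:.0%} of potential revenue (${potential_revenue - actual_revenue})")
# Lost 50% of potential revenue ($250)
```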
As you can see, building an insight can take some intention, time, and thought, but it is well worth it to find a few of these deep, rich pieces of data because they can be incredibly helpful to your team.
I hope these examples have been helpful for you to model when building your own insights!
Remember, it’s okay not to have a million insights
Please keep in mind that not every study will have insights, and that is 10000% okay! Insights are rare, and they don’t just magically pop up with every study. In fact, I rarely get insights from evaluative research, and, if I do, it’s typically a fluke where I’ve gone off track and started straying into more generative questions. My insights tend to come from generative research, and, even in those super-deep studies, I may get a handful of them.
And I also want to say that findings, observations, facts, and bugs are all fantastic to report on and can be very helpful to the team. I try to have a mix of these outcomes in my report because, although insights are great, they generally need more thought to solve or ideate on, whereas findings or bugs can be quick fixes and low-hanging fruit.
Join my membership!
If you’re looking for even more content, a space to call home (a private community), and live sessions with me to answer all your deepest questions, check out my membership (you get all this content for free within the membership), as it might be a good fit for you!