
Measuring the internal success of user research


I was recently asked, “How would you measure the success of user research?” I took a deep breath and started explaining our general UX metrics, such as task success, task completion, and the SUS, and then proudly dove into how these metrics could also impact important business KPIs (I talk about both in this article). I described how you could benchmark all of these metrics, measure them over time, and get a good sense of whether your UX and research are improving. I looked back at the person in front of me, smiling. I had brought business into user research, something I had only learned how to do in the past year. The next question really threw me: “But how do you measure whether or not user research is successful, in practice?”
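As a concrete aside for readers unfamiliar with the SUS mentioned above: it is scored with a fixed formula over ten 1–5 Likert responses, which makes it easy to benchmark over time. A minimal sketch (the example responses are invented):

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 responses.

    Odd-numbered items (1st, 3rd, ...) are positively worded and
    contribute (score - 1); even-numbered items are negatively worded
    and contribute (5 - score). The sum is scaled by 2.5, giving 0-100.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)
        for i, r in enumerate(responses)
    )
    return total * 2.5

# A hypothetical, strongly positive respondent:
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```

Averaging this score across participants each quarter is one way to turn a usability benchmark into a trend line.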

I racked my brain for an answer. Usually my UX metrics and business KPIs were enough, but this question went a step deeper. I laughed and used a response common to all researchers: “Well, that is a really great question.” I have answers for how to prove the value of user research, or how to get stakeholder buy-in, but this was essentially about benchmarking and measuring the practice of user research itself within an organization.

What questions could we be asking?

The question threw me into a bit of a frenzy, researching what others may have done to measure the success of user research in an organization. I typed many search queries into Google and was met with similar responses: time on task, task success, revenue! Or the NPS. This led me to form several questions (which I have not yet answered):

  1. What does it mean to have a “successful” user research practice in an organization?
  2. How do you measure whether or not user research is successful across an organization?
  3. Can you quantify the impact of user research at an organization?

From those who have previously posed the question of quantifying a successful user research practice in an organization, there were a few suggestions:

  • Presenting and showing how X research insight led to Y decision and Z design change, which positively impacted task success, SUS, time on task, etc.
  • Asking internal stakeholders how impactful they believed a research initiative or outcome was (via a survey)
  • Building visualizations based on research that stakeholders had asked for (personas, journey maps, etc.)

While these ideas make sense, I was looking for something more concrete. I brainstormed some of the following:

  • Number of beta tests run to try out new and innovative features, based on research
  • Number of usability tests run
  • Number of research requests from internal stakeholders (product managers, designers, developers, customer experience, marketing, etc.)
  • Internal stakeholders using persona names in user stories or to communicate with each other
  • “Important” features with low usage analytics improving (lower bounce rate, less time on page — if applicable)
  • Killing off features that users do not need
  • Decrease in customer support questions about functionality

These are all specific to the company, the particular goals a team may have, the product, and the users. They are certainly not the answer to every problem, but they can, hopefully, open us up to more discussion on how user research impacts an organization. Ideally, we move towards a way to quantify our impact beyond revenue or time on task.

One closing thought: it might not always be the impact of user research that needs to be measured, but how effectively team members are using the insights. On that front, I am still brainstorming. I would love to hear more about how others have tackled, or are thinking about, this idea!

