User research isn’t black & white
And how to navigate the grey area.
![](https://cdn-images-1.medium.com/max/800/1*uEmwKAPFANf4HtczrjdIuQ.gif)
Recently, I was thinking back to when I started my career in user research. I was a Google machine. I looked up and read everything I could about the field. What I found most prevalent were the “User Research Best Practices” lists. There were many of them floating around the internet, but they all had something in common. They all told us what not to do.
I sat there, with my head full of the knowledge of what not to do in a research session, but didn’t have nearly as much information about what I should be doing. How was I supposed to navigate a messy situation? The negative reinforcement was doing nothing to move me forward, but instead creating an air of fear that I would mess up and get fired for ruining a company’s strategy.
Still, after many years in the industry, I continuously come up against this concept of all the things we are doing wrong in the field. The thing is, I agree with them. It isn’t as though I believe we should be leading our users and asking future-based, biased questions during our interview sessions. However, the world is rarely so black and white, and companies are rarely so easy to navigate with a list of best practices. Oftentimes these best practices go out the window, given a particular stakeholder, situation, or uncertainty about how to tackle something. So, instead of speaking in such black-and-white terms, let’s acknowledge and embrace all the grey in this world of user research.
Things we are doing wrong (or the deadly sins):
- Asking future-based questions
- Asking yes/no questions
- Leading the user with biased questioning
- Ignoring what the user is doing and placing too much importance on their words
- Asking double-barreled questions
- Not being open to the user’s answers
- Interrupting participants
- Speaking too much during the interview/filling the silences
- Validating concepts from upper-management
For real though, maybe I shouldn’t be honest about this, but sometimes I commit these sins. Sometimes I ask yes/no questions to get a conversation started. Sometimes, in a moment, I ask a biased question and do my best to backtrack. I don’t think it is realistic to juggle all of these points and have a natural conversation at the same time. In addition, some of these best practices are extremely vague and don’t offer much guidance in the “correct” direction.
The problem I have with these best practices
Again, so I don’t seem like a crazy user researcher, I do think we should avoid pitfalls and use the best practices. However, many of these best practices are constructed as if we live in a perfect world of black and white. Very rarely are situations, especially those in user research, black and white. In fact, they are largely grey. And that is where I struggle with these lists the most.
User research isn’t, and will probably never be an exact science. We can’t always be perfect, and flawlessly execute a research session. We are humans, not robots, and I think that is 100% fine. We bring a level of empathy and perspective that would otherwise get lost.
Maybe some people call this bias, but I think it allows us to interpret research in a human and empathetic way.
Since the field of user research is so new and often misunderstood, we can get backed into a corner by stakeholders who want specific questions answered, or who want specific answers to those questions. Rarely are we working in an ideal environment for the best user research practices.
The thing is, conversations are ~~hard~~ nearly impossible to predict. Trying to construct the perfect interview may end up making participants feel uncomfortable since it could feel very contrived. Either way, we always run risks. Risks are inherent when you bring two people together to unravel thoughts or perceptions.
An example: subscription box
Say you are working for a tech company that specializes in selling computer parts. You have identified that a huge portion of the user base is gamers, who are constantly modifying their laptops or building new computers for themselves and their friends.
With this knowledge, stakeholders want to start a monthly subscription box with cool computer parts and coupons for various popular games. They essentially want you to validate this hypothesis so they can push the product out. In plain terms: they want to roll out this product as quickly as possible, without pushback.
In this scenario, you are faced with the following challenges:
- Managing stakeholders’ expectations
- Asking about potential future-based behavior
- Predicting whether or not a user would buy something
- Forcing the user to think about a specific solution (subscription box)
- Asking potential customers about the price they would pay for such a service
- Getting a yes/no answer to interest in this specific solution
That list covers much of what we, as researchers, “should not” do. But these scenarios are our day-to-day reality, and they happen far more frequently than ideal situations.
So…what can we do in a situation like this? Here is my approach:
1. Set expectations for stakeholders on what user research can do, and what it cannot do. I make it clear that user research is not used to validate what we want, but to explore whether or not a hypothesis is valid. We might not hear what we want to hear. Warning: they may not listen
2. Recruit current users who have subscribed to a recurring subscription box in the past. If you can’t find any, that may be a sign that this is not for your target audience. If you must run the study anyway, find users who know people who have subscribed to a recurring subscription box. If even that isn’t possible, fall back to recruiting current users regardless of their subscription experience
3. If you are, in fact, able to get users who have subscribed to a recurring box in the past, fantastic! Your job just got easier by mitigating the problem of asking about future behavior. In this case, the best way to predict future behavior is through past behavior. Whenever I am trying to anticipate whether a user will or won’t use a certain product, I ask them about their past experiences with something similar. I ask what they enjoyed about a similar product, and what they would change. This gives a good indication of whether or not we are going in the right direction
4. If you are not able to get users who have subscribed to a recurring box, you are in a bit more of a tough spot. For this, I try to relate the question to something similar. Have they signed up for any recurring service? I try to mention common services, such as a gaming subscription (ex: World of Warcraft), Netflix or Amazon Prime. This way I can at least get a pulse on what they think of the above, which may give me some insight into how to structure the service. I would then ask them if they have bought any computer part packages (or whatever we would offer) for themselves or others. This would give me an idea of what they were buying together
5. If I could find no current, churned or potential users who did any of the above, well, then I would caution against the idea in general. I would tell stakeholders I don’t believe there is a market fit in this particular area, with this particular audience. If they forced me to do the research (which some have), I would have to resort to asking future-based questions. At the end of the day, they are paying me. When you get backed into a corner, and no one heeds your warnings, there is little more you can do. I have refused to do the research, and it resulted in my freelance gig ending early. Spoiler: the product I was asked to do research on failed.
6. After asking users about their past behavior (assuming we found users who subscribed to recurring boxes), I then pry into as many details as possible:
- What is the box like?
- Do you know anyone else who receives this box? Have you ever given it as a gift?
- How often does it come?
- What are the contents? What is the quality of the contents?
- What do you love about it? What do you hate about it?
- How would you improve it?
- Are you still subscribed to it? Why/why not?
- What would make you stop subscribing to the box? What would make you a lifelong customer? (Yes, future-based, but still interesting to think about)
- What is the price? How do you feel about this price?
This information will give me insight into how we could structure a potential subscription box that would appeal to this audience. Of course, user research doesn’t give a straight answer and doesn’t guarantee success, but it gives us a direction to go in.
7. Asking about prices is always difficult and, in the example I mentioned, I ask for the price of a comparable product. This is the absolute best I can do. Even if I am relating the computer subscription box to Amazon Prime, it is better than flat-out asking someone how much they would pay for a hypothetical subscription box. Even if the two services aren’t equal, it gives me an idea of how they think about price and value for a service
8. I must admit, and maybe this is a sin, but I do float the actual idea by users at the end of an interview. I like to gauge their initial reaction when I mention an idea, such as the computer subscription box. I don’t base my entire interview on how they react to the idea, but it can give a decent indication of interest level. If you’re lucky, you get some really honest users, although most aren’t because of a variety of psychological phenomena
9. Once I mention the idea to users, sometimes I will ask point-blank: would you use something like this? Would you subscribe to something like this? What would you expect it to be like? What would it contain? How much would you pay for it? In the end, I allow myself to ask all the wrong questions because I spent fifty-five out of sixty minutes doing my best to ask all the right ones. Just because they aren’t the best questions doesn’t mean they can’t give us any information. Again, I don’t base my research findings on these last five minutes, but I try to dig as much as I can out of the users, even if it is hypothetical
10. If I found very little interest in subscription boxes, both past ones and the potential new one, I have to present a “failed” idea to stakeholders. Whenever I do this, there are two ways it can go: they listen or they don’t. In any case, I never just present that the project failed and that there is no way such an idea would ever work. I present alternative routes or ways to improve the proposed offering. At least, with this, there is a higher likelihood the stakeholders will listen to some insights and make the product as good as possible
Ultimately, what we are trying to do when asking about a future product is to understand the user’s mental models. How do they currently think about subscription boxes, specifically those for computer parts? How do subscription boxes fit into their day-to-day life? How do they enhance it? Or take away from it?
This is our goal, and we must remember it. Nowhere in that goal does it say you must never utter yes/no questions, or that you must never try to predict future behavior.
I would love to hear more about what you would do in this situation — my perspective only goes so far and I am excited to hear how others would tackle this.
Let’s adapt our research situations to realistic guidelines
We’ve established that user research tends to sit in a grey area, so what can we do in the future? The purpose of this post is to make sure we all know, especially those just beginning in the research field, that things won’t always be perfect. You may have to do research you don’t believe in, and I am sure you will have to deliver results that are the direct opposite of what stakeholders want to believe.
Instead of attempting to force user research to fit neatly into a chaotic environment, or trying to make an inexact science perfect, let’s allow user research to continue being what it is: human-centered. We don’t need user research to be perfect and we don’t need to always be the perfect researchers. What we do need is to remember our goal, and use whatever we can to get to that goal. Ultimately, it is about the users, not the practice.
The impostor UX’er
The Syndrome is Real
![](https://cdn-images-1.medium.com/max/800/1*tI12lTja7v6HnWbW7fuhEg.jpeg)
What is Impostor Syndrome?
Impostor syndrome, otherwise known as impostor phenomenon, impostorism, fraud syndrome, or impostor experience (fun, light names), is a “psychological pattern in which an individual doubts their accomplishments, and has a persistent internalized fear of being exposed as a ‘fraud,’ despite external evidence of their knowledge.”
What does this feel like? Essentially, impostor syndrome feels the same way as it sounds and is defined: in many situations, you are constantly questioning whether you have enough knowledge to successfully accomplish what is needed for your job (or even a hobby). Unsurprisingly, it is a terrible feeling. You go through days and meetings filled with uncertainty, unsure if you can live up to the expectations of others, or of yourself. You don’t know if you can properly fulfill your roles and responsibilities. Sometimes this can seep into your personal life, and you question if you can even hold an interesting conversation with others, or if they just find you plain boring.
Great news though: impostor syndrome is super common! So, hi, impostors! You aren’t alone! About 80% of people will encounter this feeling at least once in their lifetime. Hurrah! However, this is not an emotion that can necessarily be solved by “strength in numbers.” In fact, it is generally a feeling you keep to yourself, or only share with the people closest to you, who then proceed to say: “but you’re so smart, that isn’t true at all!” Don’t get me wrong, that is really nice, but it is also not the solution.
What does this have to do with UX?
The field of user experience is extremely dynamic and ever-changing. It is an exciting and thrilling area to be in right now; every day it feels like there is a new trend to try or read about. The options and potential are limitless. This same fact is also terrifying and exhausting, especially for those newer in the field (and, by new, I mean less than 5 years in).
There are a lot of innovative ideas being born into the field of user experience every week, every day actually. There are a plethora of ways to approach a problem (which, to be fair, is essentially the point of UX), hundreds of methodologies, thousands of books to read and millions of opinions to listen to. There are countless articles on best practices for design patterns and research processes (some of which I have written).
With this constant overload of information, which no one can possibly keep up with in full, it is no wonder we trip over our sentences and second-guess ourselves. Someone may have recently read an article you haven’t had a chance to glance at yet, one that completely debunks your rationale behind a research methodology or design decision. For every UX opinion or trend, you can find an equal and opposing view. It’s Isaac Newton’s third law of UX.
As UX’ers, we are being hit with many contradicting views: use NPS, NPS is useless; you only need to conduct 7 1x1 interviews, you need over 25 interviews for statistical significance; qualitative data versus quantitative data; autoplay audio media to get user’s attention, don’t ever autoplay audio media; hamburger menus for minimalism, hamburger menus have been eaten. UX trends are as fickle as a child.
What I’m trying to say is that there is no perfect answer or solution when it comes to UX. There are many different options to try, and one trend may be out of style as quickly as avocado toast (have you heard of sweet potato toast?) or bell-bottom jeans, so don’t grasp too tightly. Go through the life of a UX’er with an open mind and a willingness to try and fail; that is what we desperately need, more so than endless lists of UX trends. If you do it “wrong,” do it again. Try not to doubt yourself. Many of us are feeling like impostors while swimming in the same fish bowl, attempting to puzzle out the best approach without drowning in this influx of information.
Keep your head up and follow the golden rule of UX: iterate, fail, iterate, fail, iterate, win for a bit, iterate, fail, iterate, win a little bit more. By doing this, you are absolutely not an impostor.