Product management biases
How to do product management without cognitive biases?
The process of product management is susceptible to cognitive bias just like any other task we perform. But as product management leaders, we are at risk of taking the entire company down with us. So what can we do? How can we develop a product without bias? For starters, we need to recognize and accept the fact that even the most experienced product owner is biased. We can do that by getting to know the minefield and familiarizing ourselves with some of the cognitive biases we may not be aware of.
I wrote this post while reflecting on some of the mistakes I made over the last few years… and I’d like to start with data. We often think that by basing our decisions on data we are bypassing cognitive biases altogether, that somehow data is immune to bias. So let’s look at the data bias.
The very first thing any product management leader will tell you, if you ask them at 2 a.m. how they make product decisions, is that it’s all based on data. Analytics, real user behavior as measured by the system, is the foundation for all bias-free product decisions. But is it really bias-free? What if you launched an MVP and are trying to figure out what to do next to improve usage, conversion, or onboarding… is the data going to tell you that? Is the data coming in from your very early adopters really indicative of your target market? And what if this is a B2B product and your first few customers are freebies or FFF? Do you still believe it is a reliable statistical sample?
The data might help you detect major problematic areas of the lowest-common-denominator type, but it can hardly tell you what to do next. How do you prioritize features for the next sprint? How do you decide what to test and what to leave out? What will your ultimate target audience value enough to take the action you want them to? Data is susceptible to the congruence bias: we tend to directly test only the hypothesis in front of us, rather than search for and test alternative hypotheses. Did we really think of all the main use cases? Can we really A/B test all the combinations? Paul Green, a professor at the Wharton School of the University of Pennsylvania, thought otherwise when he developed conjoint analysis as a technique of mathematical psychology: building profiles made of feature combinations in order to understand the implicit valuations that influence decisions.
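To make the idea concrete, here is a minimal sketch of a main-effects conjoint estimate in Python. The product, its feature levels, and the respondent ratings are all hypothetical examples of mine, not from a real study; a real conjoint study would use a proper experimental design and regression, but the principle is the same: respondents rate whole profiles, and we then recover how much each individual feature level contributed.

```python
from itertools import product
from statistics import mean

# Hypothetical feature levels for an imaginary note-taking product.
features = {
    "price":   ["$5/mo", "$15/mo"],
    "sync":    ["no sync", "cloud sync"],
    "support": ["email", "live chat"],
}

# Full-factorial profiles: every combination of feature levels.
profiles = [dict(zip(features, combo)) for combo in product(*features.values())]

# Hypothetical respondent ratings (1-10), one per profile, in profile order.
ratings = [4, 5, 8, 9, 2, 3, 6, 7]

def part_worths(profiles, ratings):
    """Main-effects estimate: mean rating at each level minus the grand mean."""
    grand = mean(ratings)
    worths = {}
    for attr, levels in features.items():
        for level in levels:
            level_ratings = [r for p, r in zip(profiles, ratings) if p[attr] == level]
            worths[(attr, level)] = mean(level_ratings) - grand
    return worths

worths = part_worths(profiles, ratings)
for (attr, level), w in sorted(worths.items()):
    print(f"{attr:8s} {level:10s} {w:+.2f}")
```

With these made-up ratings, "cloud sync" carries the largest part-worth, which is exactly the kind of implicit valuation that raw usage analytics of a single shipped variant would never reveal.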
So whether you subscribe to this or another technique, if you want to overcome the data bias, you need to get your head out of the analytics data and talk to real, live users.
There are many ways to gather feedback and insights from users: from statistical surveys and tools; through deep-dive interviews that use techniques such as the 5 Whys to explore cause-and-effect relationships; to observing scripted tests or even real-life use cases. Using one or more of these methodologies reduces the chance of being biased towards the loudest or most memorable user, and makes us feel like we now know what users really want or need, even if it’s not manifested in the data. But do we really?
Did we overcome the confirmation bias? The innate human tendency to reaffirm what we believed to begin with. As the British economist Ronald Coase so beautifully put it: “If you torture the data long enough, it will confess to anything.” We don’t even notice, but when we design even the most statistical of surveys, we inject our own prejudice into the questions and structure. When we listen to users we tend to hear what we want, and users tend to voice what they think we want to hear. This is their social desirability bias messing with our confirmation bias (and in this case the two cognitive biases don’t cancel each other out; they actually reinforce one another…).
So if our own personal beliefs bias our observations, perhaps we should consult with an outsider, a product expert, an advisory board member, our serial entrepreneur founder, crowd wisdom… Someone who knows what they are doing, who has dealt with this in the past, who has a strong opinion that will remove all doubt.
Product management can be a very lonely job. True, product owners work with a great many people: developers who seek their guidance and prioritization, customers and sales reps who feel strongly about their needs and wishes, and leadership who want to see progress made in a methodical way, with very little time and often very few resources. It ultimately boils down to many decisions that a product manager needs to make in order to find the golden path among these often contradictory demands. So it comes as no surprise that product managers like to rely on best practices and experts (UX…) to guide them.
But aren’t best practices the very definition of the bandwagon effect? In our fast-moving world, best practices are often accepted based on the wisdom or adoption of the crowd. Investopedia defines the bandwagon effect as “a psychological phenomenon in which people do something primarily because other people are doing it, regardless of their own beliefs…”. This does not mean that every best practice is necessarily wrong for us, but it does mean that every best practice is a potential cognitive bias that might not be ‘best’ for our situation.
When a best practice comes wrapped in overconfidence, we might be in trouble. A confident expert or advisor preaching a best practice can feel like a ray of light in the dark, lonely, and doubtful night of product managers. But overconfidence is as biased as it gets, letting an expert’s subjective confidence get the better of our judgment. Such a know-it-all often bases their overconfidence on hindsight bias. We can’t simply jump on the bandwagon because everyone else has, even if the coachman is an expert (or our boss…), and even if he or she is very confident.
So where does all this leave us? Using data, tempting as it sounds, is limited to what we have and how we look at it. But so is methodologically gathering user feedback. Tapping crowd wisdom, best practices and experts exposes us to other cognitive biases. It seems that wherever we look we are smack in the middle of the bias minefield.
Well, that’s what we signed up for, isn’t it? That’s what makes product management so interesting and challenging. We need to look in all three directions: data analytics, user feedback, and best practices. But we need to remember that each of them has its limitations and biases. The more aware we are of these biases, the better our chances of sidestepping and minimizing their effects. It helps to integrate all three approaches, but always stay in doubt, ask more questions, and continue the never-ending quest for validation.