When Kevin Stevens and I sat down to build out product roadmaps at Choose Energy, we spent a fair amount of time pondering the tools we had available to make iterative decisions to improve the product. Ultimately, this led us to dissect the decision-making rubrics that got the company (and subsequently us) where we needed to go.
In unpacking those decisions, we realized we had made some mistakes because, as a company, we used the wrong tool and paid attention to the wrong things at the wrong time. What I mean is this:
As a product manager (particularly at a startup), you know it’s important to listen to your customer — that’s basically product management 101. The process roughly works as follows: understand your customer, the desires of internal stakeholders, the technical and organizational limitations of your company, and then try to squeeze blood from a stone. This post is all about that first bit: understanding your customer.
The first year I was at Choose Energy, we experienced pretty good growth. Growth is exciting. It puts rose-colored glasses on the world. In hindsight, we probably should have grown even more than we did. One of the main impediments was that the company spent a lot of time thinking about the wrong customer. As an organization, we were trapped by an idea of who we wanted our customers to be, not who they actually were.
After some soul searching, we accepted what was, rather than what some might wish were the case. The market will tell you who (if anyone) cares about your product. It would be wise to listen. Hint: it’s not the person you’re advertising to because you wish they’d visit your site since you think your product would be nifty for them. It also isn’t the people coming to your site and bouncing… your customer is defined by the attitudes, behaviors, and emotions of the people who actually buy your product. I know this sounds incredibly obvious, but it is shockingly easy for product managers to get trapped by their own cognitive bias. This mistake literally kills companies, so you can’t let it happen to you.
It is important not to be seduced by the two million millennials who visit your site every month if the people who exhibit intent to actually buy your product are 70-year-old women from Nebraska. If that’s who’s buying, build your funnel for grandma.
That’s not to say you can’t look at the huge site traffic you’re getting from a demographic that is bouncing, and ask yourself why these people are coming to your site looking to serve a need, but not finding the solution. “What do they want and can I help?” But that’s different from building a checkout funnel to be understood by millennials when your product is something only retirees care about… or vice versa.
I believe it is also worth noting that sometimes you can find patterns where they aren’t apparent. Age, income, and geography are just proxies for the things that really make your customers alike. They’re a great place to start, and it’s easy to get data on them, but they offer only a low-level understanding of your customer. You can, and should, try to do better. It took time for us to learn this lesson, but it paid big dividends once we brought it to the center of our process. If you want more, there is an excellent Harvard Business Review article covering this concept here.
I’m going to wildly oversimplify here, but two of the commonly employed tools in the kit for product managers are measuring user engagement and user testing / interviews. You probably already know this. They both have a place, but it’s important to use the best tool for the problem at hand. If I could impart one thing to you, it would be this:
Customers rarely know what they really want. They just need their pain point resolved. Your job is to figure out the best way to achieve that.
If you’re working at a startup, the odds suggest you are resource constrained. That means you’re not likely to spend tens / hundreds of thousands (or even millions) of dollars conducting expensive user interviews or focus groups. You probably do not have a 'corporate innovation lab' that handles these issues. To some extent, this has to shape the way you approach understanding problems.
The beauty of the internet is that feedback is swift. If your company has a site with 50k monthly visitors, you can just make a change and get fifty thousand people to tell you which version is better. That’s the power of A/B testing. There’s definitely an art and a science to tracking user engagement and A/B testing, and I won’t address them here. For some good technical and strategic outlines of the why and how of conducting A/B testing, I’d highly recommend these two linked articles discussing how they do it at a little company you may have heard of, called Netflix. You can also read more from Jessie Chen, a UI/UX designer at ZapLabs who has written several additional insightful articles on these topics.
For almost any decision which can be informed by quantitative data (which is probably a broader set than one might expect, given good experiment design) this is the way to go. You get more eyes on it, your answer will arrive much faster and cheaper, and you won’t suffer from the cognitive bias of the people you interview. Instead, their clicks and dollars tell you whether you were right. If you’re not betting the farm, or trying to understand very sophisticated motivations behind customer decisions, this is your first stop.
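To make the “their clicks tell you whether you were right” idea concrete, here is a minimal sketch of how you might read the results of a simple two-variant A/B test: a two-proportion z-test on conversion counts. The function name and the example numbers are illustrative, not from the post, and this uses only the Python standard library; a real pipeline would also account for sample size planning and peeking.

```python
import math

def ab_test_z(conv_a, visitors_a, conv_b, visitors_b):
    """Two-proportion z-test for an A/B test.

    Returns (z, p_value): z is the standardized difference in
    conversion rates (B minus A); p_value is two-sided.
    """
    p_a = conv_a / visitors_a
    p_b = conv_b / visitors_b
    # Pooled conversion rate under the null hypothesis (no difference)
    p_pool = (conv_a + conv_b) / (visitors_a + visitors_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Illustrative numbers: variant A converts 500 of 25,000 visitors (2.0%),
# variant B converts 600 of 25,000 (2.4%).
z, p = ab_test_z(500, 25_000, 600, 25_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With numbers like these, a small p-value is the data telling you the difference is unlikely to be noise, which is exactly the kind of cheap, fast answer the paragraph above is describing.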
User testing is usually not the best choice except when it is the only choice. Sometimes, qualitative data is what you need. I’ve certainly been there. You want to understand customer sentiments toward your gorgeous new product feature. Or the risk is high. Then go to Starbucks and put your product in front of people. Go find a good cohort of your customers and set up a time for them to come into your office and run through your prototype. Talk to them. Ask questions. Understand.
But don’t rush past that “find a good cohort” line. It was the most important thing I wrote in the last paragraph. Picking a sample that is unrepresentative will leave you worse off than if you’d just guessed at the answer. This also ties back to that bit about understanding your customers — everything leads out of that.
Despite its shortcomings, sometimes user testing really is the best way. But be cautious of the inclination to put too much weight on the customer’s advice for what the feature or product should do. I have yet to sit in a user interview where the user deeply understood our market and also had a full comprehension of what was technically possible. You probably won’t either unless you’ve:
1) Picked someone who’s not in the right cohort (I know I sound like a broken record, but seriously, that part is important…)
2) Worked at a company where the customers are highly technical
As a result, you have to listen for cues that help explain their pain points and then use your knowledge to figure out the best solution for the core problem. Then throw a few solutions out to your user base, and figure out which is best.
Stop assuming and start listening. It’s as simple, and hard, as that.
Finally, a word about uncertainty. You can’t completely eliminate it. More importantly, you really should stop trying. Like many things, eliminating uncertainty has a diminishing marginal return. Unfortunately, it also has an increasing marginal difficulty. Unless getting it wrong will kill your company, being 80% certain is probably good enough… You can always roll back. If you’re 90% certain, you probably waited too long and expended too much effort. If you’re 99% certain, you wasted time and let your competitors catch you.
Like they say at Nike: “Just ship it.” Or something like that.
If you enjoyed this piece and want to read more of my musings on venture capital, product management, artificial intelligence and the future in general, please check them out here or follow me on twitter: @jm_crowd