Measuring the success of new features is one of the pillars of product management. While research and discovery tell you where your product should go, a clear strategy for using data to measure success is one of the most powerful tools to guide you along the way.
How measuring feature success fits into your product pipeline
Think about how we approach new features. It usually goes like this:
- Someone identifies a problem with the product or an opportunity for growth.
- A product manager conducts their research and discovery process, collecting user input and getting a clear idea of the problem.
- That same product manager rallies the team to come up with a feature that could solve the problem.
- Once the team produces an idea, key users and stakeholders give their input while the team refines the prototypes.
- With specifications and designs finalized, the PM prioritizes that feature, and eventually, it gets developed, tested, and deployed.
Does this sound like a good product pipeline? Perhaps, but a key part is missing: when the feature goes live, how do we know whether it solved the intended problem? How do we know it hasn’t made the problem even worse?
We could run a survey after a few weeks and collect some feedback, right? Or perhaps we could just wait and see if we receive any user complaints? Not a good idea. We shouldn’t just deploy features and hope for the best.
What your strategy for measuring feature success should include
Data analysis is your best ally when it comes to tracking the success of new features, and it’s most effective when it’s thought through during the discovery phase, before the team writes a single line of code. By coming up with a data strategy early, we set the feature up for success. A good strategy might include:
- Which metric(s) should be tracked.
- What success looks like.
- A tracking plan for the feature (almost every feature needs its own), as sketched below.
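To make this concrete, here is a minimal sketch of how such a strategy could be written down next to the spec. Everything here (the `FeatureDataStrategy` class, its fields, the example feature) is a hypothetical illustration, not a prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class FeatureDataStrategy:
    """A lightweight, reviewable record of how a feature's success
    will be measured. All names here are illustrative."""
    feature: str
    metric: str                        # which metric to track
    baseline: float                    # where the metric stands today
    target: float                      # what success looks like
    target_window_weeks: int           # how long we allow to get there
    tracking_events: list[str] = field(default_factory=list)

# A generic example; the next section fills one of these in for real.
strategy = FeatureDataStrategy(
    feature="search-filters",
    metric="searches refined with a filter",
    baseline=0.05,
    target=0.15,
    target_window_weeks=8,
    tracking_events=["filter_opened", "filter_applied", "search_abandoned"],
)
print(strategy)
```

Writing the strategy down in a structured form like this makes it easy to review alongside the designs and to hand off to whoever builds the instrumentation.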
A real-world example of identifying metrics to track your feature success
Let’s see how this could translate to a real-life scenario with a simple example:
Problem: Most users are dropping out of the onboarding flow a few seconds after arriving from the website.
Assumption: “Users don’t understand the necessary steps to onboard, get frustrated, and leave to look for easier alternatives.”
Solution after research and validation: Build a wizard that guides the user through the onboarding process while explaining every step of the way.
Measuring the success of the feature
Metric: Percent of users who complete the onboarding process
- Current: 10% completion rate
What success looks like:
- 20% relative growth in week one (from 10% to 12%)
- Continued week-over-week growth after that
- A completion rate of 30% within the next three months, the earlier the better (see the sketch below)
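To keep the arithmetic honest, here is a minimal sketch that checks observed weekly completion rates against these three criteria. The function and variable names are hypothetical, and the sample rates are made up for illustration:

```python
def evaluate(weekly_rates: list[float],
             baseline: float = 0.10, final_target: float = 0.30) -> None:
    """Check the three success criteria against observed completion rates."""
    week_one_target = baseline * 1.20  # 20% relative growth: 0.10 -> 0.12
    print(f"week-one target met: {weekly_rates[0] >= week_one_target}")
    print(f"growing every week: {all(b > a for a, b in zip(weekly_rates, weekly_rates[1:]))}")
    print(f"reached {final_target:.0%}: {max(weekly_rates) >= final_target}")

# Roughly 3 months (12 weeks) of illustrative completion rates.
evaluate([0.12, 0.14, 0.17, 0.19, 0.22, 0.24, 0.26, 0.27, 0.28, 0.29, 0.30, 0.31])
```

Note that the week-one target is relative growth of the metric itself: 20% of a 10% completion rate is two percentage points, hence the move from 10% to 12%.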
Tracking plan:
- How many users dropped out of the onboarding flow? At which step(s) did that happen?
- How long do users spend on each step?
- Which explanatory tooltips did users click on?
- Which fields were filled out wrong, forcing the user to fill them out again?
- Which contextual data should be collected? In this case, prefer anonymous attributes such as country, browser, and device (see the instrumentation sketch below).
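Here is one way these questions could translate into instrumentation. The `track()` helper and every event and property name are hypothetical stand-ins for whatever analytics library you use; the point is that each question in the plan maps to a concrete event:

```python
import time
from typing import Any

def track(event: str, properties: dict[str, Any]) -> None:
    """Hypothetical stand-in for your analytics client's track call."""
    print(event, properties)

# Anonymous context only: no names, emails, or other PII.
context = {"country": "DE", "browser": "Firefox", "device": "desktop"}

step, entered_at = "create-workspace", time.time()

# Which step is the user on, and how long did they spend there?
track("onboarding_step_completed",
      {**context, "step": step, "seconds_on_step": round(time.time() - entered_at)})

# Which explanatory tooltips did users click on?
track("onboarding_tooltip_clicked", {**context, "step": step, "tooltip": "workspace-name"})

# Which fields were filled out wrong, forcing a retry?
track("onboarding_field_invalid", {**context, "step": step, "field": "email"})

# Where did users abandon the flow?
track("onboarding_abandoned", {**context, "last_step": step})
```

With events like these in place, every question in the tracking plan can be answered with a simple query instead of guesswork.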
Tracking the early results of recently launched features keeps you informed and allows you to make data-driven decisions about further product development.
By managing our products in a data-driven fashion, we keep our focus on the final destination while iterating and making course corrections promptly.