Büşra Coşkuner

Demystifying Product Analytics - Product Analytics Tips Series

In many calls, I notice that product people shy away from analytics. I wondered why.


I had an eye-opening chat about this with Henry Latham. We noticed that analytics feels difficult when you don't know how to "get there". But with a thinking pattern to follow and a cheat sheet at hand, (product) analytics becomes much easier.


Therefore, I shared 9 practical tips on LinkedIn, one per day, based on hands-on experience, to help you get started.


This is the collection of those 9 tips.


 

#1 Fuck frameworks. But don't ignore them.


🤔 Having trouble finding the right metrics for checkpoints in your product to measure its success?


Frameworks are good and evil. They are created to reflect a pattern of thinking, a concept. They help you get started. They become evil when they're propagated as something you must stick to, with any adaptation framed as bad - which is wrong, of course.


So, there are two possible starting points to get to your checkpoint metrics:


❶ Start with a framework that includes definitions and questions to get you thinking. Some popular frameworks are Pirate Metrics (AARRR), HEART, GAME, GSM, etc. Follow their thought process to get started. Change any measure in the framework that doesn't help you or doesn't fit your situation.


OR


❷ Have a goals-driven approach. Start with asking yourself these important questions: What is the goal of this step? What am I trying to achieve in this step? What are the questions I want to get answered?

From there, the right follow-up questions and a good discussion will get you to the right metrics.


And guess what.

Turns out, number 2 is a framework called Goal-Question-Metric - GQM 😱

Haha, well...... I call it common sense 🤓



#2 Seeing leading and lagging indicators as funnels.


🤔 Have you read yet another post about leading and lagging indicators and still struggled with their practical meaning - and with coming up with your own?


A reminder of the difference:

⚫️ Leading indicators help you predict the future. They're difficult to determine, but they make it easier for the team to measure the direct impact of its actions.

⚫️ Lagging indicators reflect the impact of changes in the past. They're easy to determine, but it's nearly impossible to point out the direct impact of a single action.


Now, a coachee asked me: is user activity xyz (one of several possible activities in their product) a leading indicator?


❗️My question was: leading indicator OF WHAT?


You see the twist?


A metric feeds into another metric, which feeds into another metric, and so on. You can see the preceding metrics, so to speak, as leading indicators of the ones that follow.


So...

⒈ First you define what you want to measure.

⒉ Then you break it down into a metrics tree.

This way you get all the leading (and lagging) indicators of the specific point you're looking at. The farther away a metric is, the more indirect its impact on the goal metric.


Another way of visualizing this tree, one that can trigger different thinking, is to treat the metrics along a path as a funnel.


You know how to lead a customer through a funnel and measure success and conversion on each step, right?


💁🏻‍♀️ There you go.


You have a funnel of leading and lagging indicators to start with.

And now, in the next step, you can refine them and pick the ones you want to improve and that your team can be directly responsible for.


Done.
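As a quick illustration of the funnel framing, here's a minimal sketch with invented steps and numbers. Each step-to-step conversion rate acts as a leading indicator of the step after it, and lags the steps before it:

```python
# Hypothetical funnel: each step's count feeds into the next one.
funnel = [
    ("visited landing page", 10000),
    ("signed up", 1200),
    ("created first project", 480),
    ("upgraded to paid", 60),
]

# Conversion rate between consecutive steps -- the "checkpoints"
# where you measure success along the path to the goal metric.
conversions = {}
for (step_a, n_a), (step_b, n_b) in zip(funnel, funnel[1:]):
    conversions[f"{step_a} -> {step_b}"] = n_b / n_a
```

Reading this funnel, "created first project" is a leading indicator of "upgraded to paid", and "upgraded to paid" is a lagging indicator of everything above it.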



#3 Making decisions based on data is science. At the same time it’s not. Don’t be scared to get started.


🤔 Yes, it’s true. If you don’t have a lot of data points, you can’t reach statistical significance, for example in A/B tests. “That’s why it doesn’t make sense to run analytics on any product that doesn’t generate a lot of data points”, right?


❗️MMMEEERRRPPP! WROOOONG!❗️


In case you don’t have a lot of data points - which is very common, for example, in B2B or B2G - you can’t follow the typical metrics tactics of data-heavy products 1:1. Instead, remind yourself of the concepts behind those tactics and apply them to your situation.
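To see why few data points block significance in the first place, here's a rough two-proportion z-test in plain Python (the conversion numbers are made up). Note how the same 10% vs. 15% difference is significant only with the larger sample:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates.

    Returns the p-value; below 0.05 is conventionally 'significant'.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal CDF (via math.erf)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Same 10% vs. 15% conversion difference, very different sample sizes:
small = two_proportion_z_test(2, 20, 3, 20)        # ~20 users per variant
large = two_proportion_z_test(200, 2000, 300, 2000)  # ~2000 per variant
```

With 20 users per variant the p-value stays far above 0.05, so the "same" lift proves nothing - which is exactly why low-volume products need the qualitative routes below instead.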


2 methods will help you here: Qualitative research and Build-Measure-Learn.


❶ With qualitative research, you’ll get insights and can map out priorities. You’ll understand WHY things happen, and in the next step you can derive meaningful thresholds as validation points for your product assumptions.


And there we are with Build-Measure-Learn.


❷ Once you know what you want to learn, you define a metric around a qualitative measure and run your experiment. Your metric here doesn’t have to be super quantitative. Something like “3 sales calls scheduled after presenting sales pitch version 4” does the trick. Apply the concept of a landing page to your sales presentation, et voilà - you can apply similar measures to your medium of conversion.


At the same time, the Build-Measure-Learn thought process will help you to set up your success metrics from the beginning:


• You think “what do I want to learn?”

• Then “how can I measure this?”

• Ideally you then build; alternatively, you go through what you already have and see where you can apply the metric that will help you learn to a specific part of your product. This way, you can set up your metrics system or dashboard.



#4 Be data-informed, not data-driven.


At this point I’ll let Admiral Cornwell (Starfleet, Star Trek: Discovery) explain the difference between data-informed and data-driven.


Learning:


Look at data. See data. Breathe data. Prioritize with data. Argue with data. Analyze and understand WHAT is happening.


But don’t follow numbers blindly. Every measure, every data point, every number must be looked at in its context. Understand that context, talk to people, get qualitative input, understand WHY things are happening. Don't pick a feature just because it has the highest scores. Don't shut down people's ideas by throwing data at them.


Instead, leave some room for experience and intuition.



#5 Product vs. business metrics


As a product manager, you’ll probably be focused mainly on product metrics.


As a senior product manager or product leader, you’ll also be involved in business metrics and have to be able to show how your product, product group, or product team has contributed to business success.


So, what’s the difference between product metrics and business metrics?


Product metrics are the metrics that help you understand how healthy your product is in terms of usage and the value it creates for users and customers.


Depending on the type of your product, these could be for example


• different core user activities


• daily/weekly/monthly active users (depends on your definition of “active” and your product’s usage interval)


• shares, likes, comments


• conversion rates of different checkpoints on different funnels


• user and usage retention rates and churn rates


• traffic, % new users, click through rates (CTR)


• viral coefficient


• etc.


Business metrics are the ones that tell you whether your product is making money and therefore adding value to the business, and they help you with business value creation. The best-known one - yet the one where it's hardest to measure the direct impact of your activities - is obviously revenue.


Other metrics are for example


• monthly/annual recurring revenue (MRR or ARR), gross and net


• customer and revenue (ARR) retention and churn


• expansion ARR


• customer lifetime value to customer acquisition cost ratio (LTV:CAC, or “unit economics”)


• average order value (AOV) or average revenue per user/customer (ARPU or ARPC)


• margins, cashflow


• etc.


If you want to become a product leader, know the difference, and when to focus on what.
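As a small worked example of how a few of these business metrics hang together, here's a sketch with invented SaaS numbers:

```python
# Hypothetical SaaS numbers -- purely illustrative, not benchmarks.
monthly_churn = 0.03   # 3% of customers cancel each month
arpu = 50.0            # average revenue per customer per month
gross_margin = 0.80    # share of revenue left after cost of service
cac = 600.0            # cost to acquire one customer

# With a constant churn rate, average customer lifetime ~ 1 / churn.
avg_lifetime_months = 1 / monthly_churn
ltv = arpu * gross_margin * avg_lifetime_months  # gross-margin lifetime value
ltv_to_cac = ltv / cac                           # the "unit economics" ratio
```

A common rule of thumb is to aim for an LTV:CAC ratio above 3; this made-up example lands around 2.2, which would suggest acquisition is too expensive relative to the value the business keeps.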



#6 A goals-driven approach helps you define what you need, as mentioned in product analytics tip 1. Even when it comes to not being goal-oriented!


🤯 ”Explain please!”


There are different use cases for data, and each one requires you to approach “data” with a different mindset. That’s why there’s no single rule for organizational setup and hiring when you’re looking for people who “do something with data”.


As a product leader, you should be aware of at least these four common use cases:


❶ Data for setting and driving goals.


In this case, you’ll push to achieve a goal and therefore reject any opportunity or idea that you think won’t help you. Product teams can derive goals and metrics themselves, assuming they know what good and bad metrics are. To analyze whether you’ve achieved the goal, you’ll need a good tracking setup, both frontend and backend.


❷ Data for testing hypotheses.


Build-Measure-Learn. Again. Again and everywhere. In this case, you don’t do everything you can to achieve the metric you’ve set in your experiment. On the contrary: you check whether you reach the threshold you defined as making your experiment successful.


❸ Data as analysis of the status quo for finding opportunities.


This is more complex. When you analyze your data, you’re looking for opportunities to improve your product. Put on your CSI glasses and search for inconsistencies, high-traffic-low-conversion pages, gaps, sudden exits, bounces, and anything else that just “looks weird” or “not good enough”. Only when you’re open-minded and not driven by a goal will you see gaps and opportunities. Combined with market analysis, your findings can even help you create a whole new product.


❹ Data as analysis of the status quo for reporting.


You’ve heard of Tableau, MS Power BI, Google DS, Metabase, etc. In this case, you don’t analyze to find a gap, but to show in numbers how the product is doing. Are we on the right path? How has the data behaved over the last weeks in comparison? What is happening in our product? What are our users doing on the platform? Note that data can only show you WHAT is happening. WHY it’s happening needs qualitative information.



#7 How prioritization with data DOES NOT work!


🤔 Does one of these scenarios sound familiar to you:

• You have a list or a Jira/Trello board with tons of opportunities or ideas. You go to a meeting with your stakeholders where you discuss the top-priority topics and try to schedule feature development into the next iteration. The way you set priorities is based on what your stakeholders think is right.


OR


• You have that list, and you prioritize quickly by ROI, or in your case Impact/Effort. Without really knowing the real impact or effort the feature is going to have.


OR


• You have that list, and you prioritize in a very sophisticated way, either by calculating business cases or cost of delay. Because you read the book “The Principles of Product Development Flow” and want to apply it 1:1 without adapting it to your reality.


❗️Your prioritization is a bunch of assumptions in a row❗️


Don’t get me wrong, I’m a huge fan of pragmatic prioritization techniques like RICE, PIE, I/E, Assumption Mapping etc.


But no prio technique in the world (incl. business cases and CoD) will move you beyond politics and guessing without underlying data and validation, or in other words: evidence!


➡️ Without going deeper into good prioritization here (that’s for another post), to prioritize with data you can follow these steps:

⒈ Run whichever prioritization method you like against your list.


⒉ Decide which ideas need validation. Any low-risk items (i.e. things that are easily reversible and/or don’t touch an extremely conversion-relevant page or funnel) can be scheduled into the development pipeline. If it doesn’t work, you can revert the change without a big loss. If it works, hurray!


⒊ For the ones that need validation, first check your existing data (quant, but also qual!) and see if there are hints that can serve as evidence.

🚫 Watch out 🚫 Don’t fall for confirmation bias! If there’s no evidence, there’s no evidence! If your data invalidates the idea, then your data invalidates the idea!!!


⒋ For anything that you can’t find evidence for, Build-Measure-Learn: Set up a well phrased hypothesis, run an experiment, analyze results, and draw consequences: kill, change, or persevere.
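Since RICE came up: step 1 can be as simple as this sketch. The backlog items and all the input scores are invented - they're exactly the kind of assumptions that steps 2-4 then validate with data and experiments:

```python
def rice_score(reach, impact, confidence, effort):
    """RICE prioritization score: (Reach * Impact * Confidence) / Effort."""
    return reach * impact * confidence / effort

# Hypothetical backlog items with guessed inputs -- assumptions, not evidence.
ideas = {
    "onboarding tooltip": rice_score(reach=5000, impact=1.0, confidence=0.8, effort=2),
    "checkout redesign": rice_score(reach=9000, impact=2.0, confidence=0.5, effort=8),
}

# Rank ideas by score, highest first.
ranked = sorted(ideas, key=ideas.get, reverse=True)
```

The ranking only reorders your guesses; the confidence input is where the evidence you gather in steps 3 and 4 should feed back in.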



#8 For a good analysis, make sure you have a strong hypothesis.


🤔 Use case: your app has only a 2-star rating in the app store. Your goal is to move this up to 4 stars within the next quarter. You see user feedback in the app store (qual input) but you don’t have enough data (quant) to understand what’s happening at which stage of your product.

How will you approach this challenge?


There are many different methods that will help you get to the idea stage and come up with lots of ideas: Impact Mapping, Opportunity Solution Tree, Crazy Eights, etc. (a post for another time)


At one point, you’ll be setting up experiments. That means, you’ll need a strong hypothesis that you can run experiments against.


In product analytics tip #4 I mentioned that you should leave some space for experience and instinct. This is especially true when you lack the technical setup to get the data you’re looking for. Instead of waiting for data completeness, take a best guess based on experience and intuition, and try something out, i.e. run an experiment.


💡 What could an experiment look like here? Maybe showing the ratings pop-up after a specific user path?


➡️ Result: the number of ratings increased rapidly, but the stars went down to 1 instead of rising to 4!


❓Have we validated or invalidated the product increment?

❗️We can find this out only when we know the hypothesis behind the experiment!


• Maybe it was “We believe that people don’t like giving reviews. To verify that, we will show the “add review” pop-up after the user went through the steps x, y, and z.”

—> Invalidated! Number of ratings went up rapidly.


• Maybe “We believe that users with a good experience with our product don’t know where to give reviews. To verify that, we will show the “add review” pop-up after the user went through the steps x, y, and z.”

—> Invalidated! They see the pop-up, but the star rating went down. If this were their problem, there would be more positive ratings, right? (Correct answer: maybe)


• How about “We believe that providing a more visible way of leaving a review will help more users to give feedback. To verify that, we will show the “add review” pop-up after the user went through the steps x, y, and z.”

—> Validated! Number of ratings went up rapidly.


We haven’t fixed the problem though - why is that? I can only guess. And I guess it’s either addressing the wrong problem, or the multiplier effect: if a product is bad, asking for more reviews will just reveal even more that it’s bad.

Or in other words: 💩 in 💩 out times x 😇
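The core point - that one and the same experiment result can validate one hypothesis and invalidate another - can be sketched like this (numbers and hypothesis wordings are made up):

```python
# One experiment result, expressed as deltas vs. before the change.
result = {"ratings_per_day_change": +3.5, "avg_stars_change": -1.0}

# Each hypothesis defines its own success criterion against the result.
hypotheses = {
    "people don't like giving reviews":
        lambda r: r["ratings_per_day_change"] <= 0,  # holds only if ratings did NOT go up
    "a more visible prompt helps more users give feedback":
        lambda r: r["ratings_per_day_change"] > 0,   # holds if ratings went up
}

# Judge the single result against every hypothesis.
validated = {name: check(result) for name, check in hypotheses.items()}
```

Without the hypothesis written down before the experiment, "ratings up, stars down" is just a number - you can't say what it proved.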



#9 More often than you think you should just have an open conversation with a user and/or customer.


Because the insights will be eye-opening.


 

Follow me on LinkedIn for no-bullshit, real-world product management discussions.

I call things by their name. Because there are enough theory and ideal-world scenarios out there.


You can also find me on Twitter and Medium.


