3. Vision for measurement
● Identify metrics that are actionable for your product
● Balance metrics across functional areas
● Focus on the signal, not the noise
● Consistently review metrics cross-functionally
● Make decisions for the long-term health of the product
4. Background
Using metrics across: Business, Engineering, User Experience, and Service.
Having consistent metrics as a reference point makes it possible to better
understand the impact of our decisions across these different areas.
5. Goals
1. Provide a holistic view of product health for teams and leadership
2. Inform decisions that support our business in the long term
3. Encourage explicit goal setting for our products
7. Overview
Serviceability and System Health
What are the maintenance costs of the product?
User Sentiment
How satisfied are our customers with the product?
Productivity
How successful are customers with our product? Can they perform the key tasks easily and efficiently?
Engagement
How sustained is the adoption of our product?
Revenue or Business Value
How much is the product contributing to the business?
8. Overview
Serviceability and System Health: latency, ticket count, help consumption
What are the maintenance costs of the product?
User Sentiment: happiness, customer satisfaction, perceived simplicity
How satisfied are our customers with the product?
Productivity: time to magic moment, CUJ completion rate, completion time
How successful are customers with our product? Can they perform the key tasks easily and efficiently?
Engagement: customer count, retention, critical feature adoption
How sustained is the adoption of our product?
Revenue or Business Value: corporate revenue, transaction volume, infra costs
How much is the product contributing to the business?
9. Serviceability & System Health
What are the maintenance costs of the product?
QUESTION SAMPLE METRIC SAMPLE CALCULATION
Is the service need growing proportional to business growth? Or is it getting more costly to maintain the product? ● Ticket Volume ● # Incoming tickets / Product usage
How long does it take to load a page or complete an action? ● Latency ● Max page load time (per page, per workflow)
How much support do we provide for the customer to self-educate or troubleshoot? ● In-Product Help Consumption ● # Flows with help / Total # flows
Is the number of bugs/issues growing proportional to product (usage) growth? Or is it getting more unstable as the product grows?
  ● Stability ● Weighted bug count / Product usage
  ● Development Debt ● Feature requests, refactor requests
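The normalization idea in this table (e.g., # incoming tickets / product usage) can be sketched in code. This is a minimal illustration: the snapshot structure, field names, and numbers are hypothetical, not from the deck.

```python
from dataclasses import dataclass

# Hypothetical monthly snapshot; field names and numbers are illustrative.
@dataclass
class MonthlySnapshot:
    incoming_tickets: int
    weighted_bug_count: float  # e.g., bugs weighted by severity
    active_users: int          # stand-in for "product usage"

def serviceability_ratios(snap: MonthlySnapshot) -> dict:
    """Normalize maintenance signals by usage, so growth alone doesn't read as decay."""
    return {
        "tickets_per_user": snap.incoming_tickets / snap.active_users,
        "weighted_bugs_per_user": snap.weighted_bug_count / snap.active_users,
    }

jan = MonthlySnapshot(incoming_tickets=120, weighted_bug_count=45.0, active_users=3000)
feb = MonthlySnapshot(incoming_tickets=150, weighted_bug_count=50.0, active_users=4500)

# Raw ticket count grew 25%, but tickets per user actually fell.
print(serviceability_ratios(jan))
print(serviceability_ratios(feb))
```

The point of dividing by usage is exactly the question the slide asks: whether service need is growing faster than the business.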
10. User Sentiment
How satisfied are our customers with the product?
QUESTION SAMPLE METRIC SAMPLE CALCULATION
How satisfied are the customers with our product? ● Customer Satisfaction ● CSAT score (7+ out of 10 is usually considered ‘satisfied’)
How happy are the customers with our product? ● Happiness ● Survey ratings
How positively do the customers talk about our product? ● Public Sentiment ● Sentiment analysis of blogs, web pubs about the product
What do our customers think about the product? ● Open-Ended Feedback ● User testing
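The CSAT calculation can be sketched as the share of responses at or above the "satisfied" cut-off. The 0-10 scale and the 7+ threshold follow the slide; the sample ratings are made up.

```python
def csat_score(ratings, threshold=7):
    """Share of survey responses at or above the 'satisfied' cut-off (7+ out of 10 here)."""
    if not ratings:
        raise ValueError("no survey responses")
    satisfied = sum(1 for r in ratings if r >= threshold)
    return satisfied / len(ratings)

# Hypothetical survey: 4 of the 6 ratings are 7 or above.
print(csat_score([9, 7, 4, 8, 6, 10]))
```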
11. Productivity
How successful are customers with the product?
QUESTION SAMPLE METRIC SAMPLE CALCULATION
How long does it take to achieve success on the most critical task(s) in the product? ● Time to Magic Moment ● Time from starting to set up a campaign to the campaign going live
What % of the customers can successfully complete the CUJ? ● CUJ Completion Ratio ● Successfully completed vs. started instances for the CUJ
How long does it take to successfully complete the CUJ? ● CUJ Time to Complete ● Time elapsed end-to-end between beginning and successfully completing the CUJ
How is the performance of the entity? ● Entity Performance ● For an ad campaign: # of impressions, views, clicks, conversions, etc.
How simple is it to complete a task/achieve a goal? ● Simplicity ● Easy to understand (cognitive effort); easy to do (manual effort)
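Deriving the CUJ completion ratio and time-to-complete from instrumentation might look like the sketch below. The event names, schema, and timestamps are hypothetical; real journey logging would also handle repeat attempts.

```python
from datetime import datetime

# Hypothetical journey events as (user_id, event, timestamp); schema is illustrative.
events = [
    ("u1", "cuj_start", datetime(2024, 1, 1, 9, 0)),
    ("u1", "cuj_complete", datetime(2024, 1, 1, 9, 12)),
    ("u2", "cuj_start", datetime(2024, 1, 1, 10, 0)),   # abandoned journey
    ("u3", "cuj_start", datetime(2024, 1, 1, 11, 0)),
    ("u3", "cuj_complete", datetime(2024, 1, 1, 11, 30)),
]

def cuj_metrics(events):
    """Completion ratio and average end-to-end completion time for one CUJ."""
    starts, completes = {}, {}
    for user, event, ts in events:
        (starts if event == "cuj_start" else completes)[user] = ts
    completion_ratio = len(completes) / len(starts)
    durations = [completes[u] - starts[u] for u in completes]
    avg_minutes = sum(d.total_seconds() for d in durations) / len(durations) / 60
    return completion_ratio, avg_minutes

ratio, minutes = cuj_metrics(events)
print(ratio, minutes)  # 2 of 3 started journeys completed; average 21 minutes
```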
12. Engagement
How sustained is the adoption of the product?
QUESTION SAMPLE METRIC SAMPLE CALCULATION
Do we have more customers using the product? Are the customers using the product more? ● Usage Count ● # of Users; # of Campaigns created; # of Sessions
What % of users stop using our product? What % of the customers re-started using the product? ● Retention (churn, reactivation, loyalty) ● % of Customers not spending; % of Inactive customers
What % of users adopt new features? ● Critical Feature Adoption ● Percentage of customers adopting features important for the business (to be defined per product)
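One way to sketch the retention/churn split is from two periods of active users. The customer sets here are made up, and a real pipeline would use full history to distinguish genuinely new users from reactivated ones.

```python
def retention_breakdown(prev_active, curr_active):
    """Retention and churn rates from two periods of active-user sets.

    Users active now but not before are lumped together as 'new or reactivated';
    separating the two requires more history than two snapshots provide.
    """
    retained = prev_active & curr_active
    churned = prev_active - curr_active
    return {
        "retention_rate": len(retained) / len(prev_active),
        "churn_rate": len(churned) / len(prev_active),
        "new_or_reactivated": len(curr_active - prev_active),
    }

# Hypothetical quarterly active-customer sets.
q1 = {"a", "b", "c", "d"}
q2 = {"b", "c", "e"}
print(retention_breakdown(q1, q2))  # half retained, half churned, one new/reactivated
```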
13. Revenue or Business Value
How much is the product contributing to the business?
QUESTION METRIC SAMPLE CALCULATION
How much money does the company make from the product? ● Revenue for the company ● Total customer spend
How much does it cost to keep the infrastructure running, including transaction costs (e.g., payment fees)? ● Cost for the company ● Infrastructure resource costs
How much business is the product generating? [for non-revenue-generating businesses] ● Business Volume ● Number of transactions; $ volume of transactions
18. Develop a proposal
Work with a small group to develop a proposal. Consider:
● Who from PM, UX, Eng, and Service will drive this process?
● How do we define our product(s)?
● Who will sign off on the metrics?
● What is the timeline?
20. Know your starting point
● Make connections with people already championing measurement
● Learn what you have to build on
● Understand where incremental work can make a big difference
● Don’t go it alone
22. Broad generation
● Stakeholder conversations or interviews
● Business goals
● Product Excellence principles
● Expert review
23. Goals • Signals • Metrics
Goals: What are we trying to accomplish?
Signals: How might success or failure in the goals actually manifest
itself in user behavior or attitudes?
Metrics: What specific data will you track over time?
24. Good metrics in context
Metrics
– Actionable
– Action-required
– Required for context
Interpretation
– Time comparison
– Benchmarking
Assessment
– Accountability
Bad metric: Active sessions per customer
Good metric: Tooltip clicks per Critical User Journey
Include or don’t include? Revenue distribution
25. Prioritize
Value as a PE Metric
Technical feasibility
As a team:
● Determine the value of each metric
○ Actionable?
○ Contextual but key indicator?
● Scope the work needed to collect the data for each item
27. Clarify the plan
● Which of your ideal metrics are you already capturing or measuring? Who owns those metrics today?
● What are the barriers to getting to your ideal metrics? What resources or access would it take? Who can help?
● What proxy metrics are available? Could any of these proxies be misleading?
● Where can you show quick value to keep momentum and demonstrate impact? What expectations do you need to set on timing for this?
28. Establish ownership
● Who will own delivery of these metrics each quarter?
● Who will be involved in analysis?
● Who will be responsible for reporting on them?
● Your goals and metrics will evolve over time; how frequently will you commit to revisiting and recalibrating these metrics?
30. Best practices
● Establish best practices for instrumentation and data collection
● Work with your team to ensure logging captures user-identifying data: it is important to move from product-centric logging to user-centric logging.
● Product-defined metrics & productivity metrics may require custom logging. Check for accuracy and completeness as data becomes available.
● Expect surprises as logging is created: gaps in logging and incorrect logging happen often.
31. Data collection
● Let metrics guide the data needs (not vice versa)
● Work with Eng and IT for data collection
● Identify experts on interpreting data (e.g., sudden changes)
● Leverage existing data stores where possible
(e.g., data warehouse)
● Apply rigour in data collection: code reviews, bug bashes
● Carefully manage access for sensitive data
33. About targets
● Define what the business/product aims to achieve, and by when
● Represent an improvement from the current state
● Set annually with quarterly breakdown and adjustments if necessary
34. Setting targets
● Ideally, establish a baseline over several quarters to understand normal fluctuations in the data - e.g., variance by season
● When no baseline or benchmark is available, consider:
○ Using a comparable product or feature as a benchmark (e.g., n customers for the first quarter)
○ Hard requirements (e.g., latency)
○ Business goals (e.g., 100% retention)
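A minimal sketch of the baseline idea: collect several quarters of a metric, estimate its normal fluctuation, and flag values that fall outside it. The quarterly values and the two-standard-deviation threshold are illustrative assumptions, not prescriptions.

```python
from statistics import mean, stdev

# Hypothetical quarterly values of a retention metric; several quarters form the baseline.
quarterly_retention = [0.81, 0.84, 0.78, 0.83, 0.80, 0.85]

baseline = mean(quarterly_retention)
spread = stdev(quarterly_retention)

def is_unusual(value, k=2.0):
    """Flag a new quarter that falls outside k standard deviations of the baseline."""
    return abs(value - baseline) > k * spread

print(round(baseline, 3), is_unusual(0.70))  # 0.70 sits well outside normal fluctuation
```

With a baseline like this in hand, a seasonal dip stops looking like a regression, and a genuine regression stands out.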
36. Lessons learned
● UX Measurement and PE culture grow hand-in-hand.
● Eng and PM are eager to see UX Measurement happen, but enthusiasm is not enough.
● It’s easy to settle on superficial metrics or blunt tools.
● Eng buy-in and support from lead(s) are required.
● Start small, with enthusiasts and with a hero metric.
● Execution takes time and requires cross-functional stakeholder contributions.
● It’s hard work: successful UX Measurement requires resource commitment.
● Don’t compromise on excellence.
We are always making tradeoffs when we are making decisions about how to build and launch products. We can launch a feature quickly, which might satisfy our clients’ immediate needs but lead to engineering or UX debt down the road. Or we might build an ideal user experience, but sacrifice speed or increase maintenance costs as a result. We will always need to make tradeoffs, but seeing those tradeoffs reflected in hard metrics allows us to make better informed decisions along the way.
1 - provide a view across functions that helps us and our partners understand the health of our products, and escalate to leadership as needed
2 - give us a framework to support decisions that benefit our business in the longer term, rather than the next quarter
3 - over time, establish baselines on these metrics and set goals
- how we got to SUPER
- overview of framework - for each, brainstorm for your own product
- questions: what do you think of this? Where do you see gaps for your products? What do you think your partners will say?
The SUPER framework incorporates these metrics through five dimensions. While we are driving towards consistency across these categories, product areas and product teams have the flexibility to select metrics that are actionable and meaningful to their specific area.
It helps to find teammates who are already interested or engaged in measurement.
Share the lack of data your product has today, or the lack of a unified view of the product’s health
Think about how this will benefit them - make their job easier, move the needle on something they care about. How does it align with their existing goals?
Can it be integrated into initiatives that are already underway?
Be clear about what commitment you’re asking for.
Once you know who your partners are for leading this --
Need to identify who from PM, UX, Eng, and Service will drive this process.
A cross functional team is critical to success, since SUPER covers measurement from across different areas. You might also want to consider other areas, like sales or marketing, that should be included in the discussion.
What is the timeline?
Set goals for when you will complete Phases 1-3 (understanding the current state, defining metrics, and developing a roadmap).
Goal is to make the right connections across functions to understand what is being done now.
If there is existing measurement work, you want to find out about it and use it as a foundation. These are people who are likely to be your allies. Make sure you give them credit for what they’re doing already, and use them as inspiration on how to move forward.
You can also leverage what they’re doing to make a bigger difference - if they’re doing analyses regularly but not reporting it upwards, you can help with that. Or help to make tweaks that will take it to the next level.
Whatever you do, don’t go it alone. Work with your partners in UX and allies across functions to expand your reach and get this done faster.
Need to develop a broad understanding of what metrics are possible and what is important to the team.
Stakeholders - again: product, eng, UX research and design, support, sales, marketing.
Work to understand business goals. Compare them to the roadmap for your product. If that doesn’t exist, work with your partners and leadership to conduct stakeholder interviews and get a sense of strategy for the next 12-18 months.
Review PE principles - focused utility, simple design, crafted execution - how do these show up in your product, and how can you measure them?
Understand what other teams are doing - share your plans with your friendly local quant researcher or analyst to get their experienced input
Goals - YT = engagement (enjoy videos & discover more videos/channels)
Signals - # videos watched by a user
Metrics - avg mins spent watching videos per user per day
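The YouTube-style metric in these notes (average minutes spent watching videos per user per day) could be computed from a watch log along these lines; the log schema and data are hypothetical.

```python
from collections import defaultdict

# Hypothetical watch log as (user_id, day, minutes_watched); data is made up.
watch_log = [
    ("u1", "2024-01-01", 30), ("u1", "2024-01-01", 15),
    ("u2", "2024-01-01", 60),
    ("u1", "2024-01-02", 20),
]

def avg_minutes_per_user_per_day(log):
    """Average minutes watched per active user-day (one user on one day = one user-day)."""
    per_user_day = defaultdict(float)
    for user, day, minutes in log:
        per_user_day[(user, day)] += minutes
    return sum(per_user_day.values()) / len(per_user_day)

print(avg_minutes_per_user_per_day(watch_log))  # (45 + 60 + 20) / 3 user-days
```

Note how the metric sits on top of the signal (# videos and minutes watched), which in turn traces back to the engagement goal.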
After broad generation, you’ll start to narrow. As you do this, consider whether these metrics are actionable for your team. By actionable, we mean there is an unequivocal understanding of which way is better, and moving the metric is within our reach.
You also want to consider what is most meaningful for interpreting the metric. For example, is it more meaningful to look at the raw total of monthly active users, or the percentage change month over month? Is it more meaningful to look at totals, or to break them down by user segments?
Finally, we want to be able to assess the state of these metrics in order to hold ourselves and our teams accountable to our goals.
Bad metric - active sessions per customer: not clear whether more is good or bad.
Good metric - tooltip clicks per user journey: clear relationship with the user goal, and we can actively change the user journey and see how it affects this metric.
Include or not - Revenue distribution
More contextual - revenue is a trailing metric, often not tied to factors we can control. But important to the business.
This step is important because if you have too many metrics, it will be difficult to interpret what is really happening and know what the team should pay attention to.
Even if you’ve been tracking a metric for a long time, you should revisit it - is it really accurate to your team’s goals? Is it actionable?
It can be hard to let go of, but if it’s not something your team can act on or a key indicator of performance, it may be time to let it go.
Instrumentation: PRDs should have success metrics and instrumentation hooks.
Eng buy-in: non-UX champions become trusted partners.
Cross-functional stakeholder contributions: each function feeds data into the shared view.