The document discusses metrics for measuring the performance of a product management group, including business value delivered, technical debt, and wastes related to product management. It provides suggestions for tracking adoption of features, technical debt, scope changes, sunk costs, rework, workload compared to development, and release overhead and frequency. Key performance indicators include lead time, work-in-progress, and cycle times for different classes of service. Reducing overhead and increasing appropriate release frequency can help align business needs with development capabilities.
2. Main things we want to pay attention to
- Performance of the Production floor – covered elsewhere (Simple KPIs slides)
- Performance of the Product Management group:
  - Business value
  - Wastes related to PM
  - Technical debt
3. Business value
We care about outcomes – features delivered, adopted, used, and paid for. How can we measure this?
- Manage a kanban at the high-level-feature level that tracks when features are adopted, and when they reach their first paying customer.
- Then see how much WIP of not-yet-adopted features we have, the lead/cycle time to adoption, and the features we dropped along the way.
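As a minimal sketch of this idea – assuming hypothetical feature records that carry a commitment date and an adoption date – lead time to adoption and the not-yet-adopted WIP could be computed like this (the field names are illustrative, not from any specific tool):

```python
from datetime import date

# Hypothetical feature records: when we committed to the feature,
# and when it was first adopted (None = not yet adopted).
features = [
    {"name": "A", "committed": date(2024, 1, 10), "adopted": date(2024, 3, 1)},
    {"name": "B", "committed": date(2024, 2, 1), "adopted": None},
    {"name": "C", "committed": date(2024, 1, 20), "adopted": date(2024, 2, 19)},
]

def lead_times_to_adoption(features):
    """Days from commitment to first adoption, for adopted features only."""
    return {f["name"]: (f["adopted"] - f["committed"]).days
            for f in features if f["adopted"] is not None}

def adoption_wip(features):
    """Count of features still waiting to be adopted."""
    return sum(1 for f in features if f["adopted"] is None)
```

The same records can also flag features dropped along the way, e.g. by adding a `dropped` field.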
4. Debt
Technical debt is often taken on because of a PM decision. We want to track how much debt we have, and take action to minimize it.
E.g. we need to release CustomerFeatureX now, so we don't "automate tests" / "code it correctly"; as a result, every piece of work on ModuleY, which is used in FeatureX, is slower until this is fixed.
5. Tracking debt in kanban
- Have a debt card type that is created when debt is taken on.
- Track the amount of debt versus overall WIP/backlog.
- See whether the trend is stable, improving, or worsening.
- Decide on a policy for dealing with debt – WIP limit, etc.
- Track the cycle time and WIP for debt cards to see whether they get the SLA they deserve.
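The debt-versus-WIP trend above could be computed from periodic board snapshots. A minimal sketch, assuming hypothetical weekly counts of debt cards versus total cards:

```python
# Hypothetical weekly snapshots: (debt cards in WIP/backlog, total cards).
snapshots = [(4, 40), (6, 40), (9, 45)]

def debt_ratios(snapshots):
    """Fraction of the WIP/backlog that is debt, per snapshot."""
    return [debt / total for debt, total in snapshots]

def debt_trend(snapshots):
    """Classify the trend by comparing the first and last debt ratios."""
    ratios = debt_ratios(snapshots)
    if ratios[-1] > ratios[0]:
        return "worsening"
    if ratios[-1] < ratios[0]:
        return "improving"
    return "stable"
```

A "worsening" result would be the trigger to apply the debt policy (WIP limit, dedicated capacity, etc.).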
6. Wastes related to PM
- Waiting for PM
- PM-related churn / context switching / expediting
- Sunk costs
- Rework due to late feedback by PM
7. Waiting for PM
- Look at the CFD and observe the size of the PM-related queues over time:
  - Especially "Pending PM Review", which sits in the middle of Dev/Test
  - And "Ready MMFs", as well as "Dev Ready" in cases that depend on PM approval
- Advanced – in the cycle time performance report, focus on the PM areas.
- When looking at cycle time exceptions, participate in the root cause analysis and see whether interaction with PM was part of the long cycle time.
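Reading a PM queue's size over time out of CFD data can be sketched as follows, assuming hypothetical daily snapshots of cards per column (the column names are illustrative):

```python
# Hypothetical daily CFD snapshots: number of cards per board column.
cfd = [
    {"Dev": 5, "Pending PM Review": 2, "Test": 3},
    {"Dev": 4, "Pending PM Review": 5, "Test": 3},
    {"Dev": 4, "Pending PM Review": 7, "Test": 2},
]

def queue_history(cfd, column):
    """Size of one queue over time, e.g. a PM-related column."""
    return [day.get(column, 0) for day in cfd]

# A steadily growing "Pending PM Review" queue signals waiting-for-PM waste.
pm_queue = queue_history(cfd, "Pending PM Review")
```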
8. PM-related churn / context switching / expediting
- Add an Expedited class of service:
  - Can be used by PM to override priorities in the DEV WIP – the card jumps to the top of the queue, but current WIP is not overridden.
- Add an Emergency class of service:
  - Can also override current WIP.
- Assumption – this is value trumping flow. We give up efficiency when we use these classes of service.
9. Measuring the effect of value-over-flow classes of service
- Look at cycle times for the different classes of service.
- Look at the distribution of the different classes of service in the WIP.
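Comparing cycle times across classes of service can be sketched like this, assuming hypothetical finished cards tagged with their class of service:

```python
# Hypothetical finished cards: (class of service, cycle time in days).
cards = [
    ("standard", 10), ("standard", 14), ("expedite", 3),
    ("expedite", 5), ("standard", 12),
]

def mean_cycle_time_by_cos(cards):
    """Average cycle time per class of service."""
    by_cos = {}
    for cos, days in cards:
        by_cos.setdefault(cos, []).append(days)
    return {cos: sum(d) / len(d) for cos, d in by_cos.items()}
```

A large gap between the expedite and standard averages is the visible cost of value trumping flow: expedited cards move fast while everything else waits.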
10. Look at the amount of change in scope
- Replaced scope – needs a visualization that shows scope changes in content.
- Added scope – simply look at the total scope for a "release" and observe whether it is growing.
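The "is total scope growing?" check is simple arithmetic over periodic measurements. A minimal sketch, assuming hypothetical weekly story-point totals for one release:

```python
# Hypothetical weekly total scope (story points) committed to one release.
release_scope = [100, 100, 112, 125]

def scope_growth_percent(scope):
    """How much total release scope grew from the first to the last measurement."""
    return 100.0 * (scope[-1] - scope[0]) / scope[0]
```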
11. Case study – typical release behavior
- Added scope
- Growth in feature cost / "dark matter"
12. Dark matter
- Dark matter is where we thought a feature costs X, but then, during breakdown, analysis, and creation of iteration stories, we understand it actually costs X+D.
- PM then decides whether to cut scope back down to X, or whether D is worth it.
- It is worthwhile to track our behaviour on this and learn from it. What is the right D number/percentage? Good question!
- Dark matter can be observed in the CFD for a release.
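D as a percentage of the original estimate X is just the relative overrun discovered during breakdown. A minimal sketch with illustrative numbers:

```python
def dark_matter_percent(estimated_cost, actual_cost):
    """D as a percentage of the original estimate X:
    cost discovered during breakdown/analysis beyond the estimate."""
    return 100.0 * (actual_cost - estimated_cost) / estimated_cost

# e.g. a feature estimated at 20 days that breaks down into 26 days of stories
d = dark_matter_percent(20, 26)
```

Tracking this percentage per feature over time is one way to learn what a "normal" D is for the organization.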
13. Sunk costs
- Add a lane that collects features/stories that are "on hold" – a "Recycle Bin" in the archive area.
- The amount of work done on them is the sunk cost.
- The amount of work is hard to measure, so use an alternative: cycle time – look at the cycle time of cards whose end lane is the Recycle Bin.
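The cycle-time proxy for sunk cost can be sketched as follows, assuming hypothetical cards with a work-start date and the date they landed in the Recycle Bin lane:

```python
from datetime import date

# Hypothetical cards that ended up in the "Recycle Bin" lane.
on_hold = [
    {"started": date(2024, 1, 5), "recycled": date(2024, 2, 4)},
    {"started": date(2024, 1, 15), "recycled": date(2024, 3, 15)},
]

def sunk_cycle_times(cards):
    """Cycle time (days) from start of work until the card was shelved –
    a proxy for sunk cost when effort itself is hard to measure."""
    return [(c["recycled"] - c["started"]).days for c in cards]
```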
14. Rework due to late feedback by PM
- Will appear as high cycle times.
- Will appear as cards moving backwards on the board (need to find a way to measure this).
- Can use a special class of service / card type to identify these kinds of stories for measurement/tracking purposes.
15. Workload compared to DEV
- See how much workload is in PM compared to DEV.
- Look for trends and major changes that can indicate:
  - A bottleneck in PM
  - Idle and slack capacity – expect to see PM seeing clients/customers at those times
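A simple PM-to-DEV workload ratio over time makes both failure modes visible: a rising ratio suggests a PM bottleneck, a very low one suggests slack PM capacity. A minimal sketch, assuming hypothetical weekly card counts in PM columns versus DEV columns:

```python
# Hypothetical weekly card counts in PM columns vs DEV columns.
weeks = [
    {"pm": 3, "dev": 12},
    {"pm": 8, "dev": 10},
    {"pm": 14, "dev": 9},
]

def pm_to_dev_ratio(weeks):
    """PM workload relative to DEV, per week."""
    return [round(w["pm"] / w["dev"], 2) for w in weeks]
```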
16. Release overhead
"How often do we release? What does it cost us?"
The usual suspects:
- PM wants to release more often – they want the ready feature out there as soon as possible.
- R&D usually wants to limit the number of releases, as they cost a lot, and R&D doesn't like doing hardening.
17. How often do we release?
- On the kanban, simply add a "Release" card type and flow it through the board to signify releases. Its size should be the hardening cost planned for that release.
- Based on SPC and other charts, you can understand:
  - Plan versus actual on hardening costs/times/dates
  - Frequency of actual releases
  - Ratio of hardening work compared to feature work (see the next slide for a view on this)
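From those "Release" cards, both the actual release frequency and the plan-versus-actual hardening cost fall out directly. A minimal sketch, assuming hypothetical release cards with a date and planned/actual hardening days:

```python
from datetime import date

# Hypothetical "Release" cards flowed through the board.
releases = [
    {"date": date(2024, 1, 15), "planned_hardening": 5, "actual_hardening": 8},
    {"date": date(2024, 3, 1), "planned_hardening": 5, "actual_hardening": 6},
    {"date": date(2024, 4, 5), "planned_hardening": 4, "actual_hardening": 4},
]

def days_between_releases(releases):
    """Actual release frequency, as gaps in days between consecutive releases."""
    dates = sorted(r["date"] for r in releases)
    return [(b - a).days for a, b in zip(dates, dates[1:])]

def hardening_overrun(releases):
    """Plan-versus-actual hardening cost, per release."""
    return [r["actual_hardening"] - r["planned_hardening"] for r in releases]
```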
18. Release Overhead This metric shows how much effort is spent on releasing versus developing. The aim is to reduce the overhead of each release, such that the organization can increase the frequency of releases to meet business expectations.
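The overhead metric itself is just the share of total effort spent on releasing rather than developing. A minimal sketch with illustrative numbers:

```python
def release_overhead(hardening_days, feature_days):
    """Share of total effort spent on releasing rather than developing."""
    return hardening_days / (hardening_days + feature_days)

# e.g. 10 hardening days against 40 feature days
overhead = release_overhead(10, 40)
```

Driving this number down is what makes a higher release frequency affordable.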
19. Reducing the release overhead
Two things we need to do:
- Reduce the overhead of each release
- Make sure our release frequency makes sense
20. Reducing the overhead of each release
- Invest in reducing legacy hardening debt:
  - As the PM, you'll be asked to invest. Ask for a plan that associates an investment of X days with Y days of reduction in hardening cost.
  - Decide what your investment horizon is. Based on the horizon, the X/Y ratio, and the current frequency of releases, make your decision.
  - Typical areas of investment – continuous integration, automation of EVERYTHING (including the platform matrix, performance, anything that is currently in the hardening plan).
- Avoid hardening debt while developing new features:
  - Build quality in – don't let defects wait for the end.
- Consider different types of releases – e.g. majors, feature/service packs, patch bundles – and associate the relevant risk-driven hardening work with each.
21. Does our release frequency make sense?
- First step – have the visibility:
  - How many releases?
  - What kinds of releases – scheduled major "trains", emergency-fix "ambulances", "special delivery taxis"?
- Second step – lay out what the business actually needs and is willing to pay for.
- Are those aligned? A lot of the time you will see only "trains" and "taxis". Think of investing in a "subway" – a predictable, frequent release mechanism.