Warning! You can still build the wrong product using agile. In his book The Lean Startup, Eric Ries poses the question: What if we found ourselves building something that nobody wanted? In that case, what would it matter if we did it on time and on budget? We often assume the Product Owner is smart enough to define the right product. But what if we are wrong? Michael Hall shares Lean Startup principles and how they can be applied to ensure that the product we are building is the right one. Learn new agile concepts such as hypothesis-driven project vision, knowledge broker personas, learning maps, minimum learning product, experiment backlogs, experiment test iterations, validated learning, and pivot/persevere decisions. Case studies and Michael’s first-hand product experience emphasize the learning points. New and mature agilistas alike will leave the session armed with Lean Startup agile techniques that can be applied immediately on their agile projects.
Applying Lean Startup Principles to Agile Projects
AT13
Concurrent Session
11/12/15 3:00pm
“Applying Lean Startup Principles to Agile
Projects”
Presented by:
Michael Hall
Improving Enterprises
Brought to you by:
340 Corporate Way, Suite 300, Orange Park, FL 32073
888-268-8770 · 904-278-0524 · info@techwell.com · www.techwell.com
Michael Hall
Improving Enterprises
As Principal Consultant at Improving Enterprises, Michael Hall specializes in new product
development using agile methods and Lean Startup principles. Michael has more than thirty
years’ experience developing and delivering large-scale cloud-based systems, next-generation
mobility solutions, mobile apps, embedded device systems, and wireless telecom systems. This
deep technical experience gives Michael an excellent base of real-world product development
knowledge and insightful understanding of the challenges a team is likely to face when
transitioning from traditional to agile. An early adopter of agile methods, Michael has led several
successful enterprise-wide transformations to agile.
Applicability
It’s for everyone: startups, new product development, new features, etc.
Mantra
The goal of any project is to figure out the
right thing to build.
“What if we found ourselves building something that
nobody wanted? In that case, what did it matter if we
did it on time and on budget?”
Thinking
Question at the start of a typical project:
• Can we build a solution for that problem?
Lean Startup questions at the start:
• Do consumers recognize they have a
problem?
• If there was a solution, would they buy it?
• Would they buy it from us?
• Can we build a solution for that problem?
• Can we build a sustainable business
around this product/service?
Ask “Should it be built?” instead of “Can it be built?”
Framework
• Adaptable framework
• Based on the scientific method
• A journey of “discovery”
- Subject the vision to constant
hypothesis testing
- React to customer feedback
- Bypass work that does not lead to
learning
- Adapt to what the data is telling you
“Successful entrepreneurs had the foresight, ability, and tools to discover
which parts of their plans were working brilliantly and which were misguided,
and adapt their strategies accordingly.”
Lean Startup - Principles
Assumptions as Hypotheses
• Identify your project/feature assumptions (continuously)
• Reword them as testable hypotheses (see the sketch after this list)
“The XYZ change will prove that customers want to ...” (value)
“The ABC feature will increase new customers by at least 15%.” (growth)
Avoid:
• Acting as if assumptions are true and proceeding anyway! – Leaps of Faith
• Taking statements for granted
• Reports from anyone other than the customer
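As a minimal illustration (not from the talk), an assumption reworded as a measurable hypothesis might be recorded like this Python sketch; the feature name, metric, and numbers are invented:

```python
# Hypothetical example of an assumption reworded as a testable hypothesis
# with an explicit success threshold; all names and numbers are invented.
hypothesis = {
    "assumption": "Customers want one-click checkout",
    "statement": "The one-click checkout feature will increase "
                 "new customers by at least 15%",      # growth hypothesis
    "metric": "new_customers_per_week",
    "baseline": 200,
    "target": 230,                                     # baseline + 15%
    "experiments": ["landing-page smoke test", "concierge checkout"],
}

def validated(measured: int) -> bool:
    """True if the measured result meets the hypothesis threshold."""
    return measured >= hypothesis["target"]

print(validated(240))  # True -> persevere; False -> consider pivot or quit
```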
Experiments
• Think of your project as a set of small experiments
• Break business plan down to its component parts – and test them
• Define experiments to test each hypothesis
• Results of the experiments guide decisions about product direction
Experiments allow us to transition from guesses to knowledge.
Case Study: Zappos
Exercise
Handout
• Think about a project you are working on
• Name one big Assumption
• Reword the assumption as a Hypothesis
“The XYZ change will prove that customers want to ...” (value)
“The ABC feature will increase new customers by at least 15%.” (growth)
• List 2 – 3 Experiments (user story names, work items, tasks, etc.) that will help
prove or disprove the Hypothesis
Validated Learning
• The result of the experiments
• Results can be positive or negative
• Empirical data from the customer
• “Learn lessons early” rather than “build
features and fix bugs”
• Faster and more accurate than market
forecasting and classical business planning
The measure of an effective team is how much validated learning it achieves,
not how much it builds.
A Tale of Two Teams
Team 1:
• What to build? Passionate debates; the suits decide
• Implements several features at a time
• Celebrates any positive perception
Team 2:
• Clear baseline metric
• Hypothesis on how to improve the metric
• Experiments to test the hypothesis
• Empirical data from customer usage
• Celebrates learning
Small Batch Size
• Allows us to identify quality problems sooner
• Pull – each step pulls the parts needed from
the previous step, Toyota JIT production
• As soon as we formulate a hypothesis, run the
experiment as quickly as possible using the
smallest batch size to get the job done!
“Large batch sizes can create a death spiral of re-doing work.”
Build – Measure – Learn
[Diagram: the Build–Measure–Learn feedback loop. Build the smallest batch possible; measure with qualitative and quantitative data; learn from the data and reach a decision. Minimize time through the loop.]
Minimum Viable Product
• The resultant output of successive Build – Measure – Learn loops
• Remove/Avoid any effort that does not lead to learning
• Goal of MVP – test your hypotheses, achieve validated learning
• Decision after learning: pivot/persevere/quit
• Iterate toward launchable product
“The only way to win is
to learn faster than
anyone else.”
Case Study: Dropbox
• Very popular web-based file-sharing service
• Initial MVP: a YouTube video
• Targeted to early adopters
• Beta waiting list went from 5,000 to 75,000 overnight
• Company now worth over $1B
MVP Patterns
• Concierge MVP – personalized service as a learning activity
• Wizard of Oz MVP – behind the scenes humans doing the work
• Case Study: Aardvark
• Low-quality MVP
• Case Study: Craigslist
• Case Study: IMVU avatar teleportation
• Smoke test - marketing materials
• UI mockups
• Etc.
Pivot, Persevere, or Quit
• Based on the validated learnings of an MVP, decide!
• Pivot – structured course correction designed to test new hypotheses
• Persevere – continue on with next set of hypotheses
• Quit – cancel the project and move on to the next one
“There is no bigger destroyer of creative potential than the
misguided decision to persevere.”
Case Study: Potbelly Sandwiches
• Started out as an antique store
• Began selling sandwiches to drive traffic to the store in
the hope of selling more antiques
• Lines formed out the door
• Pivoted to a sandwich store
• Today over 280 sandwich stores nationwide
Pivot Types
• Zoom-in pivot – refocus product on what was previously considered one feature
• Zoom-out pivot – single feature is inadequate, so add features
• Customer segment pivot
• Customer need pivot (Potbelly)
• Platform pivot
• Business architecture pivot
• Value capture pivot
• Engine of growth pivot
• Channel pivot
• Technology pivot
Pivots take courage!
Others
• Innovation Accounting
• Engine of Growth
• Adaptive Organization
Application to Agile Projects
Case Study: DCAPI
• Goal is to accurately measure the user’s video play time
• Measurement messages are from video players
- Start, Stop
- Playhead position
- Ads
- Etc.
• Original product was a downloadable SDK integrated into apps
- Logistical issues when software changes
- Different SDK for each OS
- High certification costs
• New product: web service to receive measurement messages
- Defined a RESTful API called DCAPI (a client-side sketch follows)
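As a rough illustration, a client call to such a measurement web service might look like the Python sketch below; the base URL, paths, payload fields, and response shape are assumptions for illustration, since the deck does not show the actual API:

```python
# Hypothetical DCAPI-style client calls (endpoint and field names invented).
import requests

BASE = "https://dcapi.example.com/v1"  # assumed base URL

# Start a measurement session; the response carries a unique session id
# to be used in all subsequent calls.
resp = requests.post(f"{BASE}/sessions",
                     json={"app_id": "cbsi-video", "device_id": "ios-123"})
resp.raise_for_status()
session_id = resp.json()["session_id"]

# Report playback events (start, stop, playhead position, ads, etc.).
requests.post(f"{BASE}/sessions/{session_id}/events",
              json={"event": "play_start", "utc": 1447340400, "position": 0})
```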
Get Started: Hypothesis-Driven Vision
• Develop a classic vision board
• List assumptions
• Continue to identify assumptions as you go
• Translate implicit assumptions into explicit testable hypotheses
• List hypotheses
DCAPI Vision Board

Vision Statement: For clients who have a need for capturing census-based usage analytics on their connected devices/applications, Data Collection API (DCAPI) is a cloud-based service that provides a simple, easy-to-understand way of reporting measurements. Unlike classic embedded SDK approaches, DCAPI will provide a direct reporting experience based on web service calls.

Target Group: digital customers – CBS Interactive, MobiTV, A&E Apps, Crown Media, Univision Apps, Pandora, Viacom, Fox News, DirecTV, NBCU Apps, AT&T, JW Player, Yelp, Roku, Xbox, Connected TVs, PlayStation

Needs:
• Ease of measurement reporting
• Use of a familiar programmatic approach
• Less software development
• No need to download/integrate an SDK
• One solution for all digital

Product:
• Cloud-based
• Transparent evolution
• Linear scaling as demand grows
• Fault tolerant

Value:
• Increase revenues
• Satisfy pent-up demand
• Increase digital footprint
• One-stop shop

Assumptions → Hypotheses:
• Customers will prefer DCAPI over the embedded SDK → more than 80% of all customers will prefer DCAPI
• DCAPI will make it easier to certify apps → DCAPI can be self-certified by customers
• DCAPI can handle a large number of users → DCAPI can handle 50K simultaneous sessions (see the load-test sketch below)
• DCAPI will need a super-fast DB → Redis is the best DB for DCAPI
• An early release to friendly customers will provide good feedback → an initial release can be built with limited (but valuable) functionality for early adopters
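As one possible experiment for the 50K-simultaneous-sessions hypothesis, here is a toy load-generator sketch in Python; the endpoint, payload, and scale are assumptions, and a real experiment would use a dedicated load-testing tool and production-like infrastructure:

```python
# Toy load generator for the "50K simultaneous sessions" hypothesis.
# Endpoint and payload are invented; a real experiment would use a
# dedicated load-testing tool and production-like infrastructure.
import concurrent.futures
import requests

URL = "https://dcapi.example.com/v1/sessions"  # assumed endpoint

def start_session(i: int) -> bool:
    """Attempt to open one measurement session; report success."""
    try:
        r = requests.post(URL, json={"device_id": f"load-{i}"}, timeout=5)
        return r.ok
    except requests.RequestException:
        return False

# Start small; ramp the range toward 50K as earlier runs succeed.
with concurrent.futures.ThreadPoolExecutor(max_workers=100) as pool:
    results = list(pool.map(start_session, range(1_000)))

print(f"{sum(results)}/{len(results)} sessions started successfully")
```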
Get Empathetic: Knowledge Broker Personas
• A special form of persona
• Emphasizes the knowledge sharing that each persona can bring
• Customer archetype – humanizes the proposed target user
Kyle Fisher – Senior Software Developer at CBS Interactive

Personal Profile: Kyle is a 42-year-old mobile software developer. He is familiar with video streaming and codecs such as MP3, Vorbis, and AAC. He understands transport protocols such as MMS, RTP, HLS, and Adobe’s HDS. His platform of choice is iOS, but he can work in Android when needed. Kyle has previously used our downloadable SDK for the CBS-I video app.

Background:
• 42-year-old Caucasian male
• Father of two
• Enjoys hockey
• Loves anything mobile-app related

Attributes:
• Upper middle class
• Technically savvy
• Carries the latest iPhone
• Has an iPad at home
• Enjoys social media

“I want an easy-to-use, well-defined RESTful API for my mobile apps to report usage analytics.”

Kyle’s Product-Content Needs:
• Simple API
• Parameters are readily available
• Inline HTTPS invocations
• Uncomplicated state diagram
• Guidance on how to handle offline scenarios
• Succinct API specification

Knowledge Sharing:
• Will consider the new API approach
• Can explain advantages of a RESTful API over an SDK
• Can provide feedback on error handling
• Can give a strong opinion on offline message handling
• Can share the CBS-I deployment schedules
• Etc.
Get Organized: Learning Maps
• Create a story map on a wall
• But organize and prioritize it by
Hypothesis from left to right
• Which will deliver the most learning?
• Which learnings are most crucial?
• Which learnings reduce risk?
• Which are most crucial in answering
“Are we building the right product?”
• For each hypothesis, name the user
stories and/or work items
• Prioritize the user stories top to bottom
DCAPI Learning Map
[Diagram: the DCAPI learning map – hypotheses ordered left to right, with their experiments (stories, work items) prioritized top to bottom beneath each.]
Tee It Up: Experiment Backlogs
• Similar to the Scrum product backlog
• But prioritized by expected learning
• List of all experiments 1..n
• Stories, work items, research, etc.
• Each tagged with a hypothesis name/description
DCAPI Experiment Backlog (JIRA excerpt, project: Cloud API)

DCAPI-72 – Minimum Product (Epic, Open/Unresolved, created 3/13/2015, updated 3/23/2015)
Hypothesis: An incomplete DCAPI can be built that is "good enough" for luminary clients.

DCAPI-4 – Start session, collection switch enabled (Story, Open/Unresolved, created 3/13/2015, updated 3/23/2015)
As an application, I want to start a session, so that I can begin reporting metrics to the downstream systems.
• When I request a session, I expect that my session is started by DCAPI.
• When I request a session, I expect to receive a successful return code from DCAPI.
• When I request a session, I expect to receive a unique session id that I can use in subsequent DCAPI calls. I also expect to receive an opt-out URL that I can display in my Privacy page.
• When I request a session and DCAPI is unable to start my session, I expect to receive an error code that indicates the reason.
• When I request a session and my device/application is opted out, I expect to receive an error code that indicates opted-out as the reason.
• When I request a session, I expect DCAPI to read in my config file from the Config system. The config file contains variable name mappings that allow me to use my own defined variables instead of the Nielsen defaults.
• When I enable the collection switch after it was previously disabled, I expect measurement collection to begin again.
• When I enable the collection switch after it was previously enabled, I expect measurement collection to continue as before.

DCAPI-5 – Start play (Story, Open/Unresolved, created 3/13/2015, updated 3/19/2015)
As an application, I want to start play, so that I can report the exact timestamp when media has started playing.
Acceptance Criteria
• When I start play, I expect to receive a successful return code from DCAPI.
• When there is an error in the data transmission to DCAPI, I expect to receive an error response code.
Note: Start play is sent when media content begins playing. This occurs after the app requests the content to play (request start play) and any buffering time has elapsed.

DCAPI-8 – Pause/stop play (Story, Open/Unresolved, created 3/13/2015, updated 3/23/2015)
As an application, I want to report when I pause play, so that I can send metrics to DCAPI.
• When I am sending pause for live content, I expect DCAPI to accept the data as defined in the API, such as event, UTC time, and type.
• When I am sending pause for VOD content, I expect DCAPI to accept the data as defined in the API, such as event, offset time, and type.
• When I am sending pause, I expect DCAPI to be able to accept my data every 10 seconds.
• When I send pause, I expect an OK response code.
• When there is an error in the data transmission to DCAPI, I expect an error response code.
• When I send pause to DCAPI, I expect DCAPI to send a ping to Census based on the applied business logic.

(A handler sketch for the DCAPI-4 session-start behavior follows.)
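A minimal sketch of a session-start handler that would satisfy DCAPI-4’s acceptance criteria; the framework (Flask), route, and field names are illustrative assumptions, since the deck specifies only the expected behavior:

```python
# Minimal session-start handler satisfying DCAPI-4's acceptance criteria.
# The framework (Flask), route, and field names are illustrative; the
# deck specifies only the expected behavior.
import uuid
from flask import Flask, jsonify, request

app = Flask(__name__)
OPTED_OUT: set[str] = set()  # stand-in for the real opt-out store

@app.route("/v1/sessions", methods=["POST"])
def start_session():
    device_id = (request.get_json(silent=True) or {}).get("device_id")
    if not device_id:
        # unable to start the session: error code indicates the reason
        return jsonify(error="missing_device_id"), 400
    if device_id in OPTED_OUT:
        # opted-out devices get an error code naming opted-out as the reason
        return jsonify(error="opted_out"), 403
    session_id = str(uuid.uuid4())  # unique id for subsequent DCAPI calls
    # also return the opt-out URL the app can show on its Privacy page
    return jsonify(session_id=session_id,
                   opt_out_url=f"https://example.com/opt-out/{device_id}"), 201
```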
Get Focused: Minimum Learning Product (MLP)
• Similar to an MVP, but much smaller
• Learning-based, not viable-product-based
• The smallest chunk of the Learning Map that can be developed to learn something important
• Real or mock form
• Goal is to get just enough learnings
- Then pivot, persevere, or quit
• Choosing the MLP replaces classic Scrum sprint planning
- Break into tasks if it helps
DCAPI Minimum Learning Product
[Diagram: the DCAPI learning map with the smallest slice that yields important learning outlined as the MLP.]
Experiment Test Iteration
• Experiment Test Iteration (ETI)
• Similar to Scrum sprint but variable time length
- Depends on size of experiment
- Get through Build/Measure/Learn as quickly as possible!
Scrum: fixed iteration length. ETI: variable iteration length.
[Diagram: a sample timeline of six ETIs of varying lengths – ETI 1: 3 days, ETI 2: 5 days, ETI 3: 9 days, ETI 4: 17 days, ETI 5: 6 days, ETI 6: 7 days.]
Build It: ETI
• Build out the MLP
• Measure progress based on validated learning
• Use a modified storyboard with a Validated column:
Story | To Do | In Work | Done | Validated
Get More Data: Learning Results Period
• Obtaining results from knowledge brokers
• Sometimes the validation takes longer than the end of the ETI
• Run this in parallel with the next ETI
- Defer pivot/persevere/quit decision until this data is in
[Diagram: the same ETI timeline; ETI 3’s Learning Results Period runs in parallel with the following ETIs and ends in a pivot/persevere/quit decision.]
Demo It: ETI Review
• Dev team demos their progress
• Discuss learnings obtained from the Learning Results Period
• Experiment findings are discussed with stakeholders
• Decision: pivot, persevere, or quit
Think About It: ETI Retrospective
• Team discusses
- What went well, what did not go well
- How to get better
• A spirit of “continuous improvement”
• Plus:
- How is the team feeling about the assumptions?
- Are there any assumptions we have not identified previously?
Rinse & Repeat: MLPs
• Build series of MLPs to reach final launchable product
• Use innovation accounting to “tune the growth engine”
• Be courageous in pivot/persevere/quit decisions
Team Dynamics
• Scrum team becomes a small “innovation factory”
- Responsible for code and/or artifacts that prove/disprove a hypothesis
- Continuous innovation
• Practicing the art of “genchi genbutsu”
- “Go and see”
- The only way to truly understand the requirements is to get out of the
office and spend time with the customer
- Gemba – the real place
- Don’t rely on information from other sources
Gemba Walk
- Go see the actual process
- Purposeful attempt to learn what is really going on
- Direct customer interaction
- Ask questions
- Show respect
- Learn
Team Dynamics (cont)
• The ScrumMaster becomes the “shusa”
- The chief engineer responsible for guiding the product to success
- Guides the team’s experiments and MLPs toward the product
Conclusion
• Lean Startup principles can and should be used in
Agile projects
- To help ensure we build the right thing
• Approximately 10 techniques presented, but there
are probably even more
• This could be the next major evolution of Agile!
“If we stopped wasting people’s time, what
would they do with it?
We have no real concept of what is possible.”
Introducing: Gemba
• Gemba: a validated-learning Agile method
[Diagram: Gemba as the combination of Scrum, Lean Startup, and Lean.]
Gemba Manifesto
We value
• Validated learning over reasonable assumptions
• Data-driven decisions over plausible-sounding arguments
• Building minimum learning products over additional features
• The courage to build the right thing over something that might work
Innovation Accounting
• Measure the progress of innovation toward validated learning – instead of burn rate or dollars spent
• Three steps
- Use an MVP to establish real baseline data
- Tune the engine from the baseline toward the ideal
- Pivot, persevere, or quit
• Use actionable metrics – clear cause and effect
- A split-test of a feature caused a 20% increase in sales
- Per-customer metrics
- Cohort metrics – groups of customers (see the sketch after this slide)
• Avoid vanity metrics
- Number of hits to a website
- The action to take is not obvious
“If you are building the wrong thing, optimizing the product or its marketing will not yield significant results.”
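To make the vanity-vs-actionable distinction concrete, here is a toy Python sketch (data and numbers invented): the total-signups count only ever grows, while per-cohort conversion supports a clear pivot/persevere decision.

```python
# Vanity metric vs. actionable cohort metric, with invented data.
from collections import defaultdict

# Each signup: (cohort_week, user_id, purchased?)
signups = [
    ("2015-W10", 1, True), ("2015-W10", 2, False), ("2015-W10", 3, False),
    ("2015-W11", 4, True), ("2015-W11", 5, True),
]

# Vanity metric: cumulative signups only ever go up; no action follows.
print("total signups:", len(signups))

# Actionable metric: conversion per cohort, comparable across experiments.
cohorts = defaultdict(lambda: [0, 0])  # week -> [signups, purchases]
for week, _, purchased in signups:
    cohorts[week][0] += 1
    cohorts[week][1] += purchased

for week, (n, bought) in sorted(cohorts.items()):
    print(f"{week}: {bought}/{n} = {bought / n:.0%} converted")
```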
Engine of Growth
• Use a small set of actionable metrics
- Customer acquisition cost
- Activations
- Retention
- Revenue
- Referrals
• Consider the viral coefficient (see the arithmetic sketch below)
- How many friends will each customer bring?
- Case Study: Hotmail
• “Tune” the engine every time learning occurs
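The deck names the viral coefficient without giving the arithmetic; the standard formula is the number of invitations each customer sends times the invitation conversion rate, as in this sketch (inputs invented):

```python
# Standard viral-coefficient arithmetic; the inputs are invented examples.
invites_per_customer = 5     # assumed: each customer invites 5 friends
invite_conversion = 0.25     # assumed: 25% of invitees become customers

k = invites_per_customer * invite_conversion  # new customers per customer
print(f"viral coefficient k = {k}")           # k > 1 -> compounding growth

# e.g., 100 customers bring 125 more, who bring ~156 more, while k > 1
print("next generation from 100 customers:", int(100 * k))
```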
Adaptive Organization
• Auto-adjust process and performance based on
current learnings
• Andon cord – anyone can stop the production line!
• Slow down – invest in preventing issues
• Ask “Why?” 5 times to get to root cause
Avoid
• Handoffs and approvals
• Making decisions on plausible-sounding arguments
• Low-quality products
• Defects