Practical Use Case: How Dosh Uses Feature Experiments To Accelerate Mobile Development

  1. How Dosh Uses Feature Experiments To Accelerate Mobile Development Nathan Chapman Senior Software Engineer Dosh
  2. Nathan Chapman ABOUT ME Senior Software Engineer, Mobile Platform Team Focused on Experimentation & Iteration
  3. What is Dosh? Card-Linked Offer Platform
  4. Mobile App Updates take time to propagate
  5. A/B TEST SAMPLE
     const variation = optimizely.activate(
       'email_first_test',
       userId,
     );
     if (variation === 'variation_1') {
       // execute code for variation_1
     } else {
       // execute default code (control)
     }
  6. Solution: Use Features and Feature Experiments for Mobile
  7. Benefits: ● Larger testable audience means faster results ● Additional time to build experimentation code ● Iterate on experiments (versioning) ● Isolates QA of features from experiments
  8. Not All Conversion Events are “quick”
  9. Unsplash: @claybanks
  10. Solution: Ensure Consistent Experiences
  11. Technique: ● Never modify an experiment ● Use distinct audiences when experiments overlap in any way
  12. Experimentation Platform
  13. MOBILE REQUEST FOR FEATURES & EXPERIMENTS
      {
        experiments { name variant }
        features { name enabled variables }
      }
  14. MOBILE REQUEST FOR “ACTIVATING” FEATURE EXPERIMENTS
      mutation {
        activateFeatureExperiments(features: ["onboarding_card_link_v2"]) {
          success
        }
      }
  15. MOBILE REQUEST FOR TRACKING EVENTS
      mutation {
        trackEvent(event: "card_link_completed") {
          success
        }
      }
  16. EXPERIMENT RESULTS
  17. ROLLOUT! 🚀
  18. EXPERIMENTS-SERVICE
      External: activateExperiments (A/B tests), activateFeatureExperiments, getEnabledFeatures, getUserExperiments, getUserFeatures, track
      Internal: updateOptimizelyDatafile, optimizelyDatafileWebhook
  19. OTHER USEFUL TOOLING ● Glossary of Terms ● Idea Submission Template ● Feature/Experiment Specification Template ● Experimentation Board in Trello to track progress
  20. Learnings
  21. URGENCY MESSAGING TEST 1 ITERATION (SO FAR) +10.75% improvement in # of purchases +42.63% improvement in order value
  22. CARD LINK BINDING TEST 2 ITERATIONS Winner: Unskippable +7.36% improvement in card link rate
  23. “PROJECT JOIN” TEST 2 ITERATIONS No measurable change / inconclusive
  24. “EMAIL FIRST” TEST 1 ITERATION +15.93% improvement in signups
  25. Be Open To New Ideas
  26. Thank you! Join us on Slack for Q&A optimize.ly/dev-community

Editor's Notes

  1. Dosh is a leading card-linked offer platform focused on solving attribution for in-store purchases for our merchants and giving our consumers automatic cash back. Our goal is to take the advertising spend that would typically have no attribution associated with it (think billboards, subways, and radio spots) and provide a digital bridge between consumers and brick-and-mortar retailers where that money goes back into the consumers’ wallets. This gives merchants the tools to analyze ROI, as well as segment and target offers to specific consumers. Our consumer-facing product is a mobile app available on Android and iOS. Our consumers download the mobile app, link their card, browse offers, and when they make a purchase at a merchant on our platform, they automatically receive cash back in their Dosh wallet. One thing that’s certain is mobile development and maintenance can be challenging. I’m going to share a couple of the unique problems we’ve run into and our solutions to them.
  2. Adoption of a new app update: 35% within 3 days, 75% within 5 days, 85% within 7 days, and 90% within 2 weeks
  3. This is sample code for a normal A/B test. If we released this with the clients using the Optimizely Mobile SDK, we’d have to wait much longer for our results due to adoption times. If we wanted to use what we learned from the outcome of this experiment to do another iteration, it would require another release specifically keyed off of the new experiment name and another 2 weeks or so to start getting meaningful results. We’re a startup, so we have to learn and iterate quickly.
  4. We don’t use A/B tests on our mobile clients, but they’re totally fine to use in web and backend services. Instead, for mobile, we create features in Optimizely, build them out and release them with the client code (usually with the feature turned off), and later run experiments on those features so we can maximize our audience sizes and get results faster.
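     As a rough sketch (not Dosh's exact code), the same idea expressed through the Optimizely SDK's feature APIs might look like the snippet below; in Dosh's setup these calls sit server-side behind the GraphQL layer shown later, and the 'flow_type' variable name is purely illustrative.
        // Sketch only: the feature-flag counterpart to the A/B test sample earlier
        // in the deck. 'onboarding_card_link_v2' mirrors the feature key used later;
        // the 'flow_type' variable is illustrative.
        const enabled = optimizely.isFeatureEnabled('onboarding_card_link_v2', userId);

        if (enabled) {
          // The feature ships dark with the client release and is turned on (or put
          // under a feature test) later from Optimizely, with no new app release.
          const flowType = optimizely.getFeatureVariableString(
            'onboarding_card_link_v2',
            'flow_type',
            userId,
          );
          // render the new card-linking flow according to flowType
        } else {
          // render the existing flow
        }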
  5. This gives us time to build out everything necessary on the backend for the experiment, like new services, functions, databases, and events. By the time we’re ready for the feature experiment, mobile client adoption for that feature will likely be over 90%. This also allows us to iterate on experiments (by versioning them), and we can use what we learn from previous iterations to test on all versions of the client that know about a given feature. And lastly, it helps our QA process, because we can test a feature and all its permutations in the client in isolation from any experiment code (like audience selection, event tracking, and variation assignment).
  6. Another thing to call out is that not all conversion events are quick. Relative to the activation point of an experiment, some conversions are fast: in our case, user signups typically occur within 5-10 minutes of activation. Others, like transactions or referrals, are slower.
  7. Due to the nature of our business, many of our most important conversion events can occur a long time after bucketing a user into an experiment. It’s important for us to continually deliver the same experience to a user for the duration of an experiment, so we don’t misinterpret our results. We’re hoping to build this capability into our platform soon, but for now our solution involves a couple of things:
  8. One is to never modify anything about an experiment (even traffic allocation). If we create a new version of an experiment, we make sure to change the audience between those experiments to ensure that a user who’s seen a specific variation in the past would never see a new, different variation. The other is to ensure that experiments don’t overlap in their behavior in the client or in the metrics they track, and if they do, again the solution is to ensure different audiences between them. Otherwise, we have no way of knowing what actually caused a user’s conversion event and we will likely misinterpret our results.
  9. Next, I’m going to show how we’ve set up our experimentation platform to support our mobile clients in addition to our web and backend services.
  10. When the app launches or auth state changes, a request is sent to the server asking for all experiments and features for a given user. The clients use this features array to determine what to show the user, but experiments are also handed back and tracked with all of our analytics events so we know the variation that was given to the user for each experiment.
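     A rough sketch of how a client might consume that response; the query shape matches the earlier slide, while graphqlRequest and analytics.track are hypothetical helpers standing in for whatever GraphQL and analytics clients the app actually uses.
        // Sketch: gate client UI off the features array returned at launch / auth change.
        const { experiments, features } = await graphqlRequest(`
          {
            experiments { name variant }
            features { name enabled variables }
          }
        `);

        const isEnabled = (name) =>
          features.some((feature) => feature.name === name && feature.enabled);

        if (isEnabled('onboarding_card_link_v2')) {
          // show the new card-linking flow
        }

        // Experiments ride along on analytics events so results can be segmented
        // by the variation each user actually saw.
        analytics.track('screen_viewed', { experiments });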
  11. When the app is about to access or show a specific feature or set of features, it calls our “activateFeatureExperiments” endpoint. This tells our experiments-service to activate all feature experiments currently active for the given user and feature names, so we can start tracking events for the user in those experiments. In this example, we’re testing a new credit card linking flow.
  12. And here’s how we’d track that conversion event once the user enters their credit card information: a call to trackEvent with the card_link_completed event.
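     Putting those two mutations together, the client-side flow might look roughly like this (graphqlRequest is again a hypothetical helper):
        // Sketch: activate the feature experiments right before showing the feature,
        // then track the conversion once the user completes the flow.
        await graphqlRequest(`
          mutation {
            activateFeatureExperiments(features: ["onboarding_card_link_v2"]) {
              success
            }
          }
        `);

        // ...user goes through the new card-linking flow...

        await graphqlRequest(`
          mutation {
            trackEvent(event: "card_link_completed") {
              success
            }
          }
        `);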
  13. Once we’ve gathered enough results on the experiment to make a decision...
  14. We can do a feature rollout and end our experiment.
  15. Our experiments-service sits behind the scenes. It’s responsible for fetching the datafile, looking up user profile attributes for audience selection, and interacting with the Optimizely SDK. It exposes the functions at the top of the slide to consuming applications (like client gateways and other backend services), as well as some internal functions for validating and storing the Optimizely datafile. Backend services that need to activate one or more A/B tests simply call activateExperiments with the experiment names. The track function is called by various backend services when important business events occur, and it forwards those events to Optimizely.
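     A minimal sketch of how those resolvers might delegate to the Optimizely Node SDK, assuming the datafile has already been fetched and stored; storedDatafile and getUserAttributes are hypothetical stand-ins for the service's own plumbing.
        const optimizelySdk = require('@optimizely/optimizely-sdk');

        // Sketch only: the real service also validates/stores the datafile via its
        // internal functions (updateOptimizelyDatafile, optimizelyDatafileWebhook).
        const optimizely = optimizelySdk.createInstance({ datafile: storedDatafile });

        async function activateFeatureExperiments({ features }, { userId }) {
          // Hypothetical user-profile lookup used for audience selection.
          const attributes = await getUserAttributes(userId);

          // isFeatureEnabled buckets the user into any running feature test behind
          // each flag and records the impression, which is what "activates" the
          // experiment for that user.
          for (const featureKey of features) {
            optimizely.isFeatureEnabled(featureKey, userId, attributes);
          }

          return { success: true };
        }

        function trackEvent({ event }, { userId }) {
          // Thin wrapper around the SDK's event tracking, called for conversions.
          optimizely.track(event, userId);
          return { success: true };
        }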
  16. Confluence Glossary, idea submission template, feature/experiment specification template, etc.
  17. For this experiment, we were testing different variations of copy for the actions that drive users into the signup flow. We tried two different experiments, and neither drove the measurable results we were hoping for. We decided to end both experiments early with inconclusive results and refocus on other feature experiments.
  18. Note: this improvement rate is *before* any re-engagement emails. We conservatively estimate an additional 10% improvement after re-engagement emails are sent.
  19. Lastly, it’s important to be open-minded and listen to the ideas of everyone in your company. Our experiment suggestions have come from product managers, designers, engineers, and more. And we’ve set up additional tooling to facilitate ideas from everyone in the org, like the idea submission template. Experimentation is all about safely releasing impactful features and learning as you go.