AMC Networks Experiments Faster on the Server Side

  1. AMC Experiments Faster on the Server-side. Jon Keilson, VP Product Management, AMC Networks; Yoshitaka Ito, VP Information Platforms, AMC Networks
  2. Why we invested in experimentation: Too many product changes were rolled out without being measured incrementally. It was hard to distinguish feature performance from content noise. We were thinking about solving business problems rather than user problems.
  3. How we invested in experimentation. Hypothesis generation process: identify the user problem, test multiple solutions, define clear metrics. Test solutions quickly and inexpensively to get signal. To start experimenting quickly, we implemented Optimizely Full Stack purely on the front end.
  4. Our first win: an auto-advance enhancement produced a +2% increase in video views. It took time to determine the result was genuinely positive because of content differences.
  5. They can't all be winners. Guest home experiment: a 20% decrease in video views for new users (based on a small sample). Test your assumptions! We needed to test smaller changes faster.
  6. The Problem: We could not get enough velocity building experiments at the application level
  7. The Challenges: 5 brands × 8 platforms × ? experiments = { Massive Headache }
  8. The Solution: Server-side Driven Experimentation
  9. Deciding on Application Rebuild Goal: Architect to execute 80% of experimentation efficiently ● Create templated apps ● Define roles and responsibilities of the Backend, Backend for Frontend (BFF), and Client Apps ● Define clear types of experiments to run
  10. Templated Apps Apps are made of components that can be placed in many different layouts
  11. Application Roles. Backend services: act as the data source. Backend for Frontend (BFF): provide business logic; control layout and styles. Apps: know the context; own user interactions.
  12. Experimentation Responsibilities. Backend: be the data set for variations. BFF: understand the context passed in, bucket users, and return the appropriate layout and dataset. Apps: know the who, what, and where, and present the experiment to the user.
  13. When we use Client-Side vs. Server-Side with Full Stack. Client-side: painted doors; new features impacting many areas of the apps. Server-side: fine-tuning an existing feature; new features that can be feature-flagged at the server level; many variations across platforms; content experiments.
  14. Client-side Experiment Architecture - Painted Door
  15. Client-side Experiment Architecture - New Feature
  16. Server-side Experiment Architecture
  17. Outcome: Operational Efficiency ● App quality-of-service metrics ○ App launch time ○ Time to interact ● Faster time to market ○ Release frequency ○ Experimentation frequency ○ Reduced dependencies on third parties

Editor's Notes

  1. Yoshi and Jon to deliver intros with backgrounds.
  2. Yoshi and Jon to deliver intros with backgrounds. [ID: CON BO1] AMC Experiments Faster on the Server-side. Speakers: Jon Keilson, VP Product Management, AMC Networks; Yoshi Ito, VP Information Platforms, AMC Networks. SESSION TYPE: Customer. TRACK: Product, Scale, Engineering
  3. Jon
  4. Jon
  5. Jon
  6. Jon: This was an expensive test because we ultimately built it on both the front end and the back end. Testing our assumptions taught us we should be doing more painted doors. (Explain what a painted door is!)
  7. Yoshi: As Jon mentioned, we learned there is huge value in experimentation for us, both for understanding our users and for avoiding unnecessary risks, and ultimately for focusing on how our customers really use our product. Unfortunately, we also learned we had some key limitations. The biggest was that we could not experiment at the velocity we wanted while maintaining our structure, process, and code well enough to capture learnings through our planning and delivery pipelines. Some of these challenges were inherent to the products we work on.
  8. So, what are our challenges? In short: permutation. We manage 5 brands on 8 streaming platforms. We need to be where our customers are: iOS, tvOS, Android, Android TV, FireTV, Roku, Samsung, and Web. At the same time, we have 5 distinctively unique brands in our portfolio: AMC, BBCA, IFC, WE tv, and Sundance. As you might imagine, our audiences act quite differently across all these brands and platforms, and experimenting efficiently across these permutations was just very hard. We also couldn't easily control when our experiments got released; because there are certification processes for apps, we weren't always able to release experiments when we wanted. And with not all learnings easily translating into actionable items, we had to find ways to do this efficiently and maximize our scale.
  9. So what did we do? Enter server-side driven experimentation. In a nutshell, we decided to move the decision-making process to the server side and power our experiments there whenever possible.
  10. Yoshi: How did we approach all of this? Well, after months and months of adding duct tape upon duct tape until you could no longer see the facade of the application, we came to our senses and realized we needed to do some replatforming. One key learning from not being able to gain experimental velocity was that we needed to focus on how we experiment, just like anything else in our product stack, in order to do it well. We decided to focus on creating structure that would let us run 80% of our experiments efficiently, instead of trying to do everything. To achieve this, we tackled it from three angles. First, we changed how our apps were written: we made our apps more templated so we could control major aspects of the app from the server side. Second, we assigned roles within the stack so developers had clear frameworks for where experiment logic should live. Third, we agreed on the types of experiments we wanted to focus on running efficiently.
  11. The first thing we focused on was the app. During our internal discussions, we quickly realized the apps needed to be better componentized so they could be templatized. We needed units we could control from the backend in a flexible and reliable way. So, working closely with Product and Design, we vetted our application's UX components from the ground up, breaking down every aspect of our components (fonts, colors, styles, positioning, the types of action they can trigger) and ensuring the reusability of those components in templates across the entire app. Think of it this way: if all visual component blocks can be moved across screens and into different layouts reliably, it is easy to see the control you gain.
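To make the templating idea concrete, here is a minimal sketch of what a server-controlled layout payload might look like. All names here (component types, style keys, content references) are hypothetical illustrations, not AMC's actual schema.

```python
import json

# Hypothetical layout payload: each screen is a template whose slots are
# filled with reusable, self-describing components. Every field name below
# is illustrative, not a real AMC schema.
layout = {
    "screen": "home",
    "template": "vertical-rail",
    "components": [
        {
            "type": "hero",
            "style": {"font": "brand-display", "aspect": "16x9"},
            "action": "play",
            "content_ref": "featured-1",
        },
        {
            "type": "poster-rail",
            "style": {"aspect": "2x3"},
            "action": "browse",
            "content_ref": "continue-watching",
        },
    ],
}

# Because components are self-describing, a server can reorder, restyle,
# or swap them per experiment variation without an app release.
print(json.dumps(layout, indent=2))
```

The payoff is exactly the control described above: moving a component to a new position or layout becomes a server-side data change rather than a client code change.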
  12. Now that we had decided on a templating system, we needed to assign roles: what each of our systems needed to do to execute this strategy efficiently. Our systems, broadly speaking, comprise three components: Backend Services, Backend for Frontend, and Applications. The roles they play are... Backend, as I am sure everyone can guess, is responsible for providing key metadata (content info) as well as user profile information. Backend for Frontend: we introduced a BFF into our architecture; it is often used to avoid monolithic APIs, but we felt it was also a great way to extract complexity out of the frontend and give us a means to control templating more easily, and as a result it can help us make bucketing decisions and control the user experience. Apps: they need to handle user interaction, as they are the only part of our stack that comes in contact with the users, so we wanted to make sure they are well aware of the who, what, where, and how of our users. In essence, we wanted to build the thinnest app that could be controlled via API.
  13. How do these roles manifest themselves as responsibilities in an experiment? Let's start with the backend: not surprisingly, the backend provides the data we need for variations. If we need to show a different set of content in each experiment, this is the source. The app still remains the owner of the context, and the place where the experiment is presented to the end user. The BFF is where the hard work is done: it takes in context from the app, buckets users based on that context, communicates with the backend to gather the appropriate information, and returns it as an instruction set for the app to consume. This gave us a clear picture of how the systems needed to work.
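The responsibility split above can be sketched in a few lines. This is an illustrative stand-in only: the function names are invented, and the deterministic hash bucketing is a toy substitute for what the Optimizely Full Stack SDK would do inside the BFF in production.

```python
import hashlib

def bucket_user(experiment_key: str, user_id: str, variations: list) -> str:
    """BFF: deterministically assign a user to a variation (toy SDK stand-in)."""
    digest = hashlib.sha256(f"{experiment_key}:{user_id}".encode()).hexdigest()
    return variations[int(digest, 16) % len(variations)]

def backend_dataset(variation: str) -> dict:
    """Backend: act as the data source for each variation (illustrative data)."""
    datasets = {
        "control": {"rail": "poster", "items": ["a", "b", "c"]},
        "treatment": {"rail": "16x9", "items": ["a", "b", "c"]},
    }
    return datasets[variation]

def bff_response(experiment_key: str, context: dict) -> dict:
    """BFF: take app context, bucket the user, return layout plus dataset."""
    variation = bucket_user(experiment_key, context["user_id"],
                            ["control", "treatment"])
    return {"variation": variation, "data": backend_dataset(variation)}

# App: knows the who/what/where, sends context, presents whatever comes back.
resp = bff_response("home-rail-style", {"user_id": "user-42", "platform": "roku"})
print(resp["variation"], resp["data"]["rail"])
```

Because bucketing is deterministic on user ID, the same user always receives the same instruction set, which is what lets the app stay thin and stateless about the experiment.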
  14. Yoshi: As much as we'd love to claim we solved all of our woes, of course you still need to run client-side experiments. We tend to think in terms of general cases for how experiments need to be run. When do we run tests client-side? If you want to run a painted door, as Jon mentioned before, there is no way to do this via the backend. For example, adding a casting button to gauge customer interest in a new casting feature: this remains the cheap way for product teams to get an initial signal without building a full-fledged component. Another type of experiment we still run on the client side is a new feature that cannot easily be componentized. For example, if a feature requires bucketing to persist across multiple interactions with the apps, the complexity becomes too much for the server side to maintain. When do we run experiments server-side? If we are fine-tuning a different view for an existing feature: what happens if we tweak how our content is displayed, poster image vs. 16x9 image? This plays really nicely into the templating concept. If we are introducing features that are essentially boolean, with a simple on/off switch as a block, e.g. exposing a new set of CTAs to a subset of users. An important case for us is when an experiment requires validation across many platforms, e.g. showing a new way to feature content on different device types. Also, as a content company, content experiments to validate how content performs in different positions are important, and much more scalable with a template system. Giving myself a little moment to nerd out as an engineer, I want to show a few diagrams of how these experiment cases work:
  15. First is the client-side architecture used for painted doors. In this model, three components talk to each other to make a decision. The Context Handler handles the context; the App Core / UX instantiates against the Context Handler; and the SDK Wrapper interacts directly with the Context Handler to create the bucketing. Based on that decision, the app core paints the UX. The key thing to notice is that the experiments handled this way require no backend data; the model is isolated and works really well for painted doors because it keeps the scope narrow.
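A toy version of that client-side painted-door flow might look like the following. The class names mirror the diagram, but the hash-based 50/50 split is an invented stand-in for a real Optimizely client SDK.

```python
import hashlib

class ContextHandler:
    """Holds the local context the client-side decision is made against."""
    def __init__(self, user_id: str, platform: str):
        self.user_id = user_id
        self.platform = platform

class SdkWrapper:
    """Illustrative stand-in for an Optimizely client SDK wrapper."""
    def is_enabled(self, flag: str, ctx: ContextHandler) -> bool:
        # Deterministic 50/50 split on user ID; no backend call needed.
        digest = hashlib.sha256(f"{flag}:{ctx.user_id}".encode()).hexdigest()
        return int(digest, 16) % 2 == 0

def render_home(ctx: ContextHandler, sdk: SdkWrapper) -> list:
    buttons = ["play", "browse"]
    # Painted door: show a casting button to gauge interest; tapping it
    # would log an event rather than launch a real casting feature.
    if sdk.is_enabled("cast-button-painted-door", ctx):
        buttons.append("cast")
    return buttons

print(render_home(ContextHandler("user-7", "ios"), SdkWrapper()))
```

Everything happens on the device, which is exactly why this pattern suits painted doors: the scope stays narrow and no backend data contract is needed.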
  16. Next is the client-side experiment architecture for the case where we work on features that span multiple sections of the app. In this case, notice that the app is still bucketing the experiment at the app level, but it requests data from the BFF, which then retrieves the required information from the backend. So the app and the client SDK retain control of how experiments are carried out as well as what data is retrieved.
  17. And last but not least, this is the model we use most often. The app core talks to the Context Handler, gathers the information, and converses with the Content Compiler in the BFF. The Content Compiler makes the bucketing decision using the Optimizely SDK, then gathers the backend dataset based on the SDK's decision. You can see how, in this case, our application remains relatively unaware, while the BFF controls the logic to run the experiment. With all of that said, I am now going to pass it back to Jon.
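The server-side call sequence can be traced with a short sketch. The class name ContentCompiler mirrors the diagram, but the injected decision function is a hypothetical stand-in for the Optimizely SDK running inside the BFF.

```python
class ContentCompiler:
    """BFF component: owns bucketing and assembles the instruction set."""
    def __init__(self, sdk_decide):
        # Injected decision function; stands in for the Optimizely SDK.
        self.sdk_decide = sdk_decide

    def compile(self, context: dict) -> dict:
        variation = self.sdk_decide("featured-layout", context["user_id"])
        # Illustrative "backend fetch" keyed by the SDK's decision.
        dataset = {"control": ["row-a"], "treatment": ["row-b"]}[variation]
        return {"template": "home", "variation": variation, "rows": dataset}

def app_core(context: dict, compiler: ContentCompiler) -> dict:
    # The app stays unaware of the experiment: it forwards its context and
    # renders whatever instruction set the BFF returns.
    return compiler.compile(context)

# Deterministic stand-in decision so the trace is reproducible.
decide = lambda flag, user: "treatment" if user.endswith("1") else "control"
print(app_core({"user_id": "user-1"}, ContentCompiler(decide)))
```

The design choice to highlight: the app never branches on the experiment itself, so adding, changing, or stopping an experiment is purely a BFF concern and needs no app-store release.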
  18. Jon: By decoupling front-end features from application changes, we can release experiments and features faster. This allows for fast iteration. It also ultimately gives us more stable applications and improves our QoS, since we aren't making as many large changes to the apps.