Presentation on the history and future of the Netflix API. This presentation walks through how the API was formed, why it needs a redesign and some of the principles that will be applied in the redesign effort.
This presentation was given at the Mashery Evolution of Distribution session in San Francisco on June 2, 2011.
History and Future of the Netflix API - Mashery Evolution of Distribution
1. The Netflix API: How Netflix Launched an API and Evolved It to Serve Millions on Hundreds of Devices. By Daniel Jacobson
2. Who Am I? Director of Engineering for the Netflix API since October 2010. At NPR from 1999 to 2010: built the custom CMS in 2002, extended the system for RSS and podcasting, launched the NPR API in 2008, and launched the NPR redesign in 2009.
3. Netflix Overview. Netflix offers subscriptions to unlimited streaming movies and TV shows for a very low price. About 700 operational employees and 300 engineers. More than 23 million subscribers in the US and Canada. Market capitalization is about $12B. Responsible for about 30% of US bandwidth during peak hours (by some accounts).
5. Original Charter for the Netflix API Expose Netflix metadata and services to the public developer community to “let 1,000 flowers bloom”. That community will build rich and exciting new tools and services to improve the value of Netflix to our customers.
6. Netflix API There are currently over 18,000 flowers
17. New Charter for the Netflix API Build and maintain an infinitely scalable data distribution pipeline for getting metadata and services from internal Netflix systems to streaming client apps on all platforms in the format and/or delivery method that is most optimal for each app and platform.
18. (Diagram) The API fronting internal services: Personalization Engine, User Info, Movie Metadata, Movie Ratings, Similar Movies, Reviews, and the A/B Test Engine.
30. Key Lessons from Netflix: Understand target audiences. Think big. Think mobile. Think international. Design the API for critical audiences first. Internalize the API as part of your engineering DNA. If building a public API, help the flowers bloom.
In the beginning, the Netflix API only supported the 1,000 flowers. Now there are over 18,000 of them.
Here are some examples of companies or individual developers that have built apps using the Netflix API.
Along the way, Netflix launched another developer community-driven program – The Netflix Prize. This was a program that offered $1,000,000 to the first team that could improve the recommendation algorithm by 10%. It took 2.5 years for the prize to be awarded.
Then streaming started taking off for Netflix and the API became a viable option for getting Netflix streaming onto devices.
This is what is known as the “eat your own dog food” model. If you have to eat the dog food, you are more likely to make it taste good. For us, if we build an API for ourselves, others can be assured that it will taste good because we will be eating it too.
The shift due to streaming results in a redefinition of target audiences for the API.
So, the API interaction model started to look different with the addition of all of these devices.
Today, we have hundreds of devices being run off the API, driving tremendous business growth.
And a few weeks ago, we added another set of devices – select Android phones.
As a result of the device implementations against the API, Netflix API traffic has gone through the roof, averaging (today) nearly 30B requests per month, with peak traffic at about 20,000 requests per second.
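A quick back-of-the-envelope check of those two figures (using only the numbers quoted on this slide, and assuming a 30-day month) shows the 20,000 requests/second peak is a bit under twice the monthly average:

```python
# Sanity check of the traffic figures quoted above.
requests_per_month = 30_000_000_000   # ~30B requests/month
seconds_per_month = 30 * 24 * 3600    # assuming a 30-day month

avg_rps = requests_per_month / seconds_per_month
peak_rps = 20_000

print(f"average: {avg_rps:,.0f} req/s")                 # ~11,574 req/s
print(f"peak/average ratio: {peak_rps / avg_rps:.2f}")  # ~1.73
```

So even sustained average load is on the order of 10,000+ requests per second, which frames the scaling concerns discussed later.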
And the proportion of requests coming to the Netflix API from the public has gone way down! The 1,000 flowers now represent less than half a percent of total API traffic.
Now the Netflix API has a new charter…
Netflix engineering teams are set up in support of this charter. Internal engineering teams produce or manage content and algorithmic output (the bottom row). Different engineering and product teams are responsible for presentation layers on devices (top row). The API is responsible for delivering the content from the internal engineering teams to the device presentation layers. The API is also responsible for scaling horizontally to handle the growing load.
Again, the Netflix API traffic has gone through the roof in this model.
Metrics like 30B requests per month sound great, don’t they? The reality is that this number is concerning…
In the web world, increasing request numbers mean increasing opportunity for ad impressions, which means increasing opportunity for generating revenue. And when you hit certain thresholds in impressions, CPMs start to rise, which means even more money.
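The economics described here follow standard CPM arithmetic: revenue equals impressions divided by 1,000, times the CPM rate. The dollar figures below are illustrative assumptions, not numbers from the talk:

```python
def ad_revenue(impressions: int, cpm: float) -> float:
    """Revenue from ad impressions at a given CPM (cost per 1,000 impressions)."""
    return impressions / 1000 * cpm

# Illustrative: more impressions, and a higher CPM once volume thresholds
# are crossed, compound into much more revenue.
print(ad_revenue(1_000_000, 2.00))   # 1M impressions at a $2.00 CPM
print(ad_revenue(10_000_000, 3.50))  # 10x the impressions at a higher CPM
```

This is why, for ad-supported web pages, traffic growth maps directly to revenue growth, the opposite of the API cost picture described next.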
Some companies go further to generate more page views by adding things like pagination on article pages. These additional page views translate into additional ad impressions.
But for systems that yield output that looks like this, such as APIs, ad impressions are not part of the game. As a result, the increase in requests doesn’t translate into more revenue. In fact, it translates into more expenses. That is, handling more requests requires more servers, more systems administrators, a potentially different application architecture, etc.
We are challenging ourselves to redesign the API to see if those same 30 billion requests could have been 5 billion or perhaps even fewer, assuming everything else remained the same. Through more targeted API designs based on what we have learned through our metrics, we will be able to reduce our API traffic even as Netflix’s overall traffic grows.
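One common way to achieve that kind of reduction (a general API design pattern, not a description of Netflix's actual implementation; the endpoint paths below are hypothetical) is to replace many fine-grained resource calls with one coarse-grained, device-tailored call:

```python
# Hypothetical sketch: fine-grained vs. coarse-grained API design.

def fetch_home_screen_chatty(client, user_id):
    """A chatty client makes one request per resource: 4 round trips per render."""
    user = client.get(f"/users/{user_id}")
    queue = client.get(f"/users/{user_id}/queue")
    recs = client.get(f"/users/{user_id}/recommendations")
    ratings = client.get(f"/users/{user_id}/ratings")
    return {"user": user, "queue": queue, "recs": recs, "ratings": ratings}

def fetch_home_screen_coarse(client, user_id):
    """A device-tailored endpoint returns everything the screen needs
    in a single round trip, cutting request volume (and server load)."""
    return client.get(f"/device/home-screen?user={user_id}")
```

Applied across every screen on every device, collapsing four calls into one is exactly the kind of change that could shrink 30 billion requests toward 5 billion.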
As we embark on the redesign of the API, we will plan to design it for our key audiences… the devices.
And the redesigned API for the devices will then trickle down to the other audiences.
Along the way, we plan to design the API for the audience that we want, not the audience we have. That doesn’t necessarily mean that we will implement the system for the dream audience. It just means that our designs should allow for the system to scale to the dream audience.
For Netflix, we will be designing our server architecture, using AWS, to be highly scalable across many different areas. The API redesign will help the software scale to handle the growing service more effectively and efficiently.