Can traditional live linear content distribution models be effectively evolved from existing satellite communication networks to pure IP-based, cloud-centric transit? In this session we will examine the requirements that must be met to facilitate wide-scale distribution of content at low latency with high levels of availability, durability, reliability, and throughput. We'll cover best practices for high availability and resilience, take a deep dive into topics such as effective erasure correction and deterministic network topologies, factor in the lower cost of compute and bandwidth when utilizing cloud-based infrastructure, and arrive at a reference architecture that can be used to drive B2B content distribution through the cloud at scale.
9. Traditional multi-hop satellite distribution
First Hop: Field Source / Encoder → Headend / Processing
Second Hop: Headend / Processing → Affiliate Spoke / Decoder
10. Examining this model
▪ Static end-to-end configuration
▪ Capex-based lifecycles
▪ Constant cost vs. capacity churn
▪ Long-term (media + format + codec) investments
▪ Lack of agility
▪ Fixed workflow
▪ Fragmented infrastructure (technology changes)
▪ Mismatched bandwidth constraints
11. Distribution is constantly evolving
▪ Adoption of IP networks & devices
▪ Increase in bandwidth, connectivity, and capacity
▪ Cloud-based options
▪ Flexibility, scalability, and agility
12. Workflows are becoming more complex
▪ Highly iterative ecosystem: H.265, 4K, HTML5
▪ Constantly growing target-device landscape
▪ Agility is the key
▪ Durability, reliability, throughput, and latency improvements
13. There are perceived limitations
▪ Sacrificing media quality
▪ Adopting non-traditional protocols
▪ The cost of new distribution networks
▪ Ability to scale without impact
▪ Maintaining low latency
▪ A general increase in complexity
17. ▪ Multiple datacenter footprints
▪ High speed, costly IP transit
▪ Local ops staff to manage infrastructure
▪ A massive capex outlay
▪ Development staff to build this out
In a non-cloud solution …
19. Edge Locations
Global distribution footprint (map of edge locations, Availability Zones, and Regions):
Seattle, San Jose, Palo Alto, Hayward, Los Angeles (2), Dallas (2), St. Louis, South Bend, Miami, Jacksonville, Ashburn (3), Newark, New York (3), Sao Paulo, Dublin, London (2), Amsterdam (2), Stockholm, Frankfurt (2), Paris (2), Madrid, Milan, Mumbai, Chennai, Singapore (2), Hong Kong (2), Seoul, Tokyo (2), Osaka, Sydney
20. ▪ Compute intensive: Intel Xeon E5-2666 v3 (Haswell), optimized specifically for EC2
▪ Memory intensive: lowest price point per GiB of RAM
▪ GPUs: 1,536 CUDA cores, 4 GB of video memory
▪ Enhanced networking: higher PPS, lower network jitter, low latency
▪ IO intensive: SSD storage, EBS-optimized
▪ High storage: 24 x 2000 GiB per instance
(Diagram: Amazon EC2 instance with AMI, EBS, and instance store)
Massively scalable compute: size the instance by application need (see the sketch below)
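To make "size the instance by application need" concrete, here is a minimal boto3 sketch; the region, AMI ID, and the workload-to-family mapping are illustrative assumptions, not from the deck (the families match the deck's era: c4 compute, g2 GPU, m3 general purpose, i2 IO):

```python
import boto3

# Map each media workload to the instance family it is bound by.
WORKLOAD_TO_INSTANCE = {
    "egress": "c4.8xlarge",         # compute intensive
    "gpu_transcode": "g2.2xlarge",  # 1,536 CUDA cores, 4 GB video memory
    "ingest": "m3.xlarge",          # general purpose
    "origin_io": "i2.2xlarge",      # SSD-backed, IO intensive
}

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

def launch_for(workload: str, ami_id: str) -> str:
    """Launch one instance sized to the given workload; returns its ID."""
    resp = ec2.run_instances(
        ImageId=ami_id,  # placeholder AMI
        InstanceType=WORKLOAD_TO_INSTANCE[workload],
        MinCount=1,
        MaxCount=1,
    )
    return resp["Instances"][0]["InstanceId"]
```

The launch call is identical regardless of size; only the instance-type string changes per workload.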
21. ▪ Launch a CloudFormation stack with all the infrastructure resources for a specific project
▪ Autoscale the stack as appropriate
▪ Terminate the stack when the project is done
(Diagram: AMI + CloudFormation template driving launch and terminate)
Automated infrastructure layers based on project scope (see the boto3 sketch below)
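A minimal boto3 sketch of that launch/terminate lifecycle, assuming a pre-authored template body; the stack name, region, and waiter-based flow are illustrative, not from the deck:

```python
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")  # placeholder region

def launch_project_stack(name: str, template_body: str) -> str:
    """Create a per-project stack and block until it is fully up."""
    cfn.create_stack(
        StackName=name,
        TemplateBody=template_body,
        Capabilities=["CAPABILITY_IAM"],  # only needed if the template creates IAM resources
    )
    cfn.get_waiter("stack_create_complete").wait(StackName=name)
    return name

def terminate_project_stack(name: str) -> None:
    """Delete the stack, and everything it created, when the project ends."""
    cfn.delete_stack(StackName=name)
    cfn.get_waiter("stack_delete_complete").wait(StackName=name)
```

Because every resource belongs to the stack, teardown is one call, which is what makes per-project transient infrastructure practical.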
22. AWS Ecosystem (license included in hourly* pricing)
INGEST, STORE, MANAGE, SECURE, PROCESS, CREATE, MONETIZE, INTEGRATE, DELIVER
24. What if we evolved the second hop?
First Hop: Field Source / Encoder → Headend / Processing
Second Hop: Headend / Processing → Affiliate Spoke / Decoder
25. What if we evolved the second hop?
Second Hop: Headend / Processing → Affiliate Spoke / Decoder
▪ Approach:
  ▪ Up/downlink: dedicated and internet-based IP links
  ▪ Direct Connect for 'uplink' and for stream consumption
▪ Concerns:
  ▪ FEC
  ▪ ~500ms + RTT latency
(Diagram services: Direct Connect, Secure VPN, Route53)
26. Could this also benefit the first hop?
First Hop: Field Source / Encoder → Headend / Processing
Second Hop: Headend / Processing → Affiliate Spoke / Decoder
27. Bandwidth & transport
Second Hop: Headend / Processing → Affiliate Spoke / Decoder
▪ Satellite:
  ▪ $3,000-5,000/MHz/mo* (~$30k for 20 Mbps*) plus spoke costs
  ▪ Fixed bandwidth ceiling
▪ AWS:
  ▪ Bandwidth to deliver an HD stream ~ $500/mo*
  ▪ Pay-as-you-go model
▪ FEC can be implemented at the UDP layer (ARQ, SRT) for jitter/latency/reliability (see the sketch below)
▪ Sub-1Gb Direct Connect (100 Mbps)
  ▪ Highly available stream ingest (1:1, 1:N)
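As a sketch of the UDP-layer FEC idea above, here is a one-dimensional XOR parity scheme: real deployments would use SRT or a Pro-MPEG-style FEC matrix, and the framing here (fixed-size payloads, 'D'/'P' type bytes, group size of 4) is an assumption for illustration:

```python
import socket

GROUP = 4  # one parity packet protects every GROUP data packets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    # Assumes fixed-size packets, so a and b are always the same length.
    return bytes(x ^ y for x, y in zip(a, b))

def send_with_parity(sock: socket.socket, addr, packets):
    """Send each data packet, then one XOR parity packet per group."""
    parity = None
    for i, pkt in enumerate(packets):
        sock.sendto(b"D" + pkt, addr)  # data packet
        parity = pkt if parity is None else xor_bytes(parity, pkt)
        if (i + 1) % GROUP == 0:
            sock.sendto(b"P" + parity, addr)  # parity packet
            parity = None

def recover(group, parity):
    """Rebuild a single missing packet (None entry) from the group's parity."""
    missing = [i for i, p in enumerate(group) if p is None]
    if len(missing) != 1:
        return group  # zero losses need nothing; >1 loss falls back to ARQ
    acc = parity
    for p in group:
        if p is not None:
            acc = xor_bytes(acc, p)
    group[missing[0]] = acc
    return group
```

One parity packet per group lets the receiver repair any single loss in that group without a retransmission round-trip, which is why FEC pairs well with the latency targets above; bursts of losses still need ARQ.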
28. Hub & spoke
Second Hop: Headend / Processing → Affiliate Spoke / Decoder
▪ Cost:
  ▪ Uplink & downlink equipment (dish, LNBs) vs. IP-only equipment
  ▪ Processing (transcode) equipment duplicated at every facility in a non-shared model
▪ Content and processing gravity:
  ▪ AWS (S3 + Glacier)
  ▪ Reduced cost, simplified workflows, and flexible capacity
29. Receiver Ecosystem
Second Hop: Headend / Processing → Affiliate Spoke / Decoder
▪ Fragmented, costly, uncoordinated, and lacking agility:
  ▪ Duplicate hardware for B2B proxies and media processing at each spoke
  ▪ B2C spoke content pushed to a CDN for receiver-ecosystem distribution
  ▪ The CDN does not provide the ability to implement custom workflows
31. Traditional multi-hop satellite distribution
First Hop: Field Source / Encoder → Headend / Processing
Second Hop: Headend / Processing → Affiliate Spoke / Decoder
32. Multi-hop distribution with AWS
First and second hop: Field Source / Encoder → Headend / Processing → Affiliate Spoke
Pipeline: Ingest → Fan Out → Egress (Scale Out; Multi-Region, Multi-AZ)
Connectivity: Cellular, Internet, Direct Connect, Secure VPN
Services: S3, Glacier, Route53
33. Open up other avenues for your content
Field Source / Encoder → Headend / Processing → Affiliate Spoke
Pipeline: Ingest → Fan Out → Egress (Scale Out; Multi-Region, Multi-AZ)
Connectivity: Cellular, Internet, Direct Connect, Secure VPN
Services: Amazon S3, Glacier, Route53 (latency-routing sketch below)
• Additional workflows
• Transient infrastructure
• Templatized environments for quick POCs
• Cloud bursting (utilizing on-prem)
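To ground the Multi-Region plus Route53 combination, here is a minimal boto3 sketch of latency-based DNS records; the hosted zone ID, domain name, regions, and addresses are placeholders, not from the deck:

```python
import boto3

route53 = boto3.client("route53")

# One latency-based record per region; Route53 answers each client with
# the regional endpoint that gives it the lowest measured latency.
for region, ip in [("us-east-1", "203.0.113.10"), ("eu-west-1", "203.0.113.20")]:
    route53.change_resource_record_sets(
        HostedZoneId="Z0000000000000",  # placeholder hosted zone
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "ingest.example.com",
                "Type": "A",
                "SetIdentifier": region,  # one record set per region
                "Region": region,         # enables latency-based routing
                "TTL": 60,
                "ResourceRecords": [{"Value": ip}],
            },
        }]},
    )
```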
35. 10 Gbps network, placement groups
▪ Capacity plan for hundreds of live HD streams and contribution silos
▪ Low latency, high throughput
▪ Combine with regional replication and Route53 for true nearest-neighbor latency (a placement-group sketch follows below)
Highly scalable infrastructure
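A minimal boto3 sketch of the placement-group idea, with placeholder AMI, group, and region names:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

# Instances launched into one cluster placement group share a low-latency,
# high-bisection-bandwidth network segment (the 10 Gbps tier noted above).
ec2.create_placement_group(GroupName="live-ingest", Strategy="cluster")

ec2.run_instances(
    ImageId="ami-12345678",     # placeholder AMI
    InstanceType="c4.8xlarge",  # placement groups require larger instance types
    MinCount=4,
    MaxCount=4,
    Placement={"GroupName": "live-ingest"},
)
```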
36. Multi-path distribution
Instance roles (diagram): m3 ingest, g2 GPU transcode, c4 high-capacity egress; the encoder feeds ingest, which serves broadcast decode and a low-bitrate proxy
▪ Fan out / fan in (see the relay sketch below)
▪ Size workflow to compute
▪ Flexible multi-format: HLS w/ CloudFront CDN, MPEG-UDP w/ FEC, dedicated pipe
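A minimal sketch of the fan-out stage named above: one ingest socket duplicating each received datagram to every downstream path. All addresses and ports are placeholders:

```python
import socket

# Downstream destinations, one per delivery path (placeholder addresses).
EGRESS = [
    ("10.0.1.10", 5000),  # broadcast decode
    ("10.0.2.10", 5000),  # low-bitrate proxy
    ("10.0.3.10", 5000),  # HLS packaging for the CloudFront CDN
]

ingest = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
ingest.bind(("0.0.0.0", 5000))  # receive the contribution stream

while True:
    datagram, _src = ingest.recvfrom(2048)  # one TS/UDP packet
    for dest in EGRESS:
        ingest.sendto(datagram, dest)       # duplicate to each downstream path
```

In practice each destination would itself be a scaled-out group behind a load balancer or placement group, but the principle is the same: duplicate once inside the cloud rather than paying for N contribution uplinks.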
37. Amazon S3 + Amazon Glacier (lifecycle policies)
▪ Segment media into S3
▪ Periodically archive to Glacier (see the lifecycle sketch below)
▪ Time-windowed hot content with infinite cold store
▪ Store/retrieve to local edit stations via high-speed partner appliances
▪ Affiliates can make use of storage infrastructure (transcode)
Media lifecycle management
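A minimal boto3 sketch of such a lifecycle rule; the bucket name, prefix, and 30-day hot window are assumptions for illustration:

```python
import boto3

s3 = boto3.client("s3")

# Keep a hot window of segments in S3; transition older ones to Glacier
# automatically so the cold store grows without manual archiving.
s3.put_bucket_lifecycle_configuration(
    Bucket="media-segments",  # placeholder bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-cold-segments",
            "Filter": {"Prefix": "segments/"},  # placeholder prefix
            "Status": "Enabled",
            "Transitions": [{
                "Days": 30,  # hot-window length
                "StorageClass": "GLACIER",
            }],
        }]
    },
)
```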
39. ▪ Deployed in one afternoon into an AWS VPC
▪ Coordinated cross-country by a team of 3: headend operations, en/decoder manufacturer, and AWS
▪ 6 Mbps 1080p60 MPEG-UDP w/ FEC stream
▪ Distribution over the public internet
▪ 200ms encoder-to-AWS and AWS-to-decoder latency
▪ Lower measured latency than the existing satellite second hop
▪ 40-day ingress uptime with no video dropouts
Proof of concept
41. South Lower Hall: SL9016
http://aws.amazon.com/digital-media/
Come check out the Presentation Theater, Meet the AWS Experts, cool demos, and the Solutions Ecosystem.
Thank you