2. Learning objectives
To understand the nature of multimedia data and the
scheduling and resource issues associated with it.
To become familiar with the components and design
of distributed multimedia applications.
To understand the nature of quality of service and
the system support that it requires.
To explore the design of a state-of-the-art, scalable
video file service, illustrating a radically novel design
approach to quality of service.
*
3. A distributed multimedia system
Figure 15.1: A distributed multimedia system – a video server, a digital TV/radio server, and a video camera and microphone on local networks, linked by a wide-area gateway.
Applications:
– non-interactive: net radio and TV, video-on-demand, e-learning, ...
– interactive: voice & video conferencing, interactive TV, tele-medicine, multi-user
games, live music, ...
*
4. Multimedia in a mobile environment
Applications:
– Emergency response systems, mobile commerce, phone service,
entertainment, games, ...
*
5. Characteristics of multimedia applications
Large quantities of continuous data
Timely and smooth delivery is critical
– deadlines
– throughput and response time guarantees
Interactive MM applications require low round-trip delays
Need to co-exist with other applications
– must not hog resources
Reconfiguration is a common occurrence
– varying resource requirements
Resources required:
– Processor cycles in workstations and servers
– Network bandwidth (+ latency)
– Dedicated memory
– Disk bandwidth (for stored media)
At the right time and in the right quantities
*
6. Application requirements
Network phone and audio conferencing
– relatively low bandwidth (~64 kbps), but delay must be short (< 250 ms round-trip)
Video on demand services
– high bandwidth (~10 Mbps), critical deadlines, latency not critical
Simple video conference
– many high-bandwidth streams to each node (~1.5 Mbps each), low latency (< 100 ms round-trip), synchronised state
Music rehearsal and performance facility
– high bandwidth (~1.4 Mbps), very low latency (< 100 ms round trip), highly
synchronised media (sound and video < 50 ms)
*
7. System support issues and requirements
Scheduling and resource allocation in most current OSs
divide the resources equally amongst all comers (processes)
– no limit on load
– can’t guarantee throughput or response time
MM and other time-critical applications require resource
allocation and scheduling to meet deadlines
– Quality of Service (QoS) management
Admission control: controls demand
QoS negotiation: enables applications to negotiate admission and
reconfigurations
Resource management: guarantees availability of resources for
admitted applications
– real-time processor and other resource scheduling
*
8. Characteristics of typical multimedia streams
Figure 15.3 – data rate (approximate), sample or frame size, and frequency for typical streams:
– Telephone speech: 64 kbps; 8 bits/sample; 8000 samples/sec
– CD-quality sound: 1.4 Mbps; 16 bits/sample; 44,000 samples/sec
– Standard TV video (uncompressed): 120 Mbps; up to 640 x 480 pixels x 16 bits/frame; 24 frames/sec
– Standard TV video (MPEG-1 compressed): 1.5 Mbps; variable frame size; 24 frames/sec
– HDTV video (uncompressed): 1000–3000 Mbps; up to 1920 x 1080 pixels x 24 bits/frame; 24–60 frames/sec
– HDTV video (MPEG-2 compressed): 10–30 Mbps; variable frame size; 24–60 frames/sec
*
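The uncompressed figures in Figure 15.3 follow directly from sample/frame size x frequency. A minimal check in Python; the two audio channels assumed for CD-quality sound are consistent with the 1.4 Mbps figure but are not stated in the table:

# Rough check of the uncompressed data rates in Figure 15.3.
def rate_bps(bits_per_sample, samples_per_sec, channels=1):
    return bits_per_sample * samples_per_sec * channels

rates = {
    "Telephone speech": rate_bps(8, 8000),                              # 64 kbps
    "CD-quality sound": rate_bps(16, 44_000, channels=2),               # ~1.4 Mbps
    "Standard TV video (uncompressed)": rate_bps(640 * 480 * 16, 24),   # ~118 Mbps
    "HDTV video (uncompressed)": rate_bps(1920 * 1080 * 24, 60),        # ~2986 Mbps
}
for name, r in rates.items():
    print(f"{name}: {r / 1e6:.2f} Mbps")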
9. Typical infrastructure components for multimedia applications
Figures 15.4 & 15.5
Figure 15.4: two PCs/workstations linked by network connections. A video camera and microphones feed codecs and a sound mixer at the sending PC; the resulting streams cross the network to codecs, the window system and the screen at the receiving PC, with a video file system as a stored-media source. Arrows denote multimedia streams.
White boxes represent media processing components, many of which are implemented in software, including:
– codec: coding/decoding filter
– mixer: sound-mixing component
This application involves multiple concurrent processes in the PCs. Other applications may also be running concurrently on the same computers. They all share processing and network resources.
Figure 15.5 – QoS specifications for selected components (bandwidth; latency; acceptable loss rate; resources required):
– Camera: Out: 10 frames/sec, raw video, 640x480x16 bits; loss rate: zero
– A Codec: In: 10 frames/sec raw video; Out: MPEG-1 stream; latency: interactive; loss rate: low; resources: 10 ms CPU each 100 ms, 10 Mbytes RAM
– B Mixer: In: 2 x 44 kbps audio; Out: 1 x 44 kbps audio; latency: interactive; loss rate: very low; resources: 1 ms CPU each 100 ms, 1 Mbyte RAM
– H Window system: In: various; Out: 50 frames/sec framebuffer; latency: interactive; loss rate: low; resources: 5 ms CPU each 100 ms, 5 Mbytes RAM
– K Network connection: In/Out: MPEG-1 stream, approx. 1.5 Mbps; latency: interactive; loss rate: low; resources: 1.5 Mbps, low-loss stream protocol
– L Network connection: In/Out: audio, 44 kbps; latency: interactive; loss rate: very low; resources: 44 kbps, very-low-loss stream protocol
*
10. Quality of service management
Allocate resources to application processes
– according to their needs in order to achieve the desired quality of multimedia delivery
Scheduling and resource allocation in most current OSs
divide the resources equally amongst all processes
– no limit on load
– can’t guarantee throughput or response time
Elements of Quality of Service (QoS) management
– Admission control: controls demand
– QoS negotiation: enables applications to negotiate admission and
reconfigurations
– Resource management: guarantees availability of resources for
admitted applications
– real-time processor and other resource scheduling
*
11. The QoS manager's task (Figure 15.6)
QoS negotiation: application components specify their QoS requirements to the QoS manager as a flow spec.
Admission control: the QoS manager evaluates the new requirements against the available resources. Sufficient?
– Yes: reserve the requested resources, issue a resource contract and allow the application to proceed; the application then runs with resources as per the resource contract.
– No: negotiate a reduced resource provision with the application. Agreement? If yes, reserve and proceed as above; if not, do not allow the application to proceed.
If a running application notifies the QoS manager of increased resource requirements, the evaluation is repeated.
*
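A minimal Python sketch of this admission/negotiation loop. The qos_manager and app objects and their methods (sufficient, reserve, negotiate_reduced) are illustrative assumptions, not a real API:

# Hypothetical sketch of the QoS manager's task (Figure 15.6).
def admit(qos_manager, app, flow_spec):
    """Return a resource contract, or None if the application may not proceed."""
    while True:
        if qos_manager.sufficient(flow_spec):           # evaluate against available resources
            return qos_manager.reserve(flow_spec)       # reserve resources, issue resource contract
        flow_spec = app.negotiate_reduced(flow_spec)    # offer a reduced resource provision
        if flow_spec is None:                           # no agreement
            return None                                 # do not allow application to proceed

# An application that later needs more resources calls admit() again with a larger flow spec.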
12. QoS parameters and the RFC 1363 flow spec (Figure 15.8)
QoS parameters:
– Bandwidth: rate of flow of multimedia data (maximum rate, burstiness)
– Latency: time required for the end-to-end transmission of a single data element (acceptable latency)
– Jitter: variation in latency, dL/dt (acceptable jitter)
– Loss rate: the proportion of data elements that can be dropped or delivered late (percentage per interval T, maximum consecutive loss)
Figure 15.8 – fields of the RFC 1363 flow spec:
– Protocol version
– Bandwidth: Maximum transmission unit, Token bucket rate, Token bucket size, Maximum transmission rate
– Delay: Minimum delay noticed, Maximum delay variation
– Loss: Loss sensitivity, Burst loss sensitivity, Loss interval
– Quality of guarantee
*
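The flow spec is just a record of these parameters. A minimal Python sketch with field names taken from Figure 15.8; the units noted in the comments are assumptions:

from dataclasses import dataclass

@dataclass
class FlowSpec:                       # fields as listed in Figure 15.8 (RFC 1363)
    protocol_version: int
    max_transmission_unit: int        # bytes
    token_bucket_rate: int            # bandwidth: average rate, bytes/sec
    token_bucket_size: int            # bandwidth: burstiness, bytes
    max_transmission_rate: int        # bandwidth: peak rate, bytes/sec
    min_delay_noticed: int            # delay: acceptable latency, ms
    max_delay_variation: int          # delay: acceptable jitter, ms
    loss_sensitivity: int             # loss: tolerable losses per loss interval
    burst_loss_sensitivity: int       # loss: maximum consecutive losses
    loss_interval: int                # loss: interval T, ms
    quality_of_guarantee: int         # how firm the guarantee must be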
13. Managing the flow of multimedia data (Figure 15.8)
Flows are variable
– video compression methods such as MPEG (1-4) are based on
similarities between consecutive frames
– can produce large variations in data rate
Burstiness
– Linear bounded arrival process (LBAP) model:
maximum flow per interval t = Rt + B (R = average rate, B = max. burst)
– buffer requirements are determined by burstiness
– Latency and jitter are affected (buffers introduce additional delays)
Traffic shaping
– method for scheduling the way a buffer is emptied
*
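A worked instance of the LBAP bound, with illustrative numbers only (a ~1.5 Mbps MPEG-1 stream and an assumed 0.5 MB maximum burst): the buffer must absorb the worst-case burst B, and draining that buffer at the average rate R bounds the extra delay it introduces.

# Illustrative LBAP arithmetic: at most R*t + B bytes arrive in any interval of length t.
R = 1.5e6 / 8          # average rate in bytes/sec (~1.5 Mbps MPEG-1 stream)
B = 0.5e6              # maximum burst in bytes (assumed)

def max_flow(t_seconds):
    return R * t_seconds + B      # LBAP bound

buffer_needed = B                 # buffer sized for the worst-case burst
added_delay = B / R               # time to drain a full buffer at rate R
print(f"buffer {buffer_needed / 1e6:.1f} MB, added delay up to {added_delay:.1f} s")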
14. Traffic shaping algorithms – leaky bucket algorithm
Figure 15.7 (a): Leaky bucket
Analogue of a leaky bucket:
– process 1 places data into a buffer in bursts
– process 2 is scheduled to remove data regularly in smaller amounts
– the size of the buffer, B, determines:
the maximum permissible burst without loss
the maximum delay
Overkill?
*
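A minimal leaky-bucket sketch in Python, following the process 1 / process 2 analogue above; the buffer size and drain amount are assumed parameters:

# Leaky bucket: bursty input, smooth fixed-rate output, buffer bounded by B.
class LeakyBucket:
    def __init__(self, capacity_bytes, drain_per_tick):
        self.capacity = capacity_bytes   # B: bounds the permissible burst and the maximum delay
        self.drain = drain_per_tick      # amount process 2 removes on each tick
        self.level = 0                   # bytes currently buffered

    def put(self, nbytes):               # process 1: places data into the buffer in bursts
        accepted = min(nbytes, self.capacity - self.level)
        self.level += accepted
        return nbytes - accepted         # bytes lost because the burst exceeded B

    def tick(self):                      # process 2: removes data regularly in smaller amounts
        out = min(self.drain, self.level)
        self.level -= out
        return out                       # the smoothed output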
15. Traffic shaping algorithms – token bucket algorithm
Figure 15.7 (b): Token bucket
Tokens are permits to place x bytes into the output buffer. Implements LBAP.
– process 1 delivers data in bursts
– process 2 (the token generator) generates tokens at a fixed rate
– process 3 receives tokens and uses them to deliver output as quickly as it gets data from process 1
Result: bursts in the output can occur when some tokens have accumulated
*
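A matching token-bucket sketch: process 2 adds tokens at a fixed rate up to the bucket size, and process 3 forwards data only while tokens are available, so the output satisfies the LBAP bound Rt + B. Parameter values are illustrative:

# Token bucket: output may be bursty, but never exceeds R*t + B over any interval (LBAP).
class TokenBucket:
    def __init__(self, tokens_per_tick, bucket_size):
        self.rate = tokens_per_tick      # R: process 2's fixed token generation rate
        self.size = bucket_size          # B: maximum number of accumulated tokens
        self.tokens = 0

    def tick(self):                      # process 2: generate tokens at a fixed rate
        self.tokens = min(self.size, self.tokens + self.rate)

    def send(self, nbytes):              # process 3: forward data as fast as tokens permit
        allowed = min(nbytes, self.tokens)
        self.tokens -= allowed
        return allowed                   # bytes delivered now; the rest waits for more tokens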
16. Admission control
Admission control delivers a contract to the application
guaranteeing:
For each computer:
cpu time, available at specific intervals
memory
For each network connection:
bandwidth
latency
For disks, etc.:
bandwidth
latency
Before admission, it must assess resource requirements and
reserve them for the application
– Flow specs provide some information for admission control, but not all; assessment
procedures are also needed
– there is an optimisation problem:
clients don't use all of the resources that they requested
flow specs may permit a range of qualities
– Admission controller must negotiate with applications to produce an acceptable result
*
17. Resource management
Scheduling of resources to meet the existing guarantees (e.g. for each computer: CPU time available at specific intervals, and memory):
Fair scheduling allows all processes some portion of the resources based on
fairness:
E.g. round-robin scheduling (equal turns), fair queuing (keep queue lengths equal)
not appropriate for real-time MM because there are deadlines for the delivery of
data
Real-time scheduling traditionally used in special OS for system control
applications - e.g. avionics. RT schedulers must ensure that tasks are
completed by a scheduled time.
Real-time MM requires real-time scheduling with very frequent deadlines.
Suitable types of scheduling are:
Earliest deadline first (EDF)
Rate-monotonic
*
EDF scheduling
Each task specifies a deadline T and a CPU requirement S to the scheduler for each
work item (e.g. a video frame). The EDF scheduler runs the task so that it receives its
S seconds of CPU before T (and pre-empts it after S if it hasn't yielded).
It has been shown that EDF will find a schedule that meets the deadlines, if
one exists. (But for MM, S is likely to be a millisecond or so, and there is a
danger that the scheduler may have to run so frequently that it hogs the cpu).
Rate-monotonic scheduling assigns priorities to tasks according to their rate of data
throughput (or workload). It uses less CPU for scheduling decisions and has been
shown to work well where the total workload is < 69% of the CPU.
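A minimal EDF sketch in Python: each work item carries its deadline T and CPU requirement S, and the scheduler always dispatches the pending item with the earliest deadline. This illustrates the policy only; it is not an OS scheduler:

import heapq
from itertools import count

class EDFScheduler:
    """Earliest deadline first: always run the pending item with the earliest deadline."""
    def __init__(self):
        self._heap = []          # min-heap ordered by deadline
        self._seq = count()      # tie-breaker so equal deadlines never compare the tasks

    def submit(self, deadline_T, cpu_seconds_S, task):
        heapq.heappush(self._heap, (deadline_T, next(self._seq), cpu_seconds_S, task))

    def next_to_run(self):
        if not self._heap:
            return None
        deadline_T, _, cpu_seconds_S, task = heapq.heappop(self._heap)
        return task, cpu_seconds_S, deadline_T   # run for up to S, pre-empting after S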
18. Scaling and filtering
Figure 15.9: a single source stream is filtered down to high-, medium- and low-bandwidth versions for delivery to different targets.
Scaling reduces flow rate at source
– temporal: skip frames or audio samples
– spatial: reduce frame size or audio sample quality
Filtering reduces flow at intermediate points
– RSVP is a QoS negotiation protocol that negotiates the rate at each
intermediate node, working from targets to the source.
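Temporal scaling at the source can be as simple as dropping frames to approximate a target rate; a tiny sketch (the frame rates in the usage comment are illustrative):

# Temporal scaling: keep one frame in every `step` to reduce the flow at the source.
def temporal_scale(frames, source_fps, target_fps):
    step = max(1, round(source_fps / target_fps))
    return frames[::step]      # e.g. 24 fps down to 12 fps keeps every 2nd frame

# Usage: temporal_scale(frame_list, source_fps=24, target_fps=12)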
19. QoS and the Internet
Very little QoS in the Internet at present
– New protocols to support QoS have been developed, but their implementation
raises some difficult issues about the management of resources in the
Internet.
RSVP
– Network resource reservation
– Doesn't ensure enforcement of reservations
RTP
– Real-time data transmission over IP
Need to avoid adding undesirable complexity to the Internet
– IPv6 has some hooks for QoS (see the IPv6 header layout)
*
20. Tiger design goals
Video on demand for a large number of users
Quality of service
Scalable and distributed
Low cost hardware
Fault tolerant
*
(Diagram: the Tiger server delivers video streams over a network to many clients.)
21. Tiger architecture
Storage organization
– Striping
– Mirroring
Distributed schedule
Tolerate failure of any single computer or disk
Network support
Other functions
– pause, stop, start
*
22. Tiger video file server hardware configuration
Figure 15.10: a controller and cubs 0 to n are linked by a low-bandwidth network; the controller receives start/stop requests from clients; each cub i holds two disks (disks i and n+i+1); the cubs feed a high-bandwidth ATM switching network for video distribution to clients.
Cubs and controllers are standard PCs.
*
Each movie is stored in 0.5 MB blocks (~7000 per movie) across all disks in the order of the disk
numbers, wrapping around after n+1 blocks.
Block i is mirrored in smaller blocks on disks i+1 to i+d, where d is the decluster factor.
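The placement rule above can be written down directly: if a movie starts on disk s, block i lives on disk (s + i) mod D, and its declustered mirror fragments on the d disks that follow. A minimal sketch; the start-disk parameter and the example numbers are assumptions:

# Striping and mirroring across D disks with decluster factor d (Figure 15.10).
def primary_disk(start_disk, block_no, num_disks):
    return (start_disk + block_no) % num_disks          # blocks wrap around the disk array

def mirror_disks(start_disk, block_no, num_disks, decluster):
    p = primary_disk(start_disk, block_no, num_disks)
    # block i is split into `decluster` smaller fragments stored on disks i+1 .. i+d
    return [(p + k) % num_disks for k in range(1, decluster + 1)]

# Example: primary_disk(0, 6999, 12) -> 3, mirror_disks(0, 6999, 12, 4) -> [4, 5, 6, 7]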
23. Tiger schedule
Figure 15.11: the schedule is a sequence of slots, each either free or holding one viewer's state; a slot represents one block play time T, which is subdivided into block service times t.
Stream capacity of a disk = T/t (typically ~5)
Stream capacity of a cub with n disks = n x T/t
Cub algorithm:
1. Read the next block into buffer storage at the Cub.
2. Packetize the block and deliver it to the Cub’s ATM network controller with the
address of the client computer.
3. Update viewer state in the schedule to show the new next block and play sequence
number and pass the updated slot to the next Cub.
4. Clients buffer the blocks and schedule their display on the screen.
Steps 1–3 are completed within one block service time t.
Viewer state (held in an occupied slot): network address of client; FileID for current movie; number of next block; viewer's next play slot.
Example schedule (Figure 15.11): slot 0 – viewer 4; slot 1 – free; slot 2 – free; slot 3 – viewer 0; slot 4 – viewer 3; slot 5 – viewer 2; slot 6 – free; slot 7 – viewer 1.
*
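A sketch of one schedule slot being serviced, following the cub algorithm above. The ViewerState fields come from Figure 15.11; the cub and slot objects and their methods are illustrative assumptions, not the actual Tiger implementation:

# Hypothetical sketch of a cub servicing one schedule slot within block service time t.
class ViewerState:
    def __init__(self, client_addr, file_id, next_block, play_slot):
        self.client_addr = client_addr   # network address of client
        self.file_id = file_id           # FileID for current movie
        self.next_block = next_block     # number of next block
        self.play_slot = play_slot       # viewer's next play slot

def service_slot(cub, slot, next_cub):
    state = slot.viewer_state
    if state is not None:                                          # skip free slots
        block = cub.read_block(state.file_id, state.next_block)    # 1. read block into buffer
        cub.send_over_atm(state.client_addr, block)                # 2. packetize and deliver
        state.next_block += 1                                      # 3. update viewer state
        state.play_slot += 1
    next_cub.take(slot)                                            #    pass the slot to the next cub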
24. Tiger performance and scalability
1994 measurements:
– 5 cubs: 133 MHz Pentium, Windows NT, 3 x 2 GB disks each, ATM network.
– supported streaming movies to 68 clients simultaneously without lost
frames.
– with one cub down, frame loss rate 0.02%
1997 measurements:
– 14 cubs: 4 disks each, ATM network
– supported streaming 2 Mbps movies to 602 clients simultaneously with a
loss rate of < 0.01%
– with one cub failed, loss rate < 0.04%
The designers suggested that Tiger could be scaled to 1000
cubs supporting 30,000 clients.
25. Summary
MM applications and systems require new system
mechanisms to handle large volumes of time-dependent data
in real time (media streams).
The most important mechanism is QoS management, which
includes resource negotiation, admission control, resource
reservation and resource management.
Negotiation and admission control ensure that resources are
not over-allocated; resource management ensures that
admitted tasks receive the resources they were allocated.
Tiger file server: case study in scalable design of a stream-oriented
service with QoS.