Instructor: Scott Carrey Course Evaluation: www.vs.edu/survey
A high-definition TV is one that offers significantly higher resolution than the traditional prevailing system.
It's essentially a marketing term more than any one specific standard. If a capture or playback device can work at higher quality than what we have been considering standard definition, then it can be deemed hi-def. For our purposes, though, we are going to focus on media. Traditionally HDTV was analog, then digital; in both cases it is composed of data.
Key Moments In HD History
-1936 Britain / 1938 France: broadcasters start transmitting in what they refer to as HDTV. Earlier systems had as few as 30 lines of resolution; these ran 240p (described as "sequential"), 377i, 441i, and, in 1948, 768i (HD by today's standards, but black-and-white only).
-1958: the U.S.S.R. creates "Transformator," the first high-resolution television capable of producing an image composed of 1,125 lines. Aimed at teleconferencing for military command, it ended up a research project, never deployed in the military or in broadcasting.
-1960s: development of what we consider HDTV today begins at Japanese state broadcaster NHK. In 1979 it is marketed to consumers as "Hi-Vision," or MUSE (Multiple sub-Nyquist Sampling Encoding), at 1080i/1,125 lines.
-1981: MUSE is demonstrated in the US; Reagan declares developing HDTV in the USA "a matter of national interest."
-1986: first commercial introduction of HDTV production equipment in the US begins (1990: first broadcasts; 1996+: mainstream adoption).
Key Moments In HD History
-NHK: in 1988, the Olympic Games are shot in HDTV, and Bell Systems ships an HD signal over fiber optics.
Parallel Present Day Moment
(look @ where we came from to see where we are going)
-Beijing Olympics: the first to stream
Key Moments In HD History
-NHK succeeds in showing the world's first Hi-Vision (HDTV) pictures of our planet, taken from the space shuttle Discovery, which went into orbit on October 29, 1998.
Parallel Present Day Moment
(look @ where we came from to see where we are going)
-Skype call from space lab
 We sample the world around us (encoding) and process it as DATA (replicating human functions); or, in the case of documents, texts, CGI, etc., it is generated as DATA directly within a software tool.
 DATA is created and read as binary information.
 This binary info is what makes up any and all media.
 It is the DNA of DATA!
BIT (short for "binary digit") is the smallest unit of measurable data and has two possible states, represented by a 1 or a 0, sometimes referred to as On or Off, High or Low, True or False.
Measuring, Converting &
Communicating in Bits!
1 Byte = 8 bits (Byte is short for "Binary Term")
1 Kilobyte = 1,024 Bytes = 8,192 bits
1 Megabyte = 1,024 KB = 8,388,608 bits
1 Gigabyte = 8,589,934,592 bits
1 Terabyte = 8,796,093,022,208 bits
1 Petabyte = 9,007,199,254,740,992 bits
1 Exabyte = 9,223,372,036,854,775,808 bits
1 Zettabyte = 9,444,732,965,739,290,427,392 bits
1 Yottabyte = 9,671,406,556,917,033,397,649,408 bits
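These values are quick to recompute; here is a minimal Python sketch (the unit names are just loop labels) deriving each figure from 1 byte = 8 bits and a 1,024x step per unit:

```python
# Binary storage units: each step is 1,024x the previous; 1 byte = 8 bits.
UNITS = ["Byte", "Kilobyte", "Megabyte", "Gigabyte", "Terabyte",
         "Petabyte", "Exabyte", "Zettabyte", "Yottabyte"]

for power, name in enumerate(UNITS):
    n_bytes = 1024 ** power          # bytes in one unit
    print(f"1 {name} = {n_bytes * 8:,} bits")
```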
ADDED SINCE Early 2000's (Though Not Standardized)
Brontobyte
1 Brontobyte = 1024 Yottabytes, or 1,237,940,039,285,380,274,899,124,224 Bytes (multiply by 8 for the number of bits)
ADDED SINCE 2011 (Though Not Standardized)
Geopbyte
1 Geopbyte = 1024 Brontobytes, or 1,267,650,600,228,229,401,496,703,205,376 Bytes (multiply by 8 for the number of bits)
Bits that are processed per unit of time (e.g., bits per second, bit/s). In general, the higher the bit rate, the higher the quality (requiring more bandwidth to deliver).
BANDWIDTH
The term Bandwidth, or Throughput, denotes the achieved bit rate that a computer network can deliver over a logical or physical communication link.
Bandwidth must be high enough to meet the data rate in order to carry enough info to sustain the succession of images required by video.
Communication paths usually consist of a series of links, each with its own bandwidth. If one of these is much slower than the rest, it is said to be a Bandwidth Bottleneck.
1,000 bit/s = 1 kbit/s (one kilobit, or one thousand bits, per second)
1,000,000 bit/s = 1 Mbit/s (one megabit, or one million bits, per second)
1,000,000,000 bit/s = 1 Gbit/s (one gigabit, or one billion bits, per second)
(Unlike the binary storage sizes above, data rates conventionally use decimal SI prefixes.)
AUDIO
800 bit/s — minimum necessary for recognizable speech
8 kbit/s — telephone quality (using speech codecs)
32 kbit/s — MW (AM) quality
96 kbit/s — FM quality
128–160 kbit/s — standard bitrate quality
192 kbit/s — DAB (Digital Audio Broadcasting) quality
224–320 kbit/s — near-CD quality
500 kbit/s–1 Mbit/s — lossless audio (FLAC, WMA Lossless, etc.)
1,411 kbit/s — PCM sound format of Compact Disc Digital Audio
VIDEO
16 kbit/s — videophone quality
128–384 kbit/s — business-oriented videoconferencing
1.25 Mbit/s — VCD quality
5 Mbit/s — DVD quality
15 Mbit/s — HDTV quality
36 Mbit/s — HD DVD quality
54 Mbit/s — Blu-ray Disc quality
140–230 MB/s — uncompressed HD @ 8-bit/24p/1080 – 10-bit/29.97
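The uncompressed figure on the last line can be sanity-checked with one multiplication; the sketch below is a minimal Python example, assuming full-raster 1920x1080 RGB 4:4:4 with no alpha, and it lands in the quoted range:

```python
# Uncompressed data rate = width x height x bytes-per-pixel x frames-per-second.
def mb_per_second(width, height, bits_per_channel, fps):
    bytes_per_pixel = 3 * bits_per_channel / 8      # RGB 4:4:4, no alpha
    return width * height * bytes_per_pixel * fps / 1e6

print(round(mb_per_second(1920, 1080, 8, 24)))      # ~149 MB/s (the "140" end)
print(round(mb_per_second(1920, 1080, 10, 29.97)))  # ~233 MB/s (the "230" end)
```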
USB 2.0 — 480 Mbit/s (60 MB/s)
USB 3.0 — 5 Gbit/s (625 MB/s)
FireWire 400 — 400 Mbit/s (50 MB/s)
FireWire 800 — 800 Mbit/s (100 MB/s)
FireWire S1600 (1.6 Gbit/s), S3200 (3.2 Gbit/s) & IEEE P1394d (6.4 Gbit/s)
SATA/eSATA — 325 MB/s (1.5–3 Gbit/s); eSATA/SATA 3 = 6 Gbit/s
THUNDERBOLT — 10 Gbit/s, bi-directional (up/down)
"Transfer a full-length HD movie in less than 30 seconds".
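That marketing claim is plausible arithmetically; here is a minimal Python sketch comparing ideal transfer times over the interfaces above (the 20 GB movie size is an assumption of mine, and real links carry protocol overhead):

```python
# Ideal transfer time = size_in_bits / link_rate_in_bits_per_second.
interfaces_gbit_s = {"USB 2.0": 0.48, "FireWire 800": 0.8,
                     "USB 3.0": 5.0, "Thunderbolt": 10.0}
movie_bits = 20e9 * 8  # hypothetical 20 GB HD movie

for name, rate in interfaces_gbit_s.items():
    print(f"{name}: {movie_bits / (rate * 1e9):.0f} s")  # Thunderbolt: ~16 s
```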
 HD Component
 Serial Digital Interface & HD-SDI
 Dual Link HD-SDI (for uncomp. RGB/4:4:4/10bit & DCP)
 HDMI
Break Time
SMPTE 259M STANDARD
Describes a 10-bit serial digital interface operating at 143/270/360 Mbit/s.
KEY SMPTE HD STANDARDS
SMPTE 274M
SMPTE 292M
SMPTE 296M
SMPTE 370M
SMPTE 372M
ATSC DIGITAL FORMATS
The HD choices began when the ATSC created the digital television table of 36 digital broadcast (DTV) formats. Of those 36 formats, 12 are high definition. These are the formats that the United States government has determined will be the standard for digital broadcasting.
COMPONENTS OF HD/DATA FILES
• Frame size
• Frame rate
• Frame recording method
• Color Space/Encoding Method
• Bit depth
• Compression (Data Reduction)
• MetaData
Frame Size Considerations
 Frame Size = Screen/Display Resolution
 Screen Aspect Ratio
 Pixel Aspect (Square 1 vs Non-Square .9)
 Screen Dimension
 Screen Real-Estate
ASPECT RATIO
The ASPECT RATIO of an IMAGE is its WIDTH divided by its HEIGHT.
Written X:Y (pronounced "x-to-y") or XxY (pronounced "x-by-y").
4x3 = 4:3 = 1.33:1
16x9 = 16:9 = 1.78:1
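Moving between pixel dimensions and ratio notation is a greatest-common-divisor exercise; a minimal Python sketch:

```python
from math import gcd

def aspect_ratio(width, height):
    """Return the reduced X:Y ratio and its decimal form for a frame size."""
    d = gcd(width, height)
    return f"{width // d}:{height // d}", round(width / height, 2)

print(aspect_ratio(640, 480))    # ('4:3', 1.33)
print(aspect_ratio(1920, 1080))  # ('16:9', 1.78)
```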
IMAGE ASPECT RATIO
The ASPECT RATIO of an IMAGE is
its WIDTH divided by its HEIGHT.
PIXEL ASPECT RATIO
Pixel Aspect Ratio (PAR) is the ratio of width to height of ONE PIXEL in an image.
What is a Pixel?
In digital video images, a pixel, or pel (both short for "picture element"), is a single point in a raster image; the smallest addressable screen element in a display device; the smallest unit of a picture that can be represented by a single color; and generally the smallest unit we can control in an image (however, there are high-end systems that allow for subpixel-based manipulation, which take averages of neighboring pixels for micro-precision).
Pixels are arranged in a two-dimensional grid, and are often represented using dots or squares. Each pixel is a sample of an original image; more samples typically provide a more accurate representation of the original. Groups of pixels together form the images we see: the shape, smoothness, size & color tones. PIXEL ASPECT RATIO affects the shape of our image.
PIXEL ASPECT RATIO
Square vs Non-Square (Rectangular) Pixels
Pixel Aspect Ratio (PAR) is the ratio of width to height of ONE PIXEL in an image.
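PAR is what reconciles a stored frame size with the shape you see on screen; a minimal Python sketch using frame sizes that appear later in this deck (NTSC DV and HDV):

```python
# Display aspect ratio = (stored width x PAR) / height.
def display_aspect(width, height, par):
    return round(width * par / height, 2)

print(display_aspect(720, 480, 0.9))      # NTSC DV, non-square pixels -> 1.35 (~4:3)
print(display_aspect(1440, 1080, 4 / 3))  # HDV anamorphic -> 1.78 (16:9)
```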
More about Pixels
The intensity of each pixel is variable. In color image systems, a color is typically represented by three or four component intensities such as red, green, and blue, or cyan, magenta, yellow, and black.
Bits per pixel
The number of distinct colors that can be represented by a pixel depends on the number of bits per pixel (bpp). A 1 bpp image uses 1 bit for each pixel, so each pixel can be either on or off. Each additional bit doubles the number of colors available, so a 2 bpp image can have 4 colors, and a 3 bpp image can have 8 colors:
1 bpp: 2^1 = 2 colors (monochrome)
2 bpp: 2^2 = 4 colors
3 bpp: 2^3 = 8 colors
...
8 bpp: 2^8 = 256 colors
16 bpp: 2^16 = 65,536 colors ("Highcolor")
24 bpp: 2^24 ≈ 16.8 million colors ("Truecolor")
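The doubling rule is just powers of two; a minimal Python check:

```python
# Each extra bit per pixel doubles the palette: 2**bpp distinct colors.
for bpp in (1, 2, 3, 8, 16, 24):
    print(f"{bpp} bpp -> {2 ** bpp:,} colors")
```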
ASPECT RATIO HISTORY
• Edison, Eastman, Dickson + scissors = 35mm/1.33 (officially adopted as a standard in 1917)
• 1st sound stripe on film (Movietone) = 35mm/1.16
• Academy Aperture – 35mm/1.37 (the 1931–1952 standard)
• First projected widescreen – 35mm/1.66 (1953 Paramount release of "Shane"; this paralleled the release of color TV broadcast)
• MGM & Disney introduce 1.75, followed by Universal & Columbia Pictures' use of what became the theatrical standard 1.85, using "Soft Mattes" (exposing full Academy aperture, protected for 1.85) & "Hard Mattes" (exposing just 1.85)
COMMON ASPECT RATIOS
• 4:3 (1.33/1.37:1)
• 15:9 (1.66:1) – a compromise between the 1.85:1 theatrical ratio and the 1.33:1 ratio used for home video. Originally a flat ratio invented by Paramount Pictures, now a standard among several European countries; the native Super 16 mm frame ratio. Sometimes rounded up to 1.67:1; this format is also used on the Nintendo 3DS's top screen.
• 16:9 (1.77/1.78:1)
• 16:10 (1.60:1) – Apple Cinema Displays. (As of 2010, TVs have been introduced with a 2.37 aspect ratio marketed as "21:9 cinema displays". This aspect ratio is not recognized by storage and transmission standards.)
• 1.85:1 (16.7:9) – standard US cinema widescreen
• 2.35/2.39/2.40:1 – anamorphic
• 1.44:1 – IMAX (70mm film runs through the camera & projector horizontally, allowing for a larger image area)
ASPECT RATIO
Problems Arising From Multiple Aspect Ratios
Original aspect ratio (OAR) vs. Modified aspect ratio (MAR)
FRAME SIZE (Resolution)
The Number of Pixels a Display System Can Display
640x480 1920x1080
SD FRAME SIZES
720 x 480 NTSC DV
720 x 576 PAL
HD FRAME SIZES
1280 x 720
1,280 pixels per line by 720 vertical lines of resolution
1920 x 1080
1,920 pixels per line by 1,080 vertical lines of resolution
FULL RASTER vs. SQUEEZED
960 x 720p – DVCPro HD
1280 x 1080i – DVCPro HD
1440 x 1080i – Sony HDV
960 x 720p – Old JVC HDV
SCREEN DIMENSION & ASPECT RATIO CALCULATOR
http://www.silisoftware.com/tools/screen.php
http://andrew.hedges.name/experiments/aspect_ratio
Some popular dimensions:
Standard Def = 4:3 (width:height) = 320x240, 640x480,
800x600, 1024x768
Widescreen or HD = 16:9 (width:height) = 640x360,
800x450, 960x540, 1024x576, 1280x720, and 1920x1080
“The Available Display Area
And
How Content is Arranged within it”
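The linked calculators solve a single proportion; here is a minimal Python version of the same idea (the helper name is my own):

```python
# Solve height from width for a target aspect ratio (what the linked tools do).
def height_for(width, ratio_w=16, ratio_h=9):
    return width * ratio_h // ratio_w

for w in (640, 800, 960, 1280, 1920):
    print(f"{w} x {height_for(w)}")  # 640x360, 800x450, 960x540, 1280x720, 1920x1080
```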
Data Calculators
AJA DataCalc Software Tool (Desktop & iPhone)
 http://www.aja.com/en/products/software/
MPC Digital Data Rate/Storage Calculator (Online & iPhone)
 http://www.moving-picture.com/index.php?option=com_content&view=article&id=614&catid=13&Itemid=853
File Size/Data Rate Calculator (On-line Tool)
 http://www.hdslr-cinema.com/tools/filesize.php?w
Digital Glossaries
 http://www.quantel.com/repository/files/library_DigitalFactBook_20th.pdf
 http://www.learn.usa.canon.com/dlc/glossary/listing.spr
Kodak Site
 http://motion.kodak.com/motion/
Digital Cinema Society (DCS) Tech Tips
 http://www.digitalcinemasociety.org/TechTips.php
Break Time
STANDARD FRAME RATES
Interlaced
60i (actually 59.94, or 60 x 1000/1001 to be more precise; 60 interlaced fields = 29.97 frames)
50i (50 interlaced fields = 25 frames)
Progressive
24p (usually 23.976-frame progressive; also referred to as 23.98)
25p (25-frame progressive)
30p (usually 29.97-frame progressive)
60p (usually 59.94-frame progressive)
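All of those "usually" rates come from the same NTSC-era 1000/1001 slowdown; a minimal Python check:

```python
# NTSC-compatible rates are the integer rates slowed by a factor of 1000/1001.
for nominal in (24, 30, 60):
    actual = nominal * 1000 / 1001
    print(f"{nominal}p -> {actual:.3f} fps")  # 23.976, 29.970, 59.940
```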
ATSC FRAME RATES
36 formats; two for each entry once one counts both the NTSC-compatible (1000/1001) frame rates and the integer frame rates.
ATSC HD FRAME RATES
12 high-definition formats designed to integrate with NTSC
UNIVERSAL FORMAT
US BROADCASTER HDTV
More on Reducing the Amount of Data Using Frame Rate
KEY CONCEPTS FOR WORKING WITH HD (DATA) INFORMATION
R G B
The video image is sampled at a sample rate and per pixel. Each pixel has a color-channel value for each of R, G & B (white = R100, G100, B100 / black = R0, G0, B0).
These sampled pixels represent the original image, and when muxed (combined) together from each color channel they form the SUM OF THE FINAL IMAGE (color value + intensity: brightness, contrast, gamma).
A 4th channel, or "Alpha," is often present as well, and controls transparency for each pixel. The pass-through or hold-out aspects of this channel produce the matte that determines what is displayed from the other 3 channels.**
Y Pb Pr
Y = LUMA, Pb = BLUE minus LUMA (B−Y), Pr = RED minus LUMA (R−Y)
(or Analog Component)
Y Cb Cr
Y = LUMA, Cb = BLUE minus LUMA (B−Y), Cr = RED minus LUMA (R−Y)
(or Digital Component)
ENCODING
Via Color Differencing
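Here is a minimal Python sketch of color differencing, using the published Rec. 709 luma weights on normalized 0–1 values (real pipelines add offsets, scaling to legal ranges, and quantization):

```python
# Color differencing: keep luma Y, plus the scaled B-Y and R-Y differences.
KR, KB = 0.2126, 0.0722            # Rec. 709 luma weights (KG = 1 - KR - KB)

def rgb_to_ypbpr(r, g, b):
    y = KR * r + (1 - KR - KB) * g + KB * b
    pb = 0.5 * (b - y) / (1 - KB)  # blue-difference, scaled to [-0.5, 0.5]
    pr = 0.5 * (r - y) / (1 - KR)  # red-difference, scaled to [-0.5, 0.5]
    return y, pb, pr

print(rgb_to_ypbpr(1.0, 1.0, 1.0))  # white -> (1.0, 0.0, 0.0): no color difference
```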
Differences in reacting to light and color
Monitors are Linear & Film is Logarithmic
On a monitor, there is a one-to-one correspondence between energy (think
exposure) and brightness. Each time you increase the signal to the monitor
by 1 volt, you get exactly the same incremental increase in brightness.
On film, however, the increase in brightness (emulsion density) is a result
of the logarithm of the increase in
exposure.
[Figure: the same image in three viewers. Original image: accurate tonality. Linear image in a log viewer: lows suppressed and highs accentuated. Log image in a linear viewer: highs flattened and lows boosted.]
YIQ & YUV
(or Rec. 601)
Video Profiles – Rec. 601 & Rec. 709
ITU Recommendations
RAW Profiles – Log-C & Bayer-pattern Images
Native RAW Image
With LUT Applied
ARRI Alexa & RED One & Epic
(S-Log) S-Gamut & A.C.E.S.
X:X:X
CHROMA SUBSAMPLING
The practice of encoding images by implementing less resolution for chroma information than for luma information.
4:4:4 – NO SUBSAMPLING (R'G'B' or Y:Cb:Cr)
4:2:2 – The first digit is the luma horizontal sampling reference (originally, the luma sampling frequency f_s as a multiple of 3.375 MHz). Chroma is decreased by 50%; bandwidth is decreased by 1/3.
4:2:2:4 – A fourth digit, if present, is the same as the luma digit and indicates an alpha (key) component.
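The "1/3 bandwidth" arithmetic is easy to verify; a minimal Python sketch (assuming an 8-bit 1080-line frame) compares common schemes, including 4:2:0 for reference:

```python
# Average samples per pixel (8-bit): luma is full-res; chroma is shared between pixels.
SAMPLES = {"4:4:4": 3.0, "4:2:2": 2.0, "4:2:0": 1.5}

for scheme, per_pixel in SAMPLES.items():
    mb = 1920 * 1080 * per_pixel / 1e6
    print(f"{scheme}: {mb:.1f} MB per frame")   # 6.2, 4.1 (saves 1/3), 3.1
```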
More on Reducing the amount of Data
using CHROMA SUBSAMPLING
4:4:4 10-Bit
4:2:2 8-Bit
4:2:2 10-Bit
R G B
8 8 8
8 bits per channel
(Sometimes referred to as 24-bit for the 3-color composite)
Allows 256 levels to represent each color channel
16,777,216 colors possible
R G B
10 10 10
10 bits per channel
(Sometimes referred to as 30-bit for the 3-color composite)
Allows 1,024 levels to represent each channel
1,073,741,824 colors possible
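The totals follow directly: the levels per channel, cubed. A minimal Python check:

```python
# Total colors = (2 ** bits_per_channel) ** 3 for three channels.
for bits in (8, 10):
    levels = 2 ** bits
    print(f"{bits}-bit: {levels} levels/channel, {levels ** 3:,} colors")
```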
Effect of Bit-Depth (bpp = number of tones)
Why is Bit-Depth Important?
[Figure: high bit depth vs. low bit depth]
Bit-Depth & Displays
Michael Cioni
www.lightirondigital.com
THE DOWN & DIRTY GUIDE TO COMPRESSION
Codec
A combination of the words compression and decompression. A codec is a mathematical algorithm designed to reduce the amount of data in a file or stream by eliminating redundancy, and then later restore that file or stream back to its original form as closely as possible.
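To make "eliminating redundancy" concrete, here is a toy run-length coder in Python; it illustrates the idea only and is not how any real video codec works:

```python
# Toy run-length coding: collapse repeats into (bit, count) pairs, then expand back.
def rle_encode(bits):
    runs, prev, count = [], bits[0], 0
    for b in bits:
        if b == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = b, 1
    runs.append((prev, count))
    return runs

def rle_decode(runs):
    return "".join(b * n for b, n in runs)

data = "0000001111100000"
coded = rle_encode(data)
print(coded)                        # [('0', 6), ('1', 5), ('0', 5)]
assert rle_decode(coded) == data    # lossless round trip
```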
Codecs vs. Containers/Wrappers
Acquisition codecs, editing codecs, distribution codecs:
H.264 & MPEG AVCHD, MPEG-4 –IN– QT/.mov, .mp4, .m4p, FLV, F4V, 3GP
Uncompressed, DV, MPEG IMX & AVCHD, ProRes –IN– MXF, QT/.mov, AVI
MPEG-2, H.264, AAC –IN– QT/.mov, VOB, .mpeg2 & BDAV (Blu-ray), MP4, MP3
VP8, ACM, Vorbis, VC-1 –IN– WebM, WMV/WMA
QT & AVI containers each support over 160 different codecs.
LOSSY vs. LOSSLESS
LOSSY – A form of data compression where the decoded information IS NOT exactly the same as the originally encoded file. Data is lost.
LOSSLESS – A form of data compression where the decoded information IS exactly the same as the originally encoded file. No data is lost.
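A lossless round trip is testable in a few lines with a general-purpose compressor; a minimal Python sketch using the standard-library zlib (a data compressor, not a media codec):

```python
import zlib

original = b"the same frame " * 100                 # highly redundant input
packed = zlib.compress(original)

print(len(original), "->", len(packed), "bytes")    # e.g. 1500 -> ~40 bytes
assert zlib.decompress(packed) == original          # lossless: exact recovery
```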
LOSSY IMAGE FORMATS
Cartesian Perceptual Compression: Also known as CPC
DivX
Fractal compression
HAM, hardware compression of color information used in Amiga computers
ICER, used by the Mars Rovers: related to JPEG 2000 in its use of wavelets
JPEG
JPEG 2000, JPEG's successor format that uses wavelets, for Lossy or Lossless
compression.
JBIG2
PGF, Progressive Graphics File (lossless or lossy compression)
Wavelet compression
S3TC texture compression for 3D computer graphics hardware
LOSSY VIDEO FORMATS
H.261
H.263
H.264
MNG (supports JPEG sprites)
Motion JPEG
MPEG-1 Part 2
MPEG-2 Part 2
MPEG-4 Part 2 and Part 10 (AVC)
Ogg Theora (noted for its lack of patent restrictions)
Sorenson video codec
VC-1
LOSSY AUDIO FORMATS
AAC
ADPCM
ATRAC
Dolby AC-3
MP2
MP3
Musepack
Ogg Vorbis (noted for its lack of patent restrictions)
WMA
LOSSLESS IMAGE FORMATS
ABO – Adaptive Binary Optimization
GIF – (lossless, but limited to a very small color range)
JBIG2 – (lossless or lossy compression of B&W images)
JPEG-LS – (lossless/near-lossless compression standard)
JPEG 2000 – (includes lossless compression method, as proven by Sunil Kumar, Prof
San Diego State University)
JPEG XR - formerly WMPhoto and HD Photo, includes a lossless compression method
PGF – Progressive Graphics File (lossless or lossy compression)
PNG – Portable Network Graphics
TIFF - Tagged Image File Format
LOSSLESS VIDEO FORMATS
Animation codec
CorePNG
FFV1
JPEG 2000
Huffyuv
Lagarith
MSU Lossless Video Codec
SheerVideo
LOSSLESS AUDIO FORMATS
Apple Lossless – ALAC (Apple Lossless Audio Codec)
ATRAC Advanced Lossless
Audio Lossless Coding – also known as MPEG-4 ALS
MPEG-4 SLS – also known as HD-AAC
Direct Stream Transfer – DST
Dolby TrueHD
DTS-HD Master Audio
Free Lossless Audio Codec – FLAC
Meridian Lossless Packing – MLP
Monkey's Audio – Monkey's Audio APE
OptimFROG
RealPlayer – RealAudio Lossless
Shorten – SHN
TTA – True Audio Lossless
WavPack – WavPack lossless
WMA Lossless – Windows Media Lossless
COMMON VIDEO COMPRESSION FORMATS
• AVC – VCEG & MPEG
• AVR – Avid Technology
• DVC – SMPTE
• DNxHD – Avid Technology
• H.264 – VCEG & MPEG
• JFIF – JPEG/Avid Technology
• JPEG – JPEG
• JPEG 2000 – JPEG
• ProRes – Apple
• M-JPEG – JPEG
• M-JPEG 2000 – JPEG
• MPEG – MPEG
• MPEG-2 – MPEG
• MPEG-4 – VCEG & MPEG
• VC-1 – SMPTE
• WMV 9 – Microsoft Corp.
Format | File Size | Video Quality | Video bitrate (kbit/s)
H.264 MP4 | 8.92 MB | better | 768
AVI | 15.00 MB | inferior | 1200
QuickTime MOV | 15.20 MB | inferior | 768
MKV | 15.50 MB | inferior | 1200
WMV | 18.40 MB | inferior | 1200
WebM | 30.20 MB | best | 1200
INTRAFRAME
Intraframe compression refers to video where each frame is compressed independently of nearby frames.
1. Used in formats like DV, DNxHD, ProRes, Animation, and M-JPEG.
2. Can be lossy or lossless. Most common in editing and graphics work. Can create very large files and is not always ideal for real-time playback.
INTERFRAME
Interframe compression refers to video where some frames are compressed based on frames either before or after them in the video stream.
1. Used in formats like HDV, MPEG-2, MPEG-4, H.264, and XDCAM.
2. Almost always lossy. Most commonly used as camera formats or delivery formats. Capable of much smaller files, but difficult to edit with.
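Here is a minimal Python sketch of the interframe idea: store only the change from the previous frame (real codecs use motion-compensated prediction and quantization, not raw pixel differences):

```python
# Interframe sketch: frame 2 is stored as its difference from frame 1.
frame1 = [10, 10, 10, 200, 200, 10]           # pixel values of a reference frame
frame2 = [10, 10, 10, 201, 200, 10]           # nearly identical next frame

delta = [b - a for a, b in zip(frame1, frame2)]
print(delta)                                   # [0, 0, 0, 1, 0, 0] -> mostly zeros

reconstructed = [a + d for a, d in zip(frame1, delta)]
assert reconstructed == frame2                 # decoder rebuilds frame 2 exactly
```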
More about
Inter & Intra-Frame Compression
Other Compression Schemes
DCT
A discrete cosine transform (DCT) expresses a sequence of finitely many data points in terms of a sum of cosine functions oscillating at different frequencies.
1. DCT is used in nearly all common video formats like JPEG, MPEG, DV, DNxHD, ProRes, etc.
2. Can be lossy or lossless. Highly compressed images will often have artifacts along edges, lose color fidelity, and/or become blocky and pixelated.
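For the curious, here is a naive 1-D DCT-II in plain Python; JPEG-family codecs apply the 2-D version to 8x8 blocks:

```python
from math import cos, pi

def dct(signal):
    """Naive 1-D DCT-II: one coefficient per cosine frequency."""
    n = len(signal)
    return [sum(x * cos(pi * k * (2 * i + 1) / (2 * n))
                for i, x in enumerate(signal))
            for k in range(n)]

# A smooth 8-sample signal concentrates its energy in low frequencies,
# so the high-frequency coefficients can be coarsely quantized or dropped.
coeffs = dct([40, 42, 44, 46, 48, 50, 52, 54])
print([round(c, 1) for c in coeffs])
```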
WAVELET
A technique for video compression that treats the image like a series of waves, known as wavelets, starting with large waves and progressively getting smaller based on the level of compression desired.
1. Wavelet is a newer technology used in compressions like JPEG 2000 and CineForm.
2. Can be lossy or lossless. Highly compressed images will rarely create artifacts, but can become soft/blurry.
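A minimal sketch of one wavelet level, using the simple Haar transform in Python (an illustration of the averages-plus-details idea, not the specific wavelet JPEG 2000 uses):

```python
# One Haar wavelet level: pairwise averages (coarse) and differences (detail).
def haar_step(signal):
    averages = [(a + b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    details  = [(a - b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    return averages, details

avg, det = haar_step([40, 42, 44, 46, 48, 50, 52, 54])
print(avg)  # [41.0, 45.0, 49.0, 53.0] -> half-resolution version of the signal
print(det)  # [-1.0, -1.0, -1.0, -1.0] -> small details, cheap to quantize away
```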
MPEG BASICS
In MPEG encoding, a group of pictures, or GOP, specifies the order in which intra-frames and inter-frames are arranged. The GOP is a group of successive pictures within an MPEG-coded video stream; each MPEG-coded video stream consists of successive GOPs. The visible frames are generated from the MPEG pictures contained in it.
THE 3 PRIMARY FRAME COMPRESSIONS
• I-Frames (I-Picture, Intra Frames)
• P-Frames (Predicted Frames)
• B-Frames (Bi-Directional Frames)
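A minimal Python sketch of how the three frame types reference each other in one example GOP pattern (the IBBPBB... layout is illustrative; encoders choose GOP structure per stream):

```python
# Each frame's references in an example 9-frame GOP, displayed as IBBPBBPBB.
gop = "IBBPBBPBB"
refs = {"I": "none (self-contained)",
        "P": "previous I or P frame",
        "B": "nearest I/P frames before AND after"}

for pos, kind in enumerate(gop):
    print(f"frame {pos}: {kind}-frame, predicted from {refs[kind]}")
```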
MPEG-2 BIT RATE DETAILS
4 Mbit/s – Low Level encoding
5 Mbit/s – DVD
15 Mbit/s – Main Level
60 Mbit/s – High-1440 Level
80 Mbit/s – High Level
ATSC broadcast standards – 19.4 Mbit/s for low HD and 38 Mbit/s for high end.
KEY MASTERING COMPRESSIONS
• DNxHD – Avid Technology
• ProRes 422, ProRes 444 – Apple
• Prospect HD/4K, Neo HD/4K/3D – CineForm
NEW TECHNOLOGY
Thunderbolt
Solid State Drives
Cloud Encoding
Connected TVs
TeraDisc
All of us are acquiring and creating more and more high-density, high-resolution content. Collect, store and find your valuable personal and commercial content using a single 1 TB TeraDisc: 250 hours of HDTV or 300,000 digital photos.
Empowering the Enterprise
The healthcare, public, entertainment, security, financial and business sectors can inexpensively archive vast amounts of data at the desktop, fully meeting compliance regulations with bit-by-bit WORM recording. It readily integrates into today's archiving solutions, with a longevity of greater than 50 years.
1 Trillion Bytes on a Single Disc
Enables the reading and writing of 200 layers of data on a single DVD-size disc. Uses advanced polymer material technology engineered to create an optical medium with unique light-sensitive properties. Inexpensive drives able to reach consumer form factor and pricing.
Mempile's game-changing 2-photon technology revolutionizes consumer and enterprise archiving: the removable TeraDisc offers high capacity, low cost, permanence and ease of use.
NEW TECHNOLOGY
50-terabyte flash drive made of bug protein
This idea first started out by coating DVDs with a layer of protein, so that one day solid-state memory could hold so much information that storing data on your computer's hard drive would be obsolete.
NEW TECHNOLOGY
3D Stereoscopic
Webisodes & Mobisodes
Interactive TV
Gaming
Social Media & TV
Emerging Media
199 HD/ DATA Essentials
Scott Carrey
Course Evaluation: www.vs.edu/survey
scott@scarrey.com

Mais conteúdo relacionado

Mais procurados

NEXT GENERATION BROADCASTING TECHNOLOGY
NEXT GENERATION BROADCASTING TECHNOLOGYNEXT GENERATION BROADCASTING TECHNOLOGY
NEXT GENERATION BROADCASTING TECHNOLOGYVinayagam Mariappan
 
VIDEO QUALITY ENHANCEMENT IN BROADCAST CHAIN, OPPORTUNITIES & CHALLENGES
VIDEO QUALITY ENHANCEMENT IN BROADCAST CHAIN,   OPPORTUNITIES & CHALLENGESVIDEO QUALITY ENHANCEMENT IN BROADCAST CHAIN,   OPPORTUNITIES & CHALLENGES
VIDEO QUALITY ENHANCEMENT IN BROADCAST CHAIN, OPPORTUNITIES & CHALLENGESDr. Mohieddin Moradi
 
Analog TV Systems/Digital TV Systems/3DTV
Analog TV Systems/Digital TV Systems/3DTVAnalog TV Systems/Digital TV Systems/3DTV
Analog TV Systems/Digital TV Systems/3DTVSumudu Wasantha
 
DVB-T2 Lite vs. DAB+ for Digital Radio (English version)
DVB-T2 Lite vs. DAB+ for Digital Radio (English version)DVB-T2 Lite vs. DAB+ for Digital Radio (English version)
DVB-T2 Lite vs. DAB+ for Digital Radio (English version)Open Channel ApS | U-Media ApS
 
Serial Digital Interface (SDI), From SD-SDI to 24G-SDI, Part 2
Serial Digital Interface (SDI), From SD-SDI to 24G-SDI, Part 2Serial Digital Interface (SDI), From SD-SDI to 24G-SDI, Part 2
Serial Digital Interface (SDI), From SD-SDI to 24G-SDI, Part 2Dr. Mohieddin Moradi
 
Modern broadcast camera techniques, set up & operation
Modern broadcast camera techniques, set up & operationModern broadcast camera techniques, set up & operation
Modern broadcast camera techniques, set up & operationDr. Mohieddin Moradi
 
TV Broadcasting R&D at CRC-Canada
TV Broadcasting R&D at CRC-CanadaTV Broadcasting R&D at CRC-Canada
TV Broadcasting R&D at CRC-CanadaBernard Caron
 
DVB-T2 Lite for Digital Radio by Kenneth Wenzel
DVB-T2 Lite for Digital Radio by Kenneth WenzelDVB-T2 Lite for Digital Radio by Kenneth Wenzel
DVB-T2 Lite for Digital Radio by Kenneth WenzelYOZZO
 
Guide for Preparing FM Radio Transmitter Coverage Maps
Guide for Preparing FM Radio Transmitter Coverage MapsGuide for Preparing FM Radio Transmitter Coverage Maps
Guide for Preparing FM Radio Transmitter Coverage MapsFrank Massa
 
"4K/ UHD Advanced"
"4K/ UHD Advanced""4K/ UHD Advanced"
"4K/ UHD Advanced"Mesclado
 
High Dynamic Range: An Introduction
High Dynamic Range: An IntroductionHigh Dynamic Range: An Introduction
High Dynamic Range: An IntroductionThuong Nguyen Canh
 
Ultra High Defination TV
Ultra High Defination TVUltra High Defination TV
Ultra High Defination TVRohit Choudhury
 

Mais procurados (20)

DVB-T2 Lite | First Deployments, First Experiences.
DVB-T2 Lite | First Deployments, First Experiences.DVB-T2 Lite | First Deployments, First Experiences.
DVB-T2 Lite | First Deployments, First Experiences.
 
NEXT GENERATION BROADCASTING TECHNOLOGY
NEXT GENERATION BROADCASTING TECHNOLOGYNEXT GENERATION BROADCASTING TECHNOLOGY
NEXT GENERATION BROADCASTING TECHNOLOGY
 
Presentacion instituto 4 nov_english
Presentacion instituto 4 nov_englishPresentacion instituto 4 nov_english
Presentacion instituto 4 nov_english
 
Video audio
Video audioVideo audio
Video audio
 
VIDEO QUALITY ENHANCEMENT IN BROADCAST CHAIN, OPPORTUNITIES & CHALLENGES
VIDEO QUALITY ENHANCEMENT IN BROADCAST CHAIN,   OPPORTUNITIES & CHALLENGESVIDEO QUALITY ENHANCEMENT IN BROADCAST CHAIN,   OPPORTUNITIES & CHALLENGES
VIDEO QUALITY ENHANCEMENT IN BROADCAST CHAIN, OPPORTUNITIES & CHALLENGES
 
HDTV pro
HDTV proHDTV pro
HDTV pro
 
Analog TV Systems/Digital TV Systems/3DTV
Analog TV Systems/Digital TV Systems/3DTVAnalog TV Systems/Digital TV Systems/3DTV
Analog TV Systems/Digital TV Systems/3DTV
 
DVB-T2 Lite vs. DAB+ for Digital Radio (English version)
DVB-T2 Lite vs. DAB+ for Digital Radio (English version)DVB-T2 Lite vs. DAB+ for Digital Radio (English version)
DVB-T2 Lite vs. DAB+ for Digital Radio (English version)
 
Serial Digital Interface (SDI), From SD-SDI to 24G-SDI, Part 2
Serial Digital Interface (SDI), From SD-SDI to 24G-SDI, Part 2Serial Digital Interface (SDI), From SD-SDI to 24G-SDI, Part 2
Serial Digital Interface (SDI), From SD-SDI to 24G-SDI, Part 2
 
Modern broadcast camera techniques, set up & operation
Modern broadcast camera techniques, set up & operationModern broadcast camera techniques, set up & operation
Modern broadcast camera techniques, set up & operation
 
TV Broadcasting R&D at CRC-Canada
TV Broadcasting R&D at CRC-CanadaTV Broadcasting R&D at CRC-Canada
TV Broadcasting R&D at CRC-Canada
 
DVB-T/-T2 Devices | Original Network ID & LCN
DVB-T/-T2 Devices | Original Network ID & LCNDVB-T/-T2 Devices | Original Network ID & LCN
DVB-T/-T2 Devices | Original Network ID & LCN
 
DVB-T2 Lite for Digital Radio by Kenneth Wenzel
DVB-T2 Lite for Digital Radio by Kenneth WenzelDVB-T2 Lite for Digital Radio by Kenneth Wenzel
DVB-T2 Lite for Digital Radio by Kenneth Wenzel
 
Guide for Preparing FM Radio Transmitter Coverage Maps
Guide for Preparing FM Radio Transmitter Coverage MapsGuide for Preparing FM Radio Transmitter Coverage Maps
Guide for Preparing FM Radio Transmitter Coverage Maps
 
4K Display Technology
4K Display Technology4K Display Technology
4K Display Technology
 
Satellite dvb
Satellite dvbSatellite dvb
Satellite dvb
 
"4K/ UHD Advanced"
"4K/ UHD Advanced""4K/ UHD Advanced"
"4K/ UHD Advanced"
 
High Dynamic Range: An Introduction
High Dynamic Range: An IntroductionHigh Dynamic Range: An Introduction
High Dynamic Range: An Introduction
 
Hd tv
Hd tvHd tv
Hd tv
 
Ultra High Defination TV
Ultra High Defination TVUltra High Defination TV
Ultra High Defination TV
 

Destaque

TVU Tech Episode 1 extra information
TVU Tech Episode 1 extra informationTVU Tech Episode 1 extra information
TVU Tech Episode 1 extra informationtvutech
 
02.m3 cms sys-req4mediastreaming
02.m3 cms sys-req4mediastreaming02.m3 cms sys-req4mediastreaming
02.m3 cms sys-req4mediastreamingtarensi
 
Avlm 2009 Compression Erik Luyten
Avlm 2009  Compression   Erik LuytenAvlm 2009  Compression   Erik Luyten
Avlm 2009 Compression Erik Luytenavlm2009avnet
 
Optimisation and Compression Intro
Optimisation and Compression IntroOptimisation and Compression Intro
Optimisation and Compression IntroJames Uren
 

Destaque (6)

MPEG2whitepaper
MPEG2whitepaperMPEG2whitepaper
MPEG2whitepaper
 
TVU Tech Episode 1 extra information
TVU Tech Episode 1 extra informationTVU Tech Episode 1 extra information
TVU Tech Episode 1 extra information
 
02.m3 cms sys-req4mediastreaming
02.m3 cms sys-req4mediastreaming02.m3 cms sys-req4mediastreaming
02.m3 cms sys-req4mediastreaming
 
Chap62
Chap62Chap62
Chap62
 
Avlm 2009 Compression Erik Luyten
Avlm 2009  Compression   Erik LuytenAvlm 2009  Compression   Erik Luyten
Avlm 2009 Compression Erik Luyten
 
Optimisation and Compression Intro
Optimisation and Compression IntroOptimisation and Compression Intro
Optimisation and Compression Intro
 

Semelhante a Vs199 hd/data essentials sc master rev3_10_2013_compressed_4_slideshare

Elisha Attia Logtel from ANALOG 2 HDTV
Elisha Attia Logtel from ANALOG 2 HDTVElisha Attia Logtel from ANALOG 2 HDTV
Elisha Attia Logtel from ANALOG 2 HDTVElysée (Elisha) Attia
 
DTV Technical Overview
DTV Technical OverviewDTV Technical Overview
DTV Technical OverviewAmos Tsai
 
A seminar presentation on HDTV, 3DTV
A seminar presentation on HDTV, 3DTVA seminar presentation on HDTV, 3DTV
A seminar presentation on HDTV, 3DTVAbhinav Vatsya
 
Quantitative Analysis
Quantitative AnalysisQuantitative Analysis
Quantitative AnalysisVideoguy
 
Video tech final
Video tech finalVideo tech final
Video tech finalKieran Ryan
 
#Digital Caribbean: Dr Peter Siebert, DVB Project Office
#Digital Caribbean: Dr Peter Siebert, DVB Project Office#Digital Caribbean: Dr Peter Siebert, DVB Project Office
#Digital Caribbean: Dr Peter Siebert, DVB Project OfficeCommonwealthBroadcastingAssoc
 
Broadcast day-2010-ses-world-skies-sspi
Broadcast day-2010-ses-world-skies-sspiBroadcast day-2010-ses-world-skies-sspi
Broadcast day-2010-ses-world-skies-sspiSSPI Brasil
 
HD Radio Innovation
HD Radio Innovation HD Radio Innovation
HD Radio Innovation Nautel
 
OpenTech 2008 - The Child of Baird and Berners-Lee
OpenTech 2008 - The Child of Baird and Berners-LeeOpenTech 2008 - The Child of Baird and Berners-Lee
OpenTech 2008 - The Child of Baird and Berners-Leetomski
 
STREAMING and BROADCASTING CHEAT SHEET
STREAMING and BROADCASTING CHEAT SHEETSTREAMING and BROADCASTING CHEAT SHEET
STREAMING and BROADCASTING CHEAT SHEETAndy W. Kochendorfer
 
8k high resolution camera
8k high resolution camera8k high resolution camera
8k high resolution cameraAnkit Tandekar
 
Enensys -Content Repurposing for Mobile TV Networks
Enensys -Content Repurposing for Mobile TV NetworksEnensys -Content Repurposing for Mobile TV Networks
Enensys -Content Repurposing for Mobile TV NetworksSematron UK Ltd
 
simple video compression
simple video compression simple video compression
simple video compression LaLit DuBey
 

Semelhante a Vs199 hd/data essentials sc master rev3_10_2013_compressed_4_slideshare (20)

RGB Broadcast Company Profile
RGB Broadcast Company ProfileRGB Broadcast Company Profile
RGB Broadcast Company Profile
 
Elisha Attia Logtel from ANALOG 2 HDTV
Elisha Attia Logtel from ANALOG 2 HDTVElisha Attia Logtel from ANALOG 2 HDTV
Elisha Attia Logtel from ANALOG 2 HDTV
 
DTV Technical Overview
DTV Technical OverviewDTV Technical Overview
DTV Technical Overview
 
A seminar presentation on HDTV, 3DTV
A seminar presentation on HDTV, 3DTVA seminar presentation on HDTV, 3DTV
A seminar presentation on HDTV, 3DTV
 
Quantitative Analysis
Quantitative AnalysisQuantitative Analysis
Quantitative Analysis
 
Video tech final
Video tech finalVideo tech final
Video tech final
 
#Digital Caribbean: Dr Peter Siebert, DVB Project Office
#Digital Caribbean: Dr Peter Siebert, DVB Project Office#Digital Caribbean: Dr Peter Siebert, DVB Project Office
#Digital Caribbean: Dr Peter Siebert, DVB Project Office
 
Broadcast day-2010-ses-world-skies-sspi
Broadcast day-2010-ses-world-skies-sspiBroadcast day-2010-ses-world-skies-sspi
Broadcast day-2010-ses-world-skies-sspi
 
HD Radio Innovation
HD Radio Innovation HD Radio Innovation
HD Radio Innovation
 
OpenTech 2008 - The Child of Baird and Berners-Lee
OpenTech 2008 - The Child of Baird and Berners-LeeOpenTech 2008 - The Child of Baird and Berners-Lee
OpenTech 2008 - The Child of Baird and Berners-Lee
 
STREAMING and BROADCASTING CHEAT SHEET
STREAMING and BROADCASTING CHEAT SHEETSTREAMING and BROADCASTING CHEAT SHEET
STREAMING and BROADCASTING CHEAT SHEET
 
HDTV
HDTVHDTV
HDTV
 
8k high resolution camera
8k high resolution camera8k high resolution camera
8k high resolution camera
 
Feature uhdtv
Feature uhdtvFeature uhdtv
Feature uhdtv
 
A glance-at-voip
A glance-at-voipA glance-at-voip
A glance-at-voip
 
Enensys -Content Repurposing for Mobile TV Networks
Enensys -Content Repurposing for Mobile TV NetworksEnensys -Content Repurposing for Mobile TV Networks
Enensys -Content Repurposing for Mobile TV Networks
 
simple video compression
simple video compression simple video compression
simple video compression
 
Lecture01
Lecture01Lecture01
Lecture01
 
Feature uhdtv
Feature uhdtvFeature uhdtv
Feature uhdtv
 
Feature uhdtv
Feature uhdtvFeature uhdtv
Feature uhdtv
 

Último

Class 11th Physics NEET formula sheet pdf
Class 11th Physics NEET formula sheet pdfClass 11th Physics NEET formula sheet pdf
Class 11th Physics NEET formula sheet pdfAyushMahapatra5
 
Key note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdfKey note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdfAdmir Softic
 
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdfBASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdfSoniaTolstoy
 
Introduction to Nonprofit Accounting: The Basics
Introduction to Nonprofit Accounting: The BasicsIntroduction to Nonprofit Accounting: The Basics
Introduction to Nonprofit Accounting: The BasicsTechSoup
 
Web & Social Media Analytics Previous Year Question Paper.pdf
Web & Social Media Analytics Previous Year Question Paper.pdfWeb & Social Media Analytics Previous Year Question Paper.pdf
Web & Social Media Analytics Previous Year Question Paper.pdfJayanti Pande
 
1029 - Danh muc Sach Giao Khoa 10 . pdf
1029 -  Danh muc Sach Giao Khoa 10 . pdf1029 -  Danh muc Sach Giao Khoa 10 . pdf
1029 - Danh muc Sach Giao Khoa 10 . pdfQucHHunhnh
 
1029-Danh muc Sach Giao Khoa khoi 6.pdf
1029-Danh muc Sach Giao Khoa khoi  6.pdf1029-Danh muc Sach Giao Khoa khoi  6.pdf
1029-Danh muc Sach Giao Khoa khoi 6.pdfQucHHunhnh
 
Sanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfSanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfsanyamsingh5019
 
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...EduSkills OECD
 
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...christianmathematics
 
Student login on Anyboli platform.helpin
Student login on Anyboli platform.helpinStudent login on Anyboli platform.helpin
Student login on Anyboli platform.helpinRaunakKeshri1
 
Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3JemimahLaneBuaron
 
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Vs199 hd/data essentials sc master rev3_10_2013_compressed_4_slideshare

  • 1.–2. Instructor: Scott Carrey Course Evaluation: www.vs.edu/survey
  • 13. A high definition TV is one that offers significantly higher resolution as compared to the traditional prevailing system.
  • 14. It's essentially a marketing term more than any one specific standard: if a capture or playback device can do its job with higher quality than what we have been considering standard def, it can be deemed hi-def. For our purposes, though, we are going to focus on media. Traditionally HDTV was analog, then digital; in both cases it is composed of data.
  • 17. Key Moments In HD History
  • 18. 1936 Britain / 1938 France: start transmitting in what they refer to as HDTV. Earlier systems had as few as 30 lines of resolution; these ran 240(p), described as sequential, then 377i, 441i, and in '48, 768i (HD by today's standard, but b/w only).
  • 19. 1958: the U.S.S.R. creates "Transformator," the first high-resolution television capable of producing an image composed of 1,125 lines. Aimed at teleconferencing for military command, it ended up a research project, never deploying in the military or broadcasting.
  • 20. 1960s: development of what we consider HDTV today begins at Japanese state broadcaster NHK. In 1979 it is marketed to consumers as "Hi-Vision" or MUSE (multiple sub-Nyquist sampling encoding) (1080i/1125 lines).
  • 21. 1981: MUSE is demo'd in the US; Reagan declares it "a matter of national interest" to develop HDTV in the USA.
  • 22. 1986: first commercial introduction of HDTV production equipment in the US begins (1990 first broadcasts, 1996+ mainstream adoption).
  • 23. NHK: in 1988, the Olympic Games are shot in HDTV; Bell Systems ships an HD signal over fiber optics.
  • 24. Parallel present-day moment (look at where we came from to see where we are going): the Beijing Olympics are the first to stream.
  • 25. NHK succeeds in showing the world's first Hi-Vision (HDTV) pictures of our planet, taken from the space shuttle "Discovery," which went into orbit on October 29, 1998.
  • 26. Parallel present-day moment: Skype call from the space lab.
  • 29. We sample the world around us (encoding) and process it as DATA (replicating human functions); or, in the case of documents, texts, CGI, etc., it is generated as DATA directly within a software tool. DATA is created and read as binary information. This binary info is what makes up any and all media. It is the DNA of DATA!
  • 31. BIT (short for "binary digit") is the smallest unit of measurable data. It has two possible states, represented by a 1 or a 0, sometimes referred to as On or Off, High or Low, True or False.
  • 35.–36. 1 Byte = 8 bits. Byte is short for "Binary Term."
  • 37. 1 Byte = 8-bits 1 Kilobyte = ????-bits
  • 38. 1 Byte = 8-bits 1 Kilobyte = 8192-bits
  • 39. 1KB = 1024 Bytes = 8192 Bits
  • 40. 1 Megabyte = 8,388,608 bits (1,024 KB)
  • 41. 1 Gigabyte = 8,589,934,592 bits
  • 42. 1 Terabyte = 8,796,093,022,208 bits
  • 43. 1 Petabyte = 9,007,199,254,740,992 bits
  • 44. 1 Exabyte = 9,223,372,036,854,775,808 bits
  • 45. 1 Zettabyte = 9,444,732,965,739,290,427,392 bits
  • 46. 1 Yottabyte = 9,671,406,556,917,033,397,649,408 bits
  • 47. ADDED SINCE the early 2000s (though not standardized): Brontobyte. 1 Brontobyte = 1,024 Yottabytes, or 1,237,940,039,285,380,274,899,124,224 bytes (multiply by 8 for the number of bits).
  • 48. ADDED SINCE 2011 (though not standardized): Geopbyte. 1 Geopbyte = 1,024 Brontobytes, or 1,267,650,600,228,229,401,496,703,205,376 bytes (multiply by 8 for the number of bits).
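All of these units follow a single rule: each step up is 1,024 times (2^10) the previous unit, and bits are simply bytes times 8. A minimal Python sketch (the names and loop are mine, not from the deck) that reproduces the ladder above:

```python
# Each binary-prefix unit is 1024x (2^10) the previous one; bits = bytes * 8.
UNITS = ["Byte", "Kilobyte", "Megabyte", "Gigabyte", "Terabyte",
         "Petabyte", "Exabyte", "Zettabyte", "Yottabyte"]

for power, name in enumerate(UNITS):
    num_bytes = 1024 ** power          # 2 ** (10 * power)
    print(f"1 {name} = {num_bytes * 8} bits ({num_bytes} bytes)")
```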
  • 54. Bits that are processed per unit of time (bits per second, bps). In general, the higher the bit rate, the higher the quality (requiring more bandwidth to deliver).
  • 55. BANDWIDTH: The term bandwidth, or throughput, denotes the achieved bit rate a computer network can deliver over a logical or physical communication link. Bandwidth must be high enough to meet the data rate in order to carry enough info to sustain the succession of images required by video. Communication paths usually consist of a series of links, each with its own bandwidth. If one of these is much slower than the rest, it is said to be a bandwidth bottleneck.
  • 56. 1,000 bit/s = 1 kbit/s (one kilobit, or one thousand bits per second); 1,000,000 bit/s = 1 Mbit/s (one megabit, or one million bits per second); 1,000,000,000 bit/s = 1 Gbit/s (one gigabit, or one billion bits per second). (Unlike the binary byte multiples above, data rates conventionally use decimal prefixes.)
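Because a bit rate is just bits per unit of time, the size of any constant-rate stream is rate multiplied by duration. A small illustrative sketch (the function name and example figures are my own, using the decimal prefixes noted above):

```python
def stream_size_mb(bitrate_kbps: float, seconds: float) -> float:
    """Size in megabytes of a stream at a constant bit rate (decimal prefixes)."""
    bits = bitrate_kbps * 1_000 * seconds
    return bits / 8 / 1_000_000  # bits -> bytes -> megabytes

# A 4-minute song at 128 kbit/s (the "standard bitrate quality" tier below):
print(f"{stream_size_mb(128, 4 * 60):.1f} MB")      # ~3.8 MB
# A 2-hour movie at DVD-quality 5 Mbit/s:
print(f"{stream_size_mb(5_000, 2 * 3600):.0f} MB")  # ~4500 MB, roughly one DVD
```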
  • 58.–76. AUDIO
  800 bit/s — minimum necessary for recognizable speech
  8 kbit/s — telephone quality (using speech codecs)
  32 kbit/s — MW (AM) quality
  96 kbit/s — FM quality
  128–160 kbit/s — standard bitrate quality
  192 kbit/s — DAB (Digital Audio Broadcasting) quality
  224–320 kbit/s — near CD quality
  500 kbit/s–1 Mbit/s — lossless audio (FLAC, WMA Lossless, etc.)
  1,411 kbit/s — PCM sound format of Compact Disc Digital Audio
  VIDEO
  16 kbit/s — videophone quality
  128–384 kbit/s — business-oriented videoconferencing
  1.25 Mbit/s — VCD quality
  5 Mbit/s — DVD quality
  15 Mbit/s — HDTV quality
  36 Mbit/s — HD DVD quality
  54 Mbit/s — Blu-ray Disc quality
  140–230 MB/s — uncompressed HD @ 8b/24p/1080 – 10b/29.97
  • 77. USB 2.0: 480 Mbit/s (60 MB/s). USB 3.0: 5 Gbit/s (625 MB/s). FireWire 400: 400 Mbit/s (50 MB/s). FireWire 800: 800 Mbit/s (100 MB/s). FireWire S1600 (1.6 Gbit/s), S3200 (3.2 Gbit/s) & IEEE P1394d (6.4 Gbit/s). SATA/eSATA: 325 MB/s (1.5–3 Gbit/s); eSATA 3 = 6 Gbit/s. THUNDERBOLT: 10 Gbit/s, bi-directional (up/down): "Transfer a full-length HD movie in less than 30 seconds."
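As a sanity check on the quoted speeds, transfer time is just file size divided by link rate. A rough sketch (the interface list and the 25 GB example file size are my own assumptions; real-world throughput is lower than these raw link rates):

```python
# Rough transfer times for a 25 GB file (a Blu-ray-sized HD movie) over the
# interfaces above, using raw signaling rates in megabits per second.
INTERFACES_MBPS = {
    "USB 2.0": 480,
    "FireWire 800": 800,
    "USB 3.0": 5_000,
    "Thunderbolt": 10_000,
}

FILE_GB = 25
for name, mbps in INTERFACES_MBPS.items():
    seconds = FILE_GB * 8_000 / mbps   # GB -> megabits, then / (megabits/s)
    print(f"{name}: {seconds:.0f} s")
# Thunderbolt: 25 GB * 8000 / 10000 = 20 s, consistent with the
# "full-length HD movie in less than 30 seconds" claim.
```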
  • 78.–82. HD Component; Serial Digital Interface & HD-SDI; Dual Link HD-SDI (for uncompressed RGB/4:4:4/10-bit & DCP); HDMI.
  • 95. SMPTE 259M STANDARD Describes 10-Bit Serial Digital Operating at 143/270/360 Mb/s.
  • 96.–101. KEY SMPTE HD STANDARDS: SMPTE 274M, SMPTE 292M, SMPTE 296M, SMPTE 370M, SMPTE 372M.
  • 102. HD/essentials ATSC DIGITAL FORMATS: The HD choices began when the ATSC created the digital television table of 36 digital broadcast (DTV) formats. Of those 36 formats, 12 are high definition. These are the formats that the United States government has determined will be the standard for digital broadcasting.
  • 105.–111. COMPONENTS OF HD/DATA FILES: frame size; frame rate; frame recording method; color space/encoding method; bit depth; compression (data reduction); metadata.
  • 112. Frame Size Considerations: screen aspect ratio; pixel aspect (square 1 vs. non-square .9); frame size = screen/display resolution; screen dimension; screen real estate.
  • 113. ASPECT RATIO: The aspect ratio of an image is its width divided by its height.
  • 114. Written either as X:Y (pronounced "x-to-y") or XxY (pronounced "x-by-y").
  • 115. Example: 4x3 / 1.33x1, i.e. 4:3 / 1.33:1.
  • 116. IMAGE ASPECT RATIO: 4x3 = 1.33:1; 16x9 = 1.78:1.
  • 118. PIXEL ASPECT RATIO: Pixel aspect ratio (PAR) is the ratio of width to height of ONE PIXEL in an image.
  • 119. What is a pixel? In digital video images, a pixel, or pel (both short for "picture element"), is a single point in a raster image; the smallest addressable screen element in a display device; the smallest unit of a picture that can be represented by a single color; and generally the smallest unit we can control in an image (however, there are high-end systems that allow for subpixel-based manipulation, which take averages of neighboring pixels for micro-precision). Pixels are arranged in a two-dimensional grid and are often represented using dots or squares. Each pixel is a sample of an original image; more samples typically provide more accurate representations of the original. Groups of pixels together form the images we see: the shape, smoothness, size & color tones. PIXEL ASPECT RATIO affects the shape of our image.
  • 120.–121. Square vs. non-square (rectangular) pixels.
  • 122. More about pixels: the intensity of each pixel is variable. In color image systems, a color is typically represented by three or four component intensities, such as red, green, and blue, or cyan, magenta, yellow, and black.
  • 123. Bits per pixel: The number of distinct colors that can be represented by a pixel depends on the number of bits per pixel (bpp). A 1 bpp image uses 1 bit for each pixel, so each pixel can be either on or off. Each additional bit doubles the number of colors available, so a 2 bpp image can have 4 colors, and a 3 bpp image can have 8 colors:
  1 bpp, 2^1 = 2 colors (monochrome)
  2 bpp, 2^2 = 4 colors
  3 bpp, 2^3 = 8 colors
  ...
  8 bpp, 2^8 = 256 colors
  16 bpp, 2^16 = 65,536 colors ("Highcolor")
  24 bpp, 2^24 ≈ 16.8 million colors ("Truecolor")
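The 2^bpp rule in the table is easy to verify directly; a one-liner sketch (the loop values are chosen to mirror the table above):

```python
# Number of distinct colors representable at a given bit depth per pixel.
for bpp in (1, 2, 3, 8, 16, 24):
    print(f"{bpp} bpp -> 2**{bpp} = {2 ** bpp:,} colors")
# 24 bpp -> 16,777,216 colors ("Truecolor")
```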
  • 125.–126. ASPECT RATIO HISTORY: Edison, Eastman, Dickson + scissors = 35mm/1.33 (officially adopted as a standard in 1917).
  • 127. First sound stripe on film (Movietone) = 35mm/1.16.
  • 128. Academy Aperture = 35mm/1.37 (the 1931–1952 standard). (Filmed vs. projected.)
  • 129. First projected widescreen = 35mm/1.66 (1953 Paramount release of "Shane"; this paralleled the release of color TV broadcast).
  • 130. MGM & Disney introduce 1.75, followed by Universal & Columbia Pictures' use of what became the theatrical standard 1.85, using "soft mattes" (exposing full Academy aperture, protected for 1.85) & "hard mattes" (exposing just 1.85).
  • 131. COMMON ASPECT RATIOS:
  4:3 (1.33/1.37:1) - Standard
  1.85:1 (16.7:9) - US Cinema Widescreen
  16:10 (1.60:1) - Apple Cinema Displays (as of 2010, TVs have been introduced with a 2.37 aspect ratio marketed as "21:9 cinema displays"; this aspect ratio is not recognized by storage and transmission standards)
  2.35/2.39/2.40:1 - Anamorphic
  1.44:1 - IMAX (70mm run through the camera & projector horizontally, allowing for a larger image area)
  15:9 (1.66:1) - a compromise between the 1.85:1 theatrical ratio and the 1.33:1 ratio used for home video; originally a flat ratio invented by Paramount Pictures, now a standard among several European countries; native Super 16 mm frame ratio; sometimes rounded up to 1.67:1; also used on the Nintendo 3DS's top screen
  16:9 (1.77/1.78:1) - HD widescreen
  • 132.–133. ASPECT RATIO: Problems arising from multiple aspect ratios. Original aspect ratio (OAR) vs. modified aspect ratio (MAR).
  • 136. FRAME SIZE (Resolution) The Number of Pixels a Display System Can Display
  • 138. SD FRAME SIZES: 720 x 480 (NTSC DV); 720 x 576 (PAL).
  • 139. HD FRAME SIZES: 1280 x 720 = 1280 horizontal pixels by 720 vertical lines of resolution.
  • 140. 1920 x 1080 = 1920 horizontal pixels by 1080 vertical lines of resolution.
  • 141. FULL RASTER vs. SQUEEZED.
  • 142. HD FRAME SIZES: 960 x 720p (DVCPro HD); 1280 x 1080i (DVCPro HD).
  • 143. 1440 x 1080i (Sony HDV); 960 x 720p (old JVC HDV).
  • 150. SCREEN DIMENSION & ASPECT RATIO CALCULATOR http://www.silisoftware.com/tools/screen.php http://andrew.hedges.name/experiments/aspect_ratio Some popular dimensions: Standard Def = 4:3 (width:height) = 320x240, 640x480, 800x600, 1024x768 Widescreen or HD = 16:9 (width:height) = 640x360, 800x450, 960x540, 1024x576, 1280x720, and 1920x1080
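The calculators linked above do little more than reduce a width and height by their greatest common divisor, and solve for a missing dimension. A minimal sketch of both (the function names are mine):

```python
from math import gcd

def aspect_ratio(width: int, height: int) -> str:
    """Reduce pixel dimensions to the simplest W:H ratio."""
    g = gcd(width, height)
    return f"{width // g}:{height // g} ({width / height:.2f}:1)"

print(aspect_ratio(640, 480))    # 4:3 (1.33:1)
print(aspect_ratio(1920, 1080))  # 16:9 (1.78:1)

def height_for(width: int, ratio_w: int, ratio_h: int) -> int:
    """Missing-dimension helper: the height that keeps width at the given ratio."""
    return width * ratio_h // ratio_w

print(height_for(1280, 16, 9))   # 720
```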
  • 151. “The Available Display Area And How Content is Arranged within it”
  • 152. Data Calculators:
  AJA DataCalc software tool (desktop & iPhone): http://www.aja.com/en/products/software/
  MPC digital data rate/storage calculator (online & iPhone): http://www.moving-picture.com/index.php?option=com_content&view=article&id=614&catid=13&Itemid=853
  File size/data rate calculator (online tool): http://www.hdslr-cinema.com/tools/filesize.php?w
  Digital glossaries: http://www.quantel.com/repository/files/library_DigitalFactBook_20th.pdf and http://www.learn.usa.canon.com/dlc/glossary/listing.spr
  Kodak site: http://motion.kodak.com/motion/
  Digital Cinema Society (DCS) Tech Tips: http://www.digitalcinemasociety.org/TechTips.php
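One calculation all of these tools perform is the uncompressed data rate: width x height x bits-per-pixel x frames-per-second, divided by 8 for bytes. A sketch (the parameter choices are my own) that reproduces the 140–230 MB/s uncompressed-HD range quoted earlier:

```python
def uncompressed_mb_per_sec(width, height, bits_per_pixel, fps):
    """Uncompressed video data rate in MB/s (decimal megabytes)."""
    return width * height * bits_per_pixel * fps / 8 / 1_000_000

# 8-bit RGB (24 bpp) 1080p at 24 fps vs. 10-bit RGB (30 bpp) at 29.97 fps:
print(f"{uncompressed_mb_per_sec(1920, 1080, 24, 24):.0f} MB/s")     # ~149
print(f"{uncompressed_mb_per_sec(1920, 1080, 30, 29.97):.0f} MB/s")  # ~233
# Matches the 140-230 MB/s uncompressed-HD figure from the bitrate list.
```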
  • 165.–166. STANDARD FRAME RATES, Interlaced: 60i (actually 59.94, or 60 x 1000/1001 to be more precise; 60 interlaced fields = 29.97 frames); 50i (50 interlaced fields = 25 frames).
  • 168.–171. STANDARD FRAME RATES, Progressive: 24p (usually 23.976-frame progressive, also referred to as 23.98); 25p (25-frame progressive); 30p (usually 29.97-frame progressive); 60p (usually 59.94-frame progressive).
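All the "usually 23.976 / 29.97 / 59.94" figures come from the same NTSC-compatibility factor of 1000/1001:

```python
# NTSC-compatible rates are the integer rates slowed by a factor of 1000/1001.
for nominal in (24, 30, 60):
    print(f"{nominal} -> {nominal * 1000 / 1001:.3f}")
# 24 -> 23.976, 30 -> 29.970, 60 -> 59.940
```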
  • 172. ATSC FRAME RATES: 36 formats, two for each entry in the table once one counts both the NTSC-compatible (1000/1001) frame rates and the integer frame rates.
  • 173. ATSC HD FRAME RATES: 12 high-definition formats designed to integrate with NTSC.
  • 176. More on Reducing the Amount of Data Using Frame Rate
  • 177. KEY CONCEPTS FOR WORKING WITH HD (DATA) INFORMATION
  • 178. R G B: The video image is sampled at a sample rate, per pixel. Each pixel has a color channel value for each of R, G & B (white = R100, G100, B100 / black = R0, G0, B0). These sampled pixels represent the original image, and when muxed (combined) together from each color channel they form the sum of the final image (color value + intensity: brightness, contrast, gamma). A 4th channel, or "alpha," is often present as well, and controls transparency for each pixel. The pass-through or hold-out aspects of this channel produce the matte that determines what is displayed from the other 3 channels.
  • 179. Y Pb Pr: LUMA, BLUE minus LUMA (B-Y), RED minus LUMA (R-Y) (analog component).
  • 180. Y Cb Cr: LUMA, BLUE minus LUMA (B-Y), RED minus LUMA (R-Y) (digital component).
  • 182. Differences in reacting to light and color: monitors are linear & film is logarithmic. On a monitor, there is a one-to-one correspondence between energy (think exposure) and brightness: each time you increase the signal to the monitor by 1 volt, you get exactly the same incremental increase in brightness. On film, however, the increase in brightness (emulsion density) is a result of the logarithm of the increase in exposure.
  Original image: accurate tonality.
  Linear image in a log viewer: lows suppressed and highs accentuated.
  Log image in a linear viewer: highs flattened and lows boosted.
  • 183.–184. YIQ & YUV (or Rec601). Video profiles: Rec601 & Rec709 ITU recommendations.
  • 185. RAW profiles: Log-C & Bayered images (ARRI Alexa & RED One & Epic); native RAW image vs. with LUT applied.
  • 186. (S-Log) S-Gamut & A.C.E.S
  • 195. CHROMA SUBSAMPLING: The practice of encoding images using less resolution for chroma information than for luma information.
  • 196.–199. CHROMA SUBSAMPLING: 4:4:4 (R'G'B' or Y:Cb:Cr) = no subsampling.
  • 202.–203. 4:2:2: the first digit is the luma horizontal sampling reference (originally, the luma sampling frequency as a multiple of 3 MHz); chroma is decreased by 50%, and bandwidth decreased by 1/3.
  • 204. 4:2:2:4: a fourth digit, if present, is the same as the luma digit and indicates an alpha (key) component.
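The J:a:b notation can be read off a 4x2 block of pixels: 8 luma samples, plus the chroma samples given by a (first row) and b (second row), each a Cb+Cr pair. A sketch (the averaging function is my own) that reproduces the bandwidth figures above:

```python
# Average samples per pixel for common J:a:b chroma-subsampling schemes,
# computed over a 4x2 pixel block: 8 luma samples, plus Cb+Cr pairs per row.
def avg_samples_per_pixel(a, b):
    luma = 8                 # 4 pixels x 2 rows
    chroma = 2 * (a + b)     # a pairs on row 1, b pairs on row 2; each pair = Cb+Cr
    return (luma + chroma) / 8

for j, a, b in ((4, 4, 4), (4, 2, 2), (4, 2, 0)):
    print(f"{j}:{a}:{b} -> {avg_samples_per_pixel(a, b):.2f} samples/pixel")
# 4:4:4 -> 3.00, 4:2:2 -> 2.00 (1/3 less data), 4:2:0 -> 1.50 (half the data)
```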
  • 209. More on Reducing the amount of Data using CHROMA SUBSAMPLING
  • 211. R G B: 8 + 8 + 8 = 8 bits per channel (sometimes referred to as 24-bit for the 3-color composite). Allows 256 levels to represent each color channel: 16,777,216 colors possible.
  • 212. R G B: 10 + 10 + 10 = 10 bits per channel (sometimes referred to as 30-bit for the 3-color composite). Allows 1,024 levels to represent each channel: 1,073,741,824 colors possible.
  • 214. Why is Bit-Depth Important?
  • 215. High bit depth vs. low bit depth (comparison images).
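Why the low-bit-depth image bands: a smooth gradient quantized to only a few levels forces many neighboring pixels onto the same value. A toy sketch (the sample counts are arbitrary, chosen for illustration):

```python
# Quantizing a smooth 0..1 ramp to n-bit levels; fewer levels -> visible bands.
def distinct_levels(samples: int, bits: int) -> int:
    levels = 2 ** bits - 1
    return len({round(i / (samples - 1) * levels) for i in range(samples)})

for bits in (4, 8, 10):
    print(f"{bits}-bit ramp across 2048 pixels: "
          f"{distinct_levels(2048, bits)} distinct values")
# 4-bit: 16 values (coarse bands); 8-bit: 256; 10-bit: 1024 (smooth)
```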
  • 218.–222. THE DOWN & DIRTY GUIDE TO COMPRESSION
  • 223. Codec: a combination of the words compression and decompression. A codec is a mathematical algorithm designed to reduce the amount of data in a file or stream by eliminating redundancy, and then later restore that file or stream back to its original form as closely as possible (e.g., a bitstream like 10100101001).
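As a toy version of the "10100101001" example, run-length encoding compresses a bitstring by storing each run of repeated bits once with its count, and then restores it exactly (lossless). A sketch (the function names are mine; real codecs are far more sophisticated):

```python
# Toy codec: run-length encode a bitstring (compress), then expand it back
# (decompress), with no loss of information.
def rle_encode(bits):
    """Compress a bitstring into (bit, run_length) pairs."""
    runs, count = [], 1
    for prev, cur in zip(bits, bits[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append((prev, count))
            count = 1
    runs.append((bits[-1], count))
    return runs

def rle_decode(runs):
    """Expand (bit, run_length) pairs back into the original bitstring."""
    return "".join(bit * n for bit, n in runs)

data = "1111100000000111"
runs = rle_encode(data)          # [('1', 5), ('0', 8), ('1', 3)]
assert rle_decode(runs) == data  # decoded output == original: lossless
```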
  • 224. Codecs vs. Containers/Wrappers (acquisition codecs, editing codecs, distribution codecs):
  h.264 & MPEG AVCHD, MPEG-4 –IN– QT/.mov, .mp4, .m4p, FLV, F4V, 3GP
  Uncompressed, DV, MPEG IMX & AVCHD, ProRes –IN– MXF, QT/.mov, AVI
  MPEG-2, h.264, AAC –IN– QT/.mov, VOB, .mpeg2 & BDAV (Blu-ray), MP4, MP3
  VP8, ACM, Vorbis, VC-1 –IN– WebM, WMV/WMA
  The QT & AVI containers each support over 160 different codecs.
  • 225.–227. LOSSY vs. LOSSLESS
  LOSSY: a form of data compression where the decoded information IS NOT exactly the same as the originally encoded file. Data is lost.
  LOSSLESS: a form of data compression where the decoded information IS exactly the same as the originally encoded file. No data is lost.
  • 228. LOSSY IMAGE FORMATS: Cartesian Perceptual Compression (CPC); DivX; fractal compression; HAM (hardware compression of color information used in Amiga computers); ICER (used by the Mars rovers; related to JPEG 2000 in its use of wavelets); JPEG; JPEG 2000 (JPEG's successor format that uses wavelets, for lossy or lossless compression); JBIG2; PGF, Progressive Graphics File (lossless or lossy compression); wavelet compression; S3TC texture compression for 3D computer graphics hardware.
  • 229. LOSSY VIDEO FORMATS: H.261; H.263; H.264; MNG (supports JPEG sprites); Motion JPEG; MPEG-1 Part 2; MPEG-2 Part 2; MPEG-4 Part 2 and Part 10 (AVC); Ogg Theora (noted for its lack of patent restrictions); Sorenson video codec; VC-1.
  • 230. LOSSY AUDIO FORMATS: AAC; ADPCM; ATRAC; Dolby AC-3; MP2; MP3; Musepack; Ogg Vorbis (noted for its lack of patent restrictions); WMA.
  • 231. LOSSLESS IMAGE FORMATS: ABO (Adaptive Binary Optimization); GIF (lossless, but contains a very limited color range); JBIG2 (lossless or lossy compression of B&W images); JPEG-LS (lossless/near-lossless compression standard); JPEG 2000 (includes a lossless compression method, as proven by Sunil Kumar, professor at San Diego State University); JPEG XR (formerly WMPhoto and HD Photo; includes a lossless compression method); PGF, Progressive Graphics File (lossless or lossy compression); PNG (Portable Network Graphics); TIFF (Tagged Image File Format).
  • 232. LOSSLESS VIDEO FORMATS: Animation codec; CorePNG; FFV1; JPEG 2000; Huffyuv; Lagarith; MSU Lossless Video Codec; SheerVideo.
  • 233. LOSSLESS AUDIO FORMATS: Apple Lossless (ALAC, Apple Lossless Audio Codec); ATRAC Advanced Lossless; Audio Lossless Coding (MPEG-4 ALS); MPEG-4 SLS (HD-AAC); Direct Stream Transfer (DST); Dolby TrueHD; DTS-HD Master Audio; Free Lossless Audio Codec (FLAC); Meridian Lossless Packing (MLP); Monkey's Audio (APE); OptimFROG; RealAudio Lossless; Shorten (SHN); TTA (True Audio Lossless); WavPack lossless; WMA Lossless (Windows Media Lossless).
  • 234. COMMON VIDEO COMPRESSION FORMATS (format / originator): AVC (VCEG & MPEG); AVR (Avid Technology); DVC (SMPTE); DNxHD (Avid Technology); H.264 (VCEG & MPEG); JFIF (JPEG/Avid Technology); JPEG (JPEG); JPEG 2000 (JPEG); ProRes (Apple); M-JPEG (JPEG); M-JPEG 2000 (JPEG); MPEG (MPEG); MPEG-2 (MPEG); MPEG-4 (VCEG & MPEG); VC-1 (SMPTE); WMV 9 (Microsoft Corp.).
  • 235. Format comparison:
  Format         | File size | Video quality | Video bitrate
  H.264 MP4      | 8.92 MB   | better        | 768
  AVI            | 15.00 MB  | inferior      | 1200
  QuickTime MOV  | 15.20 MB  | inferior      | 768
  MKV            | 15.50 MB  | inferior      | 1200
  WMV            | 18.40 MB  | inferior      | 1200
  WebM           | 30.20 MB  | best          | 1200
  • 236.–238. INTRAFRAME: Intraframe compression refers to video where each frame is compressed independently of nearby frames. 1. Used in formats like DV, DNxHD, ProRes, Animation, and M-JPEG. 2. Can be lossy or lossless. Most common in editing and graphics work. Can create very large files and is not always ideal for real-time playback.
  • 239.–241. INTERFRAME: Interframe compression refers to video where some frames are compressed based on frames either before or after them in the video stream. 1. Used in formats like HDV, MPEG-2, MPEG-4, H.264, and XDCAM. 2. Almost always lossy. Most commonly used as camera or delivery formats. Capable of much smaller files, but difficult to edit with.
  • 243. More about Inter & Intra-Frame Compression
  • 245.–248. DCT: A discrete cosine transform (DCT) expresses a sequence of finitely many data points in terms of a sum of cosine functions oscillating at different frequencies. ("Blah blah blah.") 1. DCT is used in nearly all common video formats, like JPEG, MPEG, DV, DNxHD, ProRes, etc. 2. Can be lossy or lossless. Highly compressed images will often have artifacts along edges, lose color fidelity, and/or become blocky and pixelated.
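In plainer terms: the DCT turns N samples into N cosine-frequency coefficients, and for smooth image data most of the energy lands in the first few, which is what makes quantizing or dropping the rest so effective. A minimal 1-D DCT-II sketch (unnormalized, my own implementation, not any particular codec's):

```python
import math

# 1-D DCT-II: express N samples as a sum of cosines at increasing frequencies.
def dct(signal):
    N = len(signal)
    return [sum(x * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                for n, x in enumerate(signal))
            for k in range(N)]

# A smooth ramp: nearly all energy lands in the first coefficient, so the
# higher-frequency terms can be coarsely quantized or dropped entirely.
coeffs = dct([10, 11, 12, 13, 14, 15, 16, 17])
print([round(c, 1) for c in coeffs])
# Large DC term (108.0); higher-frequency terms fall off quickly.
```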
  • 253.–255. WAVELET: A technique for video compression that treats the image like a series of waves, known as wavelets, starting with large waves and progressively getting smaller based on the level of compression desired. 1. Wavelet is a newer technology used in compressions like JPEG 2000 and CineForm. 2. Can be lossy or lossless. Highly compressed images will rarely create artifacts, but can become soft/blurry.
  • 256. MPEG BASICS: In MPEG encoding, a group of pictures, or GOP, specifies the order in which intra-frames and inter-frames are arranged. The GOP is a group of successive pictures within an MPEG-coded video stream; each MPEG-coded video stream consists of successive GOPs. The visible frames are generated from the MPEG pictures contained in the GOP.
  • 257.–260. THE 3 PRIMARY FRAME COMPRESSIONS: I-Frames (I-Picture, Intra Frames); P-Frames (Predicted Frames); B-Frames (Bi-Directional Frames).
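A typical GOP interleaves these three frame types in a fixed pattern. A small sketch (the pattern generator and its defaults are my own illustration):

```python
# Sketch of a typical MPEG GOP layout: one I-frame, then repeating B,B,P groups.
def gop_pattern(length=12, p_spacing=3):
    frames = ["I"]
    for i in range(1, length):
        frames.append("P" if i % p_spacing == 0 else "B")
    return "".join(frames)

print(gop_pattern())  # IBBPBBPBBPBB, a common 12-frame GOP
# I-frames decode alone; P-frames reference earlier frames; B-frames reference
# frames on both sides, which is why B-frames compress smallest.
```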
  • 261. MPEG-2 BIT RATE DETAILS: 4 Mbit/s (low-level encoding); 5 Mbit/s (DVD); 15 Mbit/s (Main Level); 60 Mbit/s (High-1440); 80 Mbit/s (High Level). ATSC broadcast standards: 19.4 Mbit/s for low HD and 38 Mbit/s for high end.
  • 263.–265. KEY MASTERING COMPRESSIONS: DNxHD (Avid Technology); ProRes 422, ProRes 444 (Apple); Prospect HD/4K, Neo HD/4K/3D (CineForm).
  • 266. NEW TECHNOLOGY Thunderbolt Solid State Drives Cloud Encoding Connected TV’s
  • 269. TeraDisc:
  All of us are acquiring and creating more and more high-density, high-resolution content. Collect, store and find your valuable personal and commercial content using a single 1TB TeraDisc: 250 hours of HDTV or 300,000 digital photos.
  Empowering the Enterprise: The healthcare, public, entertainment, security, financial and business sectors can inexpensively archive vast amounts of data at the desktop, totally meeting compliance regulations with bit-by-bit WORM recording. Readily integrates into today's archiving solutions. Longevity of greater than 50 years.
  1 Trillion Bytes on a Single Disc: Enables the reading and writing of 200 layers of data on a single DVD-size disc. Uses advanced polymer material technology engineered to create an optical medium with unique light-sensitive properties. Inexpensive drives able to reach consumer form factor and pricing.
  Mempile's game-changing 2-photon technology revolutionizes consumer and enterprise archiving: the removable TeraDisc offers high capacity, low cost, permanence and ease of use.
  • 270. NEW TECHNOLOGY: a 50-terabyte flash drive made of bug protein. This idea started with coating DVDs with a layer of protein, with the aim that one day solid-state memory could hold so much information that storing data on your computer's hard drive would be obsolete.
  • 277. 199 HD/ DATA Essentials Scott Carrey Course Evaluation: www.vs.edu/survey scott@scarrey.com

Editor's Notes

  1.–2. For those of you who are not aware, this is an information-based class, basically ranging from HD for Dummies to an intermediate level of knowledge. Obviously this is a one-day course, so some things have to be abbreviated or merely referred to, but my hope is that everyone will get something out of this class, even if it is just to establish that you are aware of this information; we have found most people are lacking even some of the most basic HD knowledge. Some students I have talked to are literally of the impression that they just need to know enough to get by, and this is okay. However, I believe there is benefit to a deeper knowledge, and this is speaking from personal experience and that of those I have known and worked with for years. There are situations you walk into as an editor, assistant, or online editor where you can really walk yourself through any problem, not because you are familiar with that problem, but because you understand the underlying concepts, software and hardware that you must somehow work with to solve it.
  3.–12., 14. In this class, we are going to cover a number of topics:
  History of HD – What does HD mean, where did HD come from, and how long has it been around?
  Before You Can Understand HD – The basic building blocks of HD (and all things digital video).
  Standards – What is a standard, why do we need them, and who decides?
  Components of HD – What makes up an HD signal, and what makes them different from each other?
  Compression Essentials – What is compression, how does it help/hurt, and what things do I need to know to survive?
  Cameras / Decks – Different HD camera and deck formats and how each of them works with editorial and graphics.
  HD Online Editing – The basics of finishing an HD project.
  New Technologies – The future of HD.
  13. HD simply means higher resolution, which means clearer images. So, garbage in HD is still garbage; you can just see it more clearly.
  15.–24. Wikipedia (conflicts with other research): Japan had the earliest working HDTV system, with design efforts going back to 1979. The country began broadcasting analog HDTV signals in the late 1980s using an interlaced resolution of 1035 or 1080 active lines (1035i) or 1125 total lines. The Japanese system, developed by NHK Science and Technical Research Laboratories (STRL) in the 1980s, employed filtering tricks to reduce the original source signal to decrease bandwidth utilization. MUSE was marketed as "Hi-Vision" by NHK.
  25. We’ll talk more about what BINARY INFO IS and HOW IT DIFFERS FROM DECIMAL INFO
  26. Bits are the basic building blocks of a computer. All digital information is stored as a succession of 1s and 0s – THIS IS KNOWN AS BINARY, or BASE 2 – as opposed to the way we are used to representing numbers, known as Decimal or BASE 10. (Explain the concept of Base 2 vs Base 10 and counting principles.) The numbers REPRESENT a possible state of something, not an actual numerical value. So in a survey where you can answer either YES or NO, the answers can be REPRESENTED by a SINGLE BIT, where 1 = Yes and 0 = No. If we wanted to record one person's response we could do this with 1 bit of data; ask 10 million people and we would need 10 million bits. Expressing more complex information, with more choices, requires more bits, generating more data. More data means larger files, and larger files mean more work to access and read that data. Computers do not have pictures, or sound, or color – just bits that represent the states necessary to store and display them. With images, this is the basis for the term "Bitmap," as in a bitmapped image – one where pixels are spatially mapped for display or storage as a digital image – but really there is no image, only BITS. However, by arranging these bits in certain patterns and defining rules for what those patterns mean, computer displays and output devices can create digital pictures and sound, and recreate them exactly the same way time and time again. HD video requires a lot of bits, and managing these is much of what this course is about.
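A minimal Python sketch of the base-2 idea above; the survey example and the 10-million-respondent figure mirror the slide, everything else is just illustration:

```python
# Base 2 vs base 10: the same value, two representations.
value = 181
print(bin(value))              # '0b10110101' -> the value in binary
print(int("10110101", 2))      # 181          -> and back to decimal

# A Yes/No answer needs only a single bit (1 = Yes, 0 = No):
YES, NO = 1, 0
responses = [YES, NO, YES, YES, NO]            # five people, five bits
print(sum(responses), "yes out of", len(responses))

# Ten million respondents means ten million bits of data:
bits = 10_000_000
print(f"{bits / 8 / 1024 / 1024:.2f} MB")      # ~1.19 MB
```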
  27. As in the English language, where a single letter doesn't mean much alone but, when strung together with other letters to form words, becomes useful for communicating information – so too with DATA: we usually don't refer to individual bits, but rather to a collection of them.
  28. The first logical collection of bits is a Byte, comprised of a set of 8 individual bits. Since each bit can represent 2 states, 2 to the 8th power gives 256 possible states that can be represented by a single byte (8 bits). For example, 8-bit color images allow for 256 possible shades per color channel, represented by a value from 0 to 255. So in a "3-color channel" image like RGB, each pixel in an 8-bit color image has 1 byte per channel, each holding an intensity value for that channel from 0 to 255. Values of 0 red, 0 green, and 0 blue produce no color, or black, and values of 255, 255, 255 produce white. Varying the combinations of values across channels is what allows an 8-bit color image to represent millions of possible colors per pixel. This is what we call Bit Depth, and we'll look at it in more detail later on. So think of bits sort of like letters: stringing them together forms bytes, which create words that can then be read by a device such as a computer, television, iPod, etc. The basis for any and all data – and media is data; HD video, music, documents, all information – is BITS. The number of bits determines the number of possible states that can be represented, and more bits mean more possibilities: higher resolution, sharper and truer colors, and ultimately larger file sizes for our media, because of the number of bits used to represent those states.
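To make the arithmetic concrete, here is a small sketch (plain Python, nothing assumed beyond the slide's numbers) showing how bit count maps to representable states and to 8-bit RGB color:

```python
def states(n_bits: int) -> int:
    """Number of distinct states n bits can represent (2 per bit)."""
    return 2 ** n_bits

print(states(1))          # 2   -> Yes/No
print(states(8))          # 256 -> one byte, values 0-255

# An 8-bit RGB pixel: one byte per channel, three channels.
colors = states(8) ** 3
print(f"{colors:,} possible colors")   # 16,777,216

black = (0, 0, 0)         # no intensity in any channel
white = (255, 255, 255)   # full intensity in all three
```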
  29. In fact, that is the very way computers read information. So as you see here, bits together make bytes, and groups of bytes are called WORDS. The size of a word varies from one computer to another, depending on the CPU. For computers with a 16-bit CPU, a word is 16 bits (2 bytes). On large mainframes, a word can be as long as 64 bits (8 bytes). Some computers and programming languages distinguish between shortwords and longwords: a shortword is usually 2 bytes long, while a longword is 4 bytes.
Summary:
1 bit = 0 or 1
1 Byte = 8 bits
1 Word = 2 Bytes = 2 x 8 bits = 16 bits
Double Word = 4 Bytes = 4 x 8 bits = 32 bits
Half a byte, or 4 bits, is called? A Nibble (or Nybble).
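A quick sketch of those groupings; Python's int.to_bytes is just a convenient way to show how many bytes each unit occupies:

```python
nibble_states = 2 ** 4     # 16  -> half a byte, the "nibble"
byte_states   = 2 ** 8     # 256 -> one byte

word  = (1000).to_bytes(2, "big")   # a 16-bit word  = 2 bytes
dword = (1000).to_bytes(4, "big")   # a double word  = 4 bytes
print(len(word), len(dword))        # 2 4
print(word.hex(), dword.hex())      # 03e8 000003e8
```

(As the notes say, the size of a "word" depends on the CPU; 2 bytes here simply follows the summary above.)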
  30. A Kilobyte is 1024 bytes, or 8192 bits.
  33. Numbers continue from here, increasing by a factor of 1024 at each step.
  34. Numbers continue from here, increasing by a factor of 1024 at each step. A TERABYTE can hold approximately 200,000 photos or standard MP3 tracks.
  35. A PETABYTE, or 1024 Terabytes, can hold about 500 billion pages of standard written text; one and a half Petabytes is the size of the 10 billion photos on Facebook.
  36. A thousand twenty-four (1024) Petabytes is called an Exabyte, and the Library of Congress…
  37. Numbers continue from here, increasing by a factor of 1024 at each step. In 2010 we cracked the 1 Zettabyte barrier for the very first time, with an estimated 1.2 Zettabytes of information created and replicated. That's over 1.2 trillion Gigabytes of data.
  38. Numbers continue from here, increasing by a factor of 1024 at each step. Using current standard broadband, it would take almost 11 trillion years to download a Yottabyte file from the internet. It was once thought that a 1.4 MB high-density floppy was more storage than anyone would ever possibly need, and it was hard to imagine what anyone would do with more. The same was said of Terabyte drives, and so on…
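For reference, a small sketch of the factor-of-1024 ladder from these slides; the human() helper and its unit list are illustrative, not any standard API:

```python
UNITS = ["bytes", "KB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB"]

def human(n_bytes: float) -> str:
    """Format a byte count using binary (1024-based) units."""
    for unit in UNITS:
        if n_bytes < 1024 or unit == UNITS[-1]:
            return f"{n_bytes:.2f} {unit}"
        n_bytes /= 1024

print(human(8192 / 8))           # '1.00 KB' -> a Kilobyte from 8192 bits
print(human(1024 ** 4))          # '1.00 TB'
print(human(1.2 * 1024 ** 7))    # '1.20 ZB' -> the ~1.2 ZB created in 2010
```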
  43. So what do bits have to do with HD?
  44. Everything in digital media has to do with bits per second, which is known as the bit rate. The higher the bit rate, the more information can be stored about the signal. General rule of thumb: Higher bit rate = higher quality.
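A minimal sketch of why bit rate dominates everything here, assuming an uncompressed stream with example values (1920x1080 frames, 24 bits per pixel, 30 fps); real formats vary:

```python
width, height  = 1920, 1080   # pixels per frame
bits_per_pixel = 24           # 8 bits x 3 color channels
fps            = 30           # frames per second

bit_rate = width * height * bits_per_pixel * fps
print(f"{bit_rate / 1e6:.0f} Mbit/s")   # ~1493 Mbit/s, roughly 1.5 Gbit/s
```

Numbers like this are why uncompressed HD demands the high-bandwidth connections discussed later.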
  45. Bit rate and bandwidth are different. Bit rate refers to a file or stream of information; bandwidth is a measurement of how much information can be transferred over a communication link (like a network or a USB cable).
  46. Files stored digitally on a computer are represented by a series of 1s and 0s. Groups of these 1s and 0s are known as bits, and it is the number of bits used that dictates how much detail the image (or audio) is described with. Many of the numbers shown are approximations; this list was obtained from Bell Laboratories.
  67. FireWire uses a "peer-to-peer" architecture in which the peripherals are intelligent and can negotiate bus conflicts to determine which device can best control a data transfer. Hi-Speed USB 2.0 uses a "master-slave" architecture in which the computer handles all arbitration functions and dictates data flow to, from, and between the attached peripherals (adding additional system overhead and resulting in slower data flow control). On average, FireWire 400 reads 33-40% faster than USB 2.0 and writes 16-25% faster.
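To see what those percentages mean in practice, a rough sketch; the sustained USB 2.0 rate below is an assumed figure for illustration, not a benchmark:

```python
file_size_MB    = 1024                      # a 1 GB file
usb2_read_MBps  = 30                        # assumed sustained USB 2.0 read rate
fw400_read_MBps = usb2_read_MBps * 1.35     # ~35% faster, per the note above

for name, rate in [("USB 2.0", usb2_read_MBps), ("FireWire 400", fw400_read_MBps)]:
    print(f"{name}: {file_size_MB / rate:.0f} s to read the file")
```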
  68. There are two ways to transmit video: composite or component. Composite signals are no longer in common use, and today's methods all involve digital component signals, though analog component is still seen in formats such as S-VHS and Betacam SP. The big difference between composite and component is that in component video the signal is separated into a luma (brightness) component and two color-difference (chroma) components. Keeping the luma and chroma components separate results in better color resolution and reproduction than when they are combined, or composited, as in composite video. So one way to input or output HD video is via a Component connection.
  69. So one way to input or output HD video is via a Component connection, as seen here. I should note that all images begin and end as RGB, but along the way they regularly get converted as a means of managing the large amounts of data being communicated. So though component is better than composite, it is generally converting into what is referred to as YUV component (where Y represents the luma and U & V each represent a color-difference, or chroma, component). We'll look at this in greater detail, but the main point to take away is that HD component is HD, but somewhat compromised in its ability to fully represent our image, such as by reducing the number of colors that can be produced or compressing the number of pixels used to create and display the image. Again, we'll look at this in more detail, for there are many reasons why we will intentionally choose to throw away information. Ideally, though, we want to be able to make this choice later in the workflow and record at full bandwidth, as uncompressed video, and doing so requires a higher-bandwidth connection, leading us to our next connection type…
  70. Serial Digital Interface (SDI) is the standard for high-end, uncompressed digital video formats such as D1, D5, and Digital Betacam. High Definition Serial Digital Interface (HD-SDI) is a higher-bandwidth version of SDI designed for the extremely high data rates of uncompressed HD video. It might look like a regular composite BNC video cable, but it's not: HD-SDI is a high-speed serial interface between video equipment that carries digital HD video and up to 8 channels of uncompressed 48 kHz audio on one cable. The video signal carried on the cable is both uncompressed and unencrypted. HD-SDI is transmitted at 1.485 Gbit/s or, in a newer standard for dual-link over a single cable, 3 Gbit/s. The 3 Gbit/s version supports the older standard but is designed for 4:4:4 RGB workflows over a single connector (instead of dual-link HD-SDI) or full-resolution 2K film playback at 2048 x 1556 pixels (at 24 fps). IMPORTANT: SDI cables may look like composite coax, and both use a BNC connector, but the requirements on cable and connector impedance are stricter for HD-SDI. A 75-ohm composite video cable with 75-ohm BNC connectors will probably work over short distances (up to 6', for example) on HD-SDI equipment, but over longer distances the composite cable and connectors will give failed connections, lost sync, or other transfer issues. HD-SDI is also not suitable for very long-distance transmission; it was designed for short runs, and the high data rate would fail over extended distances. HD-SDI interfaces tend to be found on the higher-end, more expensive decks supporting "professional" formats or professional versions of decks.
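One common way to account for the 1.485 Gbit/s figure, sketched below: the SMPTE 274M total raster for 1080-line video at 30 fps is 2200 samples per line x 1125 lines, carried as 10-bit 4:2:2, i.e. 20 bits per sample period (10-bit luma plus 10-bit alternating Cb/Cr):

```python
samples_per_line       = 2200   # total (active + blanking) samples per line
lines_per_frame        = 1125   # total lines, 1080 of them active
fps                    = 30
bits_per_sample_period = 20     # 10-bit Y + 10-bit interleaved Cb/Cr

rate = samples_per_line * lines_per_frame * fps * bits_per_sample_period
print(rate)               # 1485000000 -> 1.485 Gbit/s
print(rate * 2 / 1e9)     # 2.97       -> the dual-link / 3G figure
```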
  71. Until the advent and adoption of 3 Gbit/s HD-SDI, two connections working in parallel (dual-link) were used to carry 4:4:4 RGB (full bandwidth) from cameras and decks to computers, each carrying half of the information. This also requires a high-performance disk array (a set of disk drives grouped together to read and write in parallel) in order to accommodate the high data rates you'll work with.
  72. HDMI (High Definition Multimedia Interface) is used for transmitting uncompressed digital signals. This connection can be found on digital (hi-def) televisions, cable and satellite set-top boxes, Blu-ray players, and computers; in addition, many cameras have HDMI connections. HDMI supports standard, enhanced-definition, and all HD formats within the US ATSC broadcast standard, with up to 8 channels of audio. Audio up to a 192 kHz sample rate at 24-bit sample depth is supported. (Compare that with "CD quality" at a 44.1 kHz sample rate and 16-bit sample depth.) In post-production environments it is increasingly being used as a display connection between different video interfaces. Type A HDMI is backwards compatible with single-link DVI, a connection type commonly found on newer graphics cards in computers. This allows a DVI output from a computer to connect to an HDMI display by means of an adapter. Most previous connection types required actually modifying the signal, not just adapting the connector, as is possible between HDMI & DVI (as well as DisplayPort and Thunderbolt). Keep in mind, though, that the transfer rate will still be limited by the lowest-bandwidth connector, but at least it can display. MOST of your prosumer-level gear will have HD Component and/or HDMI; HIGHER-END gear may have those as well, but will PRIMARILY RELY ON HD-SDI or dual-link HD-SDI, and likely even 3G/dual-link.
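For scale, the uncompressed PCM audio bandwidth behind the figures quoted above (straightforward channels x sample rate x bit depth arithmetic):

```python
def audio_bit_rate(channels: int, sample_rate_hz: int, bit_depth: int) -> float:
    """Uncompressed PCM bit rate in Mbit/s."""
    return channels * sample_rate_hz * bit_depth / 1e6

print(audio_bit_rate(8, 192_000, 24))   # 36.864 Mbit/s -> HDMI's quoted maximum
print(audio_bit_rate(2, 44_100, 16))    # ~1.41 Mbit/s  -> CD quality
```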
  73. While not always the case, again, the basic rule of thumb: Higher Bit Rate = Higher Quality.
  75. In this class, we are going to cover a number of topics:
History of HD – What does HD mean, where did HD come from, and how long has it been around?
Before You Can Understand HD – The basic building blocks of HD (and all things digital video).
Standards – What is a standard, why do we need them, and who decides?
Components of HD – What makes up an HD signal, and what makes them different from each other?
Compression Essentials – What is compression, how does it help/hurt, and what things do I need to know to survive?
Cameras / Decks – Different HD camera and deck formats and how each of them works with editorial and graphics.
HD Online Editing – The basics of finishing an HD project.
New Technologies – The future of HD.
  76. Who sets the standards, and what do I need to know?
The Moving Picture Experts Group, commonly referred to as simply MPEG, is a working group of ISO/IEC charged with the development of video and audio encoding standards. Its first meeting was in May of 1988 in Ottawa, Canada. As of late 2005, MPEG had grown to include approximately 350 members per meeting from various industries, universities, and research institutions. MPEG's official designation is ISO/IEC JTC1/SC29 WG11. MPEG has standardized the following compression formats and ancillary standards:
MPEG-1: Initial video and audio compression standard. Later used as the standard for Video CD, and includes the popular Layer 3 (MP3) audio compression format.
MPEG-2: Transport, video, and audio standards for broadcast-quality television. Used for over-the-air digital television (ATSC, DVB, and ISDB), digital satellite TV services like Dish Network, digital cable television signals, SVCD, and, with slight modifications, the .VOB (Video OBject) files that carry the images on DVDs.
MPEG-3: Originally designed for HDTV, but abandoned when it was realized that MPEG-2 (with extensions) was sufficient for HDTV. (Not to be confused with MP3, which is MPEG-1 Audio Layer 3.)
MPEG-4: Expands MPEG-1 to support video/audio "objects", 3D content, low-bitrate encoding, and Digital Rights Management. Several newer, higher-efficiency video standards (an alternative to MPEG-2 Video) are included, notably MPEG-4 Part 2 (Advanced Simple Profile) and MPEG-4 Part 10 (Advanced Video Coding, or H.264). MPEG-4 Part 10 may be used on HD DVD and Blu-ray discs, along with VC-1 and MPEG-2.
Pro-MPEG, the Professional-MPEG Forum, is an association of broadcasters, program makers, equipment manufacturers, and component suppliers with interests in realizing the interoperability of professional television equipment, according to the implementation requirements of broadcasters and other end-users. The Forum has been in existence for approximately six years and has over 130 members. Independence, openness, and non-commerciality are fiercely maintained to ensure all organizations and individuals can participate and contribute. SMPTE and the EBU are two key partner organizations, and the output of the Forum's work on operating ranges and file formats has been submitted to SMPTE for standardization. Founded in London in 1998 for the advancement of the MPEG-2 standard, the Forum helped develop the MXF file format for exchanging video production data between servers. The offices of the secretariat were located at BBC Radio Northampton.
ATSC standards document a digital television format which replaced the analog NTSC television system in the United States on June 12, 2009 (originally scheduled for February 17, 2009). It was developed by the Advanced Television Systems Committee. The high definition television standards defined by the ATSC produce widescreen 16:9 images up to 1920x1080 pixels in size, more than six times the display resolution of the earlier standard. However, a host of different image sizes are also supported, so that up to six standard-definition "virtual channels" can be broadcast on a single 6 MHz TV channel.
The Society of Motion Picture and Television Engineers, or SMPTE (pronounced /ˈsɪmpti/ and sometimes /ˈsʌmpti/), founded in 1916 as the Society of Motion Picture Engineers (SMPE), is an international professional association, based in the United States, of engineers working in the motion imaging industries. An internationally recognized standards-developing organization, SMPTE has over 400 standards, Recommended Practices, and Engineering Guidelines for television, motion pictures, digital cinema, audio, and medical imaging. In addition to developing and publishing standards documents, SMPTE publishes a journal, provides assistance to members with employment, and performs other industry-related functions.
The Media Dispatch Group was created in 2003 as an activity of the Professional-MPEG Forum to create vendor-neutral open technology for integrated solutions for the professional exchange of large media files securely over IP networks. Members include representatives from broadcasters, facility houses, equipment manufacturers, and the digital cinema production community, as well as liaisons with the wider standards community.
The European Broadcasting Union (EBU; French: L'Union Européenne de Radio-Télévision, "UER", and unrelated to the European Union) was formed on 12 February 1950 by 23 broadcasting organisations from Europe and the Mediterranean at a conference in the coastal resort of Torquay in Devon, England. In 1993, the International Radio and Television Organisation (OIRT), an equivalent organisation of broadcasters from Central and Eastern Europe, was merged with the EBU. US networks ABC, NBC, CBS, and Time Warner are associates.
  82. SMPTE 356M is a television specification for a professional video format (SMPTE D10); it is composed of MPEG-2 4:2:2 I-frame-only video and 8 channels of AES3 audio. The AES3 streams usually contain 24-bit PCM audio samples. Video bit rates of 30, 40, and 50 Mbit/s are possible. SMPTE D11, also known as HDCAM, is a standard for the compression of high-definition digital video. D11 source picture rates can be 24, 25, or 30 frames per second progressive scan, or 50 or 60 fields per second interlaced; compression yields output bit rates ranging from 112 to 140 Mbit/s. Each D11 source frame is composed of a luminance channel at 1920 x 1080 pixels and a chrominance channel at 960 x 1080 pixels. During compression, each frame's luminance channel is subsampled to 1440 x 1080 pixels, while the chrominance channel is subsampled to 480 x 1080 pixels. The decoder restores the output sample grid to 1920 x 1080 pixels by interpolation. SMPTE 259M is a standard published by SMPTE which "describes a 10-bit serial digital interface operating at 143/270/360 Mb/s." The goal of SMPTE 259M is to define a Serial Digital Interface based on a coax cable; this interface is usually called SDI or SD-SDI. Four bit rates are defined, normally used to transfer standard-definition video formats. SMPTE 292M is a standard published by SMPTE which expands upon SMPTE 259M and SMPTE 344M, allowing bit rates of 1.485 Gbit/s and 1.485/1.001 Gbit/s, sufficient for high-definition video. This standard is usually referred to as HD-SDI; it is part of a family of standards that define a Serial Digital Interface based on a coaxial cable, intended for transport of uncompressed digital video and audio in a television studio environment.
  83. SMPTE 274M defines 1080-line HD television scanning (1920 x 1080 pixels) for multiple picture rates: progressive frame rates of 60, 59.94, 50, 30, 29.97, 25, 24, and 23.98 Hz, as well as interlaced rates at 60, 59.94, and 50 Hz. SMPTE 292M is a standard published by SMPTE which expands upon SMPTE 259M and SMPTE 344M, allowing bit rates of 1.485 Gbit/s and 1.485/1.001 Gbit/s, sufficient for high-definition video; it is usually referred to as HD-SDI and is part of a family of standards that define a Serial Digital Interface based on coaxial cable, intended for transport of uncompressed digital video and audio in a television studio environment. SMPTE 296M defines 720-line x 1280-pixel HD television scanning for progressive 60 and 59.94 Hz picture rates. SMPTE 344M expands upon SMPTE 259M, allowing a bit rate of 540 Mbit/s for EDTV resolutions of 480p and 576p; it is part of the same Serial Digital Interface family. SMPTE 370M covers DVCPRO HD. SMPTE 372M expands upon SMPTE 259M, SMPTE 344M, and SMPTE 292M, allowing bit rates of 2.970 Gbit/s and 2.970/1.001 Gbit/s over two wires, sufficient for 1080p video; this standard is essentially known as dual-link HD-SDI.
  84. SMPTE 274M defines 1080-line HD television scanning for multiple picture rates. These are all 1920 x1080 pixels and define progressive frame rates of 60, 59.94, 50,30, 29.97, 25, 24 and 23.98Hz as well as interface rates at 60, 59.94 and 50Hz. "SMPTE 292M" is a standard published by SMPTE which expands upon SMPTE 259M and SMPTE 344M allowing for bit-rates of 1.485 Gbit/s, and 1.485/1.001 Gbit/s. These bit-rates are sufficient for High Definition video.[1]This standard is usually referred to as HD-SDI; it is part of a family of standards that define a Serial Digital Interface based on a coaxial cable, intended to be used for transport of uncompressed digital video and audio in a television studio environment.SMPTE 296M defines 720-line x 1280 pixel HD television scanning for progressive 60 and 59.94 Hz picture rates."SMPTE 344M" is a standard published by SMPTE which expands upon SMPTE 259M allowing for bit-rates of 540 Mbit/s[1], allowing EDTV resolutions of 480p and 576p.This standard is part of a family of standards that define a Serial Digital Interface.SMPTE 372M is a standard published by SMPTE which expands upon SMPTE 259M, SMPTE 344M, and SMPTE 292M allowing for bit-rates of 2.970 Gbit/s, and 2.970/1.001 Gbit/s over two wires. These bit-rates are sufficient for 1080p video.[1]This standard is essentially known as dual-link HD-SDI and is part of a family of standards that define a Serial Digital Interface.
  85. SMPTE 274M defines 1080-line HD television scanning for multiple picture rates. These are all 1920 x1080 pixels and define progressive frame rates of 60, 59.94, 50,30, 29.97, 25, 24 and 23.98Hz as well as interface rates at 60, 59.94 and 50Hz. "SMPTE 292M" is a standard published by SMPTE which expands upon SMPTE 259M and SMPTE 344M allowing for bit-rates of 1.485 Gbit/s, and 1.485/1.001 Gbit/s. These bit-rates are sufficient for High Definition video.[1]This standard is usually referred to as HD-SDI; it is part of a family of standards that define a Serial Digital Interface based on a coaxial cable, intended to be used for transport of uncompressed digital video and audio in a television studio environment.SMPTE 296M defines 720-line x 1280 pixel HD television scanning for progressive 60 and 59.94 Hz picture rates."SMPTE 344M" is a standard published by SMPTE which expands upon SMPTE 259M allowing for bit-rates of 540 Mbit/s[1], allowing EDTV resolutions of 480p and 576p.SMPTE 372M is a standard published by SMPTE which expands upon SMPTE 259M, SMPTE 344M, and SMPTE 292M allowing for bit-rates of 2.970 Gbit/s, and 2.970/1.001 Gbit/s over two wires. These bit-rates are sufficient for 1080p video.[1]This standard is essentially known as dual-link HD-SDI and is part of a family of standards that define a Serial Digital Interface.
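The 292M figure is not arbitrary; it falls straight out of the raster and sample structure. As a rough check, here is a minimal Python sketch (my own, assuming the usual 2200 x 1125 total-sample raster for 1080-line video, 10-bit samples, and 4:2:2 chroma multiplexed with the luma):

# Rough check of the HD-SDI (SMPTE 292M) nominal bit rate.
# Assumes the 1080-line raster of 2200 total samples per line x 1125
# total lines (active picture plus blanking), 10-bit samples, and
# 4:2:2 chroma carried as one multiplexed Cb/Cr sample per luma sample.
total_samples_per_line = 2200
total_lines = 1125
frames_per_second = 30          # 1080i at 30 frames (60 fields)
bits_per_sample = 10

luma_sample_rate = total_samples_per_line * total_lines * frames_per_second
link_rate = luma_sample_rate * 2 * bits_per_sample  # luma + chroma

print(link_rate / 1e9)          # 1.485 (Gbit/s)
print(link_rate / 1.001 / 1e9)  # ~1.4835, the 1.485/1.001 rate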
89. The HD choices began when the ATSC created the digital television table of 36 digital broadcast (DTV) formats. Of those 36 formats, 12 are high definition. These are the formats that the United States government has determined will be the standard for digital broadcasting. Just as there are many compatible production formats developed for NTSC broadcast, the 12 high definition formats also have a number of compatible production formats to choose from. However, where NTSC has a single frame rate and a single frame size, the DTV high definition format has a dozen different choices. As a result, there are even more possibilities when it comes to the hardware that captures and records those images. Also, as technology improved, each NTSC production format was basically compatible with the next. However, in the high definition world, not all the frame rates are compatible with each other. The net result is that there is often confusion about which format should be used.
Wikipedia (conflict with research): Japan had the earliest working HDTV system, with design efforts going back to 1979. The country began broadcasting analog HDTV signals in the late 1980s using an interlaced resolution of 1035 or 1080 active lines (1035i) or 1125 total lines. The Japanese system, developed by NHK Science and Technical Research Laboratories (STRL) in the 1980s, employed filtering tricks to reduce the original source signal to decrease bandwidth utilization. MUSE was marketed as "Hi-Vision" by NHK.
90. In this class, we are going to cover a number of topics:
-History of HD – What does HD mean, where did HD come from, and how long has it been around?
-Before You Can Understand HD – The basic building blocks of HD (and all things digital video).
-Standards – What is a standard, why do we need them, and who decides?
-Components of HD – What makes up an HD signal, and what makes HD signals different from each other?
-Compression Essentials – What is compression, how does it help/hurt, and what do I need to know to survive?
-Cameras / Decks – Different HD camera and deck formats, and how each of them works with editorial and graphics.
-HD Online Editing – The basics of finishing an HD project.
-New Technologies – The future of HD.
91. “High definition” refers to a family of high quality video image and sound formats that has recently become very popular both in the broadcasting community and the consumer market. High definition (HD) in the United States was initially defined as any video format with more than 720 lines of vertical resolution. The ATSC (Advanced Television Systems Committee) created a digital television (DTV) broadcast table that defined not only the vertical resolution but also other aspects of the HD frame rate and size. This table defined two sizes of high definition images: 720 lines (vertical) by 1280 pixels (horizontal), and 1080 lines (vertical) by 1920 pixels (horizontal). Along with the two frame sizes, there is also a choice of frame rates: 23.98, 24, 29.97, 30, 59.94, and 60 frames per second.
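To put the two frame sizes in perspective, a quick pixel count (a back-of-the-envelope Python sketch, not from the slides; 720 x 486 is the usual NTSC active raster) shows how the formats scale:

# Active pixels per frame for the two ATSC HD sizes, with NTSC SD
# (720 x 486 active) included for scale.
sd = 720 * 486        # 349,920 pixels
hd_720 = 1280 * 720   # 921,600 pixels
hd_1080 = 1920 * 1080 # 2,073,600 pixels

print(hd_720 / sd)       # ~2.6x standard definition
print(hd_1080 / hd_720)  # 2.25x the 720-line format
print(hd_1080 / sd)      # ~5.9x standard definition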
99. Progressive or noninterlaced scanning is any method for displaying, storing, or transmitting moving images in which all the lines of each frame are drawn in sequence. This is in contrast to the interlacing used in traditional television systems, where first the odd lines and then the even lines of each frame are drawn alternately (each half-image is called a field). Historical fact: The system was originally known as "sequential scanning" when it was used in the Baird 240-line television transmissions from Alexandra Palace, England in 1936. It was also used in Baird's experimental transmissions using 30 lines in the 1920s.
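The odd/even split is easy to see in code. Here is a minimal Python sketch (my own illustration, treating a frame as a simple list of scan lines) that separates a progressive frame into the two fields an interlaced system would transmit:

# Field 1 carries the odd-numbered scan lines, field 2 the even-numbered
# ones (numbering lines from 1, as television engineers do). Progressive
# scanning would simply draw frame[0], frame[1], ... in order.
frame = [f"line {n}" for n in range(1, 11)]  # a toy 10-line frame

field_1 = frame[0::2]  # lines 1, 3, 5, 7, 9
field_2 = frame[1::2]  # lines 2, 4, 6, 8, 10

print(field_1)
print(field_2)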
107. Frame rate, or frame frequency, is the measurement of the frequency (rate) at which an imaging device produces unique consecutive images, called frames.
109. The ATSC digital broadcasting table set the standards for United States digital broadcasting. It also allowed the transition from the NTSC frame rate of 29.97 to integer frame rates. Once the analog broadcast frequencies are eliminated (when these frequencies are returned to the United States government), the move toward true 30 frames per second production will probably be quite rapid. The 29.97 frames per second frame rate that NTSC employs has caused a series of complications over the past 50 years. There are many professionals, including myself, who will not be sorry to see this "almost 30" format leave the production and postproduction workflow.
People's resistance to calling frame rates exactly what they are has confused many of those trying to understand the complexities of the high definition environment. As mentioned earlier, HD frame rates can be fractional (displayed with a decimal, like 29.97, 59.94, 23.98, etc.) or whole numbers (24, 30, 60). The 18 fractional frame rates are designed to be compatible with the NTSC 29.97 frame rate; the remaining frame rates are whole numbers (24, 30, and 60 frames per second). When NTSC analog broadcasting is terminated, there will probably be a movement toward the whole-number frame rates. However, the ATSC broadcast table will still list fractional frame rates, allowing those programs to be broadcast in their original format and received by digital receivers.
-60i (actually 59.94, or 60 x 1000/1001 to be more precise; 60 interlaced fields = 29.97 frames) is the standard video field rate per second that has been used for NTSC television for decades, whether from a broadcast signal, rented DVD, or home camcorder. (When NTSC color was introduced, the older rate of 60 fields per second was reduced by a factor of 1000/1001 to avoid interference between the chroma subcarrier and the broadcast sound carrier.)
-50i (50 interlaced fields = 25 frames) is the standard video field rate per second for PAL and SECAM television.
-30p, or 30-frame progressive, is a noninterlaced format that produces video at 30 frames per second. Progressive (noninterlaced) scanning mimics a film camera's frame-by-frame image capture and gives clarity for high speed subjects and a cinematic-like appearance. Shooting in 30p mode offers video with no interlace artifacts. This frame rate originated in the 1980s in the music video industry.
-24p is also a noninterlaced format, now widely adopted by those planning on transferring a video signal to film. But film- and video-makers also turn to 24p for the "cine" look even when their productions are not going to be transferred to film, simply because of the "look" of the frame rate. When transferred to NTSC television, the rate is effectively slowed to 23.976 fps, and when transferred to PAL or SECAM it is sped up to 25 fps. 35 mm movie cameras use a standard exposure rate of 24 frames per second, though many cameras offer rates of 23.976 fps for NTSC television and 25 fps for PAL/SECAM.
-25p is a video format which runs twenty-five progressive (hence the "p") frames per second. This frame rate is derived from the PAL television standard of 50i (or 25 interlaced frames per second). While 25p captures only half the motion that normal 50i PAL registers, it yields a higher vertical resolution on moving subjects. It is also better suited to progressive-scan output (e.g., on LCD displays, computer monitors, and projectors) because the interlacing is absent. Like 24p, 25p is often used to achieve a "cine" look.
-60p is a progressive format used in high-end HDTV systems, though it is not technically part of the ATSC or DVB broadcast standards.
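The fractional rates are all just the integer rates scaled by the same NTSC-compatibility factor. A quick Python sketch (my own, using the 1000/1001 factor described above) generates each fractional rate from its integer counterpart:

# The NTSC-compatible "fractional" frame rates are the integer rates
# multiplied by 1000/1001, the factor introduced with NTSC color.
for integer_rate in (24, 30, 60):
    fractional = integer_rate * 1000 / 1001
    print(f"{integer_rate} -> {fractional:.3f}")

# Output:
# 24 -> 23.976   (commonly rounded to 23.98)
# 30 -> 29.970
# 60 -> 59.940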
  117. This digital broadcasting chart includes standard definition digital formats, enhanced definition, and high definition. In this author’s opinion, there are 12 HD formats (listed in Table 1.2) along with the remaining 24 ED and SD formats. Note that although there are 18 formats listed, there are actually two for each when one considers the NTSC-compatible frame rates as well as the integer frame rates. These fractional rates are designed to be compatible with the 29.97 NTSC frame rate. However, digital broadcasting does not require fractional frame rates and these will probably become obsolete as analog broadcasting comes to a close.
118. The HD ATSC broadcast table shown displays the 12 high definition broadcast formats, six of which are designed to integrate with the NTSC broadcast frame rate. When the analog NTSC broadcasting frequencies are returned to the federal government in February of 2009, the integer frame rates will probably be used more often. Many professionals think there are only six high definition digital broadcast formats, but those six are just the NTSC-compatible frame rates; the others are integer frame rates, used either for true film transfer or for future integer-rate broadcasting. Note that the only interlaced format is the 1080 frame size.
119. The “Universal” Format: One high definition frame rate, 1080p23.98, can be converted to many other high def frame rates and sizes. As a result, this format is informally called a universal format. For example, if one shoots a program and edits in 1080p23.98 and outputs the resulting program in the same format, the edited master can be converted to almost any format, including PAL and standard definition NTSC, often directly from a video playback deck. In many cases, the nonlinear editor can also play out the images at other frame rates and sizes. Although this frame rate has the advantage of being convertible to other high definition formats, it may not be acceptable as a production format for a particular network; many networks require that a program be shot and delivered in a specific frame rate and size. A rate of 23.98 frames per second has a unique look and may not be the best choice when a production contains a great deal of action or movement. Some clients do not want their camera's original footage shot at 23.98, even though it could then be converted to the specific delivery requirement. If a company is creating a show for a specific network, sometimes the choice becomes easier: NBC, HDNet, Discovery HD, HBO, and CBS air 1080i59.94, while ABC and ESPN air their programs in 720p59.94.
120. While by no means an exhaustive list, here are a number of HD channels and the type of HD they employ. 720p is more common for sports, and 1080i for drama and movies. Notice that no one, as of the writing of this class, broadcasts in 1080p.
121. Chroma subsampling is the practice of encoding images by implementing less resolution for chroma information than for luma information. It is used in many video encoding schemes, both analog and digital, and also in JPEG encoding. Y'CbCr is not an absolute color space; it is a way of encoding RGB information. The actual color displayed depends on the actual RGB colorants used to display the signal. Historically, this research was done in the early days, when the industry was deciding how to go from black and white to color.
Bit depth refers to the quantization of the three values that make up a high definition signal: Y, Cb, and Cr. The Y represents the luma, or black and white value, in the picture. Cb represents the "color difference" of blue minus luma (B-Y), and Cr is red minus luma (R-Y). From these three values, a red, green, and blue picture with luma values can be calculated and displayed.
An 8-bit depth means there are 8 bits of information for each of these three values that describe a pixel, or 24 bits per pixel. An 8-bit depth allows 256 levels per component; a 10-bit depth allows 1024. The human eye cannot resolve many more than 1024 levels. A 10-bit depth is "better" because a greater amount of color information is recorded, but the signal consumes much more tape/disk space. Yet for color correction latitude and effects (green screen, blue screen, color correction), 10 bit is preferable for high-end HD productions. Most broadcasters consider 8 bit adequate for production, whereas filmmakers want 10 or even 12 bits if possible.
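A short Python sketch (my own arithmetic, nothing format-specific) makes the bit-depth numbers concrete:

# Levels per component and raw bits per pixel at common bit depths,
# assuming three full-resolution components per pixel (i.e., 4:4:4).
for bits in (8, 10, 12):
    levels = 2 ** bits
    bits_per_pixel = 3 * bits
    print(f"{bits}-bit: {levels} levels per component, "
          f"{bits_per_pixel} bits per pixel")

# 8-bit:  256 levels per component, 24 bits per pixel
# 10-bit: 1024 levels per component, 30 bits per pixel
# 12-bit: 4096 levels per component, 36 bits per pixel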
123. YPbPr is a color space used in video electronics, in particular in reference to component video cables. YPbPr is the analog version of the YCbCr color space; the two are numerically equivalent, but YPbPr is designed for use in analog systems whereas YCbCr is intended for digital video.
125. Y'UV refers to an analog encoding scheme, while Y'CbCr refers to a digital encoding scheme. One difference between the two is that the scale factors on the chroma components (U, V, Cb, and Cr) are different. However, the term YUV is often used erroneously to refer to Y'CbCr encoding; hence, expressions like "4:2:2 YUV" always refer to 4:2:2 Y'CbCr, since there is simply no such thing as 4:x:x in analog encoding (such as YUV).
Historically, the YUV color space was developed to provide compatibility between color and black and white analog television systems. The YUV color image information transmitted in the TV signal allows the image to be reproduced properly on both types of receiver, color sets as well as black and white sets. The Y'UV color model is used in the NTSC, PAL, and SECAM composite color video standards. Previous black-and-white systems used only luma (Y') information. Color information (U and V) was added separately via a subcarrier so that a black-and-white receiver would still be able to receive and display a color picture transmission in the receiver's native black-and-white format; black and white TVs decode only the Y part of the signal.
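To illustrate how luma and the two color-difference signals are derived, here is a minimal Python sketch using the BT.709 luma coefficients commonly cited for HD. This is a simplified, full-range version of my own; real systems add offsets and limited-range scaling, and analog YPbPr uses different scale factors than digital Y'CbCr:

# Simplified full-range R'G'B' -> Y'CbCr conversion with BT.709 luma
# coefficients. Inputs are gamma-corrected values in 0.0..1.0; outputs
# are Y' in 0..1 and Cb/Cr in -0.5..0.5. Real digital video adds
# offsets and headroom; this only shows the structure of the encoding.
def rgb_to_ycbcr_709(r, g, b):
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b  # luma
    cb = (b - y) / 1.8556                     # scaled "blue minus luma"
    cr = (r - y) / 1.5748                     # scaled "red minus luma"
    return y, cb, cr

print(rgb_to_ycbcr_709(1.0, 1.0, 1.0))  # white -> (~1.0, ~0.0, ~0.0)
print(rgb_to_ycbcr_709(0.0, 0.0, 1.0))  # blue  -> (~0.07, 0.5, ~-0.05)

A black-and-white receiver, in effect, keeps only the first of these three values.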
129. 4:4:4 Y'CbCr - Each of the three Y'CbCr components has the same sample rate. This scheme is sometimes used in high-end film scanners and cinematic postproduction. Two links (connections) are normally required to carry this bandwidth: Link A carries a 4:2:2 signal and Link B a 0:2:2 signal; combined, they make 4:4:4.
4:4:4 R'G'B' (no subsampling) - Note that "4:4:4" may instead refer to the R'G'B' color space, which implicitly has no chroma subsampling at all. Formats such as HDCAM SR can record 4:4:4 R'G'B' over dual-link HD-SDI.
4:2:2 - The two chroma components are sampled at half the sample rate of luma: the horizontal chroma resolution is halved. This reduces the bandwidth of a video signal by one-third with little to no visual difference. Many high-end digital video formats and interfaces use this scheme: AVC-Intra 100, Digital Betacam, DVCPRO50 and DVCPRO HD, Digital-S, CCIR 601 / Serial Digital Interface / D1, ProRes 422, and XDCAM HD422.
4:2:1 - Although this mode is technically defined, very few software or hardware codecs use it. Cb horizontal resolution is half that of Cr (and a quarter that of Y). This exploits the fact that the human eye is less sensitive to blue than to red. NTSC is similar in using lower resolution for blue/red than for yellow/green, which in turn has less resolution than luma.
4:1:1 - In 4:1:1 chroma subsampling, the horizontal color resolution is quartered, and the bandwidth is halved compared to no chroma subsampling. Initially, the 4:1:1 chroma subsampling of the DV format was not considered broadcast quality and was acceptable only for low-end and consumer applications.[2][3] Currently, DV-based formats (which use 4:1:1 chroma subsampling) are used professionally in electronic news gathering and in playout servers. DV has also been used sporadically in feature films and in digital cinematography. In the NTSC system, if the luma is sampled at 13.5 MHz, the Cr and Cb signals will each be sampled at 3.375 MHz, which corresponds to a maximum Nyquist bandwidth of 1.6875 MHz, whereas a traditional high-end broadcast analog NTSC encoder would have Nyquist bandwidths of 1.5 MHz and 0.5 MHz for the I/Q channels. However, in most equipment, especially cheap TV sets and VHS/Betamax VCRs, the chroma channels have only about 0.5 MHz of bandwidth for both Cr and Cb (or equivalently for I/Q). Thus the DV system actually provides superior color bandwidth compared to the best composite analog specifications for NTSC, despite having only 1/4 of the chroma bandwidth of a "full" digital signal. Formats that use 4:1:1 chroma subsampling include DVCPRO (NTSC and PAL), NTSC DV and DVCAM, and D-7.
4:2:0 - Cb and Cr are each subsampled by a factor of 2 both horizontally and vertically. Different variants of 4:2:0 chroma configurations are found in: all versions of MPEG, including MPEG-2 implementations such as DVD (although some profiles of MPEG-4 allow higher-quality sampling schemes such as 4:4:4); PAL DV and DVCAM; HDV; AVCHD and AVC-Intra 50; Apple Intermediate Codec; most common JPEG/JFIF, H.261, and MJPEG implementations; and VC-1. There are three variants of 4:2:0 schemes, having different horizontal and vertical siting.[4]
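  The ratios above are easy to see in code. Below is a minimal sketch (my illustration, not from the slides) of 4:2:0-style subsampling applied to one chroma plane, assuming simple averaging of each 2x2 block; real codecs use filtered resampling and one of the specific siting variants:

```python
import numpy as np

# Minimal sketch of 4:2:0-style chroma subsampling (illustration only).
# Each 2x2 block of a chroma plane is averaged down to one sample; real
# codecs use proper resampling filters and defined chroma siting.

def subsample_420(chroma: np.ndarray) -> np.ndarray:
    """Average each 2x2 block, halving resolution both ways."""
    h, w = chroma.shape
    blocks = chroma[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3))

cb = np.random.randint(0, 256, (480, 720)).astype(np.float64)
cb_sub = subsample_420(cb)
print(cb.shape, "->", cb_sub.shape)   # (480, 720) -> (240, 360)
# 4:2:2 would halve only the width; 4:1:1 would quarter only the width.
```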
  142. Bit depth refers to the quantization of the three values that make up a high definition signal: Y, Cb, and Cr. Y represents the luma, or black-and-white value, in the picture. Cb represents the "color difference" of blue minus luma (B-Y), and Cr is red minus luma (R-Y). From these three values, a red, green, and blue picture with luma values can be calculated and displayed. An 8-bit depth means there are 8 bits of information for each of these three values describing a pixel, or 24 bits per pixel. 8 bits allow 256 levels per component; 10 bits allow 1,024 levels per component. Since the human eye can distinguish only on the order of a thousand brightness gradations, 10-bit quantization approaches the point where banding stops being visible. A 10-bit depth is "better" because more color information is recorded, but the signal consumes considerably more tape/disk space. For color-correction latitude and effects work (green screen, blue screen, grading), 10 bit is preferable for high-end HD productions. Most broadcasters consider 8 bit adequate for production, whereas filmmakers want 10 or even 12 bits if possible.
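  To make the storage cost of extra bit depth concrete, here is a small back-of-the-envelope calculation (mine, not from the slides) of raw, uncompressed data rates for 1080p at the common depths and subsampling schemes; the figures ignore blanking and container overhead:

```python
# Back-of-the-envelope raw data rates for 1920x1080 video (illustrative only).

def raw_mbps(width: int, height: int, bits: int, samples_per_pixel: float, fps: float) -> float:
    """Raw bit rate in megabits per second for uncompressed frames."""
    return width * height * bits * samples_per_pixel * fps / 1e6

for bits in (8, 10):
    # 4:4:4 carries 3 samples per pixel; 4:2:2 carries 2 (Y plus half-rate
    # Cb and Cr); 4:2:0 carries 1.5.
    for label, spp in (("4:4:4", 3.0), ("4:2:2", 2.0), ("4:2:0", 1.5)):
        rate = raw_mbps(1920, 1080, bits, spp, 29.97)
        print(f"{bits}-bit {label}: {rate:7.1f} Mbit/s")
```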
  146. The triforce of compression: the goal of every compression scheme is threefold. Make the image as small as possible, at the highest possible quality, as quickly as can be achieved. The term CODEC is short for compressor/decompressor.
  147. Progressive (or noninterlaced) scanning is any method of displaying, storing, or transmitting moving images in which all the lines of each frame are drawn in sequence. This contrasts with the interlacing used in traditional television systems, where first the odd lines and then the even lines of each frame are drawn alternately, each half-image being called a field. Historical fact: the system was originally known as "sequential scanning" when it was used in the Baird 240-line television transmissions from Alexandra Palace, England in 1936. It was also used in Baird's experimental 30-line transmissions in the 1920s.
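  A minimal sketch (my illustration, not from the slides) of how two interlaced fields weave back together into one progressive frame:

```python
import numpy as np

# Minimal sketch: weaving two interlaced fields into one progressive frame.
# Illustration only; real deinterlacers must also handle motion between the
# two fields, which are captured at different moments in time.

def weave(top_field: np.ndarray, bottom_field: np.ndarray) -> np.ndarray:
    """Interleave the odd (top) and even (bottom) field lines into a frame."""
    h, w = top_field.shape
    frame = np.empty((2 * h, w), dtype=top_field.dtype)
    frame[0::2] = top_field     # lines 0, 2, 4, ... come from the top field
    frame[1::2] = bottom_field  # lines 1, 3, 5, ... come from the bottom field
    return frame

top = np.full((540, 1920), 200, dtype=np.uint8)
bottom = np.full((540, 1920), 50, dtype=np.uint8)
print(weave(top, bottom).shape)  # (1080, 1920): a full frame from two fields
```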
  150. Video compression formats are adopted by individual camera manufacturers to create their camera formats; manufacturers often tweak a compression scheme to produce their own take on it. The Discrete Cosine Transform (DCT) is used as the first stage of many digital video compression schemes, including JPEG, MPEG-2, and MPEG-4. It converts 8 x 8 pixel blocks of the picture to express them as frequencies and amplitudes. This may not reduce the data by itself, but it arranges the image information so that it can be reduced: since high-frequency, low-amplitude detail is the least noticeable, its coefficients are progressively more coarsely quantized or discarded.
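  To illustrate the DCT stage described above, here is a minimal sketch (my example, not from the slides) using SciPy's 2-D DCT on an 8 x 8 block. Energy concentrates in the low-frequency corner, which is what lets a codec quantize or discard the rest with little visible loss:

```python
import numpy as np
from scipy.fft import dctn, idctn

# Minimal sketch of the DCT stage of JPEG/MPEG-style compression.
# A smooth 8x8 block transforms into a handful of significant low-frequency
# coefficients; zeroing the tiny high-frequency ones loses little detail.

block = np.fromfunction(lambda y, x: 128 + 10 * x + 5 * y, (8, 8))

coeffs = dctn(block, norm="ortho")          # forward 2-D DCT
coeffs[np.abs(coeffs) < 1.0] = 0.0          # crude stand-in for quantization
restored = idctn(coeffs, norm="ortho")      # inverse transform

print(np.count_nonzero(coeffs), "of 64 coefficients kept")
print("max error:", np.abs(block - restored).max())
```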
  155. Three of the most common mastering codecs for Avid, Final Cut, and Premiere