Chapter No:- 02
Presented By,
Er. Swapnil Kaware,
Chapter Name:- Source and Channel Coding
Communication System
The purpose of a Communication System is to transport an information
bearing signal from a source to a user destination via a communication
channel.
ELEMENTS OF DIGITAL COMMUNICATION SYSTEMS:
1. Analog Information Sources.
2. Digital Information Sources.
(i). Analog Information Sources → a microphone actuated by speech, a TV
camera scanning a scene: continuous-amplitude signals.
(ii). Digital Information Sources → teletype or the numerical output of a
computer, which consist of a sequence of discrete symbols or letters.
(i). Source Coding:- Code data to represent the information more
efficiently.
*Reduces the “size” of the data.
*Analog - Encode analog source data into a binary format.
*Digital - Reduce the “size” of digital source data.
(ii). Channel Coding:- Code data for transmission over a
noisy communication channel.
*Increases the “size” of the data.
*Digital - Add redundancy to identify and correct errors.
*Analog - Represent digital values by analog signals.
Communication System
[Block diagram figure omitted.]
Coding in communication system
• In the engineering sense, coding can be classified into four
areas:
• Encryption:- to encrypt information for security purpose.
• Data Compression:- to reduce space for the data stream.
• Data Translation:- to change the form of representation of
the information so that it can be transmitted over a
communication channel.
• Error Control:- to encode a signal so that errors that occur
can be detected and possibly corrected.
Source Coding
(i). The process by which information symbols are mapped to alphabetical symbols
is called source coding.
(ii). The mapping is generally performed in sequences or groups of information and
alphabetical symbols.
(iii). It must be performed in such a manner that it guarantees the exact recovery
of the information symbols from the alphabetical symbols; otherwise it
destroys the basic theme of source coding.
(iv). The source coding is called lossless compression if the information symbols are
exactly recovered from the alphabetical symbols otherwise it is called lossy
compression.
(v). Source coding is also known as the compression or bit-rate reduction process.
(vi). It is the process of removing redundancy from the
source symbols, which essentially reduces data size.
(vii). Source coding is a vital part of any communication
system as it helps to use disk space and transmission
bandwidth efficiently.
(viii). The source coding can either be lossy or lossless.
(ix). In case of lossless encoding, error free
reconstruction of source symbols is possible,
whereas, exact reconstruction of source symbols is
not possible in case of lossy encoding.
Source Coding
• Suppose the word ‘Zebra’ is going to be sent out. Before this
information can be transmitted over the channel, it is first
translated into a stream of bits (‘0’ and ‘1’).
• This process is called source coding. There are many
commonly used ways to perform the translation.
• For example, if the ASCII code is used, each letter is
represented by 7 bits, the so-called code word.
• The letters ‘Z’, ‘e’, ‘b’, ‘r’, ‘a’ are encoded as
‘1011010’, ‘1100101’, ‘1100010’, ‘1110010’, ‘1100001’.
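As a quick check, a minimal Python sketch that reproduces these 7-bit code words directly from the character values:

# Print the 7-bit ASCII code word for each letter of 'Zebra'
for ch in "Zebra":
    print(ch, format(ord(ch), "07b"))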
Source Coding
Source Encoder / Decoder
Source Encoder ( or Source Coder):-
(i). It converts the input i.e. symbol sequence into a binary sequence of 0’s and 1’s by
assigning code words to the symbols in the input sequence.
(ii). For e.g.:- If a source alphabet has one hundred symbols, then 7 bits are
needed to represent each symbol, because 2⁷ = 128 ≥ 100 unique combinations are
available.
(iii). The important parameters of a source encoder are block size, code word
lengths, average data rate and the efficiency of the coder (i.e. actual output data
rate compared to the minimum achievable rate).
Source Decoder:-
(i). Source decoder converts the binary output of the channel
decoder into a symbol sequence.
(ii). The decoder for a system using fixed-length code words is quite
simple, but the decoder for a system using variable-length code
words is considerably more complex.
(iii). Aim of the source coding is to remove the redundancy in the
transmitting information, so that bandwidth required for
transmission is minimized.
(iv). Code words are assigned based on the probabilities of the symbols:
the higher the probability, the shorter the code word.
Ex: Huffman coding.
Source Encoder / Decoder
Huffman coding.
• Rules:-
• Order the symbols (‘Z’, ‘e’, ‘b’, ‘r’, ‘a’) by decreasing probability and
denote them as S1, to Sn (n = 5 for this case).
• Combine the two symbols (Sn, Sn-1) having the lowest probabilities.
• Assign ‘1’ as the last digit of Sn-1 and ‘0’ as the last digit of Sn.
• Form a new source alphabet of (n-1) symbols by combining Sn-1 and Sn
into a new symbol S’n-1 with probability P’n-1 = Pn-1 + Pn.
• Repeat the above steps until the final source alphabet has only one
symbol with probability equal to 1.
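A minimal Python sketch of these rules (the symbol probabilities below are assumed for illustration, since the slides do not give them):

import heapq

def huffman_code(probs):
    # Heap entries: (probability, tie-break id, {symbol: code so far})
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    uid = len(heap)
    while len(heap) > 1:
        p_n, _, grp_n = heapq.heappop(heap)    # S_n: lowest probability -> last digit '0'
        p_n1, _, grp_n1 = heapq.heappop(heap)  # S_(n-1): next lowest -> last digit '1'
        # Prepending here makes the earliest-assigned bit the LAST digit of the final code
        merged = {s: "0" + c for s, c in grp_n.items()}
        merged.update({s: "1" + c for s, c in grp_n1.items()})
        heapq.heappush(heap, (p_n + p_n1, uid, merged))  # new symbol, P'_(n-1) = P_(n-1) + P_n
        uid += 1
    return heap[0][2]

# Assumed probabilities for the five symbols of 'Zebra'
print(huffman_code({"Z": 0.4, "e": 0.2, "b": 0.2, "r": 0.1, "a": 0.1}))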
• The channel encoder systematically adds bits to the message bits to be
transmitted.
• After passing through the channel, the channel decoder detects and corrects
the errors.
• A simple example is to send ‘000’ (‘111’ correspondingly) instead of sending
only one ‘0’ (‘1’ correspondingly) over the channel.
• Due to noise in the channel, the received bits may become ‘001’, even though
only ‘000’ or ‘111’ could have been sent.
• By a majority-logic decoding scheme it is decoded as ‘000’, and therefore
the message is taken to be a ‘0’.
• In general the channel encoder divides the input message bits into blocks of
k message bits and replaces each k-bit message block with an n-bit code word
by introducing (n-k) check bits into each message block. A small sketch of the
repetition example follows below.
Channel Encoder / Decoder:
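A small Python sketch of this repetition-code example (rate 1/3, majority-logic decoding):

from collections import Counter

def encode_repetition(bits, n=3):
    # Send '000' for a 0 and '111' for a 1: repeat each message bit n times
    return [b for bit in bits for b in [bit] * n]

def decode_repetition(received, n=3):
    # Majority-logic decoding: each n-bit block decodes to its majority bit
    blocks = [received[i:i + n] for i in range(0, len(received), n)]
    return [Counter(block).most_common(1)[0][0] for block in blocks]

print(decode_repetition([0, 0, 1]))  # '001' received -> decoded as a 0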
Channel Encoder / Decoder:
There are two methods of channel coding.
1. Block Coding: (i). The encoder takes a block of ‘k’ information
bits from the source encoder and adds ‘r’ error control bits,
where ‘r’ is dependent on ‘k’ and error control capabilities
desired.
2. Convolution Coding: (i). The information bearing message
stream is encoded in a continuous fashion by continuously
interleaving information bits and error control bits.
Block Coding:
• Denoted by (n, k) a block code is a collection of code words
each with length n, k information bits and r = n – k check bits.
• It is linear if it is closed under addition mod 2.
• A Generator Matrix G (of order k × n) is used to generate the
code: G = [ I_k | P ] (k × n),
• where I_k is the k × k identity matrix and P is a k × (n – k) matrix
selected to give desirable properties to the code produced.
• For example, denote the message by D, the generator matrix by G,
and the code word by C; then C = D·G (mod 2).
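A minimal numpy sketch, using an assumed (7,4) code for illustration (the P matrix below is one conventional choice, not one given in these notes):

import numpy as np

# G = [I_k | P] with k = 4, n = 7, so r = n - k = 3 check bits
P = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 1, 1],
              [1, 0, 1]])
G = np.hstack([np.eye(4, dtype=int), P])

D = np.array([1, 0, 1, 1])   # message block of k = 4 bits
C = D @ G % 2                # code word: C = D G (mod 2)
print(C)                     # the first k bits are the message itself (systematic code)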
Average Mutual Information & Entropy
Information theory answers two fundamental
questions in communication theory:
1. What is the ultimate lossless data compression?
2. What is the ultimate transmission rate of reliable
communication?
Information theory gives insight into the problems of
statistical inference, computer science, investments
and many other fields.
• Entropy is a measure of the number of specific ways in
which a system may be arranged, often taken to be a
measure of disorder, or a measure of progressing towards
thermodynamic equilibrium.
• The entropy of an isolated system never decreases, because
isolated systems spontaneously evolve towards
thermodynamic equilibrium, which is the state of maximum
entropy.
• Entropy was originally defined for a thermodynamically
reversible process as dS = dQ/T, where the entropy (S) is found
from the incremental reversible transfer of heat into the system
(dQ) divided by the uniform thermodynamic temperature (T) of the
closed system.
Entropy
• The above definition is sometimes called the macroscopic
definition of entropy because it can be used without regard
to any microscopic picture of the contents of a system.
• In thermodynamics, entropy has been found to be more
generally useful and it has several other formulations.
• Entropy was discovered when it was noticed to be a
quantity that behaves as a function of state.
• Entropy is an extensive property, but it is often given as
an intensive property of specific entropy as entropy per
unit mass or entropy per mole.
Entropy
Entropy
(i). The entropy of a random variable is a function which attempts to
characterize the “unpredictability” of a random variable.
(ii). Consider a random variable X representing the number that
comes up on a roulette wheel and a random variable Y
representing the number that comes up on a fair 6-sided die.
(iii). The entropy of X is greater than the entropy of Y. In addition to
the numbers 1 through 6, the values on the roulette wheel can
take on the values 7 through 36; in some sense, X is less predictable.
(iv). But entropy is not just about the number of possible
outcomes. It is also about their frequency.
(v). For example, let Z be the outcome of a weighted six
sided die that comes up 90% of the time as a “2”. Z has
lower entropy than Y representing a fair 6-sided die.
(vi). The weighted die is less unpredictable, in some sense.
But entropy is not a vague concept.
(vii). It has a precise mathematical definition. In particular, if
a random variable X takes on values in a set X = {x1, x2, ...,
xn}, its entropy is defined as follows.
Entropy
The most fundamental concept of information
theory is the entropy. The entropy of a random
variable X is defined by
H(X) = − Σ_x p(x) log₂ p(x).
The entropy is non-negative. It is zero when the
random variable is “certain” to be predicted.
Entropy
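A small Python sketch of this definition, comparing the fair die Y with the weighted die Z from the previous slide:

import math

def entropy(probs):
    # H(X) = -sum p(x) log2 p(x), with 0 log 0 taken as 0
    return -sum(p * math.log2(p) for p in probs if p > 0)

fair_die = [1 / 6] * 6
weighted_die = [0.02, 0.90, 0.02, 0.02, 0.02, 0.02]  # '2' comes up 90% of the time
print(entropy(fair_die))      # ~2.585 bits
print(entropy(weighted_die))  # ~0.701 bits: lower, i.e. less unpredictable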
Joint Entropy
(i). Joint entropy is the entropy of a joint probability
distribution, or a multi-valued random variable.
(ii). For example, one might wish to know the joint
entropy of a distribution of people defined by hair
color C and eye color E, where C can take on 4
different values from a set C and E can take on 3
values from a set E.
(iii). If P(E,C) defines the joint probability distribution of
hair color and eye color, the joint entropy H(E,C) follows from
the definition below.
Joint and Conditional Entropy
For two random variables X and Y, the joint
entropy is defined by
H(X, Y) = − Σ_x Σ_y p(x, y) log₂ p(x, y).
The conditional entropy is defined by
H(Y | X) = − Σ_x Σ_y p(x, y) log₂ p(y | x).
Mutual information
(i). Mutual information is a quantity that measures a relationship
between two random variables that are sampled simultaneously.
(ii). In particular, it measures how much information is
communicated, on average, in one random variable about
another.
(iii). Intuitively, one might ask, how much does one random variable
tell me about another?
• For example, suppose X represents the roll of a fair 6-sided die,
and Y represents whether the roll is even (0 if even, 1 if odd).
Clearly, the value of Y tells us something about the value of X and
vice versa.
• That is, these variables share mutual information.
(iv). On the other hand, if X represents the roll of one
fair die, and Z represents the roll of another fair
die, then X and Z share no mutual information.
(v). The roll of one die does not contain any
information about the outcome of the other die.
(vi). An important theorem from information theory
says that the mutual information between two
variables is 0 if and only if the two variables are
statistically independent.
Mutual information
Mutual Information
(i). The mutual information of X and Y is defined by
I(X; Y) = Σ_x Σ_y p(x, y) log₂ [ p(x, y) / (p(x) p(y)) ].
(ii). Note that the mutual information is symmetric
in its arguments. That is, I(X; Y) = I(Y; X).
(iii). Mutual information is also non-negative, as we
will show in a minute.
Mutual Information and Entropy
It follows from the definitions of entropy and mutual
information that
I(X; Y) = H(X) − H(X | Y) = H(Y) − H(Y | X).
The mutual information is the reduction in the entropy
of X when Y is known.
Conditional Mutual Information
The conditional mutual information of X and Y
given Z is defined by
I(X; Y | Z) = H(X | Z) − H(X | Y, Z).
Chain Rules of Mutual Information
It can be shown from the definitions that the
mutual information of (X, Y) and Z is the sum of
the mutual information of X and Z and the
conditional mutual information of Y and Z given X.
That is,
I(X, Y; Z) = I(X; Z) + I(Y; Z | X).
Chain Rules of Entropy
From the definition of entropy, it can be shown that
for two random variables X and Y, the joint entropy
is the sum of the entropy of X and the conditional
entropy of Y given X:
H(X, Y) = H(X) + H(Y | X).
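A short Python sketch verifying these identities on the die/parity example from the mutual-information slides (X is a fair die, Y indicates odd/even):

import math
from collections import Counter

joint = {(x, x % 2): 1 / 6 for x in range(1, 7)}  # X: fair die, Y = X mod 2

def H(dist):
    # Entropy of a distribution given as {outcome: probability}
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

px, py = Counter(), Counter()
for (x, y), p in joint.items():
    px[x] += p
    py[y] += p

H_X, H_Y, H_XY = H(px), H(py), H(joint)
print(H_XY - H_X)        # H(Y|X) = H(X,Y) - H(X) = 0: Y is determined by X
print(H_X + H_Y - H_XY)  # I(X;Y) = H(X) + H(Y) - H(X,Y) = 1 bit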
Discrete Memoryless Source
(i). Suppose that a probabilistic experiment involves the observation of the output
emitted by a discrete source during every unit of time (signaling interval).
(ii). The source output is modeled as a discrete random variable S, which takes on
symbols from an alphabet.
(iii). We assume that the symbols emitted by the source during successive signaling
intervals are statistically independent.
(iv). A source having the properties just described is called a discrete memoryless
source.
(v). If the source symbols occur with different probabilities, and the probability ‘pk’
is low, then there is more surprise, and therefore information, when symbol ‘sk’
is emitted by the source.
Lempel- Ziv algorithm
An innovative, radically different method was introduced in 1977 by Abraham
Lempel and Jacob Ziv.
This technique (called Lempel-Ziv) actually consists of two considerably
different algorithms, LZ77 and LZ78. LZ78 inserts one- or multi-character, non-
overlapping, distinct patterns of the message to be encoded in a dictionary.
The multi-character patterns are of the form C0 C1 … Cn-1 Cn. The prefix of
a pattern consists of all the pattern characters except the last: C0 C1 … Cn-1.
[LZ78 output table omitted.]
Lempel-Ziv algorithm
(i). Encode a string by finding the longest match
anywhere within a window of past symbols, and
represent the string by a pointer to the location of the
match within the window and the length of the
match.
(ii). This algorithm is simple to implement and has
become popular as one of the early standard
algorithms for file compression on computers because
of its speed and efficiency.
(iii). It is also used for data compression in high-speed
modems.
Lempel-Ziv algorithm
 As mentioned earlier, static coding schemes require some
knowledge about the data before encoding takes place.
 Universal coding schemes, like LZW, do not require advance
knowledge and can build such knowledge on-the-fly.
 LZW is the foremost technique for general purpose data
compression due to its simplicity and versatility.
 It is the basis of many PC utilities that claim to “double the
capacity of your hard drive”.
 LZW compression uses a code table, with 4096 as a common
choice for the number of table entries.
 Codes 0-255 in the code table are always assigned to
represent single bytes from the input file.
 When encoding begins the code table contains only the first
256 entries, with the remainder of the table being blanks.
 Compression is achieved by using codes 256 through 4095
to represent sequences of bytes.
 As the encoding continues, LZW identifies repeated
sequences in the data, and adds them to the code table.
 Decoding is achieved by taking each code from the
compressed file, and translating it through the code table to
find what character or characters it represents.
Lempel-Ziv algorithm
Initialize table with single-character strings
P = first input character
WHILE not end of input stream
    C = next input character
    IF P + C is in the string table
        P = P + C
    ELSE
        output the code for P
        add P + C to the string table
        P = C
END WHILE
output code for P
Lempel-Ziv algorithm
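A runnable Python version of this pseudocode (a minimal sketch; real implementations also handle table overflow, as discussed later):

def lzw_compress(data):
    table = {chr(i): i for i in range(256)}  # codes 0-255: single characters
    next_code = 256
    p, out = "", []
    for c in data:
        if p + c in table:
            p = p + c                  # P = P + C
        else:
            out.append(table[p])       # output the code for P
            table[p + c] = next_code   # add P + C to the string table
            next_code += 1
            p = c
    out.append(table[p])               # output code for the final P
    return out

print(lzw_compress("BABAABAAA"))       # [66, 65, 256, 257, 65, 260]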
Example 1: Compression using LZW
Example 1: Use the LZW algorithm to compress the
string
BABAABAAA
Example 1: LZW Compression Step 1
BABAABAAA      P = A, C = empty

String table             Encoder output
string    codeword       representing    output code
BA        256            B               66
Example 1: LZW Compression Step 2
BABAABAAA      P = B, C = empty

String table             Encoder output
string    codeword       representing    output code
BA        256            B               66
AB        257            A               65
Example 1: LZW Compression Step 3
BABAABAAA      P = A, C = empty

String table             Encoder output
string    codeword       representing    output code
BA        256            B               66
AB        257            A               65
BAA       258            BA              256
Example 1: LZW Compression Step 4
BABAABAAA      P = A, C = empty

String table             Encoder output
string    codeword       representing    output code
BA        256            B               66
AB        257            A               65
BAA       258            BA              256
ABA       259            AB              257
Example 1: LZW Compression Step 5
BABAABAAA      P = A, C = A

String table             Encoder output
string    codeword       representing    output code
BA        256            B               66
AB        257            A               65
BAA       258            BA              256
ABA       259            AB              257
AA        260            A               65
Example 1: LZW Compression Step 6
BABAABAAA      P = AA, C = empty

String table             Encoder output
string    codeword       representing    output code
BA        256            B               66
AB        257            A               65
BAA       258            BA              256
ABA       259            AB              257
AA        260            A               65
                         AA              260
LZW Decompression
 The LZW decompressor creates the same string table
during decompression.
 It starts with the first 256 table entries initialized to single
characters.
 The string table is updated for each character in the input
stream, except the first one.
 Decoding is achieved by reading codes and translating them
through the code table as it is being built.
LZW Decompression Algorithm
Initialize table with single-character strings
OLD = first input code
output translation of OLD
WHILE not end of input stream
    NEW = next input code
    IF NEW is not in the string table
        S = translation of OLD
        S = S + C
    ELSE
        S = translation of NEW
    output S
    C = first character of S
    add (translation of OLD) + C to the string table
    OLD = NEW
END WHILE
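And a matching Python sketch of the decompressor, including the one tricky case where a code is not yet in the table:

def lzw_decompress(codes):
    table = {i: chr(i) for i in range(256)}  # codes 0-255: single characters
    next_code = 256
    old = codes[0]
    out = [table[old]]                       # output translation of OLD
    for new in codes[1:]:
        if new in table:
            s = table[new]                   # S = translation of NEW
        else:
            s = table[old] + table[old][0]   # S = translation of OLD + C
        out.append(s)
        table[next_code] = table[old] + s[0] # add (translation of OLD) + C
        next_code += 1
        old = new
    return "".join(out)

print(lzw_decompress([66, 65, 256, 257, 65, 260]))  # 'BABAABAAA'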
Example 2: LZW Decompression 1
Example 2: Use LZW to decompress the output sequence of
Example 1:
<66><65><256><257><65><260>.
Example 2: LZW Decompression Step 1
<66><65><256><257><65><260>      OLD = 66 (output B); NEW = 65, S = A, C = A

String table            Decoder output
string    codeword      string
                        B
BA        256           A
Example 2: LZW Decompression Step 2
<66><65><256><257><65><260>      NEW = 256, S = BA, C = B

String table            Decoder output
string    codeword      string
                        B
BA        256           A
AB        257           BA
Example 2: LZW Decompression Step 3
<66><65><256><257><65><260>      NEW = 257, S = AB, C = A

String table            Decoder output
string    codeword      string
                        B
BA        256           A
AB        257           BA
BAA       258           AB
Example 2: LZW Decompression Step 4
<66><65><256><257><65><260>      NEW = 65, S = A, C = A

String table            Decoder output
string    codeword      string
                        B
BA        256           A
AB        257           BA
BAA       258           AB
ABA       259           A
Example 2: LZW Decompression Step 5
<66><65><256><257><65><260>      NEW = 260 is not yet in the table, so S = (translation of OLD) + C = AA

String table            Decoder output
string    codeword      string
                        B
BA        256           A
AB        257           BA
BAA       258           AB
ABA       259           A
AA        260           AA
LZW Summary
 This algorithm compresses repetitive sequences of data well.
 Since the code words are 12 bits, any single encoded character
will expand the data size rather than reduce it.
 In this example, 72 bits are represented with 72 bits of data. After
a reasonable string table is built, compression improves
dramatically.
 Advantages of LZW over Huffman:
 LZW requires no prior information about the input data stream.
 LZW can compress the input stream in one single pass.
 Another advantage of LZW is its simplicity, allowing fast
execution.
LZW: Limitations
 What happens when the dictionary gets too large (i.e., when all the 4096
locations have been used)?
 Here are some options usually implemented:
 Simply forget about adding any more entries and use the table as is.
 Throw the dictionary away when it reaches a certain size.
 Throw the dictionary away when it is no longer effective at compression.
 Clear entries 256-4095 and start building the dictionary again.
 Some clever schemes rebuild a string table from the last N input
characters.
Coding of analog sources
Three types of analog source encoding:-
(1). Temporal waveform coding: designed to represent digitally the time-domain characteristics of the signal.
(i). Pulse-code modulation (PCM).
(ii). Differential pulse-code modulation (DPCM).
(iii). Delta modulation (DM).
(2). Spectral waveform coding: the signal waveform is subdivided into different frequency
bands, and either the time waveform in each band or its spectral characteristics are encoded.
Each subband can be encoded as a time-domain or a frequency-domain waveform.
(3). Model-based coding: based on a mathematical model of the source (the source is modeled as
a linear system that results in the observed source output).
Instead of transmitting samples of the source, the parameters of the linear system are transmitted with
an appropriate excitation. If the number of parameters is sufficiently small, this provides large compression.
Rate–Distortion Theory
(i). In rate–distortion theory, the rate is usually understood as the number
of bits per data sample to be stored or transmitted.
(ii). In the simplest case (which is actually used in most cases), the
distortion is defined as the expected value of the square of the
difference between the input and output signals (i.e., the mean squared
error).
(iii). However, most lossy compression techniques
operate on data that will be perceived by human consumers (listening
to music, watching pictures and video).
(iv). The distortion measure should therefore preferably be modeled on
human perception and perhaps aesthetics: much like the use
of probability in lossless compression, distortion measures can
ultimately be identified with loss functions as used in
Bayesian estimation and decision theory.
(v). In audio compression, perceptual models (and
therefore perceptual distortion measures) are
relatively well developed and routinely used in
compression techniques such as MP3 or Vorbis, but
are often not easy to include in rate–distortion
theory.
(vi). In image and video compression, the human
perception models are less well developed and
inclusion is mostly limited to
the JPEG and MPEG weighting
(quantization, normalization) matrix.
Rate–Distortion Theory
Rate Distortion Function
• RATE :
• It is the number of bits per data sample to be stored or transmitted.
• DISTORTION:
• It is defined as the variance of the difference between input and
output.
• Common distortion measures: (1) Hamming distance; (2) squared error.
• Rate distortion theory is the branch of information theory addressing
the problem of determining the minimal amount of entropy or
information that should be communicated over a channel such that the
source can be reconstructed at the receiver with given distortion.
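As a standard closed-form example (quoted here as a known result, not derived in these notes): for a memoryless Gaussian source of variance σ² under the mean-squared-error distortion measure, the rate-distortion function is

R(D) =
\begin{cases}
\tfrac{1}{2}\log_2\!\left(\dfrac{\sigma^2}{D}\right), & 0 \le D \le \sigma^2,\\
0, & D > \sigma^2.
\end{cases}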
Variance of Input and Output Image (Example of distortion)
[Figure omitted.]
Quantization
• Quantization, in mathematics and digital signal processing, is the
process of mapping a large set of input values to a smaller set –
such as rounding values to some unit of precision.
• A device or algorithmic function that performs quantization is
called a quantizer. The round-off error introduced by quantization
is referred to as quantization error.
• In analog-to-digital conversion, the difference between the actual
analog value and quantized digital value is called quantization
error or quantization distortion.
• This error is either due to rounding or truncation.
• The error signal is sometimes considered as an additional
random signal called quantization noise.
• Quantization is involved to some degree in nearly all digital
signal processing, as the process of representing a signal in
digital form ordinarily involves rounding.
• Quantization also forms the core of essentially all lossy
compression algorithms.
• A quantizer describes the relation between the encoder
input values and the decoder output values.
Quantization
Quantizer
• The design of the quantizer has a significant impact on the
amount of compression obtained and loss incurred in a lossy
compression scheme.
• Quantizer:- A quantizer is nothing but the combination of an encoder
mapping and a decoder mapping.
(i). Encoder Mapping:-
• – The encoder divides the range of source into a number of
intervals.
• – Each interval is represented by a distinct codeword.
(ii). Decoder mapping.
• – For each received codeword, the decoder generates a
reconstruct value.
Components of a Quantizer
1. Encoder mapping: Divides the range of values that the
source generates into a number of intervals.
2. Each interval is then mapped to a codeword. It is a
many-to-one irreversible mapping.
3. The code word only identifies the interval, not the
original value.
4. If the source or sample value comes from an analog
source, it is called an A/D converter.
Mapping of a 3-bit Encoder
[Figure omitted: the input axis from -3.0 to 3.0 is divided into intervals
assigned the codes 000 through 111.]
Mapping of a 3-bit D/A Converter
Input code    Output
000           -3.5
001           -2.5
010           -1.5
011           -0.5
100            0.5
101            1.5
110            2.5
111            3.5
Components of a Quantizer
Decoder:- Given the code word, the decoder gives an
estimated value that the source might have generated.
Usually, it is the midpoint of the interval but a more
accurate estimate will depend on the distribution of the
values in the interval.
In estimating the value, the decoder might generate some
errors.
Digitizing a Sine Wave
t       4cos(2*Pi*t)    A/D output    D/A output    Error
0.05    3.8             111           3.5            0.3
0.10    3.2             111           3.5           -0.3
0.15    2.4             110           2.5           -0.1
0.20    1.2             101           1.5           -0.3
Step Encoder
[Figure omitted.]
Scalar Quantization
• Many of the fundamental ideas of quantization and
compression are easily introduced in the simple context
of scalar quantization.
• An example: any real number x can be rounded off to the
nearest integer, say
q(x) = round(x)
• Maps the real line R (a continuous space) into a discrete
space.
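A tiny Python sketch of this rounding quantizer and its quantization error:

import numpy as np

def uniform_quantize(x, step=1.0):
    # q(x) = step * round(x / step); step = 1.0 gives round-to-nearest-integer
    return step * np.round(np.asarray(x) / step)

x = np.array([0.2, 0.7, 1.4, -2.6])
xq = uniform_quantize(x)
print(xq, x - xq)   # reconstruction values and the quantization error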
Vector Quantization
• Vector Quantization Rule:- Vector quantization (VQ) of X
may be viewed as the classification of the outcomes of X
into a discrete number of sets or cells in N-space.
• Each cell is represented by a vector output Yj.
• Given a distance measure d(x, y), we have:
• VQ output: Q(X) = Yj iff d(X, Yj) < d(X, Yi) for all i ≠ j.
• Quantization region: Vj = { X : d(X, Yj) < d(X, Yi) for all i ≠ j }.
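A minimal nearest-neighbour sketch of this rule in Python, with an assumed 2-D codebook of four output vectors:

import numpy as np

def vq_encode(x, codebook):
    # Q(x) = Y_j where j minimises the Euclidean distance d(x, Y_j)
    return int(np.argmin(np.linalg.norm(codebook - x, axis=1)))

codebook = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])  # assumed Y_0..Y_3
x = np.array([0.2, 0.9])
j = vq_encode(x, codebook)
print(j, codebook[j])   # index sent over the channel, and the reconstruction Y_j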
The Schematic of a Vector Quantizer
[Figures omitted: quantization cells in N-space and the encoder/decoder schematic.]
BCH Codes
• BCH (Bose-Chaudhuri-Hocquenghem) codes form
a large class of multiple random error-correcting
codes.
• They were first discovered by A. Hocquenghem in
1959 and independently by R. C. Bose and D. K. Ray-
Chaudhuri in 1960.
• BCH codes are cyclic codes. Binary BCH codes are the most
popular.
• The first decoding algorithm for binary BCH codes was devised by Peterson in 1960.
BCH codes
• In coding theory the BCH codes form a class of cyclic error-
correcting codes that are constructed using finite fields.
• BCH codes were invented in 1959 by French
mathematician Alexis Hocquenghem, and independently in
1960 by Raj Bose and D. K. Ray-Chaudhuri.
• The acronym BCH comprises the initials of these inventors'
names.
• One of the key features of BCH codes is that during code
design, there is a precise control over the number of symbol
errors correctable by the code.
• It is possible to design binary BCH codes that can correct
multiple bit errors.
• Another advantage of BCH codes is the ease with which
they can be decoded, namely, via an algebraic method
known as syndrome decoding.
• This simplifies the design of the decoder for these codes,
using small low-power electronic hardware.
• BCH codes are used in applications such as satellite
communications, compact disc players, DVDs, disk
drives, solid-state drives and two-dimensional bar codes.
BCH codes
Parameters of BCH Codes
For any positive integers m ≥ 3 and t < 2^(m-1), there exists a binary BCH code with:
• Block length: n = 2^m - 1
• Number of parity-check bits: n - k ≤ mt
• Minimum distance: d_min ≥ 2t + 1
• The code is t-error correcting.
• For example, for m = 6 and t = 3: n = 63, n - k ≤ 18 (k = 45),
and d_min ≥ 7. This is a triple-error-correcting code.
Generator Polynomial of BCH Codes
• Let α be a primitive element in GF(2^m).
For 1 ≤ i ≤ t, let m_(2i-1)(X) be the
minimal polynomial of the field
element α^(2i-1).
• The generator polynomial g(X) of a t-
error-correcting primitive BCH code of
length 2^m - 1 is given by
g(X) = LCM{ m1(X), m3(X), ..., m_(2t-1)(X) }
LCM: Least Common Multiple
• Note that the degree of g(X) is at most mt.
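As a worked instance, a short Python sketch that multiplies out the generator polynomial of the standard (15, 5) triple-error-correcting BCH code over GF(2^4); the three minimal polynomials below are the usual textbook ones and are assumed here:

def polymul_gf2(a, b):
    # Multiply two GF(2) polynomials given as coefficient lists, lowest degree first
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

m1 = [1, 1, 0, 0, 1]  # m1(X) = 1 + X + X^4
m3 = [1, 1, 1, 1, 1]  # m3(X) = 1 + X + X^2 + X^3 + X^4
m5 = [1, 1, 1]        # m5(X) = 1 + X + X^2
g = polymul_gf2(polymul_gf2(m1, m3), m5)  # distinct irreducible factors, so LCM = product
print(g)  # 1 + X + X^2 + X^4 + X^5 + X^8 + X^10, degree 10 = n - k = 15 - 5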
BCH Encoding
• Let m(x) = m0 + m1 x + ... + m_(k-1) x^(k-1) be the message
polynomial to be encoded, where each mi ∈ GF(2).
• Dividing x^(n-k) m(x) by g(x), we have
x^(n-k) m(x) = q(x) g(x) + p(x)
where p(x) = p0 + p1 x + ... + p_(n-k-1) x^(n-k-1) is the remainder.
• Then u(x) = p(x) + x^(n-k) m(x) is the code word
polynomial for the message m(x).
Decoding of BCH Codes
• Consider a BCH code with n = 2^m - 1
and generator polynomial g(x).
• Suppose a code polynomial
v(x) = v0 + v1 x + ... + v_(n-1) x^(n-1)
is transmitted, and let
r(x) = r0 + r1 x + ... + r_(n-1) x^(n-1)
be the received polynomial.
Reed–Solomon (RS) codes
• In coding theory, Reed–Solomon (RS) codes are non-
binary cyclic error-correcting codes invented by Irving S.
Reed and Gustave Solomon.
• They described a systematic way of building codes that could
detect and correct multiple random symbol errors.
• By adding t check symbols to the data, an RS code can detect
any combination of up to t erroneous symbols, or correct up to
⌊t/2⌋ symbols.
• As an erasure code, it can correct up to t known erasures, or it
can detect and correct combinations of errors and erasures.
• Furthermore, RS codes are suitable as multiple-burst bit-error
correcting codes, since a sequence of b + 1 consecutive bit
errors can affect at most two symbols of size b.
• The choice of t is up to the designer of the code, and may be
selected within wide limits.
• In Reed–Solomon coding, source symbols are viewed as
coefficients of a polynomial p(x) over a finite field.
• The original idea was to create n code symbols from k source
symbols by oversampling p(x) at n > k distinct points, transmit
the sampled points, and use interpolation techniques at the
receiver to recover the original message.
Reed–Solomon (RS) codes
• That is not how RS codes are used today. Instead, RS codes are viewed
as cyclic BCH codes, where encoding symbols are derived from the
coefficients of a polynomial constructed by multiplying p(x) with a
cyclic generator polynomial.
• This gives rise to efficient decoding algorithms (described below).
• Reed–Solomon codes have since found important applications from
deep-space communication to consumer electronics.
• They are prominently used in consumer electronics such
as CDs, DVDs, Blu-ray Discs, in data transmission technologies such
as DSL and WiMAX, in broadcast systems such as DVB and ATSC, and in
computer applications such as RAID 6 systems.
Reed–Solomon (RS) codes
Motivation for RS Soft Decision
Decoder
A hard-decision decoder does not fully exploit the decoding capability of the code;
efficient soft-decision decoding of RS codes remains an open problem.
RS Coded Turbo Equalization
System
[Block diagram omitted: source → RS encoder → interleaver (Π) → PR encoder → AWGN
channel; at the receiver, a channel equalizer and the RS decoder exchange a priori
and extrinsic information through an interleaver/de-interleaver loop, and hard
decisions are delivered to the sink.]
Soft input soft output (SISO) algorithm is favorable
Applications
• Data storage (e.g., bar codes)
• Data transmission
• Space transmission
Reed Muller Codes
• Reed Muller codes are some of the oldest error correcting codes.
• Error correcting codes are very useful in sending information over long
distances or through channels where errors might occur in the message.
• They have become more prevalent as telecommunications have
expanded and developed a use for codes that can self-correct.
• Reed Muller codes were invented in 1954 by D. E. Muller and I. S. Reed.
• In 1972, a Reed Muller code was used by Mariner 9 to transmit black and white
photographs of Mars.
• Reed Muller codes are relatively easy to decode.
Decoding Reed Muller:-
 Decoding Reed Muller encoded messages is more complex than
encoding them.
 The theory behind encoding and decoding is based on the distance
between vectors.
 The distance between any two vectors is the number of places in the
two vectors that have different values.
 The basis for Reed Muller encoding is the assumption that the closest
codeword in R(r, m) to the received message is the original encoded
message.
Reed Muller Codes
• This method of decoding is given by the following
algorithm: Apply Steps 1 and 2.
• The vector spaces used here consist of strings of
length 2^m, where m is a positive integer, of numbers in
F2 = {0, 1}.
• The code words of a Reed Muller code form a subspace of
such a space.
• Vectors can be manipulated by three main operations:
addition, multiplication, and the dot product.
Reed Muller Codes
• For two vectors x = (x1, x2, ..., xn) and y = (y1, y2, ..., yn),
addition is defined by x + y = (x1 + y1, x2 + y2, ..., xn + yn),
• where each xi or yi is either 1 or 0, and
• 1 + 1 = 0; 0 + 1 = 1; 1 + 0 = 1; 0 + 0 = 0.
• For example, if x = (10011110) and y = (11100001),
then the sum of x and y is
x + y = (10011110) + (11100001) = (01111111).
Reed Muller Codes
Convolution codes
• In telecommunication, a convolutional code is a type of error-
correcting code in which each m-bit information symbol (each m-bit
string) to be encoded is transformed into an n-bit symbol, where m/n is
the code rate (n ≥ m).
• The transformation is a function of the last k information symbols,
where k is the constraint length of the code.
• Convolutional codes are used extensively in numerous applications in
order to achieve reliable data transfer, including digital video,
radio, mobile communication, and satellite communication.
• These codes are often implemented in concatenation with a hard-
decision code, particularly Reed Solomon.
• Prior to turbo codes, such constructions were the most efficient,
coming closest to the Shannon limit.
Convolution codes
• Convolutional Encoding:-
• To convolutionally encode data, start with k memory
registers, each holding 1 input bit. Unless otherwise
specified, all memory registers start with a value of 0.
• The encoder has n modulo-2 adders (a modulo-2 adder
can be implemented with a single Boolean XOR gate,
where the logic is: 0+0 = 0, 0+1 = 1, 1+0 = 1, 1+1 = 0),
and n generator polynomials, one for each adder.
• An input bit m1 is fed into the leftmost register. Using the
generator polynomials and the existing values in the
remaining registers, the encoder outputs n bits.
Convolution codes
• Now bit shift all register values to the right (m1 moves
to m0, m0 moves to m-1) and wait for the next input bit.
• If there are no remaining input bits, the encoder continues
output until all registers have returned to the zero state.
• The example below is a rate 1/3 (m/n) encoder with
constraint length (k) of 3. Generator polynomials are G1 =
(1,1,1), G2 = (0,1,1), and G3 = (1,0,1). Therefore, the output
bits are calculated (modulo 2) as follows:
• n1 = m1 + m0 + m-1, n2 = m0 + m-1, n3 = m1 + m-1.
Convolution codes
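A small Python sketch of this rate-1/3 encoder (zero flushing is assumed so the encoder returns to the all-zero state):

def conv_encode(bits, gens=((1, 1, 1), (0, 1, 1), (1, 0, 1))):
    # Taps apply to the window (m1, m0, m-1); G1, G2, G3 are from the text
    state = [0, 0]                  # (m0, m-1), registers start at 0
    out = []
    for m1 in bits + [0, 0]:        # append two zeros to flush the registers
        window = [m1] + state
        out.extend(sum(g * w for g, w in zip(gen, window)) % 2 for gen in gens)
        state = [m1, state[0]]      # shift right: m1 -> m0, m0 -> m-1
    return out

print(conv_encode([1, 0, 1]))       # 15 code bits for 3 message bits (plus flushing)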
Convolutional codes
• Convolutional codes map information to code bits
sequentially by convolving a sequence of information
bits with “generator” sequences.
• A convolutional encoder encodes K information bits to
N>K code bits at one time step.
• Convolutional codes can be regarded as block codes
for which the encoder has a certain structure such
that we can express the encoding operation as
convolution.
• The convolutional code is linear.
• Code bits generated at time step i are affected by information
bits up to M time steps i – 1, i – 2, …, i – M back in time. M is
the maximal delay of information bits in the encoder.
• Code memory is the (minimal) number of registers needed to construct
an encoding circuit for the code.
• Constraint length is the overall number of information bits
affecting code bits generated at time step i:
constraint length = code memory + K = MK + K = (M + 1)K.
• A convolutional code is systematic if the N code bits generated
at time step i contain the K information bits.
Convolutional codes
Modified State Diagram
[State-diagram figure omitted.]
A path from state (00) back to (00) is labelled by
D^i (i = weight), L^j (j = length), and N^k (k = number of information 1’s).
Transfer Function
• The transfer function T(D, L, N) is
T(D, L, N) = D^5 L^3 N / (1 - D L N (1 + L))
• The distance properties and the error rate performance
of a convolutional code can be obtained from its transfer
function.
• Since a convolutional code is linear, the set of Hamming
distances of the code sequences generated up to some
stage in the trellis, from the all-zero code sequence, is
the same as the set of distances of the code sequences
with respect to any other code sequence.
• Thus, we assume that the all-zero path is the input to the
encoder.
Transfer Function
Transfer Function
• Performing long division:
T(D, L, N) = D^5 L^3 N + D^6 L^4 N^2 + D^6 L^5 N^2 + D^7 L^5 N^3 + ...
• If interested in the Hamming distance property of the code only,
set N = 1 and L = 1 to get the distance transfer function:
T(D) = D^5 + 2 D^6 + 4 D^7 + ...
There is one code sequence of weight 5; therefore d_free = 5.
There are two code sequences of weight 6,
four code sequences of weight 7, and so on.
Performance
• The event error probability is
defined as the probability that the
decoder selects a code sequence
that was not transmitted.
• For two code words a Hamming distance d apart (on a BSC with
crossover probability p), the pairwise error probability is
P(d) = Σ_{i=(d+1)/2}^{d} C(d, i) p^i (1 - p)^(d-i)   (for d odd),
which can be upper-bounded by
P(d) < [4 p (1 - p)]^(d/2).
• The upper bound for the event error probability is given by
P_event ≤ Σ_{d = d_free}^{∞} A(d) P(d)
where A(d) is the number of code words at distance d.
Performance
• Using T(D, L, N), we can formulate this as
P_event < T(D, L, N) evaluated at L = 1, N = 1, D = 2√(p(1-p))
• The bit error rate (not probability) is written as
P_bit < dT(D, L, N)/dN evaluated at L = 1, N = 1, D = 2√(p(1-p))
Viterbi Decoding Algorithm
• The Viterbi algorithm is a standard component of tens of millions of high-
speed modems. It is a key building block of modern information infrastructure.
• The symbol "VA" is ubiquitous in the block diagrams of modern receivers.
• Essentially: the VA finds a path through any Markov graph, which is a sequence
of states governed by a Markov chain.
• Many practical applications:
– convolutional decoding and channel trellis decoding.
– fading communication channels,
– partial response channels in recording systems,
– optical character recognition,
– voice recognition.
– DNA sequence analysis
– etc.
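A minimal hard-decision sketch of the VA in Python, applied to the rate-1/3, K = 3 code from the convolutional-coding slides (the received sequence and the single flipped bit are assumed for illustration):

from itertools import product

GENS = ((1, 1, 1), (0, 1, 1), (1, 0, 1))      # rate-1/3, K = 3 code from earlier

def branch(state, bit):
    # Output triple and next state for input `bit` from `state` = (m0, m-1)
    window = (bit,) + state
    out = tuple(sum(g * w for g, w in zip(gen, window)) % 2 for gen in GENS)
    return out, (bit, state[0])

def viterbi(received, n=3):
    frames = [tuple(received[i:i + n]) for i in range(0, len(received), n)]
    states = list(product((0, 1), repeat=2))
    metric = {s: 0 if s == (0, 0) else float("inf") for s in states}
    paths = {s: [] for s in states}
    for r in frames:
        new_metric = {s: float("inf") for s in states}
        new_paths = {s: [] for s in states}
        for s in states:
            if metric[s] == float("inf"):
                continue
            for bit in (0, 1):
                out, nxt = branch(s, bit)
                m = metric[s] + sum(a != b for a, b in zip(out, r))  # Hamming metric
                if m < new_metric[nxt]:                              # add-compare-select
                    new_metric[nxt] = m
                    new_paths[nxt] = paths[s] + [bit]
        metric, paths = new_metric, new_paths
    return paths[min(states, key=lambda s: metric[s])]

coded = [1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1]  # encoding of 1 0 1 (+ flushing)
coded[2] ^= 1                                           # one channel error
print(viterbi(coded))                                   # -> [1, 0, 1, 0, 0]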
• A Viterbi decoder uses the Viterbi algorithm for decoding a bit stream
that has been encoded using a convolutional code.
• There are other algorithms for decoding a convolutionally encoded
stream (for example, the Fano algorithm).
• The Viterbi algorithm is the most resource-consuming, but it performs
maximum-likelihood decoding.
• It is most often used for decoding convolutional codes with constraint
lengths k<=10, but values up to k=15 are used in practice.
• Viterbi decoding was developed by Andrew J. Viterbi and published in
the paper "Error Bounds for Convolutional Codes and an
Asymptotically Optimum Decoding Algorithm", IEEE Transactions on
Information Theory, Volume IT-13, pages 260-269, in April, 1967.
Viterbi Decoding Algorithm
• Applications:-
(i). The Viterbi decoding algorithm is widely used in the following areas:
(ii). Radio communication: digital TV (ATSC, QAM, DVB-T, etc.), radio
relay, satellite communications, PSK31 digital mode for amateur radio.
(iii). Decoding trellis-coded modulation (TCM), the technique used in
telephone-line modems to squeeze high spectral efficiency out of
3 kHz-bandwidth analog telephone lines.
(iv). Computer storage devices such as hard disk drives.
(v). Automatic speech recognition.
Viterbi Decoding Algorithm
Viterbi Decoding
• Viterbi decoding is one of two types of decoding algorithms
used with convolutional encoding; the other type is sequential
decoding.
• Sequential decoding has the advantage that it can perform
very well with long-constraint-length convolutional codes, but
it has a variable decoding time.
• A discussion of sequential decoding algorithms is beyond the
scope of this tutorial; the reader can find sources discussing
this topic in the Books about Forward Error Correction section
of the bibliography.
• Viterbi decoding has the advantage that it has a fixed decoding
time. It is well suited to hardware decoder implementation.
• But its computational requirements grow exponentially as a function of
the constraint length, so it is usually limited in practice to constraint
lengths of K = 9 or less.
• Stanford Telecom produces a K = 9 Viterbi decoder that operates at rates
up to 96 kbps, and a K = 7 Viterbi decoder that operates at up to 45
Mbps.
• Advanced Wireless Technologies offers a K = 9 Viterbi decoder that
operates at rates up to 2 Mbps.
• NTT has announced a Viterbi decoder that operates at 60 Mbps, but its
commercial availability is not known.
• Moore's Law applies to Viterbi decoders as well as to microprocessors,
so consider the rates mentioned above as a snapshot of the state-of-
the-art taken in early 1999.
Viterbi Decoding
Trellis Coded Modulation
• In telecommunication, trellis modulation (also known as trellis
coded modulation, or simply TCM).
• TCM is a modulation scheme which allows highly efficient
transmission of information over band-limited channels such
as telephone lines.
• Trellis modulation was invented by Gottfried Ungerboeck, who was
working for IBM in the 1970s.
• The name trellis was coined because a state diagram of
the technique, when drawn on paper, closely resembles
the trellis lattice used in rose gardens.
• The scheme is basically a convolutional code of rate
r/(r + 1).
• Ungerboeck's unique contribution is to apply the parity
check on a per symbol basis instead of the older technique
of applying it to the bit stream then modulating the bits.
• The key idea he termed Mapping by Set Partitions.
Trellis Coded Modulation.
• This idea was to group the symbols in a tree like fashion
then separate them into two limbs of equal size.
• At each limb of the tree, the symbols were further apart.
• Although hard to visualize in multi-dimensions, a simple
one dimension example illustrates the basic procedure.
• Suppose the symbols are located at [1, 2, 3, 4, ...].
• Then take all odd symbols and place them in one group,
and the even symbols in the second group.
Trellis Coded Modulation.
• This is not quite accurate because Ungerboeck was looking at the two
dimensional problem, but the principle is the same, take every other one for
each group and repeat the procedure for each tree limb.
• He next described a method of assigning the encoded bit stream onto the
symbols in a very systematic procedure. Once this procedure was fully
described, his next step was to program the algorithms into a computer and let
the computer search for the best codes.
• The results were astonishing. Even the most simple code (4 state) produced
error rates nearly one one-thousandth of an equivalent uncoded system.
• For two years Ungerboeck kept these results private and only conveyed them
to close colleagues.
• Finally, in 1982, Ungerboeck published a paper describing the principles of
trellis modulation.
Trellis Coded Modulation.
References
1. “Digital and Analog Communication Systems” –
K. Sam Shanmugam, John Wiley.
2. “An introduction to Analog and Digital Communication”-
Simon Haykin, John Wiley.
3. “Digital Communication- Fundamentals & Applications” –
Bernard Sklar, Pearson Education.
4. “Analog & Digital Communications”-
HSU, Tata Mcgraw Hill, II edition.
Thank You!!!!!
Have A Nice Day!!!!!
(svkaware@yahoo.co.in)
 
Digital Electronics Basics by Er. Swapnil Kaware
Digital Electronics Basics by Er. Swapnil KawareDigital Electronics Basics by Er. Swapnil Kaware
Digital Electronics Basics by Er. Swapnil Kaware
 
Digital computer Basics by, Er. Swapnil Kaware
Digital computer Basics by, Er. Swapnil KawareDigital computer Basics by, Er. Swapnil Kaware
Digital computer Basics by, Er. Swapnil Kaware
 
Cryptography & Network Security By, Er. Swapnil Kaware
Cryptography & Network Security By, Er. Swapnil KawareCryptography & Network Security By, Er. Swapnil Kaware
Cryptography & Network Security By, Er. Swapnil Kaware
 
Digital signal processing By Er. Swapnil Kaware
Digital signal processing By Er. Swapnil KawareDigital signal processing By Er. Swapnil Kaware
Digital signal processing By Er. Swapnil Kaware
 

Recently uploaded

Sample pptx for embedding into website for demo
Sample pptx for embedding into website for demoSample pptx for embedding into website for demo
Sample pptx for embedding into website for demoHarshalMandlekar2
 
Assure Ecommerce and Retail Operations Uptime with ThousandEyes
Assure Ecommerce and Retail Operations Uptime with ThousandEyesAssure Ecommerce and Retail Operations Uptime with ThousandEyes
Assure Ecommerce and Retail Operations Uptime with ThousandEyesThousandEyes
 
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc
 
Scale your database traffic with Read & Write split using MySQL Router
Scale your database traffic with Read & Write split using MySQL RouterScale your database traffic with Read & Write split using MySQL Router
Scale your database traffic with Read & Write split using MySQL RouterMydbops
 
Rise of the Machines: Known As Drones...
Rise of the Machines: Known As Drones...Rise of the Machines: Known As Drones...
Rise of the Machines: Known As Drones...Rick Flair
 
TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024Lonnie McRorey
 
How to Effectively Monitor SD-WAN and SASE Environments with ThousandEyes
How to Effectively Monitor SD-WAN and SASE Environments with ThousandEyesHow to Effectively Monitor SD-WAN and SASE Environments with ThousandEyes
How to Effectively Monitor SD-WAN and SASE Environments with ThousandEyesThousandEyes
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsSergiu Bodiu
 
[Webinar] SpiraTest - Setting New Standards in Quality Assurance
[Webinar] SpiraTest - Setting New Standards in Quality Assurance[Webinar] SpiraTest - Setting New Standards in Quality Assurance
[Webinar] SpiraTest - Setting New Standards in Quality AssuranceInflectra
 
The State of Passkeys with FIDO Alliance.pptx
The State of Passkeys with FIDO Alliance.pptxThe State of Passkeys with FIDO Alliance.pptx
The State of Passkeys with FIDO Alliance.pptxLoriGlavin3
 
How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.Curtis Poe
 
UiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to HeroUiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to HeroUiPathCommunity
 
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptxUse of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptxLoriGlavin3
 
Enhancing User Experience - Exploring the Latest Features of Tallyman Axis Lo...
Enhancing User Experience - Exploring the Latest Features of Tallyman Axis Lo...Enhancing User Experience - Exploring the Latest Features of Tallyman Axis Lo...
Enhancing User Experience - Exploring the Latest Features of Tallyman Axis Lo...Scott Andery
 
Data governance with Unity Catalog Presentation
Data governance with Unity Catalog PresentationData governance with Unity Catalog Presentation
Data governance with Unity Catalog PresentationKnoldus Inc.
 
The Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsThe Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsPixlogix Infotech
 
Genislab builds better products and faster go-to-market with Lean project man...
Genislab builds better products and faster go-to-market with Lean project man...Genislab builds better products and faster go-to-market with Lean project man...
Genislab builds better products and faster go-to-market with Lean project man...Farhan Tariq
 
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptxPasskey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptxLoriGlavin3
 
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptxThe Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptxLoriGlavin3
 
Unleashing Real-time Insights with ClickHouse_ Navigating the Landscape in 20...
Unleashing Real-time Insights with ClickHouse_ Navigating the Landscape in 20...Unleashing Real-time Insights with ClickHouse_ Navigating the Landscape in 20...
Unleashing Real-time Insights with ClickHouse_ Navigating the Landscape in 20...Alkin Tezuysal
 

Recently uploaded (20)

Sample pptx for embedding into website for demo
Sample pptx for embedding into website for demoSample pptx for embedding into website for demo
Sample pptx for embedding into website for demo
 
Assure Ecommerce and Retail Operations Uptime with ThousandEyes
Assure Ecommerce and Retail Operations Uptime with ThousandEyesAssure Ecommerce and Retail Operations Uptime with ThousandEyes
Assure Ecommerce and Retail Operations Uptime with ThousandEyes
 
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
 
Scale your database traffic with Read & Write split using MySQL Router
Scale your database traffic with Read & Write split using MySQL RouterScale your database traffic with Read & Write split using MySQL Router
Scale your database traffic with Read & Write split using MySQL Router
 
Rise of the Machines: Known As Drones...
Rise of the Machines: Known As Drones...Rise of the Machines: Known As Drones...
Rise of the Machines: Known As Drones...
 
TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024
 
How to Effectively Monitor SD-WAN and SASE Environments with ThousandEyes
How to Effectively Monitor SD-WAN and SASE Environments with ThousandEyesHow to Effectively Monitor SD-WAN and SASE Environments with ThousandEyes
How to Effectively Monitor SD-WAN and SASE Environments with ThousandEyes
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platforms
 
[Webinar] SpiraTest - Setting New Standards in Quality Assurance
[Webinar] SpiraTest - Setting New Standards in Quality Assurance[Webinar] SpiraTest - Setting New Standards in Quality Assurance
[Webinar] SpiraTest - Setting New Standards in Quality Assurance
 
The State of Passkeys with FIDO Alliance.pptx
The State of Passkeys with FIDO Alliance.pptxThe State of Passkeys with FIDO Alliance.pptx
The State of Passkeys with FIDO Alliance.pptx
 
How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.
 
UiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to HeroUiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to Hero
 
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptxUse of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
 
Enhancing User Experience - Exploring the Latest Features of Tallyman Axis Lo...
Enhancing User Experience - Exploring the Latest Features of Tallyman Axis Lo...Enhancing User Experience - Exploring the Latest Features of Tallyman Axis Lo...
Enhancing User Experience - Exploring the Latest Features of Tallyman Axis Lo...
 
Data governance with Unity Catalog Presentation
Data governance with Unity Catalog PresentationData governance with Unity Catalog Presentation
Data governance with Unity Catalog Presentation
 
The Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsThe Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and Cons
 
Genislab builds better products and faster go-to-market with Lean project man...
Genislab builds better products and faster go-to-market with Lean project man...Genislab builds better products and faster go-to-market with Lean project man...
Genislab builds better products and faster go-to-market with Lean project man...
 
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptxPasskey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptx
 
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptxThe Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
 
Unleashing Real-time Insights with ClickHouse_ Navigating the Landscape in 20...
Unleashing Real-time Insights with ClickHouse_ Navigating the Landscape in 20...Unleashing Real-time Insights with ClickHouse_ Navigating the Landscape in 20...
Unleashing Real-time Insights with ClickHouse_ Navigating the Landscape in 20...
 

Digital Communication Techniques

  • 9. Source Encoder / Decoder
Source Encoder (or Source Coder):-
(i). It converts the input symbol sequence into a binary sequence of 0's and 1's by assigning code words to the symbols in the input sequence.
(ii). For example, if a source alphabet has one hundred symbols, then 7 bits are needed to represent each symbol, because 2⁷ = 128 unique combinations are available (2⁶ = 64 would be too few).
(iii). The important parameters of a source encoder are block size, code word lengths, average data rate and the efficiency of the coder (i.e. the actual output data rate compared to the minimum achievable rate).
  • 10. Source Decoder:-
(i). The source decoder converts the binary output of the channel decoder into a symbol sequence.
(ii). The decoder for a system using fixed-length code words is quite simple, but the decoder for a system using variable-length code words can be very complex.
(iii). The aim of source coding is to remove the redundancy in the transmitted information, so that the bandwidth required for transmission is minimized.
(iv). Code words are assigned based on the probability of each symbol: the higher the probability, the shorter the code word. Example: Huffman coding.
  • 11. Huffman coding
• Rules:-
• Order the symbols ('Z', 'e', 'b', 'r', 'a') by decreasing probability and denote them S1 to Sn (n = 5 in this case).
• Combine the two symbols (Sn, Sn-1) having the lowest probabilities.
• Assign '1' as the last bit of Sn-1 and '0' as the last bit of Sn.
• Form a new source alphabet of (n-1) symbols by combining Sn-1 and Sn into a new symbol S'n-1 with probability P'n-1 = Pn-1 + Pn.
• Repeat the above steps until the final source alphabet has only one symbol with probability equal to 1. (A sketch of this procedure in code follows below.)
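Below is a minimal Python sketch of this procedure using a binary heap; the symbol probabilities chosen for 'Zebra' are illustrative assumptions, not values from these notes.

import heapq

def huffman_code(probs):
    # Build a Huffman code from {symbol: probability}.
    # Heap entries: (probability, tie-breaker, {symbol: partial codeword}).
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        p_n, _, grp_n = heapq.heappop(heap)      # lowest probability (Sn): gets '0'
        p_n1, _, grp_n1 = heapq.heappop(heap)    # second lowest (Sn-1): gets '1'
        merged = {s: "0" + w for s, w in grp_n.items()}
        merged.update({s: "1" + w for s, w in grp_n1.items()})
        heapq.heappush(heap, (p_n + p_n1, count, merged))  # P'n-1 = Pn-1 + Pn
        count += 1
    return heap[0][2]

codes = huffman_code({"Z": 0.10, "e": 0.30, "b": 0.15, "r": 0.15, "a": 0.30})
print(codes)   # frequent symbols receive shorter code words

Note that a bit is prepended at each merge, so the bit assigned at the first combining step ends up last in the codeword, matching the rules above.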
  • 12. Huffman coding (figure).
  • 13. Channel Encoder / Decoder:
• The channel encoder systematically adds bits to the message bits to be transmitted.
• After passing through the channel, the channel decoder detects and corrects the errors.
• A simple example is to send '000' (or '111', correspondingly) instead of sending a single '0' (or '1') over the channel.
• Due to noise in the channel, the received bits may become '001', even though only '000' or '111' could have been sent.
• By the majority-logic decoding scheme, '001' is decoded as '000', and therefore the message is taken to be '0'. (A sketch of this scheme follows below.)
• In general, the channel encoder divides the input message bits into blocks of k message bits and replaces each k-bit message block with an n-bit code word by introducing (n − k) check bits into each message block.
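As a concrete sketch of this majority-logic scheme (a rate-1/3 repetition code), in Python:

def repeat3_encode(bits):
    # Send each message bit three times.
    return [b for b in bits for _ in range(3)]

def repeat3_decode(received):
    # Majority vote over each block of three received bits.
    return [1 if sum(received[i:i + 3]) >= 2 else 0
            for i in range(0, len(received), 3)]

sent = repeat3_encode([0, 1])     # [0, 0, 0, 1, 1, 1]
noisy = [0, 0, 1, 1, 1, 1]        # channel flips one bit of the first block
print(repeat3_decode(noisy))      # [0, 1]: the single error is corrected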
  • 14. Channel Encoder / Decoder:
There are two methods of channel coding.
1. Block Coding:
(i). The encoder takes a block of 'k' information bits from the source encoder and adds 'r' error control bits, where 'r' depends on 'k' and the error control capability desired.
2. Convolution Coding:
(i). The information-bearing message stream is encoded in a continuous fashion by continuously interleaving information bits and error control bits.
  • 15. Block Coding:
• Denoted by (n, k), a block code is a collection of code words, each with length n, containing k information bits and r = n − k check bits.
• It is linear if it is closed under addition mod 2.
• A generator matrix G (of order k × n) is used to generate the code:
G = [ Ik | P ] (k × n),
• where Ik is the k × k identity matrix and P is a k × (n − k) matrix selected to give desirable properties to the code produced.
• For example, denote D to be the message, G to be the generator matrix, and C to be the code word; then C = D·G (mod 2). (See the sketch below.)
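A small Python sketch of encoding with C = D·G (mod 2); the particular parity submatrix P below (which yields the (7, 4) Hamming code) is an illustrative assumption:

import numpy as np

I4 = np.eye(4, dtype=int)
P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
G = np.hstack([I4, P])        # systematic generator matrix [Ik | P], k x n = 4 x 7

D = np.array([1, 0, 1, 1])    # k = 4 message bits
C = D @ G % 2                 # codeword C = D.G (mod 2)
print(C)                      # [1 0 1 1 0 1 0]; the first k bits are the message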
  • 17. Average Mutual Information & Entropy
Information theory answers two fundamental questions in communication theory:
1. What is the ultimate limit of lossless data compression?
2. What is the ultimate transmission rate for reliable communication?
Information theory also gives insight into problems of statistical inference, computer science, investments and many other fields.
  • 18. Entropy
• Entropy is a measure of the number of specific ways in which a system may be arranged, often taken to be a measure of disorder, or a measure of progress towards thermodynamic equilibrium.
• The entropy of an isolated system never decreases, because isolated systems spontaneously evolve towards thermodynamic equilibrium, which is the state of maximum entropy.
• Entropy was originally defined for a thermodynamically reversible process as dS = dQ/T, where the incremental entropy (dS) is found from an incremental reversible transfer of heat into a closed system (dQ) divided by the uniform thermodynamic temperature (T) of that system.
  • 19. Entropy
• The above definition is sometimes called the macroscopic definition of entropy because it can be used without regard to any microscopic picture of the contents of a system.
• In thermodynamics, entropy has been found to be more generally useful, and it has several other formulations.
• Entropy was discovered when it was noticed to be a quantity that behaves as a function of state.
• Entropy is an extensive property, but it is often given as the intensive property 'specific entropy': entropy per unit mass or entropy per mole.
  • 20. Entropy
(i). The entropy of a random variable is a function which attempts to characterize the 'unpredictability' of a random variable.
(ii). Consider a random variable X representing the number that comes up on a roulette wheel and a random variable Y representing the number that comes up on a fair 6-sided die.
(iii). The entropy of X is greater than the entropy of Y: in addition to the numbers 1 through 6, the values on the roulette wheel can take on the values 7 through 36.
(iv). In some sense, X is less predictable.
  • 21. Entropy
(iv). But entropy is not just about the number of possible outcomes; it is also about their frequency.
(v). For example, let Z be the outcome of a weighted six-sided die that comes up as a '2' 90% of the time. Z has lower entropy than Y, the fair 6-sided die.
(vi). The weighted die is less unpredictable, in some sense. But entropy is not a vague concept.
(vii). It has a precise mathematical definition for a random variable X taking values in a set 𝒳 = {x1, x2, ..., xn}, given on the next slide.
  • 22. Entropy
The most fundamental concept of information theory is the entropy. The entropy of a random variable X is defined by
H(X) = − Σx p(x) log2 p(x).
The entropy is non-negative. It is zero when the random variable is 'certain', i.e., perfectly predictable. (A small numerical sketch follows below.)
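A minimal numerical sketch of this definition in Python (the exact split of the remaining 10% probability of the weighted die is an assumption; slide 21 only fixes the 90% outcome):

import math

def entropy(probs):
    # H(X) = -sum p(x) log2 p(x); terms with p = 0 contribute nothing.
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([1 / 6] * 6))           # fair die Y:      ~2.585 bits
print(entropy([0.9] + [0.02] * 5))    # weighted die Z:  ~0.701 bits
print(entropy([1.0]))                 # certain outcome:  0 bits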
  • 23. Joint Entropy
(i). Joint entropy is the entropy of a joint probability distribution, or equivalently of a multi-valued random variable.
(ii). For example, one might wish to know the joint entropy of a distribution of people defined by hair color C and eye color E, where C can take on 4 different values from a set 𝒞 and E can take on 3 values from a set ℰ.
(iii). P(E, C) then defines the joint probability distribution of hair color and eye color.
  • 24. Joint and Conditional Entropy
For two random variables X and Y, the joint entropy is defined by
H(X, Y) = − Σx Σy p(x, y) log2 p(x, y).
The conditional entropy is defined by
H(Y | X) = − Σx Σy p(x, y) log2 p(y | x).
  • 25. Mutual information
(i). Mutual information is a quantity that measures the relationship between two random variables that are sampled simultaneously.
(ii). In particular, it measures how much information is communicated, on average, by one random variable about another.
(iii). Intuitively, one might ask: how much does one random variable tell me about another?
• For example, suppose X represents the roll of a fair 6-sided die, and Y represents whether the roll is even (0 if even, 1 if odd). Clearly, the value of Y tells us something about the value of X and vice versa.
• That is, these variables share mutual information.
  • 26. Mutual information
(iv). On the other hand, if X represents the roll of one fair die, and Z represents the roll of another fair die, then X and Z share no mutual information.
(v). The roll of one die does not contain any information about the outcome of the other die.
(vi). An important theorem from information theory says that the mutual information between two variables is 0 if and only if the two variables are statistically independent.
  • 27. Mutual Information
(i). The mutual information of X and Y is defined by
I(X; Y) = Σx Σy p(x, y) log2 [ p(x, y) / (p(x) p(y)) ].
(ii). Note that the mutual information is symmetric in its arguments, that is, I(X; Y) = I(Y; X).
(iii). Mutual information is also non-negative.
  • 28. Mutual Information and Entropy
It follows from the definitions of entropy and mutual information that
I(X; Y) = H(X) − H(X | Y) = H(Y) − H(Y | X).
The mutual information is the reduction in the entropy of X when Y is known. (See the numerical sketch below.)
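A sketch checking this identity on the die/parity example of slide 25:

import math

def H(dist):
    # Entropy of a probability distribution given as a list.
    return -sum(p * math.log2(p) for p in dist if p > 0)

H_X = H([1 / 6] * 6)           # X: fair 6-sided die, H(X) = log2 6
H_X_given_Y = H([1 / 3] * 3)   # given the parity Y, X is uniform over 3 values
print(H_X - H_X_given_Y)       # I(X;Y) = H(X) - H(X|Y) = exactly 1 bit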
  • 29. Conditional Mutual Information
The conditional mutual information of X and Y given Z is defined by
I(X; Y | Z) = H(X | Z) − H(X | Y, Z).
  • 30. Chain Rule of Mutual Information
It can be shown from the definitions that the mutual information of (X, Y) and Z is the sum of the mutual information of X and Z and the conditional mutual information of Y and Z given X. That is,
I(X, Y; Z) = I(X; Z) + I(Y; Z | X).
  • 31. Chain Rule of Entropy
From the definition of entropy, it can be shown that for two random variables X and Y, the joint entropy is the sum of the entropy of X and the conditional entropy of Y given X:
H(X, Y) = H(X) + H(Y | X).
  • 32. Discrete Memoryless Source
(i). Suppose that a probabilistic experiment involves the observation of the output emitted by a discrete source during every unit of time (signaling interval).
(ii). The source output is modeled as a discrete random variable S, which takes on symbols from a fixed finite alphabet.
(iii). We assume that the symbols emitted by the source during successive signaling intervals are statistically independent.
(iv). A source having the properties just described is called a discrete memoryless source.
(v). If the source symbols occur with different probabilities, and the probability 'pk' is low, then there is more surprise, and therefore more information, when symbol 'sk' is emitted by the source.
  • 33. Lempel-Ziv algorithm
An innovative, radically different method was introduced in 1977 by Abraham Lempel and Jacob Ziv. This technique (called Lempel-Ziv) actually consists of two considerably different algorithms, LZ77 and LZ78.
LZ78 inserts one- or multi-character, non-overlapping, distinct patterns of the message to be encoded into a dictionary.
The multi-character patterns are of the form C0C1...Cn-1Cn. The prefix of a pattern consists of all the pattern characters except the last: C0C1...Cn-1.
  • 34. Lempel-Ziv algorithm
(i). Encode a string by finding the longest match anywhere within a window of past symbols; the string is then represented by a pointer to the location of the match within the window and the length of the match.
(ii). This algorithm is simple to implement and became popular as one of the early standard algorithms for file compression on computers because of its speed and efficiency.
(iii). It is also used for data compression in high-speed modems.
  • 35. Lempel-Ziv algorithm
• As mentioned earlier, static coding schemes require some knowledge about the data before encoding takes place.
• Universal coding schemes, like LZW, do not require advance knowledge and can build such knowledge on the fly.
• LZW is the foremost technique for general-purpose data compression due to its simplicity and versatility.
• It is the basis of many PC utilities that claim to 'double the capacity of your hard drive'.
• LZW compression uses a code table, with 4096 as a common choice for the number of table entries.
  • 36. Lempel-Ziv algorithm
• Codes 0-255 in the code table are always assigned to represent single bytes from the input file.
• When encoding begins, the code table contains only the first 256 entries, with the remainder of the table being blank.
• Compression is achieved by using codes 256 through 4095 to represent sequences of bytes.
• As the encoding continues, LZW identifies repeated sequences in the data and adds them to the code table.
• Decoding is achieved by taking each code from the compressed file and translating it through the code table to find what character or characters it represents.
  • 37. LZW compression algorithm (pseudocode; a runnable version follows below):
initialize table with single character strings
P = first input character
WHILE not end of input stream
    C = next input character
    IF P + C is in the string table
        P = P + C
    ELSE
        output the code for P
        add P + C to the string table
        P = C
END WHILE
output code for P
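A runnable Python version of this pseudocode (a sketch: single-byte alphabet, table growth left uncapped):

def lzw_compress(data):
    table = {chr(i): i for i in range(256)}   # codes 0-255: single characters
    next_code = 256
    p, out = "", []
    for c in data:
        if p + c in table:
            p = p + c                         # keep extending the match
        else:
            out.append(table[p])              # output the code for P
            table[p + c] = next_code          # add P + C to the string table
            next_code += 1
            p = c
    if p:
        out.append(table[p])                  # output code for the final P
    return out

print(lzw_compress("BABAABAAA"))              # [66, 65, 256, 257, 65, 260]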
  • 38. Example 1: Compression using LZW
Example 1: Use the LZW algorithm to compress the string BABAABAAA.
  • 39.–44. Example 1: LZW compression, steps 1-6.

Step  New string-table entry (string → code)   Encoder output (code, representing)
1     BA → 256                                 66  (B)
2     AB → 257                                 65  (A)
3     BAA → 258                                256 (BA)
4     ABA → 259                                257 (AB)
5     AA → 260                                 65  (A)
6     (none)                                   260 (AA)

Compressed output: <66> <65> <256> <257> <65> <260>.
  • 45. LZW Decompression
• The LZW decompressor creates the same string table during decompression.
• It starts with the first 256 table entries initialized to single characters.
• The string table is updated for each code in the input stream, except the first one.
• Decoding is achieved by reading codes and translating them through the code table being built.
  • 46. LZW Decompression Algorithm (pseudocode; a runnable version follows below):
initialize table with single character strings
OLD = first input code
output translation of OLD
WHILE not end of input stream
    NEW = next input code
    IF NEW is not in the string table
        S = translation of OLD
        S = S + C
    ELSE
        S = translation of NEW
    output S
    C = first character of S
    add translation of OLD + C to the string table
    OLD = NEW
END WHILE
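The matching Python sketch of the decompressor:

def lzw_decompress(codes):
    table = {i: chr(i) for i in range(256)}   # codes 0-255: single characters
    next_code = 256
    old = codes[0]
    out = table[old]
    for new in codes[1:]:
        if new not in table:                  # special case: code not yet defined
            s = table[old] + table[old][0]
        else:
            s = table[new]
        out += s
        c = s[0]                              # C = first character of S
        table[next_code] = table[old] + c     # add translation of OLD + C
        next_code += 1
        old = new
    return out

print(lzw_decompress([66, 65, 256, 257, 65, 260]))   # 'BABAABAAA'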
  • 47. Example 2: LZW Decompression
Example 2: Use LZW to decompress the output sequence of Example 1: <66> <65> <256> <257> <65> <260>.
  • 48.–52. Example 2: LZW decompression, steps 1-5.

Step  NEW code  Output S  New string-table entry (string → code)
init  66        B         (none)
1     65        A         BA → 256
2     256       BA        AB → 257
3     257       AB        BAA → 258
4     65        A         ABA → 259
5     260       AA        AA → 260  (special case: code 260 not yet in the table)

Decoded string: BABAABAAA.
  • 53. LZW Summary
• This algorithm compresses repetitive sequences of data well.
• Since the code words are 12 bits, any single encoded character will expand the data size rather than reduce it.
• In this example, 72 bits of input (9 characters × 8 bits) are represented with 72 bits of output (6 codes × 12 bits). After a reasonable string table is built, compression improves dramatically.
• Advantages of LZW over Huffman:
• LZW requires no prior information about the input data stream.
• LZW can compress the input stream in one single pass.
• Another advantage of LZW is its simplicity, allowing fast execution.
  • 54. LZW: Limitations
• What happens when the dictionary gets too large (i.e., when all the 4096 locations have been used)? Here are some options usually implemented:
• Simply stop adding entries and use the table as is.
• Throw the dictionary away when it reaches a certain size.
• Throw the dictionary away when it is no longer effective at compression.
• Clear entries 256-4095 and start building the dictionary again.
• Some clever schemes rebuild the string table from the last N input characters.
  • 55. Coding of analog sources
Three types of analog source encoding:
• Temporal waveform coding: designed to digitally represent the time-domain characteristics of the signal.
(i). Pulse-code modulation (PCM).
(ii). Differential pulse-code modulation (DPCM).
(iii). Delta modulation (DM).
• Spectral waveform coding: the signal waveform is subdivided into different frequency bands, and either the time waveform in each band or its spectral characteristics are encoded. Each subband can be encoded as a time-domain or a frequency-domain waveform.
• Model-based coding: based on a mathematical model of the source (the source is modeled as a linear system that results in the observed source output). Instead of transmitting samples of the source, the parameters of the linear system are transmitted with an appropriate excitation signal. If the parameter set is sufficiently small, this provides large compression.
  • 56. Rate–Distortion Theory
(i). In rate–distortion theory, the rate is usually understood as the number of bits per data sample to be stored or transmitted.
(ii). In the simplest case (which is actually used in most cases), the distortion is defined as the expected value of the square of the difference between the input and output signals (i.e., the mean squared error).
(iii). However, most lossy compression techniques operate on data that will be perceived by human consumers (listening to music, watching pictures and video).
(iv). The distortion measure should therefore preferably be modeled on human perception and perhaps aesthetics: much like the use of probability in lossless compression, distortion measures can ultimately be identified with loss functions as used in Bayesian estimation and decision theory.
  • 57. Rate–Distortion Theory
(v). In audio compression, perceptual models (and therefore perceptual distortion measures) are relatively well developed and routinely used in compression techniques such as MP3 or Vorbis, but they are often not easy to include in rate–distortion theory.
(vi). In image and video compression, the human perception models are less well developed, and their inclusion is mostly limited to the JPEG and MPEG weighting (quantization, normalization) matrices.
  • 58. Rate Distortion Function
• RATE: the number of bits per data sample to be stored or transmitted.
• DISTORTION: a measure of the difference between input and output, commonly
1. Hamming distance, or
2. squared error.
• Rate-distortion theory is the branch of information theory addressing the problem of determining the minimal amount of information that must be communicated over a channel so that the source can be reconstructed at the receiver within a given distortion.
  • 59. Rate Distortion Function (figure).
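As a concrete example of a rate-distortion function: a memoryless Gaussian source with variance σ² under mean-squared-error distortion has the standard closed form R(D) = ½ log2(σ²/D) for 0 < D ≤ σ² and R(D) = 0 otherwise (this classical result is quoted here as an illustration; it is not derived on these slides). A minimal sketch:

import math

def rate_distortion_gaussian(sigma2, D):
    # R(D) = 0.5 * log2(sigma^2 / D), zero once D reaches the source variance.
    if D >= sigma2:
        return 0.0                       # allowed distortion >= variance: send nothing
    return 0.5 * math.log2(sigma2 / D)

for D in (1.0, 0.5, 0.25, 0.1):
    print(D, rate_distortion_gaussian(1.0, D))   # rate in bits per sample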
  • 60. Variance of Input and Output Image, an example of distortion (figure).
  • 61. Quantization
• Quantization, in mathematics and digital signal processing, is the process of mapping a large set of input values to a smaller set, such as rounding values to some unit of precision.
• A device or algorithmic function that performs quantization is called a quantizer. The round-off error introduced by quantization is referred to as quantization error.
• In analog-to-digital conversion, the difference between the actual analog value and the quantized digital value is called quantization error or quantization distortion.
• This error is due either to rounding or to truncation.
  • 62. Quantization
• The error signal is sometimes considered as an additional random signal called quantization noise.
• Quantization is involved to some degree in nearly all digital signal processing, as the process of representing a signal in digital form ordinarily involves rounding.
• Quantization also forms the core of essentially all lossy compression algorithms.
• A quantizer describes the relation between the encoder input values and the decoder output values.
  • 63. Quantization (figure).
  • 64. Quantizer
• The design of the quantizer has a significant impact on the amount of compression obtained and the loss incurred in a lossy compression scheme.
• A quantizer is nothing but the combination of an encoder mapping and a decoder mapping.
(i). Encoder mapping:
– The encoder divides the range of the source into a number of intervals.
– Each interval is represented by a distinct codeword.
(ii). Decoder mapping:
– For each received codeword, the decoder generates a reconstruction value.
  • 65. Components of a Quantizer
1. Encoder mapping: divides the range of values that the source generates into a number of intervals.
2. Each interval is then mapped to a codeword. It is a many-to-one, irreversible mapping.
3. The codeword only identifies the interval, not the original value.
4. If the source or sample value comes from an analog source, the device is called an A/D converter.
  • 66. Mapping of a 3-bit Encoder
Input intervals along the axis −3.0 ... 3.0 are mapped to codes as follows:
x < −3.0 → 000; [−3.0, −2.0) → 001; [−2.0, −1.0) → 010; [−1.0, 0) → 011; [0, 1.0) → 100; [1.0, 2.0) → 101; [2.0, 3.0) → 110; x ≥ 3.0 → 111.
  • 67. Mapping of a 3-bit D/A Converter
Input code → Output:
000 → −3.5
001 → −2.5
010 → −1.5
011 → −0.5
100 → 0.5
101 → 1.5
110 → 2.5
111 → 3.5
  • 68. Components of a Quantizer
Decoder: given the codeword, the decoder gives an estimate of the value that the source might have generated.
Usually it is the midpoint of the interval, but a more accurate estimate will depend on the distribution of the values within the interval.
In estimating the value, the decoder may introduce some error.
  • 69. Digitizing a Sine Wave

t      4cos(2πt)   A/D output   D/A output   Error
0.05   3.8         111          3.5           0.3
0.10   3.2         111          3.5          −0.3
0.15   2.4         110          2.5          −0.1
0.20   1.2         101          1.5          −0.3
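A Python sketch reproducing this table with the 3-bit mapping of slides 66-67 (the sampling instants t = 0.05 ... 0.20 are inferred from the sample values of 4cos(2πt) shown above):

import math

def quantize_3bit(x):
    # Uniform 8-level quantizer: decision boundaries at -3, ..., 3,
    # outputs at interval midpoints -3.5, ..., 3.5 (clamped at the ends).
    level = max(0, min(7, int(math.floor(x)) + 4))
    return format(level, "03b"), level - 3.5

for t in (0.05, 0.10, 0.15, 0.20):
    x = 4 * math.cos(2 * math.pi * t)
    code, y = quantize_3bit(x)
    print(f"t={t:.2f}  x={x:.2f}  code={code}  output={y}  error={x - y:+.2f}")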
  • 70.–71. Step Encoder (figures).
  • 72. Scalar Quantization
• Many of the fundamental ideas of quantization and compression are easily introduced in the simple context of scalar quantization.
• An example: any real number x can be rounded off to the nearest integer, say q(x) = round(x).
• This maps the real line R (a continuous space) into a discrete space.
  • 73. Vector Quantization
• Vector quantization rule: vector quantization (VQ) of X may be viewed as the classification of the outcomes of X into a discrete number of sets or cells in N-space.
• Each cell is represented by a vector output Yj.
• Given a distance measure d(x, y), we have:
• VQ output: Q(X) = Yj iff d(X, Yj) < d(X, Yi) for all i ≠ j.
• Quantization region: Vj = { X : d(X, Yj) < d(X, Yi) for all i ≠ j }.
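A minimal sketch of this nearest-codevector rule with squared Euclidean distance; the small 2-D codebook below is an illustrative assumption:

import numpy as np

def vq_quantize(x, codebook):
    # Q(X) = Yj minimizing d(X, Yj) = ||X - Yj||^2 over the codebook.
    d = ((codebook - x) ** 2).sum(axis=1)
    j = int(np.argmin(d))
    return j, codebook[j]

Y = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
print(vq_quantize(np.array([0.2, 0.9]), Y))   # cell index 1, output [0., 1.]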
  • 74. Vector Quantization (figure).
  • 75. The Schematic of a Vector Quantizer (figure).
  • 76. BCH Codes
• BCH (Bose-Chaudhuri-Hocquenghem) codes form a large class of multiple random-error-correcting codes.
• They were first discovered by A. Hocquenghem in 1959 and independently by R. C. Bose and D. K. Ray-Chaudhuri in 1960.
• BCH codes are cyclic codes. Binary BCH codes are the most popular.
• The first decoding algorithm for binary BCH codes was devised by Peterson in 1960.
  • 77. BCH codes
• In coding theory, the BCH codes form a class of cyclic error-correcting codes that are constructed using finite fields.
• BCH codes were invented in 1959 by French mathematician Alexis Hocquenghem, and independently in 1960 by Raj Bose and D. K. Ray-Chaudhuri.
• The acronym BCH comprises the initials of these inventors' names.
• One of the key features of BCH codes is that during code design there is precise control over the number of symbol errors correctable by the code.
  • 78. BCH codes
• In particular, it is possible to design binary BCH codes that can correct multiple bit errors.
• Another advantage of BCH codes is the ease with which they can be decoded, namely via an algebraic method known as syndrome decoding.
• This simplifies the design of the decoder for these codes, allowing small, low-power electronic hardware.
• BCH codes are used in applications like satellite communications, compact disc players, DVDs, disk drives, solid-state drives and two-dimensional bar codes.
  • 79. Parameters of BCH codes
For any integers m ≥ 3 and t < 2^(m−1), there exists a binary BCH code with:
Block length: n = 2^m − 1
Number of parity-check bits: n − k ≤ mt
Minimum distance: dmin ≥ 2t + 1
• The code is t-error correcting.
• For example, for m = 6 and t = 3: n = 63, n − k ≤ 18 (so k ≥ 45), and dmin ≥ 7. This is a triple-error-correcting (63, 45) BCH code.
  • 80. Generator Polynomial of BCH Codes
• Let α be a primitive element in GF(2^m). For 1 ≤ i ≤ t, let m2i−1(X) be the minimal polynomial of the field element α^(2i−1).
• The generator polynomial g(X) of a t-error-correcting primitive BCH code of length 2^m − 1 is given by
g(X) = LCM( m1(X), m3(X), ..., m2t−1(X) ),
where LCM denotes the least common multiple.
• Note that the degree of g(X) is at most mt.
  • 81. BCH Encoding
• Let m(x) be the message polynomial to be encoded,
m(x) = m0 + m1 x + ... + mk−1 x^(k−1),
where mi ∈ GF(2).
• Dividing x^(n−k) m(x) by g(x), we have
x^(n−k) m(x) = q(x) g(x) + p(x),
where p(x) = p0 + p1 x + ... + pn−k−1 x^(n−k−1) is the remainder.
• Then u(x) = p(x) + x^(n−k) m(x) is the codeword polynomial for the message m(x). (A sketch of this encoding rule appears below.)
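A Python sketch of this systematic encoding rule using GF(2) polynomial division; the generator g(x) = 1 + x + x³ of the single-error-correcting (7, 4) BCH (Hamming) code is an illustrative choice:

def gf2_remainder(dividend, divisor):
    # Remainder of GF(2) polynomial division; coefficients lowest degree first.
    r = dividend[:]
    for i in range(len(r) - len(divisor), -1, -1):
        if r[i + len(divisor) - 1]:
            for j, g in enumerate(divisor):
                r[i + j] ^= g
    return r[:len(divisor) - 1]

def systematic_encode(m, g, n):
    # u(x) = p(x) + x^(n-k) m(x), with p(x) = remainder of x^(n-k) m(x) / g(x).
    k = len(m)
    shifted = [0] * (n - k) + m          # x^(n-k) * m(x)
    p = gf2_remainder(shifted, g)        # n-k parity bits
    return p + m                         # parity bits, then the message

g = [1, 1, 0, 1]                         # g(x) = 1 + x + x^3
print(systematic_encode([1, 0, 1, 1], g, 7))   # [1, 0, 0, 1, 0, 1, 1]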
  • 82. Decoding of BCH codes
• Consider a BCH code with n = 2^m − 1 and generator polynomial g(x).
• Suppose a code polynomial
v(x) = v0 + v1 x + ... + vn−1 x^(n−1)
is transmitted, and let
r(x) = r0 + r1 x + ... + rn−1 x^(n−1)
be the received polynomial.
  • 83. Reed–Solomon (RS) codes
• In coding theory, Reed–Solomon (RS) codes are non-binary cyclic error-correcting codes invented by Irving S. Reed and Gustave Solomon.
• They described a systematic way of building codes that can detect and correct multiple random symbol errors.
• By adding t check symbols to the data, an RS code can detect any combination of up to t erroneous symbols, or correct up to ⌊t/2⌋ symbols.
• As an erasure code, it can correct up to t known erasures, or it can detect and correct combinations of errors and erasures.
  • 84. Reed–Solomon (RS) codes
• Furthermore, RS codes are suitable as multiple-burst bit-error-correcting codes, since a sequence of b + 1 consecutive bit errors can affect at most two symbols of size b.
• The choice of t is up to the designer of the code and may be selected within wide limits.
• In Reed–Solomon coding, source symbols are viewed as coefficients of a polynomial p(x) over a finite field.
• The original idea was to create n code symbols from k source symbols by oversampling p(x) at n > k distinct points, transmit the sampled points, and use interpolation techniques at the receiver to recover the original message.
  • 85. Reed–Solomon (RS) codes
• That is not how RS codes are used today. Instead, RS codes are viewed as cyclic BCH codes, where encoding symbols are derived from the coefficients of a polynomial constructed by multiplying p(x) with a cyclic generator polynomial.
• This gives rise to efficient decoding algorithms.
• Reed–Solomon codes have since found important applications from deep-space communication to consumer electronics.
• They are prominently used in consumer electronics such as CDs, DVDs and Blu-ray Discs, in data transmission technologies such as DSL and WiMAX, in broadcast systems such as DVB and ATSC, and in computer applications such as RAID 6 systems.
  • 86. Motivation for RS Soft-Decision Decoding
• A hard-decision decoder does not fully exploit the decoding capability of the code.
• Efficient soft-decision decoding of RS codes remains an open problem.
• A soft-input soft-output (SISO) algorithm is favorable.
(Figure: RS-coded turbo equalization system: source, RS encoder, interleaver Π, PR encoder, AWGN channel, then channel equalizer and RS decoder exchanging a priori and extrinsic information through the interleaver Π and de-interleaver Π⁻¹, hard decision, sink.)
  • 87. Applications
• Data storage
• Bar codes
• Data transmission
• Space transmission
  • 88. Reed-Muller Codes
• Reed-Muller codes are some of the oldest error-correcting codes.
• Error-correcting codes are very useful for sending information over long distances or through channels where errors might occur in the message.
• They have become more prevalent as telecommunications have expanded and developed a use for codes that can self-correct.
• Reed-Muller codes were invented in 1954 by D. E. Muller and I. S. Reed.
• In 1972, a Reed-Muller code was used by Mariner 9 to transmit black-and-white photographs of Mars.
• Reed-Muller codes are relatively easy to decode.
  • 89. Reed-Muller Codes
Decoding Reed-Muller:
• Decoding Reed-Muller encoded messages is more complex than encoding them.
• The theory behind encoding and decoding is based on the distance between vectors.
• The distance between any two vectors is the number of places in which the two vectors have different values.
• The basis for Reed-Muller decoding is the assumption that the closest codeword in R(r, m) to the received message is the original encoded message.
  • 90. Reed-Muller Codes
• The vector spaces used here consist of strings of length 2^m, where m is a positive integer, of numbers in
F2 = {0, 1}.
• The codewords of a Reed-Muller code form a subspace of such a space.
• Vectors can be manipulated by three main operations: addition, multiplication, and the dot product.
  • 91. Reed-Muller Codes
• For two vectors x = (x1, x2, ..., xn) and y = (y1, y2, ..., yn), addition is defined by
x + y = (x1 + y1, x2 + y2, ..., xn + yn),
where each xi or yi is either 1 or 0, and
1 + 1 = 0; 0 + 1 = 1; 1 + 0 = 1; 0 + 0 = 0.
• For example, if x = (10011110) and y = (11100001), then the sum of x and y is
x + y = (10011110) + (11100001) = (01111111).
  • 92. Convolution Codes (figure).
  • 93. Convolution codes
• In telecommunication, a convolutional code is a type of error-correcting code in which each m-bit information symbol (each m-bit string) to be encoded is transformed into an n-bit symbol, where m/n is the code rate (n ≥ m).
• The transformation is a function of the last k information symbols, where k is the constraint length of the code.
• Convolutional codes are used extensively in numerous applications in order to achieve reliable data transfer, including digital video, radio, mobile communication, and satellite communication.
• These codes are often implemented in concatenation with a hard-decision code, particularly Reed-Solomon.
• Prior to turbo codes, such constructions were the most efficient, coming closest to the Shannon limit.
  • 94. Convolution codes
• Convolutional encoding:
• To convolutionally encode data, start with k memory registers, each holding 1 input bit. Unless otherwise specified, all memory registers start with a value of 0.
• The encoder has n modulo-2 adders (a modulo-2 adder can be implemented with a single Boolean XOR gate, where the logic is: 0+0 = 0, 0+1 = 1, 1+0 = 1, 1+1 = 0), and n generator polynomials, one for each adder (see the figure below).
• An input bit m1 is fed into the leftmost register. Using the generator polynomials and the existing values in the remaining registers, the encoder outputs n bits.
  • 95. Convolution codes
• Now bit-shift all register values to the right (m1 moves to m0, m0 moves to m−1) and wait for the next input bit.
• If there are no remaining input bits, the encoder continues to output bits until all registers have returned to the zero state.
• The figure below shows a rate-1/3 (m/n) encoder with constraint length (k) of 3. The generator polynomials are G1 = (1,1,1), G2 = (0,1,1), and G3 = (1,0,1). Therefore, the output bits are calculated (modulo 2) as follows:
n1 = m1 + m0 + m−1
n2 = m0 + m−1
n3 = m1 + m−1
(A sketch of this encoder in code follows below.)
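A Python sketch of this rate-1/3 encoder, including the flushing zeros that return the registers to the zero state:

def conv_encode(bits):
    # Taps on (m1, m0, m-1): G1=(1,1,1), G2=(0,1,1), G3=(1,0,1).
    generators = ((1, 1, 1), (0, 1, 1), (1, 0, 1))
    m0 = m_1 = 0                                  # registers start at 0
    out = []
    for m1 in bits + [0, 0]:                      # two zeros flush the encoder
        state = (m1, m0, m_1)
        for gen in generators:
            out.append(sum(s & g for s, g in zip(state, gen)) % 2)
        m0, m_1 = m1, m0                          # shift right
    return out

print(conv_encode([1, 0, 1]))
# [1,0,1, 1,1,0, 0,1,0, 1,1,0, 1,1,1]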
  • 96. Convolution codes (figure: the rate-1/3 encoder).
  • 97. Convolutional codes
• Convolutional codes map information to code bits sequentially by convolving a sequence of information bits with 'generator' sequences.
• A convolutional encoder encodes K information bits into N > K code bits at each time step.
• Convolutional codes can be regarded as block codes for which the encoder has a certain structure such that the encoding operation can be expressed as a convolution.
  • 98. Convolutional codes
• The convolutional code is linear.
• Code bits generated at time step i are affected by information bits up to M time steps back, at i − 1, i − 2, ..., i − M. M is the maximal delay of information bits in the encoder.
• Code memory is the (minimal) number of registers needed to construct an encoding circuit for the code.
• Constraint length is the overall number of information bits affecting code bits generated at time step i: constraint length = code memory + K = MK + K = (M + 1)K.
• A convolutional code is systematic if the N code bits generated at time step i contain the K information bits.
  • 99. Modified State Diagram
A path from state (00) back to (00) is denoted by D^i L^j N^k, where i is the Hamming weight of the path, j its length, and k the number of information 1's on it.
  • 100. Transfer Function
• The transfer function T(D, L, N) of this encoder is
T(D, L, N) = D^5 L^3 N / (1 − D N L (1 + L)).
  • 101. Transfer Function
• The distance properties and the error-rate performance of a convolutional code can be obtained from its transfer function.
• Since a convolutional code is linear, the set of Hamming distances of the code sequences generated up to some stage in the trellis, measured from the all-zero code sequence, is the same as the set of distances of the code sequences with respect to any other code sequence.
• Thus, we may assume that the all-zero path is the input to the encoder.
  • 103. Transfer Function
• Performing long division:
T(D, L, N) = D^5 L^3 N + D^6 L^4 N^2 + D^6 L^5 N^2 + D^7 L^5 N^3 + ...
• If we are interested only in the Hamming distance property of the code, set N = 1 and L = 1 to get the distance transfer function:
T(D) = D^5 + 2D^6 + 4D^7 + ...
There is one code sequence of weight 5; therefore dfree = 5. There are two code sequences of weight 6, four code sequences of weight 7, and so on.
  • 104. Performance
• The event error probability is defined as the probability that the decoder selects a code sequence that was not transmitted.
• For two codewords at Hamming distance d (on a BSC with crossover probability p, d odd), the pairwise error probability is
PEP(d) = Σ_{i=(d+1)/2}^{d} C(d, i) p^i (1 − p)^(d−i) ≤ [4p(1 − p)]^(d/2) = [2√(p(1 − p))]^d.
• The upper bound on the event error probability is given by
P_event ≤ Σ_{d ≥ dfree} A(d) PEP(d),
where A(d) is the number of code sequences at distance d from the correct path.
  • 105. Performance
• Using T(D, L, N), we can formulate this as
P_event ≤ T(D, L, N) evaluated at L = 1, N = 1, D = 2√(p(1 − p)).
• The bit error rate (not probability) can be written as
P_bit ≤ dT(D, L, N)/dN evaluated at L = 1, N = 1, D = 2√(p(1 − p)).
  • 106. Viterbi Decoding Algorithm
• The Viterbi algorithm is a standard component of tens of millions of high-speed modems. It is a key building block of modern information infrastructure.
• The symbol 'VA' is ubiquitous in the block diagrams of modern receivers.
• Essentially, the VA finds a path through any Markov graph, i.e., a sequence of states governed by a Markov chain.
• It has many practical applications:
– convolutional decoding and channel trellis decoding,
– fading communication channels,
– partial response channels in recording systems,
– optical character recognition,
– voice recognition,
– DNA sequence analysis,
– etc.
  • 107. Viterbi Decoding Algorithm
• A Viterbi decoder uses the Viterbi algorithm for decoding a bit stream that has been encoded using a convolutional code.
• There are other algorithms for decoding a convolutionally encoded stream (for example, the Fano algorithm).
• The Viterbi algorithm is the most resource-consuming, but it performs maximum-likelihood decoding.
• It is most often used for decoding convolutional codes with constraint lengths k ≤ 10, but values up to k = 15 are used in practice.
• Viterbi decoding was developed by Andrew J. Viterbi and published in the paper 'Error Bounds for Convolutional Codes and an Asymptotically Optimum Decoding Algorithm', IEEE Transactions on Information Theory, Volume IT-13, pages 260-269, April 1967. (A sketch of hard-decision Viterbi decoding follows below.)
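A minimal hard-decision Viterbi sketch for the rate-1/3, K = 3 encoder of slide 95 (Hamming-distance branch metric and explicit survivor-path lists instead of traceback; practical decoders differ in many details, e.g. soft decisions):

def viterbi_decode(received, n_info_bits):
    gens = ((1, 1, 1), (0, 1, 1), (1, 0, 1))
    def outputs(m1, state):
        m0, m_1 = state
        return [(m1 & g[0]) ^ (m0 & g[1]) ^ (m_1 & g[2]) for g in gens]
    states = [(a, b) for a in (0, 1) for b in (0, 1)]      # (m0, m-1)
    metric = {s: (0 if s == (0, 0) else float("inf")) for s in states}
    paths = {s: [] for s in states}
    for t in range(len(received) // 3):
        r = received[3 * t: 3 * t + 3]
        new_metric = {s: float("inf") for s in states}
        new_paths = {}
        for s in states:
            if metric[s] == float("inf"):
                continue                                   # state not reachable yet
            for m1 in (0, 1):
                nxt = (m1, s[0])                           # register shift
                d = sum(a ^ b for a, b in zip(outputs(m1, s), r))
                if metric[s] + d < new_metric[nxt]:        # keep the survivor
                    new_metric[nxt] = metric[s] + d
                    new_paths[nxt] = paths[s] + [m1]
        metric, paths = new_metric, new_paths
    return paths[(0, 0)][:n_info_bits]    # encoder was flushed back to state (0,0)

coded = [1,0,1, 1,1,0, 0,1,0, 1,1,0, 1,1,1]   # conv_encode([1, 0, 1]) from above
coded[4] ^= 1                                  # inject one channel error
print(viterbi_decode(coded, 3))                # [1, 0, 1]: the error is corrected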
• 108. Viterbi Decoding Algorithm
• Applications: the Viterbi decoding algorithm is widely used in the following areas:
(i). Radio communication: digital TV (ATSC, QAM, DVB-T, etc.), radio relay, satellite communications, the PSK31 digital mode for amateur radio.
(ii). Decoding trellis-coded modulation (TCM), the technique used in telephone-line modems to squeeze high spectral efficiency out of 3 kHz-bandwidth analog telephone lines.
(iii). Computer storage devices such as hard disk drives.
(iv). Automatic speech recognition.
DCT Notes By, Er. Swapnil Kaware
• 109. Viterbi Decoding
• Viterbi decoding is one of two types of decoding algorithms used with convolutional encoding; the other type is sequential decoding.
• Sequential decoding has the advantage that it can perform very well with long-constraint-length convolutional codes, but it has a variable decoding time.
• A discussion of sequential decoding algorithms is beyond the scope of this tutorial; the reader can find sources discussing this topic in the Books about Forward Error Correction section of the bibliography.
• Viterbi decoding has the advantage of a fixed decoding time. It is well suited to hardware decoder implementation.
DCT Notes By, Er. Swapnil Kaware
• 110. Viterbi Decoding
• But its computational requirements grow exponentially as a function of the constraint length, so it is usually limited in practice to constraint lengths of K = 9 or less.
• Stanford Telecom produces a K = 9 Viterbi decoder that operates at rates up to 96 kbps, and a K = 7 Viterbi decoder that operates at up to 45 Mbps.
• Advanced Wireless Technologies offers a K = 9 Viterbi decoder that operates at rates up to 2 Mbps.
• NTT has announced a Viterbi decoder that operates at 60 Mbps, but its commercial availability is unclear.
• Moore's Law applies to Viterbi decoders as well as to microprocessors, so consider the rates mentioned above as a snapshot of the state of the art taken in early 1999.
DCT Notes By, Er. Swapnil Kaware
• 111. Trellis Coded Modulation
• In telecommunication, trellis modulation is also known as trellis-coded modulation, or simply TCM.
• TCM is a modulation scheme that allows highly efficient transmission of information over band-limited channels such as telephone lines.
• Trellis modulation was invented by Gottfried Ungerboeck, who was working for IBM in the 1970s.
DCT Notes By, Er. Swapnil Kaware
• 112. Trellis Coded Modulation
• The name trellis was coined because a state diagram of the technique, when drawn on paper, closely resembles the trellis lattice used in rose gardens.
• The scheme is basically a convolutional code of rate r/(r + 1).
• Ungerboeck's unique contribution was to apply the parity check on a per-symbol basis, instead of the older technique of applying it to the bit stream and then modulating the bits.
• He termed the key idea Mapping by Set Partitions.
DCT Notes By, Er. Swapnil Kaware
• 113. Trellis Coded Modulation
• The idea is to group the symbols in a tree-like fashion, then separate them into two limbs of equal size.
• At each level of the tree, the symbols within a limb are farther apart.
• Although hard to visualize in multiple dimensions, a simple one-dimensional example illustrates the basic procedure.
• Suppose the symbols are located at [1, 2, 3, 4, …]. Then take all the odd symbols and place them in one group, and the even symbols in the second group.
DCT Notes By, Er. Swapnil Kaware
• 114. Trellis Coded Modulation
• This is not quite accurate, because Ungerboeck was looking at the two-dimensional problem, but the principle is the same: take every other symbol for each group and repeat the procedure for each tree limb (see the sketch below).
• He next described a method of assigning the encoded bit stream to the symbols in a very systematic procedure. Once this procedure was fully described, his next step was to program the algorithms into a computer and let the computer search for the best codes.
• The results were astonishing. Even the simplest code (4 states) produced error rates nearly one one-thousandth of those of an equivalent uncoded system.
• For two years Ungerboeck kept these results private and only conveyed them to close colleagues.
• Finally, in 1982, Ungerboeck published a paper describing the principles of trellis modulation.
DCT Notes By, Er. Swapnil Kaware
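The one-dimensional partitioning described above is easy to sketch in Python (an illustrative toy; the symbol set [1..8] is an assumption): each split keeps every other symbol, and the minimum distance within a subset doubles at every level of the tree.

def partition(symbols, depth=0):
    """Recursively split a symbol set Ungerboeck-style and report distances."""
    gaps = [b - a for a, b in zip(symbols, symbols[1:])]
    print("  " * depth, symbols, "min distance:", min(gaps) if gaps else "-")
    if len(symbols) > 2:
        partition(symbols[0::2], depth + 1)   # odd-position symbols
        partition(symbols[1::2], depth + 1)   # even-position symbols

partition(list(range(1, 9)))   # minimum distance grows 1 -> 2 -> 4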
• 115. References
1. K. Sam Shanmugam, "Digital and Analog Communication Systems", John Wiley.
2. Simon Haykin, "An Introduction to Analog and Digital Communication", John Wiley.
3. Bernard Sklar, "Digital Communication: Fundamentals and Applications", Pearson Education.
4. H. P. Hsu, "Analog and Digital Communications", Tata McGraw-Hill, 2nd edition.
DCT Notes By, Er. Swapnil Kaware
  • 116. Thank You!!!!! Have A Nice Day!!!!! DCT Notes By, Er. Swapnil Kaware (svkaware@yahoo.co.in)