Information Theory and Coding
(CSE2013)
Presented By:
Dr. Neeraj Kumar Misra
Associate Professor
Department of SENSE,
VIT-AP University
Google Scholar: https://scholar.google.co.in/citations?user=_V5Af5kAAAAJ&hl=en
Research Gate profile: https://www.researchgate.net/profile/Neeraj_Kumar_Misra
ORCHID ID: https://orcid.org/0000-0002-7907-0276
Introduction: Dr. Neeraj Kumar Misra, Associate Professor, SENSE
 Full-time Ph.D. (under TEQIP-II, a World Bank funded fellowship, Govt. of India)
 Post Graduate Diploma in Artificial Intelligence and Machine Learning from NIT Warangal
 B.Tech and M.Tech, first class with distinction
 Three international patents granted (validity 2028),
Patent nos. 2021105083, 2020103849, 2020102068
 9+ years of teaching and industry experience
 Published in a journal with an impact factor of 8.907 (highest)
 Launched two special issues as Editor of an MDPI journal in the SCIE category
 Two-time Young Achiever Award winner in the domain of Quantum Computing
 14 publications in SCI-indexed journals
 Three international books (ISBN: 978-3-659-34118-2, ISBN: 978-613-9-91317-6, ISBN: 978-620-2-01508-0)
 Google Scholar citations 325, i10-index 15 and h-index 11
 Completed funded project worth 18 Lakh
 Professional member: IEEE, ACM, ISTE, IETE, WRC, InSc, IAENG, IRED
 Editorial board member: ACTA Scientific, JCHE, IEAC, JEEE, IJCS
 Students guided: M.Tech: 10, Ph.D.: 03
 Most cited paper of the year 2019
Recommended Books
Text Books
1. R. Togneri, C. J. S. deSilva, "Fundamentals of Information Theory and Coding Design", 1e, CRC Press, Imprint: Taylor and Francis, 2003.
2. R. Bose, "Information Theory, Coding and Cryptography", 3e paperback, Tata McGraw Hill, 2016.
3. T. M. Cover, J. A. Thomas, "Elements of Information Theory", 2e, Wiley, 2008.
References
1. R. J. McEliece, “The Theory of Information and Coding”, Cambridge University Press,
2004.
2. S. Roman, “Coding and Information Theory”, Springer, 1997.
Claude Shannon: Father of Digital Communications
"Claude Shannon's creation in the 1940's of the subject of information theory is arguably one of the great intellectual achievements of the twentieth century"
Bell Labs, Computing and Mathematical Sciences Research
Q. Comment on the information content of the following messages:
1. Tomorrow the sun will rise from the east.
2. It will snow in Amaravati this winter.
3. The phone will ring in the next one hour.
Ans. 1. The first statement does not carry any information, since it is certain that the sun always rises from the east. The probability of occurrence of the first event is one (a sure event), hence it carries negligible information:
Ik = log(1/pk) = log(1/1) = 0
2. Snowfall in Amaravati in winter is very rare, so the probability of occurrence of this event is very small and it carries a large amount of information.
3. The third statement predicts the phone ringing within a span of one hour. It does not mention the exact time, but the one-hour span is given, hence it carries moderate information.
Numerical Problem
Information Theory
and
Coding
Measure of Information
Quick Revision
Entropy Concept
 Entropy means the degree of randomness: a higher level of disorder gives a higher value of entropy.
For 0 < P < 1, log(1/P) > 0,
so H(x) > 0.
 The entropy of a sure (P = 1) or impossible (P = 0) event is zero.
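As a quick illustration (a minimal Python sketch, not part of the original slides), the definition H = Σ P log2(1/P) can be checked directly:

import math

def entropy(probs):
    # H = sum p * log2(1/p); terms with p = 0 contribute nothing
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

print(entropy([1.0]))        # sure event -> 0.0
print(entropy([0.5, 0.5]))   # binary source, maximum entropy -> 1.0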
Q1. In a binary system, '0' occurs with probability 1/4 and '1' occurs with probability 3/4. Calculate the amount of information conveyed by each bit.
Ans. I1 = 2 bits, I2 = 0.415 bits
Q2. If there are M equally likely and independent messages, prove that the amount of information carried by each message is I = N bits, where M = 2^N and N is an integer.
Q3. Prove the following statement:
"If the receiver knows the message being transmitted, then the amount of information carried is zero."
Q4. If I1 is the information carried by message m1 and I2 is the information carried by message m2, prove that the amount of information carried compositely by m1 and m2 is I1,2 = I1 + I2.
Numerical Problem
Coding Theory
Q. A source emits one of 4 possible symbols, X0 to X3, during each signalling interval. The symbols occur with the probabilities given in the table below.
Find the amount of information gained by observing the source emitting each of these symbols, and also find the entropy of the source.
Ans. I0 = 1.322 bits, I1 = 1.737 bits, I2 = 2.322 bits, I3 = 3.322 bits
H(x) = 1.84 bits/msg symbol
Symbol  Probability
X0      P0 = 0.4
X1      P1 = 0.3
X2      P2 = 0.2
X3      P3 = 0.1
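A short Python check of the quoted answers (it simply evaluates Ik = log2(1/pk) and H = Σ pk log2(1/pk) for the table above):

import math

probs = {"X0": 0.4, "X1": 0.3, "X2": 0.2, "X3": 0.1}

for sym, p in probs.items():
    # information of each symbol, I_k = log2(1/p_k)
    print(sym, round(math.log2(1 / p), 3), "bits")

H = sum(p * math.log2(1 / p) for p in probs.values())
print("H =", round(H, 3), "bits/msg symbol")   # 1.846; the slide rounds to 1.84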
 It is the rate at which information is generated.
R = r · H
where r = rate at which messages are generated
H = entropy or average information
R = (r in messages/second) × (H in information bits/message)
 Unit of information rate: information bits/second
R represents the average number of information bits per second.
Information Rate (R)
Q1. The entropy of sure event is
a. Zero
b. One
c. Infinity
d. None
Q2. The entropy of impossible event is
a. Zero
b. One
c. Infinity
d. None
Q3. Entropy is maximum when probability
a. P = 0
b. P = 1
c. P = ∞
d. P = 1/2
Q4. How to define the information rate
a. R = r/H
b. R = H/r
c. R = r. H
d. None
Solution of Quiz
Q5. If the bandwidth is B then what is the Nyquist rate
a. B/2
b. B
c. B/4
d. 2B
Q6. Calculate the information when P=1/4
a. 4 bits
b. 3 bits
c. 1 bits
d. 2 bits
Fill in the blank
Q1. Entropy means____Degree of randomness__________
Q2. Entropy is maximum when probability_____p = 1/2____________
Q3. Unit of information______bits_________
Q4. Probability of occurrence is less then information is______more__________
Q1. a) An analog signal band-limited to B Hz is sampled at the Nyquist rate. The samples are quantized into 4 levels, and each level represents one message, so there are 4 messages. The probabilities of occurrence of these 4 levels are P1 = P4 = 1/8 and P2 = P3 = 3/8. Find the information rate of the source.
b) For the same transmission scheme as in part (a), calculate the information rate if all messages are equally likely.
Ans. a) R = 3.6B bits/sec, b) R = 4B bits/sec
Numerical Problem
Q. A source emits m messages [m1, m2, m3, ...] with probabilities [p1, p2, p3, ...]. Calculate the entropy when all the messages are equi-probable.
Q. A source produces 4 symbols with probabilities 1/2, 1/4, 1/8 and 1/8. For this source, a practical coding scheme has an average code-word length of 2 bits/symbol. Calculate the efficiency of the code.
Numerical Problem
Numerical Problem
Q. For a discrete memoryless source there are 3 symbols with probabilities P1 = α and P2 = P3. Determine the entropy of the source and sketch its variation for different values of α.
Q. Find the entropy, in nats/symbol and Hartleys/symbol, of a source that emits one of four symbols A, B, C, D in a statistically independent sequence with probabilities 1/2, 1/4, 1/8, and 1/8.
Ans. H(s) = 1.2135 nats/symbol, 0.5268 Hartleys/symbol
Q. A source transmits two independent messages with probabilities P and (1 - P) respectively. Prove that the entropy is maximum when both messages are equally likely. Plot the entropy of the source versus probability (0 < P < 1).
Numerical Problem
Numerical Problem
Q. Consider a telegraph source having two elements, dot and dash. The dot duration is 0.2 sec and the dash duration is 3 times the dot duration. The probability of the dot occurring is twice that of the dash, and the time between symbols is 0.2 sec. Assume 1200 symbols. Calculate the information rate of the telegraph source.
Ans. R=1.7218 bit/sec
Q1. An event has two possible outcomes with probability P1 = 1/2 and P2 = 1/64 . The rate
of information with 16 outcomes per second is:
a. 38/4 bit/sec
b. 38/64 bit/sec
c. 38/2 bit/sec
d. 38/32 bit/sec
Q2. For a system having 16 distinct symbols maximum entropy is obtained when
probabilities are
a. 1/8
b. 1/4
c. 1/3
d. 1/16
Q3. An analog signal band-limited to 10 kHz is quantized into 8 levels with probabilities 1/4, 1/4, 1/5, 1/5, 1/10, 1/10, 1/20, and 1/20. Find the entropy and the rate of information.
a. 2.84 bits/message, 56800 bits/sec (Ans.)
b. 3.42 bits/message, 6.823 bits/sec
c. 1.324 bits/message, 2.768 bits/sec
d. 4.567 bits/message, 8.653 bits/sec
Quiz
Q1. A binary random variable X takes the value +2 or -2. The probability P(X = +2) = α.
The value of α for which the entropy of X is maximum is
a. 0
b. 1
c. 2
d. 0.5 (Ans.)
Q2. Discrete source S1 has 4 equi-probable symbols while discrete source S2 has 16 equi-
probable symbols. When the entropy of these two source is compared the entropy of:
a. S1 is greater than S2
b. S1 is less than S2 (Ans.)
c. S1 is equal to S2
d. Depends on rate of symbols/second
Quiz 3
Q3. If all the message are equally likely then entropy will be
a. Minimum
b. Zero
c. Maximum
d. None
Q4. In a binary source 0s occur three times as often as 1s. What is the information
contained in the 1s?
a. 0.415 bit
b. 0.333 bit
c. 3 bit
d. 2 bit
Fill in the blank
Q5. What is the unit of entropy -------
Q6. What is the unit of Information rate-------
Extension of Zero Order Source
 Two symbols S1 and S2 with probabilities P1 and P2
 Second-order extension of the source: number of combinations = (number of source symbols)^(order of extension) = 2^2 = 4 possible combinations
 S1 S1 : P1 · P1 = P1^2
 S1 S2 : P1 · P2
 S2 S1 : P2 · P1
 S2 S2 : P2 · P2 = P2^2
Numerical Problem
Q Consider a discrete memory-less source with alphabet S={S0, S1, S2}
with source statistics {0.7, 0.15, 0.15}
a) Calculate the entropy of source
b) Calculate the entropy of the 2nd order extension of the source
Q. A source generates information with probabilities P1 = 0.1, P2 = 0.2, P3 = 0.3 and P4 = 0.4. Find the entropy of the system. What percentage of the maximum possible information is being generated by the source?
Q1. An event has two possible outcomes with probabilities P1 = 1/2 and P2 = 1/64. Calculate the rate of information with 16 outcomes per second.
A. 24/3
B. 64/6
C. 38/4 (Ans.)
D. None
Q2. Entropy is always
a. Negative
b. Fractional
c. Positive
d. None
Q3. How to define the efficiency of the channel
a. η = H(s) × |H(s)|max
b. η = H(s) + |H(s)|max
c. η = H(s) - |H(s)|max
d. η = H(s) / |H(s)|max (Ans.)
Q4. How to define the redundancy (R) of the channel
a. R = 1 + η
b. R = 1 / η
c. R = 1 × η
d. R = 1 - η
Q1. When the base of the logarithm is 2, then the unit of measure of information is
a) Bits
b) Bytes
c) Nats
d) None of the mentioned
Q2. When probability of error during transmission is 0.5, it indicates that
a) Channel is very noisy
b) No information is received
c) Channel is very noisy & No information is received
d) None of the mentioned
Q4. When the base of the logarithm is e, the unit of measure of information is
a) Bits
b) Bytes
c) Nats
d) None of the mentioned
Q5. A binary source emits an independent sequence of 0's and 1's with probabilities p and (1 - p) respectively. For which value of p is the entropy maximum?
a) p = 0
b) p = 1
c) p = 0.5
d) p = infinity
Q6. An image uses 512×512 picture elements. Each picture element can take any of the 4 distinguishable intensity levels. Calculate the maximum number of bits needed.
a) 512×512×3
b) 512×512×4
c) 512×512×2
d) 512×512×1
Numerical Problem
Q. Consider a system emitting three symbols A, B and C with respective probabilities 0.7, 0.15 and 0.15. Calculate the efficiency and redundancy.
Ans. Efficiency = 74.52%, Redundancy = 25.48%
Q. Prove that the nth-order extension of entropy is represented in this form:
H(S^n) = n · H(S)
Q3. An image uses 512×512 picture elements. Each picture element can take any of the 8 distinguishable intensity levels. Calculate the maximum number of bits needed.
Ans. 786432 bits
Q. An image uses 512×512 picture elements. Each picture element can take any of the 4 distinguishable intensity levels. Calculate the maximum number of bits needed.
Joint Probability P(x, y)
 Probability that (x, y) simultaneously occur
Where x and y represent the random variable
Conditional Probability P(x/y)
P(x/y)= represent the conditional probability (Probability of x when y is known)
P(x, y)= P(x/y). P(y)
Or
P(x, y)= P (y/x). P(x)
If x and y are independent
Then P(x, y)= P(x). P(y)
Average uncertainty of the channel input: H(X); average uncertainty of the channel output: H(Y)
Joint Entropy: H(X, Y) = ∑∑ P(x, y) log 1/P(x, y)
Conditional Entropy: H(Y/X) = ∑∑ P(x, y) log 1/P(y/x) and H(X/Y) = ∑∑ P(x, y) log 1/P(x/y)
 H(Y/X) is the average uncertainty of the channel output given that X was transmitted
 H(X/Y) is the average uncertainty of the channel input given that Y was received
Important Property
Mutual Information
 It is defined as the amount of information transferred when xi is transmitted and yi is received:
I(xi; yi) = log [ P(xi/yi) / P(xi) ]
Average Mutual Information
 The amount of source information gained per received symbol:
I(X; Y) = ∑∑ P(xi, yi) I(xi; yi)
Important Properties of Mutual Information
1. Symmetry: I(X; Y) = I(Y; X)
2. Non-negativity: I(X; Y) ≥ 0
3. In terms of entropies: I(X; Y) = H(X) - H(X/Y) = H(Y) - H(Y/X)
4. Relation to joint entropy: I(X; Y) = H(X) + H(Y) - H(X, Y)
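The identities above can be verified numerically; the sketch below (illustrative only, with a made-up joint distribution) computes I(X; Y) = H(X) + H(Y) - H(X, Y):

import math

def H(probs):
    # entropy in bits of a list of probabilities
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

# a hypothetical joint distribution P(x, y); rows are x, columns are y
Pxy = [[0.4, 0.1],
       [0.1, 0.4]]

Px = [sum(row) for row in Pxy]            # marginal P(x)
Py = [sum(col) for col in zip(*Pxy)]      # marginal P(y)
Hxy = H([p for row in Pxy for p in row])  # joint entropy H(X, Y)

I = H(Px) + H(Py) - Hxy                   # mutual information
print(round(I, 4), "bits")                # positive, and symmetric in X and Y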
Q1. Which is the correct relationship between Joint entropy and conditional
entropy?
A) H(X, Y) = H(X/Y) + H(Y)
B) H(X, Y) = H(Y/X) + H(X)
C) Both a and b
D) None
Q2. Which of the following is NOT representative of the mutual information?
A) I(X; Y) = H(X) - H(X/ Y)
B) I(X; Y) = I(Y; X)
C) I(X; Y) = H(Y) - H(X/ Y)
D) I(X; Y) = H(X) + H(Y) - H(X, Y)
Quiz
Q3. For which value(s) of p is the binary entropy function H(p) maximized?
A) 0
B) 0.5
C) 1
D) 1.2
Q4. When the events are independent, which relation is correct?
A) P(x, y)= P(x) + P(y)
B) P(x, y)= P(x) - P(y)
C) P(x, y)= P(x) / P(y)
D) P(x, y)= P(x). P(y)
Q5. How to represent conditional probability when 1 was transmitted and 0 received
A) P(1/1)
B) P(1/0)
C) P(0/0)
D) P(0/1)
Q6. There is a heavy snowfall in Amaravati means
A) Information is more
B) Information is less
C) Information is equal
D) None
Q7. 1 cm=10^-2 m means
A) Information is more
B) Information is less
C) Information is equal
D) None
Mutual Information
I(X; Y) = the uncertainty about the channel input that is resolved by observing the channel output
Discrete Memory-less Channel
Note
 A discrete memoryless channel means the output depends only on the current input, not on previous inputs.
The channel matrix represents the transition probabilities.
Channel Matrix
P(1/0) → probability that 1 is received when 0 was transmitted
P(0/1) → probability that 0 is received when 1 was transmitted
P(1/1) → probability that 1 is received when 1 was transmitted
P(0/0) → probability that 0 is received when 0 was transmitted
Channel Matrix, Noise Matrix or Probability Transition Matrix
Types of Channel
1. Lossless Channel
2. Deterministic Channel
3. Noiseless Channel
4. Binary Symmetric Channel (BSC)
Lossless Channel
Important Points
 Only one non-zero element in each column
 In a lossless channel no source information is lost in transmission
 Each output has received some information
Deterministic Channel
 Only one non-zero element in each row; this element must be unity
 A fixed output symbol is received for a given source symbol
Everything is determined already
Noiseless Channel
It has both the property as
 Lossless
+
 Deterministic channel
Binary Symmetric Channel (BSC)
 This channel has probability distribution
Quick Revision
 A discrete memoryless channel means the output depends only on the current input, not on previous inputs.
 The channel matrix represents the transition probabilities.
 P(0/1) represents the probability that 0 is received when 1 was transmitted.
 P(Y/X) represents the conditional probability matrix, or channel matrix.
 The joint probability describes the channel as a whole.
 In a lossless channel there is only one non-zero element in each column.
 In a deterministic channel there is one non-zero element in each row, and that element must be unity.
 A noiseless channel satisfies both the lossless and the deterministic channel properties.
Q1. In Lossless channel the important features are
a. Non-zero elements in each column
b. Non-zero elements in each row
c. Non-zero elements in both column and row
d. None
Q2. In Deterministic Channel the important features are
a. Non-zero elements in each column
b. Non-zero elements in each row
c. Non-zero elements in both column and row
d. None
Q3. In Noiseless Channel the important features are
a. Non-zero elements in each column
b. Non-zero elements in each row
c. Non-zero elements in both column and row
d. None
Quiz
Q4. Which property is correct for mutual information
a. I(X;Y)=I(Y;X)
b. I(X;Y) ≥ 0
c. I(X;Y) = H(Y) - H(Y/X)
d. I(X;Y) = H(X) + H(Y) - H(X, Y)
e. All of these
Q5. Channel matrix represent the
a. Transition probability
b. Steady state probability
c. Conditional probability
d. None
Fill in the blank
1. Unit of Entropy_______bits/Symbol______
2. Unit of Information rate____bit/sec_____
Calculation of Output Probability
Properties
Q Given a binary channel shown below
a. Find the channel matrix of the channel.
b. Find the P(y1) and P(y2) when P(x1)=P(x2)=0.5
c. Find the joint probability P(x1,y2) and P(x2,y1) when P(x1)=P(x2)=0.5
Hint:
1. Find the conditional probabilities from the channel diagram: P(y1/x1), P(y2/x1), P(y1/x2) and P(y2/x2)
2. Fill all the conditional probabilities into the channel matrix P(y/x)
3. Use the formula P(y) = P(x) · P(y/x)
4. In the P(y) matrix, sum each column to find P(y1) and P(y2)
5. For the joint probability matrix, use P(x, y) = [diag P(x)] · P(y/x)
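These five steps can be carried out mechanically; below is a Python sketch of them. The 0.9/0.1 and 0.2/0.8 transition probabilities are assumed for illustration only — read the actual values off the channel diagram.

import numpy as np

Px = np.array([0.5, 0.5])         # input probabilities P(x1), P(x2)
Pyx = np.array([[0.9, 0.1],       # channel matrix P(y/x); row i = P(y | xi)
                [0.2, 0.8]])      # (entries assumed for illustration)

Py = Px @ Pyx                     # step 3: P(y) = P(x) . P(y/x)
print("P(y1), P(y2):", Py)

Pxy = np.diag(Px) @ Pyx           # step 5: joint matrix P(x, y)
print("P(x1, y2) =", Pxy[0, 1], " P(x2, y1) =", Pxy[1, 0])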
Q. A channel has the following channel matrix:
P(y/x) = | 1-P   P  |
         |  P   1-P |
a) Draw the channel diagram
b) If the source has equally likely inputs, compute the output probabilities
associated with the channel outputs for P = 0.2
Numerical Problem
Hint
1. From the Channel matrix P(y/x) establish the link between input and output variable.
2. Source equally likely means input variable probability P(x1)=0.5 P(x2)=0.5
3. Use formula to calculate the output symbol probability P(y)=P(x). P(y/x)
Cascaded Channel
 Mutual Information for Channel 1: I(x, y) = H(y) - H(y/x)
 Mutual Information for Channel 2: I(y, z) = H(z) - H(z/y)
 Mutual Information for Cascaded Channel: I(x, z) = H(z) - H(z/x)
 For cascaded channel Mutual Information I(x, z) < I (x, y)
Q1. Which relationship is correct
a. P(x, y)= P(y/x). P(y)
b. P(x, y)= P(x/y). P(x)
c. P(x, y)= P(y/x). P(z)
d. P(x, y)= P(y/x). P(x)
Q2. Which relationship is correct for cascaded channel
a. P(z/x) = P (x/y). P(z/y)
b. P(z/x) = P (y/x). P(y/z)
c. P(z/x) = P (y/x). P(z/y)
d. None
Q3. An event has two possible outcomes with probabilities P1 = 1/2 and P2 = 1/64. Find the rate of information with 16 outcomes per second.
a. 19/2
b. 25/3
c. 35/6
d. None
Quiz
Q. A discrete source transmits messages {x0, x1, x2} with probabilities {0.3, 0.4, 0.3}. The source is connected to the channel given in the figure.
Calculate H(x), H(y), H(x, y), and H(y/x)
Numerical Problem
Hint: Useful formula to solve this question
Q. Two binary symmetric channels are connected in cascade as shown in the figure.
a) Find the channel matrix of the resultant channel
b) Find P(z1) and P(z2) if P(x1) = 0.6 and P(x2) = 0.4
Cascaded Channel based Numerical Problem
Channel Capacity (Cs)
 The maximum mutual information is called the channel capacity (Cs).
 Channel capacity indicates the utilization factor of the channel.
Cs = max { I(x; y) } bits/symbol -------(1)
where max { I(x; y) } represents the maximum value of the mutual information.
 Channel capacity (Cs) indicates the maximum number of bits the channel is capable of transmitting per symbol.
C = r × Cs bits/sec
where C represents the channel capacity in bits/sec and r represents the data rate.
Redundancy and Channel Efficiency
Channel efficiency: η = I(x; y) / C
Redundancy = 1 - η
Channel capacity:
Cs = max { I(x; y) } bits/symbol
C = r × Cs bits/sec
 For a symmetrical and uniform channel: C = r (log S - H) bits/sec, where S represents the number of output symbols
Q1. Which type of channel does not represent any correlation
between input and output symbols?
a. Noiseless Channel
b. Lossless Channel
c. Useless Channel
d. Deterministic Channel
Q2. For M equally likely messages, the average amount of
information H is
a. H = log10M
b. H = log2M
c. H = log10M2
d. H = 2log10M
The Capacity of Special Channels
1) Lossless Channel: For a lossless channel the conditional entropy H(x/y) = 0, so
I(x; y) = H(x) - H(x/y) = H(x) - 0 = H(x)
Cs = max { I(x; y) } = max { H(x) } = log m
where m represents the number of messages.
2) Deterministic Channel: Only one non-zero element in each row.
Conditional entropy: H(y/x) = 0
Information per transition: I = log(1/p) = log(1/1) = 0, so H = ∑ I · P = 0
Mutual information: I(x; y) = H(y) - H(y/x) = H(y)
Channel capacity: Cs = max I(x; y) = max H(y) = log n
where n represents the number of different output symbols.
3) Noiseless Channel
H(x) = H(y) (both entropies are the same), and H(y/x) = H(x/y) = 0
Cs = max { I(x; y) } = log m = log n
4) Binary Symmetric Channel
With crossover probability p: P(y0/x1) = P(y1/x0) = p and P(y0/x0) = P(y1/x1) = 1 - p, where x0 represents binary '0' and x1 represents binary '1'
Mutual information: I(x; y) = H(y) + p log p + (1 - p) log(1 - p)
H(x) is maximum when P(x0) = P(x1) = 1/2, and the mutual information is maximum (C = I(x; y)) for the same input distribution
Channel capacity per symbol: Cs = 1 + p log p + (1 - p) log(1 - p)
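A small Python sketch of the BSC capacity formula (it also answers the first question below: at p = 0.5 the capacity is zero):

import math

def bsc_capacity(p):
    # Cs = 1 + p log2(p) + (1-p) log2(1-p) bits/symbol
    if p in (0.0, 1.0):
        return 1.0                # error-free (or deterministically inverted)
    return 1 + p * math.log2(p) + (1 - p) * math.log2(1 - p)

print(bsc_capacity(0.5))             # 0.0 -> a useless channel
print(round(bsc_capacity(0.1), 3))   # 0.531 bits/symbol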
Q. Find the capacity of the binary symmetric channel when the crossover probability is 0.5.
Q. A binary symmetric channel has the following noise matrix, with source probabilities P(x1) = 2/3 and P(x2) = 1/3.
a. Determine H(x), H(y), H(y/x)
b. Find the channel capacity C
c. Find the channel efficiency and redundancy
Q1. What is the value of Information, Entropy and channel capacity, in the
deterministic channel.
a) 0, 1, log(n)
b) 1, 0 , log(n)
c) 1, 1, log(n)
d) 0, 0, log(n)
Q2. For symmetrical and uniform channel, how to define the capacity
a) C = (log S + H) bit/sec
b) C = (H + log S) bit/sec
c) C = (H - log S) bit/sec
d) C = (log S - H) bit/sec
Quiz
Q3. Channel capacity (Cs) indicates the _________ number of bits the channel is capable of transmitting per symbol
a) Lowest
b) Equal
c) Maximum
d) None
Q4. Calculate the entropy for a symmetrical and uniform channel, when
P(y/x) = [3×3 matrix of fractions; entries lost in extraction]
a) 5/8
b) 6/8
c) 7/8
d) None
Q5. For the binary symmetric channel, how to define the capacity
a) Cs= 1+ P log p + (1+P) log (1+p)
b) Cs= 1+ P log p + (1+P) log (1-p)
c) Cs= 1+ P log p + (1-P) log (1+p)
d) Cs= 1+ P log p + (1-P) log (1-p)
Numerical
Q6. Two noisy channels are cascaded; their channel matrices are given below,
with input symbol probabilities P(x1) = P(x2) = 1/2.
Find the overall mutual information I(X; Z).
Useful Hint
1. Draw the combined channel diagram and find the channel matrix
2. Find P(x, z) using the formula P(x, z) = [diag P(x)] · P(z/x)
3. Find the entropies H(z/x) = ∑∑ P(x, z) log 1/P(z/x) and H(z) = ∑ P(z) log 1/P(z)
4. Use the formula I(x; z) = H(z) - H(z/x) to find the overall mutual information
Numerical
Q2. Determine the capacity of the channel shown below.
Useful Hint
1. Find the channel matrix P(y/x)
2. Find the entropy H from a single column, since the channel is uniform and symmetrical
3. Use the formula C = log S - H
Q3. Find the channel capacity.
Numerical
Useful Hint
1. Find the channel matrix P(y/x)
2. Find the entropy H from a single column, since the channel is uniform and symmetrical
3. Use the formula C = log S - H
Q. For the channel matrix shown below, find the channel capacity.
Numerical
Q2. For the channel matrix given below, compute the channel capacity, given rs = 1000 symbols/sec.
Useful Hint
1. Find the entropy H
2. Use the formula C = (log S - H) · r bits/sec
Shannon-Hartley Law: Channel capacity is the maximum rate at which data can be transmitted through a channel without errors.
 For error-free transmission Cs > r,
where Cs is the channel capacity
and r is the bit rate.
 Relationship between capacity, bandwidth and signal-to-noise ratio:
Data rate or channel capacity C = B log2(1 + S/N) bits/sec
where S/N = signal-to-noise ratio and B = bandwidth.
Signal-to-Noise ratio:
A signal-to-noise ratio over 0 dB indicates that the signal level is greater than the noise level.
The higher the ratio, the better the signal quality.
For example, a Wi-Fi signal with an S/N of 40 dB will deliver better network service than a signal with an S/N of 20 dB.
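A minimal Python helper for the Shannon-Hartley formula (the dB-to-ratio conversion is the only extra step; Q1 below is evaluated as a check):

import math

def shannon_capacity(bandwidth_hz, snr_db):
    # C = B log2(1 + S/N) bits/sec, with the SNR supplied in dB
    snr = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr)

# Q1 below: B = 3000 Hz, SNR = 20 dB (a power ratio of 100)
print(round(shannon_capacity(3000, 20)))   # about 19975 bits/sec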
Q1. Assuming that a channel has bandwidth B = 3000 Hz and a typical signal-to-noise ratio (SNR) of 20 dB, determine the maximum data rate that can be achieved.
Q2. A channel has a bandwidth of 8 kHz and a signal-to-noise ratio of 31. For the same channel capacity, if the SNR is increased to 61, find the new bandwidth of the channel.
Q3. In a communication system the S/N ratio at the input of the receiver is 15. Determine the channel capacity if the bandwidth is
a) B = 1 kHz
b) B = 1 MHz
c) B = 1 GHz
Numerical Question
Q. The channel capacity under a Gaussian noise environment for a discrete memoryless channel with a bandwidth of 4 MHz and SNR of 31 is?
Q. What is the capacity of an extremely noisy communication channel that has a bandwidth of 150 MHz, such that the signal power is very much below the noise power?
Numerical
Q. In a communication system, a given rate of information transmission requires channel bandwidth B1 and signal-to-noise ratio SNR1. If the channel bandwidth is doubled for the same rate of information, what will the new signal-to-noise ratio be?
Numerical
Q. A communication channel with AWGN operating at SNR >> 1 and bandwidth B has capacity C1. If the SNR is doubled keeping B constant, find the resulting capacity C2 in terms of C1 and B.
Numerical
Fact
1. The more uncertainty there is about a message, the more information it carries.
2. If I know that after finishing my lecture I will go to my cabin, then the amount of information carried by that message is zero.
3. Hi → I1
How are you → I2
Hi, how are you → I1 + I2
4. P(sun rises in the east) > P(sun rises in the west), therefore
I(sun rises in the east) < I(sun rises in the west)
Q3. The channel capacity under a Gaussian noise environment for a discrete memoryless channel with a bandwidth of 4 MHz and SNR of 31 is
1. 20 Mbps
2. 4 Mbps
3. 8 kbps
4. 4 kbps
Q4. Shannon's theorem sets a limit on the
1. Highest frequency that may be sent over channel
2. Maximum capacity of a channel with a given noise level
3. Maximum number coding levels in a channel
4. Maximum number of quantizing levels in a channel
Quiz
Q2. In a communication system, a given rate of information transmission requires channel bandwidth B and signal-to-noise ratio SNR. If the channel bandwidth is doubled for the same rate of information, what will the new signal-to-noise ratio be?
Quiz
Q. The symbols A, B, C and D occur with probabilities 1/2, 1/4, 1/8 and 1/8 respectively. Find the information content of the message 'BDA', where the symbols are independent.
a. 2bit
b. 4bit
c. 5bit
d. 6bit
Q. How many bits per symbol to encode 32 different symbols?
a. 2 bit/symbol
b. 4 bit/symbol
c. 5 bit/symbol
d. 6 bit/symbol
Quiz
Q. An analog signal having 4-kHz bandwidth is sampled at 1.25 times the Nyquist rate,
and each sample is quantized into one of 256 equally likely levels. Assume that the
successive samples are statistically independent.
(i) What is the information rate of this source?
(ii) Can the output of this source be transmitted without error over an AWGN channel
with a bandwidth of 10 kHz and S/N ratio of 20 dB?
(iii) Find the S/N ratio required for error- free transmission for part (ii)
Hint
1. Nyquist rate: fs = 2fm
2. Sampling rate: r = 1.25 × fs
3. If all messages are equally likely, then H(x) = log m
4. R = r · H(x)
5. Use the channel capacity formula C = B log(1 + S/N) bits/sec
6. For error-free communication, C > R
Q. Consider an AWGN channel with 4-kHz bandwidth and noise power spectral density η/2 = 10^-12 W/Hz. The signal power required at the receiver is 0.1 mW. Calculate the capacity of this channel.
Q. The terminal of a computer used to enter alphabetic data is connected to the computer through a voice-grade telephone line having a usable bandwidth of 3 kHz and SNR = 10. Assuming the terminal has 128 characters, determine
a. The capacity of the channel
b. The maximum rate of transmission without error
Numerical Problem
Q. A source produces 4 symbols with probabilities 1/2, 1/4,
1/8 and 1/8. For this source, a practical coding scheme has an
average code word length of 2 bits / symbol. Calculate the
efficiency of the code.
Numerical Problem
Q. The output of an information source consists of 300 symbols, 60 of which occur with probability 1/32 and the remaining with probability 1/512. The source emits 1400 symbols/sec. Assuming that the symbols are chosen independently, find the average information rate.
Numerical Problem
Q. Consider the binary information channel fully specified by
P = | 3/4  1/4 |
    | 2/8  6/8 |
with input probabilities 1/2 and 1/2.
Calculate
a) Output probabilities
b) Backward probabilities
Numerical Problem
Q. Consider the binary information channel fully specified by
P = | 3/4  1/4 |
    | 2/8  6/8 |
with input probabilities 1/3 and 2/3.
Calculate (a) output probabilities, (b) backward probabilities.
Average Information (Entropy)
H = ∑ P log(1/P)
1. Entropy is zero if the event is sure or impossible: H = 0 if P = 1 or P = 0
2. Upper bound on entropy: Hmax = log M
Information Rate
R = r · H information bits/second
R = (r in messages/sec) × (H in information bits/message)
Unit of information rate: information bits/second
Units Conversion
Extension of Discrete Memoryless Source
H(X^n) = n · H(X)
where n is the number of successive symbols in one group or block
Communication Channel
A communication channel has three main parts:
1. Transmitter
2. Receiver
3. Channel
If the transmitted data is discrete, the channel is known as a discrete channel.
Proofs
Q. Find the transition matrix p(Z/X) for the cascaded channel shown:
Numericals
Kullback–Leibler divergence
(Relative Entropy)
 D(P||Q) is a type of statistical distance: a measure of how one probability distribution P differs from a second, reference probability distribution Q.
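A minimal Python sketch of D(P||Q) for discrete distributions (illustrative values; it assumes Q(i) > 0 wherever P(i) > 0):

import math

def kl_divergence(P, Q):
    # D(P||Q) = sum P(i) log2( P(i) / Q(i) ), in bits
    return sum(p * math.log2(p / q) for p, q in zip(P, Q) if p > 0)

P = [0.5, 0.5]
Q = [0.9, 0.1]
print(round(kl_divergence(P, Q), 4))  # > 0 for P != Q
print(kl_divergence(P, P))            # 0.0: a distribution is at zero distance from itself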
Joint Entropy and Conditional Entropy
Useful formula
Relation between Joint Entropy and Conditional Entropy
Hint
Find P(x, y) = [diag P(x)] · P(y/x)
Find H(y/x)
Find I(x; y) = H(y) - H(y/x)
Case 1. α = 0.5 and P = 0.1: find P(y) = P(x) · P(y/x)
Find H(y)
Use the formula I(x; y) = H(y) + P log P + (1 - P) log(1 - P)
Case 2. α = 0.5 and P = 0.5: find P(y) = P(x) · P(y/x)
Find H(y)
Use the formula I(x; y) = H(y) + P log P + (1 - P) log(1 - P)
Hint
1. Find P(y/x)
2. Find P(y)=P(x).P(y/x)
3. Find P(x, y)
4. Find H(y)
5. Find H(y/x)
6. Use a formula for mutual information I(x,y) = H(y)-H(y/x)
7. Maximum value of mutual information is known as channel capacity
8. Maximum value of entropy is 1
Binary Erasure Channel (BEC)
• A commonly used channel model in digital communication.
• This channel allows 100% data recovery: it always ensures that the correct message is delivered.
• If a symbol is received in error, it is marked as an erasure and discarded, and a retransmission is requested from the transmitter over a reverse channel until the correct symbol is received at the output.
Q1. A channel has the characteristics shown below, with P(x1) = P(x2) = 0.5.
Find H(X), H(Y), H(X, Y), I(X; Y) and the channel capacity (bits/sec), if r = 1000 symbols/sec.
Numerical
Useful Hint
1. Find P(x, y) = [diag P(x)] · P(y/x)
2. From P(x, y), find P(y1), P(y2), P(y3), P(y4)
3. Find H(x) = P(x1) log(1/P(x1)) + P(x2) log(1/P(x2))
4. Find H(y) = P(y1) log(1/P(y1)) + P(y2) log(1/P(y2)) + P(y3) log(1/P(y3)) + P(y4) log(1/P(y4))
5. Find H(x, y) = ∑ P(xi, yi) log(1/P(xi, yi))
6. Find I(x; y) = H(x) + H(y) - H(x, y)
7. Cs = max I(x; y) bits/symbol
8. C = r × Cs bits/sec
Cascaded channels based Numerical Problems
Q. Two channel are cascaded as shown below
Given P(x1)=P(x2)=0.5
Show that I(x, z) < I(x, y)
Useful formulas
I(x; y) = H(y) - H(y/x)
I(x; z) = H(z) - H(z/x)
H(z/x) = ∑∑ P(x, z) log(1/P(z/x))
H(y/x) = ∑∑ P(x, y) log(1/P(y/x))
Important Steps
1. Find the P(y/x) matrix
2. Find the P(z/y) matrix
3. Find the P(z/x) matrix
4. Find the P(x, y) matrix
5. P(x, y) = [diag P(x)] · P(y/x) → P(y1) and P(y2) → H(y)
6. P(x, z) = [diag P(x)] · P(z/x) → P(z1) and P(z2) → H(z)
Numerical Problem
Q. In the binary erasure channel (BEC), the input message probabilities are x and (1 - x). Find the capacity of the channel.
Useful Hint
1. Find P(B/A)
2. Find H(B/A)
3. Find H(A)
4. Find P(A, B) = P(B/A). P(A)
5. From P(A,B) matrix find P(B) matrix
6. Find P(A/B) = P(A, B) / P(B)
7. From P(A/B) matrix find H (A/B) using expression H(A/B)= P(A,B) log(1/P(A/B)
8. Put H(A/B) value in channel capacity equation as mentioned below
9. C = max [H(A) – H(A/B)]. r
Q. In a binary erasure channel, P = 0.1, P(x1) = 0.6 and P(x2) = 0.4.
Determine
a) the mutual information,
b) the channel capacity,
c) the channel efficiency and
d) the redundancy.
Source Coding
The conversion of a DMS output into a binary symbol sequence is known as source coding.
The device that performs this conversion is known as a source encoder.
Aim
To minimize the average bit rate required to represent the source, by reducing the redundancy and increasing the efficiency of the information source.
Terminology
1. Code length: the length of a codeword is the number of binary digits in the codeword.
Code length: L = ∑ P(xi) · li
2. Code efficiency: η = Lmin / L; since Lmin = H(s), this gives η = H(s) / L
When η approaches unity, the code is said to be efficient.
3. Code redundancy: 1 - η
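As a worked check of these formulas (a sketch, using the 1/2, 1/4, 1/8, 1/8 source from the problems above with the prefix code 0, 10, 110, 111):

import math

probs   = [0.5, 0.25, 0.125, 0.125]
lengths = [1, 2, 3, 3]               # codeword lengths of 0, 10, 110, 111

L = sum(p * l for p, l in zip(probs, lengths))    # L = sum P(xi) * li
H = sum(p * math.log2(1 / p) for p in probs)      # source entropy H(s)
eta = H / L                                       # code efficiency
print(L, H, eta, 1 - eta)  # 1.75, 1.75, 1.0, 0.0 -> optimal for this source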
Source coding Theorem
Or
Shannon’s First Theorem
The source coding theorem states that for a DMS X with entropy H(X), the average codeword length L per symbol is bounded by
L ≥ H(X)
If Lmin = H(X), then η = H(X)/L.
 As it establishes error-free encoding, it is also called the noiseless coding theorem.
Data Compaction: Variable length source coding
algorithm (Entropy Coding)
Aim of Data Compaction:
• Efficient in terms of the average number of bits per symbol.
• The original data can be reconstructed without loss of any information.
• Removes redundancy from the signal prior to transmission and increases the efficiency of the information source.
Techniques for data compaction
Three techniques:
1. Prefix coding or instantaneous coding
2. Shannon-Fano coding
3. Huffman coding
Steps in Shannon-Fano coding
1. Arrange the messages in order of decreasing probability.
2. Divide the messages into two almost equally probable groups.
3. The messages in the first group are assigned the bit '0' and the messages in the second group are assigned the bit '1'.
4. The procedure is repeated until no further division is possible.
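A compact Python sketch of these four steps (one reasonable way to pick the split; ties can be broken differently, so codeword assignments may vary between textbooks):

def shannon_fano(symbols):
    # symbols: list of (name, probability) pairs; returns {name: codeword}
    symbols = sorted(symbols, key=lambda s: s[1], reverse=True)   # step 1
    codes = {name: "" for name, _ in symbols}

    def split(group):
        if len(group) <= 1:
            return
        total = sum(p for _, p in group)
        run, best, cut = 0.0, float("inf"), 1
        for i in range(1, len(group)):        # step 2: most balanced split
            run += group[i - 1][1]
            diff = abs(2 * run - total)
            if diff < best:
                best, cut = diff, i
        for name, _ in group[:cut]:
            codes[name] += "0"                # step 3: first group gets '0'
        for name, _ in group[cut:]:
            codes[name] += "1"                #         second group gets '1'
        split(group[:cut])                    # step 4: repeat on each group
        split(group[cut:])

    split(symbols)
    return codes

print(shannon_fano([("A", 0.30), ("B", 0.25), ("C", 0.20),
                    ("D", 0.12), ("E", 0.08), ("F", 0.05)]))
# e.g. A=00, B=01, C=10, D=110, E=1110, F=1111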
Q. Encode using Shannon-Fano coding
Symbol       A     B     C     D     E     F
Probability  0.30  0.25  0.20  0.12  0.08  0.05
Numerical
Q. Encode using Shannon-Fano coding
Symbol       A    B    C    D     E     F
Probability  1/3  1/3  1/6  1/12  1/24  1/24
Numerical
Solution (Shannon-Fano, successive splits; the symbol split off gets '0' and is then frozen, the remaining group gets '1'):
Stage 1: (x1) = 0.33 against (x2 x3 x4 x5 x6) = 0.33 + 0.166 + 0.083 + 0.041 + 0.041 = 0.661; freeze x1
Stage 2: (x2) = 0.33 against (x3 x4 x5 x6) = 0.166 + 0.083 + 0.041 + 0.041 = 0.331; freeze x1 x2
Stage 3: (x3) = 0.166 against (x4 x5 x6) = 0.083 + 0.041 + 0.041 = 0.165; freeze x1 x2 x3
Stage 4: (x4) = 0.083 against (x5 x6) = 0.041 + 0.041 = 0.082; freeze x1 x2 x3 x4
Stage 5: (x5) = 0.041 against (x6) = 0.041
Message  Probability    Code
x1       1/3 = 0.33     0
x2       1/3 = 0.33     10
x3       1/6 = 0.166    110
x4       1/12 = 0.083   1110
x5       1/24 = 0.041   11110
x6       1/24 = 0.041   11111
Agenda
1. Quiz Test
2. Coding Technique: Huffman Coding
3. Numerical questions based on Huffman coding
4. Problems based on Shannon-Fano coding
Q1. In Shannon-Fano coding, the messages are arranged in the order of
a. Increasing Probability
b. Equal Probability
c. Decreasing probability
d. None
Q2. In Shannon-Fano coding, the messages are divided into
a. Three almost equal probable groups
b. Four almost equal probable groups
c. Two almost equal probable groups
d. None
Q3. In Shannon-Fano coding,
a. First group is assigned the bit ‘1’ and the second group are assigned the bit ‘0’.
b. First group is assigned the bit ‘1’ and the second group are assigned the bit ‘1’.
c. First group is assigned the bit ‘0’ and the second group are assigned the bit ‘1’.
d. None
Quiz
Q2. A discrete memoryless source X has six symbols x1, x2, x3, x4, x5, x6 with
P(x1) = 0.30,
P(x2) = 0.25,
P(x3) = 0.20,
P(x4) = 0.12,
P(x5) = 0.08 and
P(x6) = 0.05.
Using the Huffman code, find the code length and efficiency.
Numerical based on Huffman Coding
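A Huffman-coding sketch in Python using a heap (codeword bits may differ from a hand construction, but the lengths and the average length match; for this source L = 2.38 bits/symbol):

import heapq, itertools, math

def huffman(probs):
    # probs: {symbol: probability}; returns {symbol: codeword}
    tie = itertools.count()                      # tie-breaker for equal probabilities
    heap = [(p, next(tie), {s: ""}) for s, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)          # merge the two least
        p2, _, c2 = heapq.heappop(heap)          # probable groups
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (p1 + p2, next(tie), merged))
    return heap[0][2]

probs = {"x1": 0.30, "x2": 0.25, "x3": 0.20,
         "x4": 0.12, "x5": 0.08, "x6": 0.05}
codes = huffman(probs)
L = sum(p * len(codes[s]) for s, p in probs.items())
H = sum(p * math.log2(1 / p) for p in probs.values())
print(codes)
print("L =", L, " efficiency =", round(H / L, 4))   # L = 2.38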
Q1. A discrete memoryless source X has five symbols x1, x2, x3, x4, x5 with P(x1) = 0.4, P(x2) = 0.19, P(x3) = 0.16, P(x4) = 0.15 and P(x5) = 0.1. Using the Huffman code, find the code length and efficiency.
Numerical
Q1. A DMS has five symbols A, B, C, D and E with the probabilities given in the table.
a. Construct a Shannon-Fano code and calculate the efficiency of the code.
b. Repeat for the Huffman code and compare the results.
Symbol       A    B     C     D     E
Probability  0.4  0.19  0.16  0.15  0.1
Numerical Question
Agenda
1. Quiz Test
2. Numerical questions based on Huffman coding
3. Problems based on Shannon-Fano coding
Q1. Calculate the value of H(X²) if H(X) = 1.5
a. 1 bits/symbols
b. 2 bits/symbols
c. 3 bits/symbols
d. None
Q2. Huffman coding technique is adopted for constructing the source code with ________
redundancy.
a. Maximum
b. Constant
c. Minimum
d. Unpredictable
Quiz
Q3. Which type of channel does not represent any correlation between input and output
symbols?
a. Noiseless Channel
b. Lossless Channel
c. Useless Channel
d. Deterministic Channel
Q4. On which factor the channel capacity depends on the communication system
a. bandwidth
b. Signal to noise ratio
c. Both a and b
d. None
Q5. Information is the
A. data
B. meaningful data
C. raw data
D. Both A and B
Q3. A discrete memoryless source X has six symbols x1, x2, x3, x4, x5, x6 with
P(x1) = 0.25,
P(x2) = 0.20,
P(x3) = 0.20,
P(x4) = 0.15,
P(x5) = 0.15 and
P(x6) = 0.05. Using the minimum-variance Huffman code,
find the code length and efficiency.
Numerical
Numerical Question based on Shannon-Fano Coding
Q1. Encode using Shannon-Fano coding and find the efficiency.
Symbol       A     B     C     D     E     F
Probability  0.30  0.25  0.15  0.12  0.10  0.08
Q2. Encode using Shannon-Fano coding and find the efficiency.
Symbol       A    B     C     D     E
Probability  0.4  0.19  0.16  0.15  0.10
Solution of Q1
Symbol  Probability  Code  Length
A       0.30         00    2
B       0.25         01    2
C       0.15         100   3
D       0.12         101   3
E       0.10         110   3
F       0.08         111   3
Solution of Q2
Symbol  Probability  Code
A       0.4          00
B       0.19         01
C       0.16         10
D       0.15         110
E       0.1          111
Numerical
Q3. Encode using Shannon-Fano coding and find the efficiency.
Symbol       A       B        C       D        E       F     G      H
Probability  0.0625  0.03125  0.0625  0.03125  0.0625  0.50  0.125  0.125
Solution of Q3 (codewords assigned in order of decreasing probability):
Probability  Code
0.50         0
0.125        100
0.125        101
0.0625       1100
0.0625       1101
0.0625       1110
0.03125      11110
0.03125      11111
Prefix Code:
A set of binary sequences P such that no sequence in P is a prefix of any other binary sequence in P.
Q. Check the given code is prefix or not?
a) P= {01, 010, 110, 010}
b) P= {011, 101, 0011, 01101}
c) P= {01, 100, 101}
d) P= {001, 0100, 0101, 010101}
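A small Python check for the prefix property (sorting puts any prefix immediately before a word that extends it, so adjacent comparisons suffice):

def is_prefix_code(words):
    # True if no codeword is a prefix of another (instantaneous code)
    words = sorted(set(words))
    return all(not b.startswith(a) for a, b in zip(words, words[1:]))

for P in (["01", "010", "110", "010"],
          ["011", "101", "0011", "01101"],
          ["01", "100", "101"],
          ["001", "0100", "0101", "010101"]):
    print(P, is_prefix_code(P))   # only {01, 100, 101} is a prefix code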
Numerical
Q3. A technique used in constructing a source encoder is to arrange the messages in the order of decreasing probability and divide them into two almost equally probable groups. The messages in the first group are assigned the bit '0' and the messages in the second group are assigned the bit '1'. The procedure is repeated until no further division is possible. Using this algorithm, find the code words for the six messages with the probabilities given below.
Symbol       A    B    C    D     E     F
Probability  1/3  1/3  1/6  1/12  1/24  1/24
Numerical
Q. Compare the Huffman and Shannon-Fano coding algorithms for data compression. For a discrete memoryless source X with six symbols x1, x2, x3, x4, x5 and x6, find the compact code for every symbol if the probability distribution is as follows:
P(x1) = 0.3, P(x2) = 0.25, P(x3) = 0.2, P(x4) = 0.12, P(x5) = 0.08, P(x6) = 0.05
Calculate
a. the entropy of the source,
b. the average length of the code,
c. and compare the efficiency and redundancy of the two codes.
Numerical
Q. An information source produces a sequence of independent symbols with the following probabilities. Using Huffman encoding, find the efficiency.
Symbol       S0   S1    S2   S3   S4   S5    S6
Probability  1/3  1/27  1/3  1/9  1/9  1/27  1/27
Agenda
Dictionary Coding:
Lempel-Ziv (LZ) coding
a)Basic concept
b)Numerical questions
Lempel-Ziv-Welch (LZW) coding
a) Basic concept
b) Numerical questions
Main points
1. A drawback of the Huffman code is that it requires knowledge of a probabilistic model of the source.
2. To overcome this practical limitation, Lempel-Ziv coding is helpful.
3. LZ coding is used for lossless data compression.
4. LZ coding parses the source data stream into segments that are the shortest subsequences not encountered previously.
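A minimal Python sketch of the parsing rule in point 4 (it reproduces the phrase list of the worked example that follows; pointer 0 stands for the empty phrase φ):

def lz_parse(message):
    # parse into the shortest subsequences not encountered previously (LZ78)
    dictionary = {"": 0}
    pairs, phrase = [], ""
    for ch in message:
        if phrase + ch in dictionary:
            phrase += ch                     # keep extending a known phrase
        else:
            pairs.append((dictionary[phrase], ch))
            dictionary[phrase + ch] = len(dictionary)   # new dictionary entry
            phrase = ""
    if phrase:
        pairs.append((dictionary[phrase], None))  # trailing phrase, already known
    return pairs

print(lz_parse("AABABBBABAABABBBABBABB"))
# [(0,'A'), (1,'B'), (2,'B'), (0,'B'), (2,'A'), (5,'B'), (4,'B'), (3,'A'), (7,None)]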
LZ (Lempel-Ziv Coding)
Numerical
Q. Encode the message
AABABBBABAABABBBABBABB
using LZ coding.
Assumption: A is denoted by 0 and B is denoted by 1.
Numerical
Q. Encode the message AABABBBABAABABBBABBABB using LZ coding.
Sol. Assumption: A is denoted by 0 and B is denoted by 1.
No.  Subsequence  Encoding  Binary
1    A            φA        0
2    AB           1B        1, 1
3    ABB          2B        10, 1
4    B            φB        00, 1
5    ABA          2A        10, 0
6    ABAB         5B        101, 1
7    BB           4B        100, 1
8    ABBA         3A        011, 0
9    BB           7         0111
Dictionary Coding: LZ Coding
Q. Encode the message 101011011010101011 using LZ coding.
Sol.
No.  Subsequence  Encoding
1    1            φ, 1
2    0            φ, 0
3    10           1, 0
4    11           1, 1
5    01           2, 1
6    101          3, 1
7    010          5, 0
8    1011         6, 1
Binary: (000, 1), (000, 0), (001, 0), (001, 1), (010, 1), (011, 1), (101, 0), (110, 1)
Salient Features of LZW Coding
 Greedy approach
 One of the most popular lossless compression techniques
 Reduces the size of files that contain repetitive data
 Fast and simple
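An LZW encoder sketch in Python (it reproduces the encoded stream of the worked example that follows, with the initial dictionary a=1, b=2, c=3):

def lzw_encode(message, alphabet):
    # dictionary starts from the alphabet; indices are assigned from 1
    dictionary = {ch: i for i, ch in enumerate(alphabet, start=1)}
    out, phrase = [], ""
    for ch in message:
        if phrase + ch in dictionary:
            phrase += ch                          # longest match so far
        else:
            out.append(dictionary[phrase])        # emit index of the match
            dictionary[phrase + ch] = len(dictionary) + 1  # add new entry
            phrase = ch
    out.append(dictionary[phrase])                # flush the final phrase
    return out

print(lzw_encode("ababbabcababba", "abc"))
# [1, 2, 4, 5, 2, 3, 4, 6, 1]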
Dictionary Coding: LZW Coding
Q. Encode and decode the message a b a b b a b c a b a b b a using LZW coding, if the initial dictionary is
Index  Entry
1      a
2      b
3      c
Encoding (dictionary growth and encoder output):
Index  Entry  Output
1      a      -
2      b      -
3      c      -
4      ab     1
5      ba     2
6      abb    4
7      bab    5
8      bc     2
9      ca     3
10     aba    4
11     abba   6
-      -      1
Encoded message: 1, 2, 4, 5, 2, 3, 4, 6, 1
Decode
Received: 1, 2, 4, 5, 2, 3, 4, 6, 1
Outputs:  a, b, ab, ba, b, c, ab, abb, a
Dictionary rebuilt while decoding:
Index  Entry
1      a
2      b
3      c
4      ab
5      ba
6      abb
Decoded Message: a b a b b a b c a b a b b a
Dictionary Coding: LZW Coding
Q. Encode wabba␢wabba␢wabba␢wabba␢woo␢woo␢woo using LZW coding, given the initial dictionary (␢ denotes the blank between words):
Index  Entry
1      ␢
2      a
3      b
4      o
5      w
Dictionary growth during encoding:
Index  Entry
6      wa
7      ab
8      bb
9      ba
10     a␢
11     ␢w
12     wab
13     bba
14     a␢w
15     wabb
16     ba␢
17     ␢wa
18     abb
19     ba␢w
20     wo
21     oo
22     o␢
23     ␢wo
24     oo␢
25     ␢woo
Q. Decode this sequence using the LZW dictionary; the initial dictionary is given:
Received message: 3, 1, 4, 6, 8, 4, 2, 1, 2, 5, 10, 6, 11, 13, 6
Index  Entry
1      a
2      b
3      r
4      t
(entries 5 to 13 are built up while decoding; see the solution below)
Solution: dictionary rebuilt while decoding:
Index  Entry
1      a
2      b
3      r
4      t
5      ra
6      at
7      ta
8      ata
9      atat
10     tb
11     ba
12     ab
13     br
Received: 3, 1, 4, 6, 8, 4, 2, 1, 2, 5, 10, 6, 11, 13, 6
Outputs:  r, a, t, at, ata, t, b, a, b, ra, tb, at, ba, br, at
Kraft Inequality
A necessary and sufficient condition for the existence of an instantaneous binary code is
K = ∑ 2^(-ni) ≤ 1
The Kraft inequality assures the existence of an instantaneously decodable code with codeword lengths ni; the lengths ni must satisfy the inequality.
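A one-line check of the inequality in Python, applied to the four codeword-length sets of the question that follows:

def kraft_sum(lengths, D=2):
    # K = sum D**(-n_i); an instantaneous D-ary code exists iff K <= 1
    return sum(D ** (-n) for n in lengths)

print(kraft_sum([2, 2, 2, 2]))   # code A: 1.0   -> satisfied
print(kraft_sum([1, 2, 2, 3]))   # code B: 1.125 -> violated
print(kraft_sum([1, 2, 3, 3]))   # code C: 1.0   -> satisfied
print(kraft_sum([1, 3, 3, 3]))   # code D: 0.875 -> satisfied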
Q. Consider a DMS with symbols xi, i = 1, 2, 3, 4. The table below lists 4 possible binary codes. Show that
a) All codes except code B satisfy the Kraft inequality
b) Codes A and D are uniquely decodable, but codes B and C are not uniquely decodable
Numerical Based on Kraft Inequality
Symbol xi  Code A  Code B  Code C  Code D
X1         00      0       0       0
X2         01      10      11      100
X3         10      11      100     110
X4         11      110     110     111
Code B: inequality not satisfied, so it is not uniquely decodable.
For example, suppose the message 110 is received.
It decodes either as x4 (110) or as x3 x1 (11 0), so there is confusion while decoding.
Codes A and D are prefix codes; they are uniquely decodable.
Code C: inequality satisfied, but not uniquely decodable.
For example, suppose the message 0110110 is received. It decodes as any of:
x1 x2 x1 x2 x1 = (0 11 0 11 0)
x1 x4 x4 = (0 110 110)
x1 x4 x2 x1 = (0 110 11 0)
x1 x2 x1 x4 = (0 11 0 110)
Arithmetic Coding
Main Points
Near Optimal
Fast
Separates modelling from coding
P(Symbol) = Lower Limit + (Upper Limit - Lower Limit) × Probability of the Symbol of reference
Q. Using arithmetic coding, generate the tag value of INDIA.
P(Symbol) = Lower Limit + (Upper Limit - Lower Limit) × Probability of the Symbol of reference
Q1. Decode 1 1 2 4 3 5 8 2 3 2 using LZW coding; the initial dictionary is given as
Index  Entry
1      a
2      b
3      c
Solution: dictionary rebuilt while decoding:
Index  Entry
1      a
2      b
3      c
4      aa
5      ab
6      ba
7      aac
8      ca
Received: 1, 1, 2, 4, 3, 5, 8, 2, 3, 2
Outputs:  a, a, b, aa, c, ab, ca, b, c, b
Decoded message: aabaacabcabcb
Q. Encode the message aabaacabcabcb using LZW coding, if the initial dictionary is given as
Index  Entry
1      a
2      b
3      c
Encoding (dictionary growth and encoder output):
Index  Entry  Output
1      a      -
2      b      -
3      c      -
4      aa     1
5      ab     1
6      ba     2
7      aac    4
8      ca     3
9      abc    5
10     cab    8
11     bc     2
12     cb     3
-      b      2
Encoded message: 1, 1, 2, 4, 3, 5, 8, 2, 3, 2
Terminology
Code efficiency: η = H(s) / L
Code length: L = ∑ P(xi) · ni
Code redundancy: 1 - η
Variance: σ² = ∑ Pk (nk - L)²
Question Types
1. Generate the tag (decimal value) → binary value → sequence
2. Sequence → generate the tag value (deciphering method)
Arithmetic Coding
Q. For the given set of symbols {'A', 'B', 'C'} the probabilities are {0.2, 0.5, 0.3} respectively.
(a) Generate the tag value for the sequence BAC.
(b) From the tag 0.28, generate the sequence.
Arithmetic Coding
Q. For the given set of symbols {'A', 'B', 'C'} the probabilities are {0.5, 0.3, 0.2} respectively. Compute the word for the given tag value 0.63725 using arithmetic decoding.
Arithmetic Coding
Q. For the given set of symbols {'A', 'B', 'C'} the probabilities are {0.8, 0.02, 0.18} respectively.
(a) Generate the tag value for the sequence 1321.
Arithmetic Coding
Q. For the given set of symbols {'I', 'N', 'D', 'A'} the probabilities are {0.4, 0.2, 0.2, 0.2} respectively.
(a) Generate the tag value for the sequence INDIA.
(b) From the tag 0.2136, generate the sequence.
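A Python sketch of tag generation using the update rule above; run on INDIA it gives a tag of about 0.2138, and the question's value 0.2136 falls inside the same final interval:

def arithmetic_tag(sequence, probs):
    # shrink [low, high) by each symbol's cumulative-probability slice;
    # the tag is the midpoint of the final interval
    cum, run = {}, 0.0
    for sym, p in probs.items():
        cum[sym] = (run, run + p)
        run += p
    low, high = 0.0, 1.0
    for sym in sequence:
        lo, hi = cum[sym]
        low, high = low + (high - low) * lo, low + (high - low) * hi
    return (low + high) / 2

probs = {"I": 0.4, "N": 0.2, "D": 0.2, "A": 0.2}
print(arithmetic_tag("INDIA", probs))   # ~0.21376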
Hamming Weight and Hamming Distance
Hamming Weight: the Hamming weight of a codeword is equal to the number of non-zero elements in the codeword.
Hamming Distance: the Hamming distance between two codewords is the number of places in which the codewords differ. The Hamming distance between two codewords C1 and C2 is denoted by d(C1, C2).
Quiz
Q.1 The channel capacity is measured in terms of:
a. bits per channel
b. number of input channels connected
c. calls per channel
d. number of output channels connected
Q2. Channel capacity of a noise-free channel having m symbols is given by:
a. m2
b. 2m
c. Log (m)
d. M
Q3. An ideal power-limited communication channel with additive white Gaussian noise has a 4 kHz bandwidth and a signal-to-noise ratio of 255. The channel capacity is:
a. 8 kilo bits / sec
b. 9.63 kilo bits / sec
c. 16 kilo bits / sec
d. 32 kilo bits / sec
Lemma: Relationship between Hamming distance and Hamming weight
d(C1, C2) = w(C1 ⊕ C2) = w(C2 ⊕ C1), the weight of the modulo-2 sum of the codewords
Proof (by example):
Let the codewords be C1 = 0010 and C2 = 1110
w(C1) = 1, w(C2) = 3
C1 ⊕ C2 = 1100, so w(C1 ⊕ C2) = 2
Comparing 0010 and 1110 position by position, they differ in 2 places, so d(C1, C2) = 2
Hence proved: d(C1, C2) = w(C1 ⊕ C2)
Q1. Find the hamming weight of the given code word 11000110
A. 1
B. 2
C. 3
D. 4
Q3. Find the Hamming distance between the word pairs
'work' and 'walk',
'point' and 'paint',
'lane' and 'vale'
A. 3, 1, 2
B. 2, 1, 2
C. 1, 1, 1
D. 3, 2, 1
Quiz
Hamming Code
Aim: error detection, error correction, and encoding and decoding
Main Points:
In this coding, parity bits are used to detect and correct errors.
Parity bits are extra bits mixed in with the message bits.
The Hamming code is used to detect and correct single-bit errors.
The parity bit positions are the powers of two, 2^n, where n = 0, 1, 2, 3, ...
For the (7,4) Hamming code the parity bit positions are 2^0 = 1, 2^1 = 2, 2^2 = 4.
Parity bit values are decided as follows:
P1: check 1 bit and skip 1 bit (positions 1, 3, 5, 7, 9, ...)
P2: check 2 bits and skip 2 bits (positions 2, 3, 6, 7, ...)
P4: check 4 bits and skip 4 bits (positions 4, 5, 6, 7; 12, 13, 14; 20, 21, 22)
Bit position:  1   2   3   4   5   6   7
Contents:      P1  P2  D3  P4  D5  D6  D7
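A sketch of these parity rules for the (7,4) code in Python (even parity assumed, matching the question that follows; the syndrome directly gives the error position):

def hamming74_encode(d3, d5, d6, d7):
    # codeword layout [p1 p2 d3 p4 d5 d6 d7], even parity
    p1 = d3 ^ d5 ^ d7      # P1 covers positions 1, 3, 5, 7
    p2 = d3 ^ d6 ^ d7      # P2 covers positions 2, 3, 6, 7
    p4 = d5 ^ d6 ^ d7      # P4 covers positions 4, 5, 6, 7
    return [p1, p2, d3, p4, d5, d6, d7]

def hamming74_error_position(word):
    # syndrome value = position of a single-bit error (0 means none)
    p1, p2, d3, p4, d5, d6, d7 = word
    s1 = p1 ^ d3 ^ d5 ^ d7
    s2 = p2 ^ d3 ^ d6 ^ d7
    s4 = p4 ^ d5 ^ d6 ^ d7
    return 4 * s4 + 2 * s2 + s1

code = hamming74_encode(1, 0, 0, 1)     # message 1001, as in the question below
code[5] ^= 1                            # inject an error at bit position 6
print(hamming74_error_position(code))   # -> 6, so flip bit 6 to correct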
Q. Let the transmitted message be 1001, using hamming code find out
A. Encode the message and transmit
B. Include error in 6th bit position and find out the error
C. If the received code word has error in 6th bit position then correct the error
Numerical
Hamming codes are used for the purpose of error detection and correction.
Hamming codes are used for channel encoding and decoding.
Hamming codes are known as linear error-correcting codes.
The (7,4) Hamming code encodes four bits of data into seven bits by adding three parity bits.
A Hamming code is a set of error-correction codes that can detect and correct single-bit errors that occur when a bit stream is sent through a channel.
Quick Revision
Q1. Hamming code is capable of
1. Only detects single bit error
2. Only corrects single bit error
3. Detects and corrects single bit error
4. None of the above
Q2. The Position of parity is decided by
a. 2^2n
b. 2^(n-1)
c. 2^n
d. None of the above
Quiz
Q3. How to decide the parity P1 in (7,4) Hamming code
a. Check 1-bit and skip 1-bit
b. Check 2-bit and skip 2-bit
c. Check 3-bit and skip 3-bit
d. None
Q4. How to decide the parity P2 in (7,4) Hamming code
a. Check 1-bit and skip 1-bit
b. Check 2-bit and skip 2-bit
c. Check 3-bit and skip 3-bit
d. None
Q5. How to decide the parity P3 in (7,4) Hamming code
a. Check 1-bit and skip 1-bit
b. Check 2-bit and skip 2-bit
c. Check 4-bit and skip 4-bit
d. None
Q6. At the receiver, the parity bits at the end of the received message are:
A. Redundant bits, as they do contain useful information
B. Redundant bits, as they do not contain any message information
C. Non-redundant bits, as they do not contain any information
D. None of these
Q7. Find the hamming weight of the given code word 11000110
A. 1
B. 2
C. 3
D. 4
Important Terminology in Hamming Code
For an (n, k) Hamming code, where n is the total length of the codeword:
Parity bits: q = n − k
Number of message bits: k = n − q = 2^q − q − 1
Block length: n = 2^q − 1
Rate: R = k/n = (n − q)/n = 1 − q/n = 1 − q/(2^q − 1)
Numerical related to Hamming Code
Q. A 7-bit Hamming code is received as 1011011. Assuming even parity, state whether the received code is correct or wrong; if wrong, locate the bit in error.
Error Detection and Correction Capabilities of Hamming Codes
Error detection capability is given by the expression
Dmin ≥ s + 1
where s represents the number of detectable errors.
E.g., if Dmin = 3 then 3 ≥ s + 1, so s ≤ 2: the code detects at most 2 errors.
Error correction capability is given by the expression
Dmin ≥ 2t + 1
where t represents the number of correctable errors.
E.g., if Dmin = 3 then 3 ≥ 2t + 1, so t ≤ 1: the code corrects at most 1 error.
Lemma: Prove that Dmin ≥ 3 if a Hamming code detects double errors and corrects single errors.
Proof: For detecting up to two (2) errors: Dmin ≥ s + 1 = 2 + 1, so Dmin ≥ 3.
For correcting up to one (1) error: Dmin ≥ 2t + 1 = 2(1) + 1, so Dmin ≥ 3.
Q1. What is the correct expression for error detection in Hamming code
a. Dmin ≥ 2s+1
b. Dmin ≥ s-1
c. Dmin ≥ s+1
d. None
Q2. What is the correct expression for error correction in Hamming code
a. Dmin ≥ 2t+1
b. Dmin ≥ t-1
c. Dmin ≥ t+1
d. None
Quiz
Q3. What is the correct expression for Rate (R) in Hamming Code
a. k/2n
b. k/3n
c. k/n
d. None
Q4. What is the correct expression for parity bit (q) in Hamming code
a. q= n + k
b. q= n – 2k
c. q= n – k
d. None
Block diagram of Communication System
Parity Check Matrix (H)
Aim
The parity check matrix (H) is used at the receiver side for channel decoding.
The parity check matrix (H) is used for the detection and correction of errors.
For (n, k) Hamming codes, the parity check matrix [H] is defined as:
H = [P^T : I]
where P represents the parity matrix
and I represents the identity matrix.
Generator Matrix (G)
Aim
The generator matrix generates the parity bits and combines them with the message.
For (n, k) Hamming codes, the generator matrix [G] is defined as:
G = [I : P]
where P represents the parity matrix
and I represents the identity matrix.
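A minimal sketch of building G and H from a parity matrix P and checking their consistency; the P used here is an assumed example (it happens to correspond to one common (7,4) Hamming code), not the matrix on the slide:

```python
import numpy as np

# Minimal sketch: build G = [I : P] and H = [P^T : I] for a (7,4) code.
P = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 1, 1],
              [1, 0, 1]])                         # assumed example parity matrix
k, q = P.shape                                    # k message bits, q parity bits
G = np.hstack([np.eye(k, dtype=int), P])          # k x n generator matrix
H = np.hstack([P.T, np.eye(q, dtype=int)])        # q x n parity check matrix

# Every valid G/H pair satisfies G . H^T = 0 (mod 2).
print((G @ H.T) % 2)                              # all-zero matrix confirms consistency
```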
Numerical related to Hamming Code
Q. A generator matrix G of size 3×6 (= k × n) is given.
Find out all possible code vectors.
Hint
1. Use the formula C = D·G, where D is the data word and G is the generator matrix.
2. G is the generator matrix, G = [Ik : P] of size k×n.
3. The number of data-word combinations is 2^k.
Numerical Problem
Q1. The main purpose of generator matrix is
a. Error correction
b. Error detection
c. Generate the parity bit and include in message bit
Q2. Parity check matrix is used for
a. Detection and correction of errors
b. Used at receiver side
c. Both a and b
d. None
Q3. Which expression is correct for block length?
a. Block length n = 2^q + 1
b. Block length n = 2^q − 1
c. Block length n = 2^q × 1
d. None
Quiz
Numerical
Q. For a (5, 2) linear block code the generator matrix is in the form [Ik : P], where the P matrix is
P =
Find out
(a) Generator matrix
(b) Parity check matrix
(c) All possible code vectors
(d) The minimum Hamming distance
Hint
1. Use the formula G = [Ik : P]
2. Use the parity check matrix formula H = [P^T : I(n−k)]
3. Use the formula C = D·G, where D is the data word and G is the generator matrix.
4. For the total number of data-word combinations use the formula 2^k.
5. Count the number of 1s in each nonzero codeword; the minimum weight gives the minimum Hamming distance.
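A minimal sketch of hints 3-5 for a (5, 2) code; the P matrix below is an assumption, since the slide's actual matrix is not reproduced in the text:

```python
import numpy as np
from itertools import product

# Minimal sketch: enumerate all codewords of a (5,2) code and find dmin.
P = np.array([[1, 0, 1],
              [1, 1, 0]])                    # assumed example parity matrix
k = P.shape[0]
G = np.hstack([np.eye(k, dtype=int), P])     # G = [Ik : P]

codewords = []
for d in product([0, 1], repeat=k):          # all 2^k data words
    c = (np.array(d) @ G) % 2                # C = D.G (mod 2)
    codewords.append(c)
    print(d, '->', c)

# For a linear code, dmin = minimum weight over the nonzero codewords.
dmin = min(int(c.sum()) for c in codewords if c.any())
print('dmin =', dmin)
```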
Numerical
Q. For a (6, 3) code the generator matrix G is
G =
Find out
(a) All corresponding code vectors
(b) The minimum Hamming distance
(c) Verify that this code is a single-error-correcting code
(d) The parity check matrix
(e) The transmitted codeword, if the received codeword is 100011
Hint
1. Use the formula G = [Ik : P]
2. Use the formula C = D·G, where D is the data word and G is the generator matrix
3. Count the number of 1s in each nonzero codeword to find the minimum distance dmin
4. For error correction, Dmin ≥ 2t + 1
5. Use the formula for the parity check matrix H = [P^T : I]
6. Find the syndrome S = r·H^T
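A minimal sketch of hint 6, syndrome computation with single-error correction; the P matrix is again an assumed example, not the slide's matrix:

```python
import numpy as np

# Minimal sketch: syndrome decoding for a (6,3) code.
P = np.array([[1, 0, 1],
              [0, 1, 1],
              [1, 1, 0]])                    # assumed example parity matrix
q = P.shape[1]
H = np.hstack([P.T, np.eye(q, dtype=int)])   # H = [P^T : I]

r = np.array([1, 0, 0, 0, 1, 1])             # received word (example)
S = (r @ H.T) % 2                            # syndrome S = r.H^T (mod 2)
print('syndrome:', S)

# For a single error, S matches the column of H at the error position.
if S.any():
    for j in range(H.shape[1]):
        if np.array_equal(H[:, j], S):
            r[j] ^= 1                        # flip the erroneous bit
            break
print('decoded:', r)
```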
Q1. Which expression is correct for the syndrome?
a. S = r / H^T
b. S = r + H^T
c. S = r · H^T
d. None
Q2. Which expression is correct for error correction
a. Dmin > t+1
b. Dmin > t-1
c. Dmin > 2t+1
d. None
Quiz
Q3. Which one of the following sets of gates is best suited for parity checking and parity
generation?
a. AND, OR, NOT gates
b. EX-NOR or EX-OR gates
c. NAND gates
d. NOR gates
Q4. The coding efficiency is given by
a. 1- Redundancy
b. 1 + Redundancy
c. 1/Redundancy
d. none
Q. If the Hamming distance of a code is 5, then the maximum number of errors that can be corrected is
a. 4
b. 3
c. 2
d. 1
Q. To guarantee the detection of up to 5 errors in all cases, the minimum Hamming
distance in a block code must be _______
a. 5
b. 6
c. 11
d. none of the above
Q. To guarantee correction of up to 5 errors in all cases, the minimum
Hamming distance in a block code must be ________
A. 5
B. 6
C. 11
D. none of the above
Q. The _____of errors is more difficult than the ______
A. correction; detection
B. detection; correction
C. creation; correction
D. creation; detection
Numerical
Q. A parity check matrix is given as
H=
suppose that the received code word is 110110; what will be the decoded
word?
a. 110010
b. 100110
c. 010110
d. 110111
Numerical
Q. For a linear (6, 3) code whose parity matrix is given as
P^T =
Find out how many errors can be detected and corrected.
Cyclic Codes
Cyclic codes satisfy two basic properties:
a. Linearity
b. The cyclic property
Cyclic Code for Non-Systematic Codewords
C(x) = m(x) · g(x) .............. (1)
where C(x) is the code-word polynomial,
m(x) is the message polynomial,
g(x) is the generator polynomial.
Codeword format c = [message, parity] → systematic code-word.
If the message and parity bits are not in this separated order → non-systematic code-word.
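A minimal sketch of equation (1), multiplying m(x) by g(x) over GF(2); the bit ordering (message written MSB first, polynomial coefficients stored lowest degree first) is an assumption:

```python
# Minimal sketch: non-systematic cyclic encoding C(x) = m(x).g(x) over GF(2).
# Polynomials are bit lists, lowest degree first: x^3 + x^2 + 1 -> [1, 0, 1, 1].
def poly_mul_gf2(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj      # modulo-2 (XOR) accumulation
    return out

g = [1, 0, 1, 1]                       # g(x) = x^3 + x^2 + 1
m = [0, 1, 0, 1]                       # message 1010 (MSB first) -> m(x) = x^3 + x
print(poly_mul_gf2(m, g))              # coefficients of C(x), i.e. the codeword
```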
Numerical Questions based on Cyclic Codes
Q. The code word is C = 10111; find the cyclic (code-word) polynomial.
Q. Construct the non-systematic cyclic (7,4) code using the generator polynomial g(x) = x^3 + x^2 + 1 with message (1010).
Q. If the generator polynomial is g(x) = x^3 + x^2 + 1, find out
(a) the cyclic encoder
(b) the codewords if the message is (1110).
Quiz
Q1. Check whether the given code is cyclic or not:
c = {0000, 1100, 1001, 1111}
a. Yes
b. No
c. Cannot determine
d. None
Q2. Convert the given message bits m = (0110) into polynomial form
a. x^3 + x
b. x^3 + x + 1
c. x^2 + x
d. None
Cyclic Code for Systematic Codewords
In a systematic cyclic code the codeword is
C = [message, parity]
C(x) = x^(n−k) m(x) + p(x)
where p(x) = Rem[ x^(n−k) m(x) / g(x) ],
m(x) is the message polynomial,
g(x) is the generator polynomial,
C(x) is the codeword polynomial.
Q. Construct the systematic cyclic (7,4) code using the generator polynomial g(x) = x^3 + x^2 + 1 with message (1010).
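A minimal sketch of the systematic construction: append n − k zeros to the message (multiply by x^(n−k)), divide by g(x), and attach the remainder as parity. MSB-first bit order is assumed:

```python
# Minimal sketch: systematic cyclic encoding via polynomial division over GF(2).
# Bit strings are MSB first; g = '1101' means g(x) = x^3 + x^2 + 1.
def poly_remainder(data_bits, gen_bits):
    data = list(map(int, data_bits))
    gen = list(map(int, gen_bits))
    work = data + [0] * (len(gen) - 1)       # append n-k zeros: x^(n-k) m(x)
    for i in range(len(data)):
        if work[i]:                          # long division, XOR at each step
            for j, gb in enumerate(gen):
                work[i + j] ^= gb
    return work[len(data):]                  # remainder p(x), n-k bits

m, g = '1010', '1101'                        # message and g(x) = x^3 + x^2 + 1
p = poly_remainder(m, g)
codeword = m + ''.join(map(str, p))          # C = [message, parity]
print(codeword)                              # 1010001 for this example
```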
Cyclic Code Generator Matrix
For a (7,4) cyclic code with G = [Ik : P], the rows of the parity matrix P are the remainders of dividing x^(n−i) by g(x):
1st row of P: Rem[ x^6 / g(x) ]
2nd row of P: Rem[ x^5 / g(x) ]
3rd row of P: Rem[ x^4 / g(x) ]
4th row of P: Rem[ x^3 / g(x) ]
Q. If the generator polynomial of the (7,4) cyclic code is g(x) = x^3 + x + 1, construct the generator matrix.
Hint
Use the generator matrix format G = [Ik : P].
Find the rows of the parity matrix P using the expressions above:
1st row of P: Rem[ x^6 / g(x) ]
2nd row of P: Rem[ x^5 / g(x) ]
3rd row of P: Rem[ x^4 / g(x) ]
4th row of P: Rem[ x^3 / g(x) ]
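A minimal sketch that builds the rows of G from the remainders Rem[x^6 / g(x)] down to Rem[x^3 / g(x)] for g(x) = x^3 + x + 1 (MSB-first bit lists assumed):

```python
# Minimal sketch: build the (7,4) systematic generator matrix from g(x) = x^3 + x + 1.
def rem_gf2(dividend, gen):
    work = dividend[:]                        # bit lists, MSB first
    for i in range(len(dividend) - len(gen) + 1):
        if work[i]:
            for j, gb in enumerate(gen):
                work[i + j] ^= gb
    return work[-(len(gen) - 1):]             # the n-k remainder bits

n, k = 7, 4
g = [1, 0, 1, 1]                              # g(x) = x^3 + x + 1 (MSB first)
rows = []
for i in range(k):
    x_power = [0] * n
    x_power[i] = 1                            # represents x^(n-1-i): x^6 ... x^3
    rows.append([1 if j == i else 0 for j in range(k)] + rem_gf2(x_power, g))

for r in rows:                                # G = [I4 : P]
    print(r)
```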
Quiz
Q1. The parity check matrix (H) is represented in the form
1. H = [I : P^T]
2. H = [P^T : I]
3. H = [−P^T : I]
4. None
Q2. Code vector is defined in this form
A. C=D.G
B. C=D/G
C. C=D+G
D. C=D-G
Q3. Find the minimum Hamming distance for the following codes
11001100
01010101
10101000
A. 4
B. 3
C. 2
D. 1
Types of Codes
Linear codes: if two code words of the linear code are added by modulo-2 arithmetic, the result is a third codeword in the code.
Nonlinear codes: the modulo-2 sum of two codewords does not necessarily produce a third codeword.
Block codes: n is the number of code bits, k the number of message bits, and n − k the redundant (parity) bits.
Convolution codes: the coding operation is a discrete-time convolution of the input sequence; the encoder accepts message bits continuously and generates the encoded sequence continuously.
Error Correction Capabilities
Distance requirement → errors detected/corrected per word:
Dmin ≥ s + 1 → detects up to s errors
Dmin ≥ 2t + 1 → corrects up to t errors
Dmin ≥ t + s + 1 → corrects up to t errors and detects s > t errors
Useful formula
Q. An error control code has the following parity check matrix
H =
a) Determine the generator matrix G.
b) Find the codeword that begins with 101.
c) Decode the received codeword 110110. Comment on the error detection and correction capability of this code.
d) What is the relationship between G and H? Verify the same.
Numerical
Generator matrix and Parity Check matrix
Dual Codes
For an (n, k) linear block code: G is the generator matrix and H is the parity check matrix.
For the dual (n, n − k) linear block code: H serves as the generator matrix and G as the parity check matrix.
Q. For a certain G matrix, find all the code words of its dual code.
Error Correcting Codes
Two types:
1. Block codes (in block codes, memory is not present)
2. Convolution codes (in convolutional codes, memory is present)
The block codes considered here are linear block codes.
Linear block codes: a code is said to be linear if the sum (modulo-2 sum of corresponding bits) of any two code vectors results in another code vector.
Error Correcting Codes
K - number of message bits
n - number of code bits
(n-k) = q (number of parity bits)
Code rate or code efficiency r= k/n
If the code rate close to 1 for high efficiency
Code word: The encoded block of n-bits
Block length: Number of bits n after encoding
Weight of the code: Number of non-zero elements in code vector.
Error Correcting Codes
Systematic block code: the codeword places the message and parity in separate fields,
[ k message bits | (n − k) parity bits ]
(equivalently drawn with the parity bits first).
Error Correcting Codes
The generator matrix of an (n, k) linear block code produces the code word C = m · G.
Size of the code word: (1 × n) = (1 × k) · (k × n)
G = [I : P]
G(k×n) = [ I(k×k) : P(k×(n−k)) ]
Numerical Questions
Q. An information source produces a sequence of independent symbols s1, s2, s3, s4, s5, s6, s7 with the probabilities given on the slide. Use the Huffman binary code and find its efficiency.
Numerical Questions
Q. Apply Huffman coding to the following message and calculate the code length, efficiency, and redundancy. A message has 8 symbols x1, x2, x3, x4, x5, x6, x7, x8 with probabilities 16/32, 4/32, 2/32, 2/32, 1/32, 4/32, 2/32, 1/32 respectively.
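A minimal sketch of Huffman coding for the 8-symbol example above, reporting average length, entropy, and efficiency (heap-based merging; the tie-breaking order is arbitrary, so individual codes may differ from a hand construction while the average length does not):

```python
import heapq
import math

# Minimal sketch: Huffman coding for the 8-symbol example above.
probs = {'x1': 16/32, 'x2': 4/32, 'x3': 2/32, 'x4': 2/32,
         'x5': 1/32, 'x6': 4/32, 'x7': 2/32, 'x8': 1/32}

# Heap entries: (probability, tie-breaker, {symbol: code-so-far}).
heap = [(p, i, {s: ''}) for i, (s, p) in enumerate(probs.items())]
heapq.heapify(heap)
counter = len(heap)
while len(heap) > 1:
    p1, _, c1 = heapq.heappop(heap)           # merge the two least probable nodes
    p2, _, c2 = heapq.heappop(heap)
    merged = {s: '0' + c for s, c in c1.items()}
    merged.update({s: '1' + c for s, c in c2.items()})
    heapq.heappush(heap, (p1 + p2, counter, merged))
    counter += 1

codes = heap[0][2]
L = sum(probs[s] * len(codes[s]) for s in probs)       # average code length
H = -sum(p * math.log2(p) for p in probs.values())     # source entropy
print(codes)
print('L =', L, 'H =', H, 'efficiency =', H / L)       # dyadic probs -> efficiency 1
```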
Numerical Problem
Q. A discrete memoryless source has 3 symbols with probabilities P1 = α and P2 = P3 = (1 − α)/2. Determine the entropy of the source and sketch its variation for different values of α.
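A minimal sketch of the resulting expression H(α) = −α log₂ α − (1 − α) log₂((1 − α)/2), using P2 = P3 = (1 − α)/2 so the probabilities sum to 1:

```python
import numpy as np
import matplotlib.pyplot as plt

# Minimal sketch: H(a) = -a*log2(a) - (1 - a)*log2((1 - a)/2).
a = np.linspace(0.001, 0.999, 500)
H = -a * np.log2(a) - (1 - a) * np.log2((1 - a) / 2)
plt.plot(a, H)
plt.xlabel('alpha')
plt.ylabel('H (bits/symbol)')
plt.show()                    # H peaks at log2(3) when alpha = 1/3
```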
Convolution Codes
Concept:
A convolutional encoder is a finite-state machine consisting of a tapped shift register with (L + 1) stages whose outputs are connected to modulo-2 adders through coefficient multipliers.
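A minimal sketch of such an encoder for an assumed rate-1/2 example with generators g1 = (1,1,1) and g2 = (1,0,1) (constraint length 3, two memory stages):

```python
# Minimal sketch: rate-1/2 convolutional encoder, generators g1 = 111, g2 = 101.
def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    state = [0, 0]                           # two memory elements (shift register)
    out = []
    for b in bits:
        window = [b] + state                 # current input + previous two bits
        v1 = sum(w & g for w, g in zip(window, g1)) % 2   # modulo-2 adder 1
        v2 = sum(w & g for w, g in zip(window, g2)) % 2   # modulo-2 adder 2
        out += [v1, v2]
        state = [b, state[0]]                # shift the register
    return out

print(conv_encode([1, 0, 1, 1]))             # -> [1, 1, 1, 0, 0, 0, 0, 1]
```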
Convolution Codes
Encoding methods: Code Tree, State Transition Diagram, Trellis Diagram
Decoding methods: Viterbi Algorithm, Sequential Decoding
Code Tree:
Main Points
1. The code tree is used at both the transmitter and the receiver.
2. The code tree branches according to the applied input.
3. To form the code tree, first generate the convolution code pattern.
4. The convolution code pattern depends on the shift register.
5. In the code tree the input must be in 0 and 1 form.
6. If the input is 0, the branch goes upward; if the input is 1, the branch goes downward.
7. At each node the code tree branch divides into two parts.
Q. Draw the code tree for the three-bit shift register if the input data is 1010; also draw the states of the encoder.
Basic Terminology in Convolution Codes
1. Code rate: r = k/n = number of message bits / number of encoded output bits
2. Code dimension: (n, k)
CRC stands for cyclic redundancy check.
At the Transmitter End
Five steps are performed to generate and attach the CRC bits:
1. Find the divisor bits (n bits).
2. Append (n − 1) zero bits to the data bits.
3. Perform the modulo-2 division and find the remainder bits.
4. The remainder bits are known as the CRC.
5. Append the CRC bits to the data.
At the Receiver End
1. At the receiver, divide the received data by the same divisor.
2. If the remainder is zero, there are no errors.
CRC Concept
Q. Find the CRC value, and check whether the received code word is correct or wrong, if the data bits are 1010101010 and the divisor polynomial is P(x) = x^4 + x^3 + 1.
CRC based Problem
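A minimal sketch for the CRC problem above, using modulo-2 long division (MSB-first bit order assumed):

```python
# Minimal sketch: CRC generation/checking for data 1010101010, P(x) = x^4 + x^3 + 1.
def mod2_div(bits, divisor):
    work = list(bits)
    for i in range(len(bits) - len(divisor) + 1):
        if work[i]:                           # long division, XOR at each step
            for j, d in enumerate(divisor):
                work[i + j] ^= d
    return work[-(len(divisor) - 1):]         # remainder (n-1 bits)

data = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]
divisor = [1, 1, 0, 0, 1]                     # x^4 + x^3 + 1 -> 11001
crc = mod2_div(data + [0] * (len(divisor) - 1), divisor)
frame = data + crc                            # transmitted frame = data + CRC
print('CRC:', crc)
print('check remainder:', mod2_div(frame, divisor))   # all zeros -> no error
```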
Trellis Diagram for Convolution Codes
 Graphical diagram
 Shows the state transition from one stage to the next
 Dotted line: represents input 1
 Dashed line: represents input 0
 m0 and m1 represent the present state of the convolution encoder
Error Correcting Coding
Linear block codes: Hamming code, cyclic code
Convolutional codes: Turbo codes, Trellis codes
Principle of Turbo Coding
 The greater the length of the code, the greater the randomness.
 The greater the randomness, the more secure the code.
 A turbo code is the parallel concatenation of convolution codes.
 There are three types of concatenation:
a) Series
b) Parallel
c) Hybrid
Techniques in Turbo Coding
 Series
 Parallel
 Hybrid
Encoder of Turbo Codes
 Parallel concatenation increases the length of the code, so the randomness increases and communication is more secure.
 In the turbo-code encoder, a feedback path increases the length of the code, and the randomness increases.
 The interleaver is represented by π (pi).
 The interleaver is used to rearrange the message bits.
Design of Encoder for Turbo Codes
Working Process of Encoder for Turbo Codes
Types of Turbo Codes
 There are two types of turbo codes:
a. Symmetric turbo codes: the constituent encoders are the same.
b. Asymmetric turbo codes: the constituent encoders are different.
Concept of Puncturing
 Puncturing reduces the number of output bits.
 Turbo codes usually produce three outputs; with puncturing the output is reduced and the code rate is increased.
 At even time instants, collect the y1y2 outputs → code rate = 1/2
 At odd time instants, collect the y1y3 outputs → code rate = 1/2
Multiple Turbo Codes
 A greater number of encoders
 A greater number of interleavers
 A greater number of outputs
Questions
Q1. What do you understand by convolution codes? How are they different from block
codes?
Q2. Give block diagram explain the operation of any convolution encoder.
Q3. What is the constraint length for convolution encoder?
Q4. What are code tree, code trellis and state diagram for convolution encoder?
Q5. What is the difference between code tree and trellis diagram
Q6. What do you understand by state diagram in convolution codes.
Q7. What is the difference between linear block codes and convolutional codes
Q8. Explain Viterbi algorithm and sequential decoding of convolution codes.
Q9. Compare linear block codes, cyclic codes and convolution codes by giving their advantages and disadvantages.
Difference between linear block codes and convolution codes
Convolution codes:
- Generated by the convolution between the message sequence and the generating sequences.
- Coding is bit by bit.
- Viterbi decoding is used for maximum-likelihood decoding.
- Generating sequences are used to get the code vectors.
Linear block codes:
- Generated from a generator polynomial or generator matrix.
- Coding is block by block.
- Syndrome decoding is used for maximum-likelihood decoding.
- The generating polynomial and generator matrix are used to get the code vectors.
Difference between code tree and trellis diagram
Code tree:
- Indicates the flow of the coded signal along the nodes of the tree.
- A lengthy way of representing the coding process.
- Decoding is very simple using the code tree.
- The code tree repeats after the number of stages used in the encoder.
Trellis diagram:
- Indicates the transition from the current state to the next state.
- A shorter, more compact way of representing the coding process.
- Decoding is a little more complex using the trellis diagram.
- The trellis diagram repeats at every stage; in steady state the trellis has only one stage.
Convolutional Codes
A convolution code encodes each message bit using the previous bits as well as the present bit; the memory elements are flip-flops and a shift register is present. These blocks encode in such a way that the present bit and the previous bits together produce the encoded signal.
Memory elements are present.
Encoding: the present input and the previous bits are used.
Constraint length: the number of shifts over which a single message bit influences the encoder output; in other words, it reflects the number of flip-flops present.
Q. Using the tree diagram of a 3-bit shift register, if the received code word is 01 10 11 00 00, find the error in the received code words using the Viterbi algorithm.
Numerical
Difference between convolution and block codes
Block codes:
1. The code word depends on the block of k message bits within that time unit.
2. Best suited for error detection.
Convolution codes:
1. The codeword depends on the block of k message bits and also on the preceding (N − 1) blocks of message bits.
2. Best suited for error correction.
3. Built from shift registers and modulo-2 adders.
4. The code word depends on the present message and the previous messages.
5. Code rate r = k/n.
Convolution Encoder Methods
The output of the encoder can be represented by:
Code Tree
State diagram
Trellis diagram
Q. Find the codewords of the following convolution encoder for the input 10011.
Draw the state diagram, tree diagram, and trellis diagram of the encoder.
The process continues until the shift register returns to its original state.
Tree diagram:
 Graphical representation
 Each branch of the tree represents an input, with the corresponding pair of output bits indicated on the branch
Quiz
Q1. Which of the following is not a way to represent convolution code?
a) State diagram
b) Trellis diagram
c) Tree diagram
d) Linear matrix
Q2. The code in convolution coding is generated using
a) EX-OR logic
b) AND logic
c) OR logic
d) None of the above
Q3. For decoding in convolution coding, in a code tree,
a) Diverge upward when a bit is 0 and diverge downward when the bit is 1
b) Diverge downward when a bit is 0 and diverge upward when the bit is 1
c) Diverge left when a bit is 0 and diverge right when the bit is 1
d) Diverge right when a bit is 0 and diverge left when the bit is 1
Q4. A parity check code can
a. detect a single bit error
b. correct a single bit error
c. detect two-bit error
d. correct two-bit error
Q. The Hamming distance between the words 100101101 and 011011001 is
a) 4
b) 5
c) 6
d) 7
Q. Cyclic code is a subclass of convolutional code.
True
False
Q. The method used for representing convolution encoder are
a. Trellis diagram
b. State diagram
c. Code Tree diagram
d. All of the mentioned
Q. In a tree diagram, the number of nodes ______ at successive branchings.
a. Increases by 1
b. Doubles
c. Triples
d. None of the mentioned
State Diagram
The state diagram is a graph of the possible states of the encoder and the possible transitions from one state to another.
The state diagram provides the relationship between the encoder state, the input, and the output of the convolution encoder.
Number of bits in the state = number of shift-register stages − 1
Number of possible states = 2^(number of state bits)
State Table
Columns: Present state (PS) | Input bits | Bits in shift register | Next state (NS) | Outputs
Quiz
Q1. Which among the below stated logical circuits are present in encoder and decoder used for the
implementation of cyclic codes?
A. Shift Registers
B. Modulo-2 Adders
C. Counters
D. Multiplexers
a. A & B
b. C & D
c. A & C
d. B & D
Q2. State diagram provide the relationship
a. Input only
b. Output only
c. encoder state, input and output
d. None
Q3. Number of bits in state is defined as
a. number of shift register +1
b. number of shift register +2
c. number of shift register -1
d. None
Quiz
Q1. In convolution code memory element is not present
a. True
b. False
Q2. In convolution code encoding is depend on
a. Only on Present state
b. Only on Next state
c. Both Present and next state
d. Not defined
Q3. Trellis diagram is shorter or compact way of representing coding process
a. True
b. False
Q4. In state diagram represent graph
a. Only possible states of the encoder
b. Only possible transition from one state to another
c. Both possible state and possible transition.
d. None
Trellis Diagram
In the tree diagram the number of nodes at each stage increases (1, 2, 4, 8, 16, ...).
The tree becomes repetitive after the first three branches.
To remove this redundancy, a structure called the trellis diagram is used.
The trellis diagram is a way to show the transitions between the various states as time evolves.
In the trellis diagram the redundancy is removed.
All the states are represented on the vertical axis and repeated along the time axis.
A transition from one state to another is denoted by a branch connecting the two states on two adjacent vertical axes.
Trellis Diagram
Quiz
Q1. To reduce the redundancy the best method
a. Code Tree
b. State diagram
c. Trellis diagram
d. None
Q2. The trellis diagram is a way to show the transition between various states in which
a. weight evolved
b. State evolved
c. time evolved
d. None
Q3. In trellis diagram all the states on
a. Horizontal axis
b. Vertical axis
c. X- axis
d. Y-axis
Quiz
Q. In a code tree, the number of nodes at successive stages grows as
a. 3, 5, 7, 9
b. 0, 1, 3, 5, 7
c. 1, 2, 4, 8, 16
d. 2, 8, 32, 128
Q. In trellis diagram all nodes are _______
a. connected
b. Not connected
c. out
d. None
Q. What are the convolution encoder methods
a. Code tree
b. State diagram
c. Trellis diagram
d. Viterbi algorithm
A. a, b, d
B. a, b, c
C. b, c, d
D. a, c, d
Viterbi Algorithm
Used for decoding; the most efficient method of decoding.
In digital communication two factors are most important: energy and efficiency. Both are fulfilled by the Viterbi algorithm.
The main advantages of the Viterbi algorithm:
a. Highly satisfactory bit-error performance (very few errors)
b. High speed of operation (less delay, so the output is obtained quickly)
The Viterbi algorithm has low complexity and is fast.
The algorithm is based on add-compare-select: keep the best path each time at each state.
Terminology: Viterbi Algorithm
Metric: the Hamming distance between the coded sequence and the received sequence.
Surviving path: the path of the decoded signal with minimum metric.
Initialization: set the all-zero state of the trellis to zero.
In the Viterbi algorithm, using the trellis diagram, compare the paths entering each node; the path with the lower metric is retained and the other path is discarded.
Q. Find the data-word transmitted for the received word 11010111 using
Viterbi decoding algorithm
Q. A rate of 1/3 convolution code is described by g1=[111], g2=[100],
g3=[101]. Draw the encoder and code tree corresponding to this code.
Q. A rate of ½ convolution code is described by the following
generator sequence g1=[101] and g2=[110]
a. Draw the encoder diagram of this code
b. What is the constraint length of the code
c. Find the codeword for the message sequence 11001
Q. Find the data-word transmitted for the received word 11011100
using Viterbi decoding algorithm.
Practise Question Based on Convolution Code
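For the Viterbi questions above, here is a minimal hard-decision decoder sketch for the same assumed rate-1/2, constraint-length-3 encoder (g1 = (1,1,1), g2 = (1,0,1)) used in the earlier encoder sketch; it applies the add-compare-select rule at each state and returns the surviving path with the minimum metric:

```python
# Minimal sketch: hard-decision Viterbi decoding (assumed rate-1/2, 4-state example).
def conv_out(b, state, g):
    window = (b,) + state                     # current input + memory bits
    return sum(w & gi for w, gi in zip(window, g)) % 2

def viterbi(received, g1=(1, 1, 1), g2=(1, 0, 1)):
    states = [(0, 0), (0, 1), (1, 0), (1, 1)]
    metric = {s: 0 if s == (0, 0) else float('inf') for s in states}
    paths = {s: [] for s in states}
    for i in range(0, len(received), 2):      # one trellis stage per output pair
        r = received[i:i + 2]
        new_metric = {s: float('inf') for s in states}
        new_paths = {s: [] for s in states}
        for s in states:
            if metric[s] == float('inf'):
                continue                      # state not yet reachable
            for b in (0, 1):
                ns = (b, s[0])                # next state after input b
                out = (conv_out(b, s, g1), conv_out(b, s, g2))
                m = metric[s] + sum(o != x for o, x in zip(out, r))
                if m < new_metric[ns]:        # add-compare-select
                    new_metric[ns], new_paths[ns] = m, paths[s] + [b]
        metric, paths = new_metric, new_paths
    best = min(metric, key=metric.get)        # surviving path with minimum metric
    return paths[best], metric[best]

# Encoding of message 1011 is 11 10 00 01; one bit is flipped below.
bits, errs = viterbi([1, 1, 1, 1, 0, 0, 0, 1])
print(bits, 'metric =', errs)                 # expect [1, 0, 1, 1], metric = 1
```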
Q. Construct the decoding table for the single-error-correcting (7, 4) cyclic code with g(x) = 1 + x^2 + x^3. Determine the data vector transmitted for the following received vectors:
(a) 1101101
(b) 0101000
(c) 0001100
Syndrome Decoding
1. The received code contains an error if the syndrome vector is
a) Zero
b) Non zero
c) Infinity
d) None of the mentioned
2. The syndrome is calculated by
a) H/T
b) r · H^T
c) r · H
d) None of the mentioned
3. For a (7, 4) cyclic code, the generator polynomial is g(x) = 1 + x + x^3. Find the data word for the codeword 0110100.
Ans__________________
4. If the constraint length of a (n, k, L) convolutional code is defined as the number of encoder output bits
influenced by each message bit, then the constraint length is given by
1. L(n + 1)
2. n(L + 1)
3. n(L + k)
4. L(n + k)
Q. ________codes are special linear block codes with one extra property. If a codeword is
rotated the result is another codeword.
a. Nonlinear b. Convolution
c. Cyclic d. None
Q. To guarantee correction of up to 5 errors in all cases, the minimum Hamming distance in a block code
must be
a. 5 b. 6
c. 11 d. None
Q. To guarantee detection of up to 5 errors in all cases, the minimum Hamming distance in a block code
must be
a. 5 b. 6
c. 11 d. None
Q. The Hamming distance between 100 and 001 is_____
a. 2 b. 0
c. 1 d. None
Numerical Questions based on Cyclic Codes
Q. If the generator polynomial of a (7, 4) cyclic code is g(x) = 1 + x + x^3, find the codewords in non-systematic form.
Q. If the generator polynomial of a (7, 4) cyclic code is g(x) = 1 + x + x^3, find the codewords in systematic form.
Q. If the generator polynomial of the (7, 4) cyclic code is g(x) = 1 + x + x^3, construct the generator matrix.
Numerical
Q. Draw the state table and state diagram for a (2,1,2) convolution encoder with generator sequences g(1) = {1,1,1} and g(2) = {1,0,1}.
State Table
Columns: Present state (PS) | Input bits | Bits in shift register | Next state (NS) | Outputs
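A minimal sketch that prints the state table for this encoder (PS, input, NS, outputs):

```python
from itertools import product

# Minimal sketch: state table for the (2,1,2) encoder g1 = (1,1,1), g2 = (1,0,1).
g1, g2 = (1, 1, 1), (1, 0, 1)
for s in product((0, 1), repeat=2):            # present state (two memory bits)
    for b in (0, 1):                           # input bit
        window = (b,) + s
        v1 = sum(w & g for w, g in zip(window, g1)) % 2
        v2 = sum(w & g for w, g in zip(window, g2)) % 2
        print(f'PS={s} in={b} -> NS={(b, s[0])} out={(v1, v2)}')
```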
How to Draw the Convolution Encoder Diagram
Q1. Draw the encoder diagram for the (7, 3) cyclic code whose generator polynomial is g(x) = 1 + x + x^2 + x^4.
Q2. Design an encoder for the (7, 4) binary cyclic code generated by g(x) = 1 + x + x^3 and verify its operation using the message vectors (1001) and (1011).

Mais conteúdo relacionado

Mais procurados

Information Theory Coding 1
Information Theory Coding 1Information Theory Coding 1
Information Theory Coding 1Mahafuz Aveek
 
Eye diagram in Communication
Eye diagram in CommunicationEye diagram in Communication
Eye diagram in CommunicationSivanesh M
 
Digital modulation techniques...
Digital modulation techniques...Digital modulation techniques...
Digital modulation techniques...Nidhi Baranwal
 
Wireless Channels Capacity
Wireless Channels CapacityWireless Channels Capacity
Wireless Channels CapacityOka Danil
 
Parameters of multipath channel
Parameters of multipath channelParameters of multipath channel
Parameters of multipath channelNaveen Kumar
 
Decimation and Interpolation
Decimation and InterpolationDecimation and Interpolation
Decimation and InterpolationFernando Ojeda
 
Linear block coding
Linear block codingLinear block coding
Linear block codingjknm
 
Digital base band modulation
Digital base band modulationDigital base band modulation
Digital base band modulationPrajakta8895
 
Digital Communication: Channel Coding
Digital Communication: Channel CodingDigital Communication: Channel Coding
Digital Communication: Channel CodingDr. Sanjay M. Gulhane
 
M ary psk modulation
M ary psk modulationM ary psk modulation
M ary psk modulationAhmed Diaa
 
Digital Data, Digital Signal | Scrambling Techniques
Digital Data, Digital Signal | Scrambling TechniquesDigital Data, Digital Signal | Scrambling Techniques
Digital Data, Digital Signal | Scrambling TechniquesBiplap Bhattarai
 
Baseband transmission
Baseband transmissionBaseband transmission
Baseband transmissionPunk Pankaj
 
Introduction to Analog signal
Introduction to Analog signalIntroduction to Analog signal
Introduction to Analog signalHirdesh Vishwdewa
 

Mais procurados (20)

Information Theory Coding 1
Information Theory Coding 1Information Theory Coding 1
Information Theory Coding 1
 
Chap 3
Chap 3Chap 3
Chap 3
 
Convolution Codes
Convolution CodesConvolution Codes
Convolution Codes
 
Eye diagram in Communication
Eye diagram in CommunicationEye diagram in Communication
Eye diagram in Communication
 
Digital modulation techniques...
Digital modulation techniques...Digital modulation techniques...
Digital modulation techniques...
 
Wireless Channels Capacity
Wireless Channels CapacityWireless Channels Capacity
Wireless Channels Capacity
 
Information theory
Information theoryInformation theory
Information theory
 
Parameters of multipath channel
Parameters of multipath channelParameters of multipath channel
Parameters of multipath channel
 
Decimation and Interpolation
Decimation and InterpolationDecimation and Interpolation
Decimation and Interpolation
 
Linear block coding
Linear block codingLinear block coding
Linear block coding
 
Source coding
Source coding Source coding
Source coding
 
Digital base band modulation
Digital base band modulationDigital base band modulation
Digital base band modulation
 
Digital Communication: Channel Coding
Digital Communication: Channel CodingDigital Communication: Channel Coding
Digital Communication: Channel Coding
 
Information theory
Information theoryInformation theory
Information theory
 
linear codes and cyclic codes
linear codes and cyclic codeslinear codes and cyclic codes
linear codes and cyclic codes
 
M ary psk modulation
M ary psk modulationM ary psk modulation
M ary psk modulation
 
communication system ch1
communication system ch1communication system ch1
communication system ch1
 
Digital Data, Digital Signal | Scrambling Techniques
Digital Data, Digital Signal | Scrambling TechniquesDigital Data, Digital Signal | Scrambling Techniques
Digital Data, Digital Signal | Scrambling Techniques
 
Baseband transmission
Baseband transmissionBaseband transmission
Baseband transmission
 
Introduction to Analog signal
Introduction to Analog signalIntroduction to Analog signal
Introduction to Analog signal
 

Semelhante a Information Theory and Coding

Unit I.pptx INTRODUCTION TO DIGITAL COMMUNICATION
Unit I.pptx INTRODUCTION TO DIGITAL COMMUNICATIONUnit I.pptx INTRODUCTION TO DIGITAL COMMUNICATION
Unit I.pptx INTRODUCTION TO DIGITAL COMMUNICATIONrubini Rubini
 
Information theory & coding PPT Full Syllabus.pptx
Information theory & coding PPT Full Syllabus.pptxInformation theory & coding PPT Full Syllabus.pptx
Information theory & coding PPT Full Syllabus.pptxprernaguptaec
 
INFORMATION_THEORY.pdf
INFORMATION_THEORY.pdfINFORMATION_THEORY.pdf
INFORMATION_THEORY.pdftemmy7
 
Unit-1_Digital_Communication-Information_Theory.pptx
Unit-1_Digital_Communication-Information_Theory.pptxUnit-1_Digital_Communication-Information_Theory.pptx
Unit-1_Digital_Communication-Information_Theory.pptxKIRUTHIKAAR2
 
Unit-1_Digital_Communication-Information_Theory.pptx
Unit-1_Digital_Communication-Information_Theory.pptxUnit-1_Digital_Communication-Information_Theory.pptx
Unit-1_Digital_Communication-Information_Theory.pptxKIRUTHIKAAR2
 
Huffman&Shannon-multimedia algorithms.ppt
Huffman&Shannon-multimedia algorithms.pptHuffman&Shannon-multimedia algorithms.ppt
Huffman&Shannon-multimedia algorithms.pptPrincessSaro
 
Data Communication & Computer network: Shanon fano coding
Data Communication & Computer network: Shanon fano codingData Communication & Computer network: Shanon fano coding
Data Communication & Computer network: Shanon fano codingDr Rajiv Srivastava
 
Lecture1
Lecture1Lecture1
Lecture1ntpc08
 
Unit I DIGITAL COMMUNICATION-INFORMATION THEORY.pdf
Unit I DIGITAL COMMUNICATION-INFORMATION THEORY.pdfUnit I DIGITAL COMMUNICATION-INFORMATION THEORY.pdf
Unit I DIGITAL COMMUNICATION-INFORMATION THEORY.pdfvani374987
 
A mathematical theory of communication
A mathematical theory of communicationA mathematical theory of communication
A mathematical theory of communicationSergio Zaina
 
Neutrino communication for insider trading
Neutrino communication for insider tradingNeutrino communication for insider trading
Neutrino communication for insider tradingSon Cao
 
Module 1 till huffman coding5c-converted.pdf
Module 1 till huffman coding5c-converted.pdfModule 1 till huffman coding5c-converted.pdf
Module 1 till huffman coding5c-converted.pdfAmoghR3
 
A Mathematical Theory of Communication
A Mathematical Theory of CommunicationA Mathematical Theory of Communication
A Mathematical Theory of CommunicationSergey Oboroc
 

Semelhante a Information Theory and Coding (20)

Unit I.pptx INTRODUCTION TO DIGITAL COMMUNICATION
Unit I.pptx INTRODUCTION TO DIGITAL COMMUNICATIONUnit I.pptx INTRODUCTION TO DIGITAL COMMUNICATION
Unit I.pptx INTRODUCTION TO DIGITAL COMMUNICATION
 
DC@UNIT 2 ppt.ppt
DC@UNIT 2 ppt.pptDC@UNIT 2 ppt.ppt
DC@UNIT 2 ppt.ppt
 
UNIT-2.pdf
UNIT-2.pdfUNIT-2.pdf
UNIT-2.pdf
 
Information theory & coding PPT Full Syllabus.pptx
Information theory & coding PPT Full Syllabus.pptxInformation theory & coding PPT Full Syllabus.pptx
Information theory & coding PPT Full Syllabus.pptx
 
INFORMATION_THEORY.pdf
INFORMATION_THEORY.pdfINFORMATION_THEORY.pdf
INFORMATION_THEORY.pdf
 
Unit-1_Digital_Communication-Information_Theory.pptx
Unit-1_Digital_Communication-Information_Theory.pptxUnit-1_Digital_Communication-Information_Theory.pptx
Unit-1_Digital_Communication-Information_Theory.pptx
 
Unit-1_Digital_Communication-Information_Theory.pptx
Unit-1_Digital_Communication-Information_Theory.pptxUnit-1_Digital_Communication-Information_Theory.pptx
Unit-1_Digital_Communication-Information_Theory.pptx
 
Huffman&Shannon-multimedia algorithms.ppt
Huffman&Shannon-multimedia algorithms.pptHuffman&Shannon-multimedia algorithms.ppt
Huffman&Shannon-multimedia algorithms.ppt
 
Data Communication & Computer network: Shanon fano coding
Data Communication & Computer network: Shanon fano codingData Communication & Computer network: Shanon fano coding
Data Communication & Computer network: Shanon fano coding
 
Lecture1
Lecture1Lecture1
Lecture1
 
Unit I DIGITAL COMMUNICATION-INFORMATION THEORY.pdf
Unit I DIGITAL COMMUNICATION-INFORMATION THEORY.pdfUnit I DIGITAL COMMUNICATION-INFORMATION THEORY.pdf
Unit I DIGITAL COMMUNICATION-INFORMATION THEORY.pdf
 
A mathematical theory of communication
A mathematical theory of communicationA mathematical theory of communication
A mathematical theory of communication
 
Shannon1948
Shannon1948Shannon1948
Shannon1948
 
Shannon1948
Shannon1948Shannon1948
Shannon1948
 
Neutrino communication for insider trading
Neutrino communication for insider tradingNeutrino communication for insider trading
Neutrino communication for insider trading
 
Unit IV_SS_MMS.ppt
Unit IV_SS_MMS.pptUnit IV_SS_MMS.ppt
Unit IV_SS_MMS.ppt
 
Module 1 till huffman coding5c-converted.pdf
Module 1 till huffman coding5c-converted.pdfModule 1 till huffman coding5c-converted.pdf
Module 1 till huffman coding5c-converted.pdf
 
info
infoinfo
info
 
Entropy
EntropyEntropy
Entropy
 
A Mathematical Theory of Communication
A Mathematical Theory of CommunicationA Mathematical Theory of Communication
A Mathematical Theory of Communication
 

Mais de VIT-AP University

Cost-effective architecture of decoder circuits and futuristic scope in the e...
Cost-effective architecture of decoder circuits and futuristic scope in the e...Cost-effective architecture of decoder circuits and futuristic scope in the e...
Cost-effective architecture of decoder circuits and futuristic scope in the e...VIT-AP University
 
Efficient architecture for arithmetic designs using perpendicular NanoMagneti...
Efficient architecture for arithmetic designs using perpendicular NanoMagneti...Efficient architecture for arithmetic designs using perpendicular NanoMagneti...
Efficient architecture for arithmetic designs using perpendicular NanoMagneti...VIT-AP University
 
An in-depth study of the electrical characterization of supercapacitors for r...
An in-depth study of the electrical characterization of supercapacitors for r...An in-depth study of the electrical characterization of supercapacitors for r...
An in-depth study of the electrical characterization of supercapacitors for r...VIT-AP University
 
Algorithm of Reading Scientific Research Article
Algorithm of Reading Scientific Research Article Algorithm of Reading Scientific Research Article
Algorithm of Reading Scientific Research Article VIT-AP University
 
How to Calculate the H-index and Effective Response of Reviewer
How to Calculate the H-index and Effective Response of ReviewerHow to Calculate the H-index and Effective Response of Reviewer
How to Calculate the H-index and Effective Response of ReviewerVIT-AP University
 
Writing and Good Abstract to Improve Your Article Quality
Writing and Good Abstract to Improve Your Article QualityWriting and Good Abstract to Improve Your Article Quality
Writing and Good Abstract to Improve Your Article QualityVIT-AP University
 
Fundamental of Electrical and Electronics Engineering.pdf
Fundamental of Electrical and Electronics Engineering.pdfFundamental of Electrical and Electronics Engineering.pdf
Fundamental of Electrical and Electronics Engineering.pdfVIT-AP University
 
Content Addressable Memory Design in 3D pNML for Energy-Aware Sustainable Com...
Content Addressable Memory Design in 3D pNML for Energy-Aware Sustainable Com...Content Addressable Memory Design in 3D pNML for Energy-Aware Sustainable Com...
Content Addressable Memory Design in 3D pNML for Energy-Aware Sustainable Com...VIT-AP University
 
Performance Evaluation & Design Methodologies for Automated 32 Bit CRC Checki...
Performance Evaluation & Design Methodologies for Automated 32 Bit CRC Checki...Performance Evaluation & Design Methodologies for Automated 32 Bit CRC Checki...
Performance Evaluation & Design Methodologies for Automated 32 Bit CRC Checki...VIT-AP University
 
Sensor Energy Optimization Using Fuzzy Logic in Wireless Sensor Networking
Sensor Energy Optimization Using Fuzzy Logic in Wireless Sensor NetworkingSensor Energy Optimization Using Fuzzy Logic in Wireless Sensor Networking
Sensor Energy Optimization Using Fuzzy Logic in Wireless Sensor NetworkingVIT-AP University
 
Approach to design a high performance fault-tolerant reversible ALU
Approach to design a high performance fault-tolerant reversible ALUApproach to design a high performance fault-tolerant reversible ALU
Approach to design a high performance fault-tolerant reversible ALUVIT-AP University
 
Novel conservative reversible error control circuits based on molecular QCA
Novel conservative reversible error control circuits based on molecular QCANovel conservative reversible error control circuits based on molecular QCA
Novel conservative reversible error control circuits based on molecular QCAVIT-AP University
 
A modular approach for testable conservative reversible multiplexer circuit f...
A modular approach for testable conservative reversible multiplexer circuit f...A modular approach for testable conservative reversible multiplexer circuit f...
A modular approach for testable conservative reversible multiplexer circuit f...VIT-AP University
 
Analysis on Fault Mapping of Reversible Gates with Extended Hardware Descript...
Analysis on Fault Mapping of Reversible Gates with Extended Hardware Descript...Analysis on Fault Mapping of Reversible Gates with Extended Hardware Descript...
Analysis on Fault Mapping of Reversible Gates with Extended Hardware Descript...VIT-AP University
 
A Novel and Efficient Design for Squaring Units by Quantum-Dot Cellular Automata
A Novel and Efficient Design for Squaring Units by Quantum-Dot Cellular AutomataA Novel and Efficient Design for Squaring Units by Quantum-Dot Cellular Automata
A Novel and Efficient Design for Squaring Units by Quantum-Dot Cellular AutomataVIT-AP University
 
A Redundant Adder Architecture in Ternary Quantum-Dot Cellular Automata
A Redundant Adder Architecture in Ternary Quantum-Dot Cellular AutomataA Redundant Adder Architecture in Ternary Quantum-Dot Cellular Automata
A Redundant Adder Architecture in Ternary Quantum-Dot Cellular AutomataVIT-AP University
 
Implementation of Non-restoring Reversible Divider Using a Quantum-Dot Cellul...
Implementation of Non-restoring Reversible Divider Using a Quantum-Dot Cellul...Implementation of Non-restoring Reversible Divider Using a Quantum-Dot Cellul...
Implementation of Non-restoring Reversible Divider Using a Quantum-Dot Cellul...VIT-AP University
 
An Explicit Cell-Based Nesting Robust Architecture and Analysis of Full Adder
An Explicit Cell-Based Nesting Robust Architecture and Analysis of Full AdderAn Explicit Cell-Based Nesting Robust Architecture and Analysis of Full Adder
An Explicit Cell-Based Nesting Robust Architecture and Analysis of Full AdderVIT-AP University
 

Mais de VIT-AP University (20)

Quantum Computing
Quantum ComputingQuantum Computing
Quantum Computing
 
Cost-effective architecture of decoder circuits and futuristic scope in the e...
Cost-effective architecture of decoder circuits and futuristic scope in the e...Cost-effective architecture of decoder circuits and futuristic scope in the e...
Cost-effective architecture of decoder circuits and futuristic scope in the e...
 
Efficient architecture for arithmetic designs using perpendicular NanoMagneti...
Efficient architecture for arithmetic designs using perpendicular NanoMagneti...Efficient architecture for arithmetic designs using perpendicular NanoMagneti...
Efficient architecture for arithmetic designs using perpendicular NanoMagneti...
 
An in-depth study of the electrical characterization of supercapacitors for r...
An in-depth study of the electrical characterization of supercapacitors for r...An in-depth study of the electrical characterization of supercapacitors for r...
An in-depth study of the electrical characterization of supercapacitors for r...
 
Algorithm of Reading Scientific Research Article
Algorithm of Reading Scientific Research Article Algorithm of Reading Scientific Research Article
Algorithm of Reading Scientific Research Article
 
How to Calculate the H-index and Effective Response of Reviewer
How to Calculate the H-index and Effective Response of ReviewerHow to Calculate the H-index and Effective Response of Reviewer
How to Calculate the H-index and Effective Response of Reviewer
 
Importance of ORCHID ID
Importance of ORCHID IDImportance of ORCHID ID
Importance of ORCHID ID
 
Writing and Good Abstract to Improve Your Article Quality
Writing and Good Abstract to Improve Your Article QualityWriting and Good Abstract to Improve Your Article Quality
Writing and Good Abstract to Improve Your Article Quality
 
Fundamental of Electrical and Electronics Engineering.pdf
Fundamental of Electrical and Electronics Engineering.pdfFundamental of Electrical and Electronics Engineering.pdf
Fundamental of Electrical and Electronics Engineering.pdf
 
Content Addressable Memory Design in 3D pNML for Energy-Aware Sustainable Com...
Content Addressable Memory Design in 3D pNML for Energy-Aware Sustainable Com...Content Addressable Memory Design in 3D pNML for Energy-Aware Sustainable Com...
Content Addressable Memory Design in 3D pNML for Energy-Aware Sustainable Com...
 
Performance Evaluation & Design Methodologies for Automated 32 Bit CRC Checki...
Performance Evaluation & Design Methodologies for Automated 32 Bit CRC Checki...Performance Evaluation & Design Methodologies for Automated 32 Bit CRC Checki...
Performance Evaluation & Design Methodologies for Automated 32 Bit CRC Checki...
 
Sensor Energy Optimization Using Fuzzy Logic in Wireless Sensor Networking
Sensor Energy Optimization Using Fuzzy Logic in Wireless Sensor NetworkingSensor Energy Optimization Using Fuzzy Logic in Wireless Sensor Networking
Sensor Energy Optimization Using Fuzzy Logic in Wireless Sensor Networking
 
Approach to design a high performance fault-tolerant reversible ALU
Approach to design a high performance fault-tolerant reversible ALUApproach to design a high performance fault-tolerant reversible ALU
Approach to design a high performance fault-tolerant reversible ALU
 
Novel conservative reversible error control circuits based on molecular QCA
Novel conservative reversible error control circuits based on molecular QCANovel conservative reversible error control circuits based on molecular QCA
Novel conservative reversible error control circuits based on molecular QCA
 
A modular approach for testable conservative reversible multiplexer circuit f...
A modular approach for testable conservative reversible multiplexer circuit f...A modular approach for testable conservative reversible multiplexer circuit f...
A modular approach for testable conservative reversible multiplexer circuit f...
 
Analysis on Fault Mapping of Reversible Gates with Extended Hardware Descript...
Analysis on Fault Mapping of Reversible Gates with Extended Hardware Descript...Analysis on Fault Mapping of Reversible Gates with Extended Hardware Descript...
Analysis on Fault Mapping of Reversible Gates with Extended Hardware Descript...
 
A Novel and Efficient Design for Squaring Units by Quantum-Dot Cellular Automata
A Novel and Efficient Design for Squaring Units by Quantum-Dot Cellular AutomataA Novel and Efficient Design for Squaring Units by Quantum-Dot Cellular Automata
A Novel and Efficient Design for Squaring Units by Quantum-Dot Cellular Automata
 
A Redundant Adder Architecture in Ternary Quantum-Dot Cellular Automata
A Redundant Adder Architecture in Ternary Quantum-Dot Cellular AutomataA Redundant Adder Architecture in Ternary Quantum-Dot Cellular Automata
A Redundant Adder Architecture in Ternary Quantum-Dot Cellular Automata
 
Implementation of Non-restoring Reversible Divider Using a Quantum-Dot Cellul...
Implementation of Non-restoring Reversible Divider Using a Quantum-Dot Cellul...Implementation of Non-restoring Reversible Divider Using a Quantum-Dot Cellul...
Implementation of Non-restoring Reversible Divider Using a Quantum-Dot Cellul...
 
An Explicit Cell-Based Nesting Robust Architecture and Analysis of Full Adder
An Explicit Cell-Based Nesting Robust Architecture and Analysis of Full AdderAn Explicit Cell-Based Nesting Robust Architecture and Analysis of Full Adder
An Explicit Cell-Based Nesting Robust Architecture and Analysis of Full Adder
 

Último

Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...
Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...
Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...roncy bisnoi
 
Online banking management system project.pdf
Online banking management system project.pdfOnline banking management system project.pdf
Online banking management system project.pdfKamal Acharya
 
Thermal Engineering -unit - III & IV.ppt
Thermal Engineering -unit - III & IV.pptThermal Engineering -unit - III & IV.ppt
Thermal Engineering -unit - III & IV.pptDineshKumar4165
 
VIP Model Call Girls Kothrud ( Pune ) Call ON 8005736733 Starting From 5K to ...
VIP Model Call Girls Kothrud ( Pune ) Call ON 8005736733 Starting From 5K to ...VIP Model Call Girls Kothrud ( Pune ) Call ON 8005736733 Starting From 5K to ...
VIP Model Call Girls Kothrud ( Pune ) Call ON 8005736733 Starting From 5K to ...SUHANI PANDEY
 
VIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 BookingVIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 Bookingdharasingh5698
 
BSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptx
BSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptxBSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptx
BSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptxfenichawla
 
Thermal Engineering Unit - I & II . ppt
Thermal Engineering  Unit - I & II . pptThermal Engineering  Unit - I & II . ppt
Thermal Engineering Unit - I & II . pptDineshKumar4165
 
UNIT-IFLUID PROPERTIES & FLOW CHARACTERISTICS
UNIT-IFLUID PROPERTIES & FLOW CHARACTERISTICSUNIT-IFLUID PROPERTIES & FLOW CHARACTERISTICS
UNIT-IFLUID PROPERTIES & FLOW CHARACTERISTICSrknatarajan
 
Thermal Engineering-R & A / C - unit - V
Thermal Engineering-R & A / C - unit - VThermal Engineering-R & A / C - unit - V
Thermal Engineering-R & A / C - unit - VDineshKumar4165
 
UNIT-III FMM. DIMENSIONAL ANALYSIS
UNIT-III FMM.        DIMENSIONAL ANALYSISUNIT-III FMM.        DIMENSIONAL ANALYSIS
UNIT-III FMM. DIMENSIONAL ANALYSISrknatarajan
 
UNIT-II FMM-Flow Through Circular Conduits
UNIT-II FMM-Flow Through Circular ConduitsUNIT-II FMM-Flow Through Circular Conduits
UNIT-II FMM-Flow Through Circular Conduitsrknatarajan
 
Call Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance BookingCall Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance Bookingroncy bisnoi
 
Double rodded leveling 1 pdf activity 01
Double rodded leveling 1 pdf activity 01Double rodded leveling 1 pdf activity 01
Double rodded leveling 1 pdf activity 01KreezheaRecto
 
PVC VS. FIBERGLASS (FRP) GRAVITY SEWER - UNI BELL
PVC VS. FIBERGLASS (FRP) GRAVITY SEWER - UNI BELLPVC VS. FIBERGLASS (FRP) GRAVITY SEWER - UNI BELL
PVC VS. FIBERGLASS (FRP) GRAVITY SEWER - UNI BELLManishPatel169454
 
Coefficient of Thermal Expansion and their Importance.pptx

Information Theory and Coding

• 14. Numerical Problems
Q1. In a binary system, '0' occurs with probability 1/4 and '1' occurs with probability 3/4. Calculate the amount of information conveyed by each bit. Ans. I1 = 2 bits, I2 = 0.415 bits
Q2. If there are M equally likely and independent messages, where M = 2^N and N is an integer, prove that the amount of information carried by each message is I = N bits.
Q3. Prove the following statement: "If the receiver knows the message being transmitted, then the amount of information carried is zero."
Q4. If I1 is the information carried by message m1 and I2 is the information carried by message m2, prove that the amount of information carried compositely by m1 and m2 is I1,2 = I1 + I2.
• 16. Q. A source emits one of 4 possible symbols X0 to X3 during each signalling interval. The symbols occur with the probabilities given in the table. Find the amount of information gained by observing the source emitting each of these symbols, and also find the entropy of the source.
Symbol:      X0        X1        X2        X3
Probability: P0 = 0.4  P1 = 0.3  P2 = 0.2  P3 = 0.1
Ans. I0 = 1.322 bits, I1 = 1.737 bits, I2 = 2.322 bits, I3 = 3.322 bits; H(x) = 1.84 bits/msg symbol
• 17. Information Rate (R)
It is the rate at which information is generated: R = r · H
where r = rate at which messages are generated (messages/second) and H = entropy, or average information (information bits/message).
R = r (messages/second) × H (information bits/message)
Unit of information rate: information bits/second.
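As a quick illustration of R = r · H in code, here is a minimal Python sketch; the message probabilities and the rate r below are illustrative assumptions, not values from this slide.

import math

def entropy(probs):
    # Average information H = sum of p * log2(1/p), in bits/message
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

probs = [1/8, 3/8, 3/8, 1/8]   # assumed message probabilities
r = 1000                       # assumed message rate, messages/second
H = entropy(probs)             # about 1.81 bits/message
print(f"H = {H:.3f} bits/message, R = {r * H:.0f} bits/sec")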
  • 18. Q1. The entropy of sure event is a. Zero b. One c. Infinity d. None Q2. The entropy of impossible event is a. Zero b. One c. Infinity d. None Q3. Entropy is maximum when probability a. P = 0 b. P = 1 c. P = ∞ d. P = 1/2 Q4. How to define the information rate a. R = r/H b. R = H/r c. R = r. H d. None Solution of Quiz
• 19. Q5. If the bandwidth is B, then what is the Nyquist rate?
a. B/2 b. B c. B/4 d. 2B
Q6. Calculate the information when P = 1/4
a. 4 bits b. 3 bits c. 1 bit d. 2 bits
Fill in the blanks
Q1. Entropy means ____degree of randomness____
Q2. Entropy is maximum when probability ____p = 1/2____
Q3. Unit of information ____bits____
Q4. If the probability of occurrence is less, then the information is ____more____
• 20. Q1. a) An analog signal band-limited to B Hz is sampled at the Nyquist rate. The samples are quantized into 4 levels, each level representing one message; thus there are 4 messages. The probabilities of occurrence of the 4 levels are P1 = P4 = 1/8 and P2 = P3 = 3/8. Find the information rate of the source.
b) For the same transmission scheme, calculate the information rate if all messages are equally likely.
Ans. a) R = 3.6B bits/sec, b) R = 4B bits/sec
Numerical Problem
• 21. Q. If a source emits m messages [m1, m2, m3, ...] with probabilities [p1, p2, p3, ...], calculate the entropy when all the messages are equiprobable.
Q. A source produces 4 symbols with probabilities 1/2, 1/4, 1/8 and 1/8. For this source a practical coding scheme has an average codeword length of 2 bits/symbol. Calculate the efficiency of the code.
Numerical Problem
• 22. Numerical Problems
Q. For a discrete memoryless source there are 3 symbols with probabilities P1 = α and P2 = P3. Determine the entropy of the source and sketch its variation for different values of α.
Q. Find the entropy, in nats/symbol and Hartleys/symbol, of a source that emits one of four symbols A, B, C, D in a statistically independent sequence with probabilities 1/2, 1/4, 1/8 and 1/8.
Ans. H(s) = 1.2135 nats/symbol, 0.5268 Hartleys/symbol
• 23. Q. For a discrete memoryless source there are 3 symbols with probabilities P1 = α and P2 = P3. Determine the entropy of the source and sketch its variation for different values of α.
• 24. Q. Find the entropy, in nats/symbol and Hartleys/symbol, of a source that emits one of four symbols A, B, C, D in a statistically independent sequence with probabilities 1/2, 1/4, 1/8 and 1/8.
• 25. Q. A source transmits two independent messages with probabilities P and (1 − P) respectively. Prove that the entropy is maximum when both messages are equally likely. Plot the entropy of the source versus probability (0 < P < 1).
Numerical Problem
• 26. Q. A source transmits two independent messages with probabilities P and (1 − P) respectively. Prove that the entropy is maximum when both messages are equally likely. Plot the entropy of the source versus probability (0 < P < 1).
• 27. Numerical Problem
Q. Consider a telegraph source having two elements, dot and dash. The dot duration is 0.2 sec and the dash duration is 3 times the dot duration. The probability of the dot occurring is twice that of the dash, and the time between symbols is 0.2 sec. Assume 1200 symbols. Calculate the information rate of the telegraph source.
Ans. R = 1.7218 bits/sec
• 28. Q. Consider a telegraph source having two elements, dot and dash. The dot duration is 0.2 sec and the dash duration is 3 times the dot duration. The probability of the dot occurring is twice that of the dash, and the time between symbols is 0.2 sec. Assume 1200 symbols. Calculate the information rate of the telegraph source.
  • 29. Q1. An event has two possible outcomes with probability P1 = 1/2 and P2 = 1/64 . The rate of information with 16 outcomes per second is: a. 38/4 bit/sec b. 38/64 bit/sec c. 38/2 bit/sec d. 38/32 bit/sec Q2. For a system having 16 distinct symbols maximum entropy is obtained when probabilities are a. 1/8 b. 1/4 c. 1/3 d. 1/16 Q3. An Analog signal band limited to 10Khz is quantized in 8 level with probability 1/4, 1/4, 1/5, 1/5, 1/10, 1/10, 1/20, and 1/20. Find the entropy and rate of information a. 2.84 bits/message, 56800 bit/sec- b. 3.42 bits/message, 6.823 bit/sec c. 1.324 bit/message. 2.768 bit/sec d. 4.567 bit/message, 8.653 bit/sec Quiz
• 30. Q1. A binary random variable X takes the value +2 or −2. The probability P(X = +2) = α. The value of α for which the entropy of X is maximum is
a. 0 b. 1 c. 2 d. 0.5-
Q2. Discrete source S1 has 4 equiprobable symbols while discrete source S2 has 16 equiprobable symbols. When the entropies of these two sources are compared, the entropy of:
a. S1 is greater than S2 b. S1 is less than S2 - c. S1 is equal to S2 d. Depends on rate of symbols/second
Quiz 3
• 31. Q3. If all the messages are equally likely then the entropy will be
a. Minimum b. Zero c. Maximum d. None
Q4. In a binary source 0s occur three times as often as 1s. What is the information contained in the 1s?
a. 0.415 bit b. 0.333 bit c. 3 bit d. 2 bit
Fill in the blanks
Q5. What is the unit of entropy? -------
Q6. What is the unit of information rate? -------
• 32. Extension of Zero Order Source
Two symbols S1 and S2 with probabilities P1 and P2.
Second-order extension of the source: (number of symbols)^(order of extension) = 2^2 = 4 possible combinations
S1S1 → P1·P1 = P1^2
S1S2 → P1·P2
S2S1 → P2·P1
S2S2 → P2·P2 = P2^2
• 33. Numerical Problems
Q. Consider a discrete memoryless source with alphabet S = {S0, S1, S2} and source statistics {0.7, 0.15, 0.15}.
a) Calculate the entropy of the source.
b) Calculate the entropy of the 2nd-order extension of the source.
Q. A source generates information with probabilities P1 = 0.1, P2 = 0.2, P3 = 0.3 and P4 = 0.4. Find the entropy of the system. What percentage of the maximum possible information is being generated by the source?
• 34. Q. Consider a discrete memoryless source with alphabet S = {S0, S1, S2} and source statistics {0.7, 0.15, 0.15}. a) Calculate the entropy of the source. b) Calculate the entropy of the 2nd-order extension of the source.
• 35. Q. A source generates information with probabilities P1 = 0.1, P2 = 0.2, P3 = 0.3 and P4 = 0.4. Find the entropy of the system. What percentage of the maximum possible information is being generated by the source?
• 36. Q1. An event has two possible outcomes with probabilities P1 = 1/2 and P2 = 1/64. Calculate the rate of information with 16 outcomes per second.
A. 24/3 B. 64/6 C. 38/4- D. None
Q2. Entropy is always
a. Negative b. Fractional c. Positive d. None
• 37. Q3. How to define the efficiency of the channel
a. η = H(s) × |H(s)|max b. η = H(s) + |H(s)|max c. η = H(s) − |H(s)|max d. η = H(s) / |H(s)|max-
Q4. How to define the redundancy (R) of the channel
a. R = 1 + η b. R = 1 / η c. R = 1 × η d. R = 1 − η
  • 38. Q1. When the base of the logarithm is 2, then the unit of measure of information is a) Bits b) Bytes c) Nats d) None of the mentioned Q2. When probability of error during transmission is 0.5, it indicates that a) Channel is very noisy b) No information is received c) Channel is very noisy & No information is received d) None of the mentioned
  • 39. Q4. When the base of the logarithm is e, the unit of measure of information is a) Bits b) Bytes c) Nats d) None of the mentioned Q5. A binary source emitting an independent sequence of 0’s and 1’s with probabilities p and (1-p) respectively. In which case probability is maximum a) P=0 b) P=1 c) P=0.5 d) P=Infinity Q6. An image use 512x512 picture elements. Each of the picture element can take any of the 4 distinguishable intensity levels. Calculate the maximum bit needed. a) 512x512x3 b) 512x512x4 c) 512x512x2 d) 512x512x1
• 40. Numerical Problems
Q. Consider a system emitting three symbols A, B and C with respective probabilities 0.7, 0.15 and 0.15. Calculate the efficiency and redundancy.
Ans. Efficiency = 74.52%, Redundancy = 25.48%
Q. Prove that the nth-order extension of entropy is represented in the form H(S^n) = n·H(S).
• 41. Q. Consider a system emitting three symbols A, B and C with respective probabilities 0.7, 0.15 and 0.15. Calculate the efficiency and redundancy.
• 42. Q. Prove that the nth-order extension of entropy is represented in the form H(S^n) = n·H(S).
• 43. Q. Prove that the nth-order extension of entropy is represented in the form H(S^n) = n·H(S).
• 44. Q3. An image uses 512x512 picture elements. Each picture element can take any of 8 distinguishable intensity levels. Calculate the maximum bits needed. Ans. 786432 bits
Numerical Problem
• 45. Q3. An image uses 512x512 picture elements. Each picture element can take any of 8 distinguishable intensity levels. Calculate the maximum bits needed.
• 46. Q. An image uses 512x512 picture elements. Each picture element can take any of 4 distinguishable intensity levels. Calculate the maximum bits needed.
Numerical Problem
• 47. Q. An image uses 512x512 picture elements. Each picture element can take any of 4 distinguishable intensity levels. Calculate the maximum bits needed.
• 48. Joint Probability P(x, y)
Probability that (x, y) occur simultaneously, where x and y are random variables.
Conditional Probability P(x/y)
P(x/y) represents the conditional probability (probability of x when y is known).
P(x, y) = P(x/y) · P(y) or P(x, y) = P(y/x) · P(x)
If x and y are independent, then P(x, y) = P(x) · P(y)
• 49. Average uncertainty of the channel input: H(X) = Σx P(x) log2(1/P(x))
Average uncertainty (entropy) of the channel output: H(Y) = Σy P(y) log2(1/P(y))
Joint entropy: H(X, Y) = Σx Σy P(x, y) log2(1/P(x, y))
• 50. Joint Entropy and Conditional Entropy
H(Y/X) is the average uncertainty of the channel output given that X was transmitted: H(Y/X) = Σx Σy P(x, y) log2(1/P(y/x))
H(X/Y) is the average uncertainty of the channel input given that Y was received: H(X/Y) = Σx Σy P(x, y) log2(1/P(x/y))
• 51. Important Property
H(X/Y) is the average uncertainty of the channel input given that Y was received.
H(Y/X) is the average uncertainty of the channel output given that X was transmitted.
• 52. Mutual Information
It is defined as the amount of information transferred when xi is transmitted and yi is received: I(xi; yi) = log2( P(xi/yi) / P(xi) )
• 53. Average Mutual Information
The amount of source information gained per received symbol: I(X; Y) = Σx Σy P(x, y) log2( P(x/y) / P(x) )
• 54. Important Properties of Mutual Information
1. Symmetry: I(X; Y) = I(Y; X)
2. Mutual information is always positive: I(X; Y) ≥ 0
3. In terms of entropies: I(X; Y) = H(X) − H(X/Y) = H(Y) − H(Y/X)
4. In terms of the joint entropy: I(X; Y) = H(X) + H(Y) − H(X, Y)
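A small numpy sketch of these properties, computed from an assumed joint distribution P(x, y) (the numbers are hypothetical); it confirms symmetry and non-negativity numerically.

import numpy as np

def entropy(p):
    # H = -sum of p log2 p over the non-zero entries
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

Pxy = np.array([[0.4, 0.1],    # hypothetical joint distribution P(x, y)
                [0.1, 0.4]])
Px, Py = Pxy.sum(axis=1), Pxy.sum(axis=0)   # marginals
Hx, Hy, Hxy = entropy(Px), entropy(Py), entropy(Pxy.flatten())
I = Hx + Hy - Hxy              # I(X;Y) = H(X) + H(Y) - H(X, Y)
print(f"I(X;Y) = {I:.4f} bits")  # positive, and equal to I(Y;X)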
  • 55. Q1. Which is the correct relationship between Joint entropy and conditional entropy? A) H(X, Y) = H(X/Y) + H(Y) B) H(X, Y) = H(Y/X) + H(X) C) Both a and b D) None Q2. Which of the following is NOT representative of the mutual information? A) I(X; Y) = H(X) - H(X/ Y) B) I(X; Y) = I(Y; X) C) I(X; Y) = H(Y) - H(X/ Y) D) I(X; Y) = H(X) + H(Y) - H(X, Y) Quiz
• 56. Q3. For which value(s) of p is the binary entropy function H(p) maximized?
A) 0 B) 0.5 C) 1 D) 1.2
Q4. When the events are independent, which relation is correct?
A) P(x, y) = P(x) + P(y) B) P(x, y) = P(x) − P(y) C) P(x, y) = P(x) / P(y) D) P(x, y) = P(x) · P(y)
Q5. How to represent the conditional probability when 1 was transmitted and 0 was received
A) P(1/1) B) P(1/0) C) P(0/0) D) P(0/1)
  • 57. Q6. There is a heavy snowfall in Amaravati means A) Information is more B) Information is less C) Information is equal D) None Q7. 1 cm=10^-2 m means A) Information is more B) Information is less C) Information is equal D) None
• 58. Mutual Information
I(X; Y) = uncertainty about the channel input that is resolved by observing the channel output
• 59. Discrete Memoryless Channel
Note: a discrete memoryless channel means the output depends only on the current input, not on previous inputs.
• 60. The channel matrix represents the transition probabilities.
Channel Matrix
P(1/0) → probability that 0 was transmitted and 1 received.
P(0/1) → probability that 1 was transmitted and 0 received.
P(1/1) → probability that 1 was transmitted and 1 received.
P(0/0) → probability that 0 was transmitted and 0 received.
  • 61. Channel Matrix, Noise Matrix or Probability Transition Matrix
  • 62. Types of Channel 1. Lossless Channel 2. Deterministic Channel 3. Noiseless Channel 4. Binary Symmetric Channel (BSC)
• 63. Lossless Channel
Important Points
- Only one non-zero element in each column
- In a lossless channel no source information is lost in transmission
- Each output symbol carries some received information
• 64. Deterministic Channel
- Only one non-zero element in each row; this element must be unity.
- A unique output symbol is received for a given source symbol.
Everything is determined already.
  • 65. Noiseless Channel It has both the property as  Lossless +  Deterministic channel
• 66. Binary Symmetric Channel (BSC)
This channel has the probability distribution P(y0/x0) = P(y1/x1) = 1 − p and P(y0/x1) = P(y1/x0) = p.
• 67. Quick Revision
- Discrete memoryless channel means the output depends only on the current input, not on previous inputs.
- The channel matrix represents the transition probabilities.
- P(0/1) represents the probability that 1 was transmitted and 0 received.
- P(Y/X) represents the conditional probability matrix, or channel matrix.
- The joint probability describes the complete channel as a whole.
- In a lossless channel there is only one non-zero element in each column.
- In a deterministic channel there is one non-zero element in each row, and that element must be unity.
- A noiseless channel satisfies both the lossless and the deterministic channel properties.
  • 68. Q1. In Lossless channel the important features are a. Non-zero elements in each column b. Non-zero elements in each row c. Non-zero elements in both column and row d. None Q2. In Deterministic Channel the important features are a. Non-zero elements in each column b. Non-zero elements in each row c. Non-zero elements in both column and row d. None Q3. In Noiseless Channel the important features are a. Non-zero elements in each column b. Non-zero elements in each row c. Non-zero elements in both column and row d. None Quiz
  • 69. Q4. Which property is correct for mutual information a. I(X;Y)=I(Y;X) b. I(X;Y)>0 c. I(X;Y) = H(Y) - H(Y/X) d. I(X;Y)=H(X) + H(Y) - H(X;Y) e. All of these Q5. Channel matrix represent the a. Transition probability b. Steady state probability c. Conditional probability d. None Fill in the blank 1. Unit of Entropy_______bits/Symbol______ 2. Unit of Information rate____bit/sec_____
  • 70. Calculation of Output Probability Properties
• 71. Q. Given the binary channel shown below:
a. Find the channel matrix of the channel.
b. Find P(y1) and P(y2) when P(x1) = P(x2) = 0.5.
c. Find the joint probabilities P(x1, y2) and P(x2, y1) when P(x1) = P(x2) = 0.5.
Hints: 1. Find the conditional probabilities from the channel diagram: P(y1/x1), P(y2/x1), P(y1/x2) and P(y2/x2).
2. Fill all the conditional probabilities into the channel matrix P(y/x).
3. Use the formula P(y) = P(x) · P(y/x).
4. In the P(y) matrix, sum each column to find P(y1) and P(y2).
5. For the joint probability matrix, use P(x, y) = [P(x)]d · P(y/x), where [P(x)]d is the diagonal matrix of input probabilities.
  • 72. Q Given a binary channel shown below a. Find the channel matrix of the channel. b. Find the P(y1) and P(y2) when P(x1)=P(x2)=0.5 c. Find the joint probability P(x1,y2) and P(x2,y1) when P(x1)=P(x2)=0.5
  • 73. Q Given a binary channel shown below a. Find the channel matrix of the channel. b. Find the P(y1) and P(y2) when P(x1)=P(x2)=0.5 c. Find the joint probability P(x1,y2) and P(x2,y1) when P(x1)=P(x2)=0.5
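The same steps can be scripted. A minimal numpy sketch, using hypothetical transition probabilities in place of the figure's values:

import numpy as np

# Channel matrix P(y/x): rows are inputs x1, x2; columns are outputs y1, y2.
Pyx = np.array([[0.9, 0.1],    # assumed values for illustration
                [0.2, 0.8]])
Px = np.array([0.5, 0.5])      # P(x1) = P(x2) = 0.5

Py = Px @ Pyx                  # P(y) = P(x) . P(y/x)
Pxy = np.diag(Px) @ Pyx        # P(x, y) = [P(x)]d . P(y/x)
print("P(y)   =", Py)
print("P(x,y) =", Pxy)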
• 74. Q. A channel has the following channel matrix:
P(y/x) = [1−P  P
          P  1−P]
a) Draw the channel diagram.
b) If the source has equally likely inputs, compute the output probabilities associated with the channel outputs for P = 0.2.
Numerical Problem
Hints: 1. From the channel matrix P(y/x), establish the links between the input and output variables.
2. An equally likely source means P(x1) = 0.5 and P(x2) = 0.5.
3. Use the formula P(y) = P(x) · P(y/x) to calculate the output symbol probabilities.
• 75. Q. A channel has the following channel matrix: P(y/x) = [1−P P; P 1−P]. a) Draw the channel diagram. b) If the source has equally likely inputs, compute the output probabilities associated with the channel outputs for P = 0.2.
• 76. Q. A channel has the following channel matrix: P(y/x) = [1−P P; P 1−P]. a) Draw the channel diagram. b) If the source has equally likely inputs, compute the output probabilities associated with the channel outputs for P = 0.2.
• 77. Cascaded Channel
Mutual information for channel 1: I(x; y) = H(y) − H(y/x)
Mutual information for channel 2: I(y; z) = H(z) − H(z/y)
Mutual information for the cascaded channel: I(x; z) = H(z) − H(z/x)
For a cascaded channel, I(x; z) < I(x; y)
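A short numpy sketch of this cascade property, with hypothetical crossover probabilities for the two channels; it verifies numerically that I(x; z) < I(x; y).

import numpy as np

def H(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_info(Px, Pba):
    # I = H(output) - H(output/input) for transition matrix Pba
    Pjoint = np.diag(Px) @ Pba
    logP = np.log2(Pba, where=Pba > 0, out=np.zeros_like(Pba))
    return H(Pjoint.sum(axis=0)) + np.sum(Pjoint * logP)

Px  = np.array([0.5, 0.5])
Pyx = np.array([[0.9, 0.1], [0.1, 0.9]])  # channel 1 (hypothetical)
Pzy = np.array([[0.8, 0.2], [0.2, 0.8]])  # channel 2 (hypothetical)
Pzx = Pyx @ Pzy                           # P(z/x) = P(y/x) . P(z/y)
print(mutual_info(Px, Pyx), ">", mutual_info(Px, Pzx))  # I(x;y) > I(x;z)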
  • 78. Q1. Which relationship is correct a. P(x, y)= P(y/x). P(y) b. P(x, y)= P(x/y). P(x) c. P(x, y)= P(y/x). P(z) d. P(x, y)= P(y/x). P(x) Q2. Which relationship is correct for cascaded channel a. P(z/x) = P (x/y). P(z/y) b. P(z/x) = P (y/x). P(y/z) c. P(z/x) = P (y/x). P(z/y) d. None Q3. An event has two possible outcomes with probability P1=1/2 and P2= 1/64. Find the rate of information with 16 outcomes per second is a. 19/2 b. 25/3 c. 35/6 d. None Quiz
• 79. Q. A discrete source transmits messages {x0, x1, x2} with probabilities {0.3, 0.4, 0.3}. The source is connected to the channel given in the figure. Calculate H(x), H(y), H(x, y) and H(y/x).
Numerical Problem
Hint: the formulas on the preceding slides are useful to solve this question.
  • 80.
  • 81.
  • 82.
  • 83.
• 84. Q. Two binary symmetric channels are connected in cascade as shown in the figure.
a) Find the channel matrix of the resultant channel.
b) Find P(z1) and P(z2) if P(x1) = 0.6 and P(x2) = 0.4.
Cascaded Channel based Numerical Problem
  • 85.
  • 86.
  • 87.
  • 88.
  • 89.
• 90. Channel Capacity (Cs)
- The maximum mutual information is called the channel capacity (Cs).
- Channel capacity indicates the utilization factor of the channel.
Cs = max { I(x; y) } bits/symbol -------(1)
where max { I(x; y) } represents the maximum value of the mutual information.
- Channel capacity (Cs) indicates the maximum number of bits the channel is capable of transmitting per symbol.
C = r × Cs bits/sec
where C represents the channel capacity in bits/sec and r represents the data rate.
• 91. Channel efficiency: η = I(x; y) / Cs
Redundancy: 1 − η
Channel capacity: Cs = max { I(x; y) } bits/symbol, C = r × Cs bits/sec
For a symmetric and uniform channel: C = (log S − H) bits/sec, where S represents the number of output symbols.
  • 92. Q1. Which type of channel does not represent any correlation between input and output symbols? a. Noiseless Channel b. Lossless Channel c. Useless Channel d. Deterministic Channel Q2. For M equally likely messages, the average amount of information H is a. H = log10M b. H = log2M c. H = log10M2 d. H = 2log10M
• 93. The Capacity of Special Channels
1) Lossless channel: for a lossless channel the conditional entropy H(x/y) = 0, so
I(x; y) = H(x) − H(x/y) = H(x) − 0 = H(x)
Cs = max { I(x; y) } = max { H(x) } = log m, where m represents the number of messages.
• 94. 2) Deterministic channel: only one non-zero element in each row
Conditional entropy: H(y/x) = 0
Information: I = log(1/p) = log(1/1) = 0
Entropy: H = I · P = 0
Mutual information: I(x; y) = H(y) − H(y/x) = H(y)
Channel capacity: Cs = max I(x; y) = max H(y) = log n, where n represents the number of different output symbols.
• 95. 3) Noiseless Channel
H(x) = H(y): both entropies are the same
H(y/x) = H(x/y) = 0
Cs = max { I(x; y) } = log m = log n
• 96. 4) Binary Symmetric Channel
P(y0/x1) = P(y1/x0) = p and P(y0/x0) = P(y1/x1) = 1 − p, where x0 represents binary '0' and x1 represents binary '1'.
Mutual information: I(x; y) = H(y) + p log2 p + (1 − p) log2(1 − p)
H(x) is maximum when P(x0) = P(x1) = 1/2, and the mutual information is maximum (C = I(x; y)) when P(x0) = P(x1) = 1/2.
Channel capacity per symbol: Cs = 1 + p log2 p + (1 − p) log2(1 − p)
  • 97. Q. Find the capacity of the binary symmetric channel when probability 0.5
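A minimal Python sketch of the BSC capacity formula; for p = 0.5 it confirms that the capacity is zero.

import math

def bsc_capacity(p):
    # Cs = 1 + p log2 p + (1 - p) log2(1 - p), in bits/symbol
    if p in (0.0, 1.0):
        return 1.0
    return 1 + p * math.log2(p) + (1 - p) * math.log2(1 - p)

print(bsc_capacity(0.5))   # 0.0 -- a BSC with p = 0.5 carries no information
print(bsc_capacity(0.1))   # about 0.531 bits/symbol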
• 98. Q. A binary symmetric channel has the following noise matrix, with source probabilities P(x1) = 2/3 and P(x2) = 1/3.
a. Determine H(x), H(y) and H(y/x).
b. Find the channel capacity C.
c. Find the channel efficiency and redundancy.
  • 99.
  • 100.
  • 101.
  • 102. Q1. What is the value of Information, Entropy and channel capacity, in the deterministic channel. a) 0, 1, log(n) b) 1, 0 , log(n) c) 1, 1, log(n) d) 0, 0, log(n) Q2. For symmetrical and uniform channel, how to define the capacity a) C = (log S + H) bit/sec b) C = (H + log S) bit/sec c) C = (H - log S) bit/sec d) C = (log S - H) bit/sec Quiz
• 103. Q3. Channel capacity (Cs) indicates the _________ number of bits the channel is capable of transmitting per symbol
a) Lowest b) Equal c) Maximum d) None
Q4. Calculate the entropy for a symmetric and uniform channel when P(y/x) is the given 3×3 matrix of fractions (matrix values not recoverable from the source)
a) 5/8 b) 6/8 c) 7/8 d) None
  • 104. Q5. For the binary symmetric channel, how to define the capacity a) Cs= 1+ P log p + (1+P) log (1+p) b) Cs= 1+ P log p + (1+P) log (1-p) c) Cs= 1+ P log p + (1-P) log (1+p) d) Cs= 1+ P log p + (1-P) log (1-p)
• 105. Numerical
Q6. Two noisy channels are cascaded; their channel matrices are given below. The input symbol probabilities are P(x1) = P(x2) = 1/2. Find the overall mutual information I(X; Z).
Useful Hints
1. Draw the combined channel diagram and find the channel matrix.
2. Find P(x, z) using the formula P(x, z) = P(x) · P(z/x).
3. Find the entropies H(z/x) = ΣΣ P(x, z) log 1/P(z/x) and H(z) = Σ P(z) log 1/P(z).
4. Use the formula I(x, z) = H(z) − H(z/x) to find the overall mutual information.
  • 106.
  • 107.
  • 108.
• 109. Numerical
Q2. Determine the capacity of the channel shown below.
Useful Hints
1. Find the channel matrix P(y/x).
2. Find the entropy H from a single column, since the channel is uniform and symmetric.
3. Use the formula C = log S − H.
  • 110. Q. Determine the capacity of the channel shown below
• 111. Q3. Find the channel capacity.
Numerical
Useful Hints
1. Find the channel matrix P(y/x).
2. Find the entropy H from a single column, since the channel is uniform and symmetric.
3. Use the formula C = log S − H.
  • 112. Q. For the channel matrix and channel capacity as shown below
• 113. Numerical
Q2. For the channel matrix given below, compute the channel capacity, given rs = 1000 symbols/sec.
Useful Hints
1. Find the entropy H.
2. Use the formula C = (log S − H) · r bits/sec.
  • 114. Q. For the channel matrix given below compute the channel capacity, given rs=1000 symbol/sec
• 115. Shannon–Hartley Law: channel capacity is the maximum rate at which data can be transmitted through a channel without errors.
For error-free transmission: Cs > r, where Cs is the channel capacity and r is the bit rate.
Relationship between capacity, bandwidth and signal-to-noise ratio:
C = B log2(1 + S/N) bits/sec
where S/N is the signal-to-noise ratio and B is the bandwidth.
Signal-to-noise ratio: an S/N over 0 dB indicates that the signal level is greater than the noise level. The higher the ratio, the better the signal quality. For example, a Wi-Fi signal with S/N of 40 dB will deliver better network service than a signal with S/N of 20 dB.
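A minimal sketch of the Shannon–Hartley formula in Python, including the dB-to-ratio conversion; the printed value corresponds to the B = 3000 Hz, 20 dB example on the next slide.

import math

def shannon_capacity(B_hz, snr_db):
    # C = B log2(1 + S/N), with S/N given in dB and converted to a power ratio
    snr = 10 ** (snr_db / 10)
    return B_hz * math.log2(1 + snr)

print(f"{shannon_capacity(3000, 20):.0f} bits/sec")  # about 19975 bits/sec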
• 116. Q1. Assuming that a channel has bandwidth B = 3000 Hz and a typical signal-to-noise ratio (SNR) of 20 dB, determine the maximum data rate that can be achieved.
Q2. A channel has a bandwidth of 8 kHz and a signal-to-noise ratio of 31. For the same channel capacity, if the SNR is increased to 61, find the new bandwidth of the channel.
Q3. In a communication system the S/N ratio at the input of the receiver is 15. Determine the channel capacity if the bandwidth is a) B = 1 kHz b) B = 1 MHz c) B = 1 GHz
Numerical Questions
  • 117. Q1. Assuming that a channel has bandwidth B=3000Hz, and typical Signal to noise ratio (SNR) of 20dB. Determine the maximum data rate that can be achieved.
  • 118. Q2. A channel has a bandwidth of 8KHZ and signal to noise ratio of 31. For the same channel capacity if the SNR is increase to 61 then find the new channel bandwidth of the channel.
  • 119. Q3. In a Communication system the S/N ratio at the input of receiver is 15. Determine the channel capacity of the channel if bandwidth as a) B= 1Khz b) B = 1MHZ c) B = 1GHZ
  • 120. Q. The channel capacity under the Gaussian noise environment for a discrete memory-less channel with a bandwidth of 4 MHz and SNR of 31 is Q. What is the capacity of extremely noisy communication channel that has a bandwidth of 150 MHz such that the signal to noise ratio is very much below the noise power? Numerical
  • 121. Q. The channel capacity under the Gaussian noise environment for a discrete memory-less channel with a bandwidth of 4 MHz and SNR of 31 is
  • 122. Q. What is the capacity of extremely noisy communication channel that has a bandwidth of 150 MHz such that the signal to noise ratio is very much below the noise power?
• 123. Q. In a communication system, a given rate of information transmission requires channel bandwidth B1 and signal-to-noise ratio SNR1. If the channel bandwidth is doubled for the same rate of information, what will the new signal-to-noise ratio be?
Numerical
• 124. Q. In a communication system, a given rate of information transmission requires channel bandwidth B1 and signal-to-noise ratio SNR1. If the channel bandwidth is doubled for the same rate of information, what will the new signal-to-noise ratio be?
  • 125. Q. A communication channel with AWGN operating at SNR ratio>>1 and bandwidth B has capacity C1. If the SNR is doubled keeping B constant then find the resulting capacity C2 in terms of C1 and B Numerical
  • 126. Q. A communication channel with AWGN operating at SNR ratio>>1 and bandwidth B has capacity C1. If the SNR is doubled keeping B constant then find the resulting capacity C2 in terms of C1 and B
• 127. Facts
1. If there is more uncertainty about the message, the information carried is also more.
2. If I know that after finishing my lecture I will go to my cabin, then the amount of information carried is zero.
3. Hi → I1; How are you → I2; Hi, how are you → I1 + I2
4. P(sun rises in the east) > P(sun rises in the west), so I(sun rises in the east) < I(sun rises in the west)
• 128. Q3. The channel capacity under the Gaussian noise environment for a discrete memoryless channel with a bandwidth of 4 MHz and SNR of 31 is
1. 20 Mbps 2. 4 Mbps 3. 8 kbps 4. 4 kbps
Q4. Shannon's theorem sets a limit on the
1. Highest frequency that may be sent over a channel
2. Maximum capacity of a channel with a given noise level
3. Maximum number of coding levels in a channel
4. Maximum number of quantizing levels in a channel
Quiz
• 129. Quiz
Q1.
Q2. In a communication system, a given rate of information transmission requires channel bandwidth B and signal-to-noise ratio SNR. If the channel bandwidth is doubled for the same rate of information, what will the new signal-to-noise ratio be?
• 130. Q. The symbols A, B, C and D occur with probabilities 1/2, 1/4, 1/8 and 1/8 respectively. Find the information content of the message 'BDA', where the symbols are independent.
a. 2 bit b. 4 bit c. 5 bit d. 6 bit
Q. How many bits per symbol are needed to encode 32 different symbols?
a. 2 bit/symbol b. 4 bit/symbol c. 5 bit/symbol d. 6 bit/symbol
Quiz
  • 131.
• 132. Q. An analog signal having 4-kHz bandwidth is sampled at 1.25 times the Nyquist rate, and each sample is quantized into one of 256 equally likely levels. Assume that the successive samples are statistically independent.
(i) What is the information rate of this source?
(ii) Can the output of this source be transmitted without error over an AWGN channel with a bandwidth of 10 kHz and S/N ratio of 20 dB?
(iii) Find the S/N ratio required for error-free transmission for part (ii).
Hints
1. Nyquist rate fs = 2fm
2. Sampling rate r = 1.25 × fs
3. If all messages are equally likely, then H(x) = log m
4. R = r · H(x)
5. Use the channel capacity formula C = B log2(1 + S/N) bits/sec
6. For error-free communication, C > R
  • 133. Q. An analog signal having 4-kHz bandwidth is sampled at 1.25 times the Nyquist rate, and each sample is quantized into one of 256 equally likely levels. Assume that the successive samples are statistically independent. (i) What is the information rate of this source? (ii) Can the output of this source be transmitted without error over an AWGN channel with a bandwidth of 10 kHz and S/N ratio of 20 dB? (iii) Find the S/N ratio required for error- free transmission for part (ii)
  • 134.
  • 135. Q. Consider an AWGN channel with 4-kHz bandwidth and the noise power spectral density η/2= 10 W/Hz. The signal power required at the receiver is 0.1 mW. Calculate the capacity of this channel.
• 136. Q. The terminal of a computer used to enter alphabetic data is connected to the computer through a voice-grade telephone line having a usable bandwidth of 3 kHz and SNR = 10. Assuming that the terminal has 128 characters, determine
a. The capacity of the channel.
b. The maximum rate of transmission without error.
Numerical Problem
• 137. Q. The terminal of a computer used to enter alphabetic data is connected to the computer through a voice-grade telephone line having a usable bandwidth of 3 kHz and SNR = 10. Assuming that the terminal has 128 characters, determine a. the capacity of the channel, b. the maximum rate of transmission without error.
  • 138. Q. A source produces 4 symbols with probabilities 1/2, 1/4, 1/8 and 1/8. For this source, a practical coding scheme has an average code word length of 2 bits / symbol. Calculate the efficiency of the code. Numerical Problem
  • 139. Q. A source produces 4 symbols with probabilities 1/2, 1/4, 1/8 and 1/8. For this source, a practical coding scheme has an average code word length of 2 bits / symbol. Calculate the efficiency of the code.
• 140. Q. The output of an information source consists of 300 symbols, 60 of which occur with probability 1/32 and the remaining with probability 1/512. The source emits 1400 symbols/sec. Assuming that the symbols are chosen independently, find the average information rate.
Numerical Problem
• 141. Q. The output of an information source consists of 300 symbols, 60 of which occur with probability 1/32 and the remaining with probability 1/512. The source emits 1400 symbols/sec. Assuming that the symbols are chosen independently, find the average information rate.
• 142. Q. Consider the binary information channel fully specified by P = [3/4 1/4; 2/8 6/8], with input probabilities 1/2 and 1/2. Calculate a) the output probabilities, b) the backward probabilities.
Numerical Problem
• 143. Q. Consider the binary information channel fully specified by P = [3/4 1/4; 2/8 6/8], with input probabilities 1/2 and 1/2. Calculate (a) the output probabilities, (b) the backward probabilities.
• 144. Average Information (Entropy)
H = Σ P log2(1/P)
1. Entropy is zero if the event is sure or impossible: H = 0 if P = 1 or P = 0
2. Upper bound on entropy: Hmax = log2 M
  • 145. Information Rate R = r H Information bits/second R = (r in message/sec) (H in Information bits/message) Unit of information rate………….> Information bits/second
• 147. Extension of Discrete Memoryless Source
H(X^n) = n·H(X), where n is the number of successive symbols in one group or block.
• 148. Communication Channel
A communication channel has three main parts: 1. Transmitter 2. Receiver 3. Channel
If the transmitted data is discrete, the channel is known as a discrete channel.
  • 149. Proofs
  • 150. Q. Find the transition matrix p(Z/X) for the cascaded channel shown:
  • 151. Q. Find the transition matrix p(Z/X) for the cascaded channel shown
  • 153.
• 154. Kullback–Leibler Divergence (Relative Entropy)
D(P||Q) = Σx P(x) log2( P(x) / Q(x) )
D(P||Q) is a type of statistical distance: a measure of how one probability distribution P differs from a second, reference probability distribution Q.
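A minimal Python sketch of D(P||Q), using hypothetical distributions; it also shows that the divergence is not symmetric in P and Q.

import math

def kl_divergence(P, Q):
    # D(P||Q) = sum of P(x) log2(P(x)/Q(x)); zero iff P == Q
    return sum(p * math.log2(p / q) for p, q in zip(P, Q) if p > 0)

P = [0.5, 0.25, 0.25]     # hypothetical source distribution
Q = [1/3, 1/3, 1/3]       # hypothetical reference (uniform) distribution
print(kl_divergence(P, Q), kl_divergence(Q, P))  # both > 0, but unequal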
  • 155.
  • 156. Joint Entropy and Conditional Entropy
• 158. Relation between Joint Entropy and Conditional Entropy
  • 159.
• 160. Hints
Find P(x, y) = [P(x)]d · P(y/x)
Find H(y/x)
Find I(x; y) = H(y) − H(y/x)
Case 1: α = 0.5 and P = 0.1 — find P(y) = P(x) · P(y/x), find H(y), then use I(x; y) = H(y) + P log2 P + (1 − P) log2(1 − P)
Case 2: α = 0.5 and P = 0.5 — find P(y) = P(x) · P(y/x), find H(y), then use I(x; y) = H(y) + P log2 P + (1 − P) log2(1 − P)
  • 161.
  • 162. Hint 1. Find P(y/x) 2. Find P(y)=P(x).P(y/x) 3. Find P(x, y) 4. Find H(y) 5. Find H(y/x) 6. Use a formula for mutual information I(x,y) = H(y)-H(y/x) 7. Maximum value of mutual information is known as channel capacity 8. Maximum value of entropy is 1
  • 163.
  • 164.
• 165. Binary Erasure Channel (BEC)
• One of the most commonly used channels in digital communication.
• This channel provides 100% data recovery: it always ensures that the correct message is delivered.
• If a symbol is received in error, it is simply marked as an erasure and discarded, and a retransmission is requested from the transmitter over a reverse channel until the correct symbol is received at the output.
• 166. Numerical
Q1. A channel matrix has the following characteristics, with probabilities P(x1) = P(x2) = 0.5. Find H(X), H(Y), H(X, Y), I(X; Y) and the channel capacity (bits/sec), if r = 1000 symbols/sec.
Useful Hints
1. Find P(x, y) = P(x) · P(y/x).
2. From P(x, y), find P(y1), P(y2), P(y3) and P(y4).
3. Find H(x) = P(x1) log(1/P(x1)) + P(x2) log(1/P(x2)).
4. Find H(y) = P(y1) log(1/P(y1)) + P(y2) log(1/P(y2)) + P(y3) log(1/P(y3)) + P(y4) log(1/P(y4)).
5. Find H(x, y) = Σ P(xi, yi) log(1/P(xi, yi)).
6. Find I(x, y) = H(x) + H(y) − H(x, y).
7. Find Cs = max I(x, y) bits/symbol.
8. Find C = r × Cs bits/sec.
  • 167.
  • 168.
• 169. Cascaded Channels based Numerical Problems
Q. Two channels are cascaded as shown below. Given P(x1) = P(x2) = 0.5, show that I(x, z) < I(x, y).
• 170. Useful formulas:
I(x; y) = H(y) − H(y/x); I(x; z) = H(z) − H(z/x)
H(z/x) = ΣΣ P(x, z) log(1/P(z/x)); H(y/x) = ΣΣ P(x, y) log(1/P(y/x))
Important steps:
1. Find the P(y/x) matrix.
2. Find the P(z/y) matrix.
3. Find the P(z/x) matrix.
4. Find the P(x, y) matrix.
5. From P(x, y) = P(y/x) · P(x), obtain P(y1) and P(y2), and hence H(y).
6. From P(x, z) = P(z/x) · P(x), obtain P(z1) and P(z2), and hence H(z).
• 173. Numerical Problem
Q. In the binary erasure channel (BEC), the input message probabilities are x and (1 − x). Find the capacity of the channel.
Useful Hints
1. Find P(B/A). 2. Find H(B/A). 3. Find H(A). 4. Find P(A, B) = P(B/A) · P(A).
5. From the P(A, B) matrix, find the P(B) matrix. 6. Find P(A/B) = P(A, B) / P(B).
7. From the P(A/B) matrix, find H(A/B) using H(A/B) = Σ P(A, B) log(1/P(A/B)).
8. Put the H(A/B) value in the channel capacity equation below.
9. C = max [H(A) − H(A/B)] · r
  • 174. Useful Hint 1. Find P(B/A) 2. Find H(B/A) 3. Find H(A) 4. Find P(A, B) = P(B/A). P(A) 5. From P(A,B) matrix find P(B) matrix 6. Find P(A/B) = P(A, B) / P(B) 7. From P(A/B) matrix find H (A/B) using expression H(A/B)= P(A,B) log(1/P(A/B) 8. Put H(A/B) value in channel capacity equation as mentioned below 9. C = max [H(A) – H(A/B)]. r
• 175. Q. In a binary erasure channel P = 0.1, P(x1) = 0.6 and P(x2) = 0.4. Determine a) the mutual information, b) the channel capacity, c) the channel efficiency and d) the redundancy.
  • 176. Source Coding A conversion of a DMS output into a binary symbol sequence is known as source coding. The device that performs this conversion is known as source encoder. Aim To minimize the average bit rate required for the representation of the source by reducing the redundancy and increasing the efficiency of the information source
• 177. Terminology
1. Code length: the length of a codeword is the number of binary digits in the codeword. Average code length: L = Σ P(xi) · li
2. Code efficiency: η = Lmin / L = H(s) / L. When η approaches unity, the code is said to be efficient.
3. Code redundancy: 1 − η
• 178. Source Coding Theorem, or Shannon's First Theorem
The source coding theorem states that for a DMS X with entropy H(x), the average codeword length L per symbol is bounded by L ≥ H(x).
With Lmin = H(x), the efficiency is η = H(x)/L.
As it establishes error-free encoding, it is also called the noiseless coding theorem.
• 179. Data Compaction: variable-length source coding algorithms (entropy coding)
Aims of data compaction:
• Efficiency in terms of the average number of bits per symbol.
• The original data can be reconstructed without loss of any information.
• Remove the redundancy from the signal prior to transmission and increase the efficiency.
• 180. Techniques for data compaction
Three techniques:
1. Prefix coding, or instantaneous coding
2. Shannon–Fano coding
3. Huffman coding
• 181. Steps in Shannon–Fano coding
1. Arrange the messages in the order of decreasing probability.
2. Divide the messages into two almost equally probable groups.
3. The messages in the first group are assigned the bit '0' and the messages in the second group are assigned the bit '1'.
4. The procedure is repeated until no further division is possible.
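A sketch of these four steps in Python (a straightforward recursive split, with the split point chosen where the first group's probability just reaches half the total); the symbol set below is the slide-182 example.

def shannon_fano(symbols):
    # symbols: list of (name, probability) pairs
    symbols = sorted(symbols, key=lambda s: s[1], reverse=True)  # step 1
    codes = {name: "" for name, _ in symbols}

    def split(group):
        if len(group) < 2:
            return                                   # step 4: stop
        total, run = sum(p for _, p in group), 0.0
        for i, (_, p) in enumerate(group):           # step 2: balance halves
            run += p
            if run >= total / 2:
                k = i + 1
                break
        for name, _ in group[:k]:
            codes[name] += "0"                       # step 3: first group -> 0
        for name, _ in group[k:]:
            codes[name] += "1"                       # second group -> 1
        split(group[:k])
        split(group[k:])

    split(symbols)
    return codes

print(shannon_fano([("A", .30), ("B", .25), ("C", .20),
                    ("D", .12), ("E", .08), ("F", .05)]))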
• 182. Q. Encode using Shannon–Fano coding
Symbols:     A    B    C    D    E    F
Probability: 0.30 0.25 0.20 0.12 0.08 0.05
Numerical
• 183. Q. Encode using Shannon–Fano coding
Symbols:     A    B    C    D    E    F
Probability: 0.30 0.25 0.20 0.12 0.08 0.05
• 184. Q. Encode using Shannon–Fano coding
Symbols:     A    B    C    D     E     F
Probability: 1/3  1/3  1/6  1/12  1/24  1/24
Numerical
• 185. Solution (successive splits; at each stage the top group is frozen and the remainder is divided again):
Message  Probability    Codeword
x1       1/3 ≈ 0.33     0
x2       1/3 ≈ 0.33     10
x3       1/6 ≈ 0.166    110
x4       1/12 ≈ 0.083   1110
x5       1/24 ≈ 0.041   11110
x6       1/24 ≈ 0.041   11111
  • 186. Agenda 1.Quiz Test 2. Coding Technique: Huffman Coding 3. Numerical Question based on Huffman Coding 4.Problem based on Shannon fano coding
• 187. Q1. In Shannon–Fano coding, all the messages are arranged in the order of
a. Increasing probability b. Equal probability c. Decreasing probability d. None
Q2. In Shannon–Fano coding, the messages are divided into
a. Three almost equally probable groups b. Four almost equally probable groups c. Two almost equally probable groups d. None
Q3. In Shannon–Fano coding
a. The first group is assigned the bit '1' and the second group the bit '0'.
b. The first group is assigned the bit '1' and the second group the bit '1'.
c. The first group is assigned the bit '0' and the second group the bit '1'.
d. None
Quiz
• 188. Q2. A discrete memoryless source X has six symbols x1, x2, x3, x4, x5, x6 with P(x1) = 0.30, P(x2) = 0.25, P(x3) = 0.20, P(x4) = 0.12, P(x5) = 0.08 and P(x6) = 0.05. Use the Huffman code to find the code length and efficiency.
Numerical based on Huffman Coding
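A compact Huffman sketch using Python's heapq, run on the probabilities of this question; the resulting average length L can then be compared with H(x) to get the efficiency.

import heapq, itertools

def huffman(probs):
    # probs: {symbol: probability} -> {symbol: codeword}
    tie = itertools.count()            # tie-breaker for equal probabilities
    heap = [(p, next(tie), {s: ""}) for s, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)   # merge the two least-probable nodes
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (p1 + p2, next(tie), merged))
    return heap[0][2]

probs = {"x1": .30, "x2": .25, "x3": .20, "x4": .12, "x5": .08, "x6": .05}
codes = huffman(probs)
L = sum(probs[s] * len(c) for s, c in codes.items())
print(codes, f"L = {L:.2f} bits/symbol")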
• 190. Q1. A discrete memoryless source X has five symbols x1, x2, x3, x4, x5 with P(x1) = 0.4, P(x2) = 0.19, P(x3) = 0.16, P(x4) = 0.15 and P(x5) = 0.1. Use the Huffman code to find the code length and efficiency.
Numerical
• 191. Q1. A DMS has five symbols A, B, C, D and E with the probabilities given in the table.
a. Construct a Shannon–Fano code and calculate the efficiency of the code.
b. Repeat for the Huffman code and compare the results.
Symbols:     A    B     C     D     E
Probability: 0.4  0.19  0.16  0.15  0.1
Numerical Question
  • 192.
  • 193.
  • 194. Agenda 1.Quiz Test 2. Numerical Question based on Huffman Coding 3.Problem based on Shannon fano coding
• 195. Q1. Calculate the value of H(X^2) if H(X) = 1.5
a. 1 bit/symbol b. 2 bits/symbol c. 3 bits/symbol d. None
Q2. The Huffman coding technique is adopted for constructing a source code with ________ redundancy.
a. Maximum b. Constant c. Minimum d. Unpredictable
Quiz
  • 196. Q3. Which type of channel does not represent any correlation between input and output symbols? a. Noiseless Channel b. Lossless Channel c. Useless Channel d. Deterministic Channel Q4. On which factor the channel capacity depends on the communication system a. bandwidth b. Signal to noise ratio c. Both a and b d. None Q5. Information is the A. data B. meaningful data C. raw data D. Both A and B
• 197. Q3. A discrete memoryless source X has six symbols x1, x2, x3, x4, x5, x6 with P(x1) = 0.25, P(x2) = 0.20, P(x3) = 0.20, P(x4) = 0.15, P(x5) = 0.15 and P(x6) = 0.05. Use the minimum-variance Huffman code to find the code length and efficiency.
Numerical
• 198. Q3. A discrete memoryless source X has six symbols x1, x2, x3, x4, x5, x6 with P(x1) = 0.25, P(x2) = 0.20, P(x3) = 0.20, P(x4) = 0.15, P(x5) = 0.15 and P(x6) = 0.05. Use the minimum-variance Huffman code to find the code length and efficiency.
• 199. Numerical Questions based on Shannon–Fano Coding
Q1. Encode using Shannon–Fano coding and find the efficiency.
Symbols:     A    B    C    D    E    F
Probability: 0.30 0.25 0.15 0.12 0.10 0.08
Q2. Encode using Shannon–Fano coding and find the efficiency.
Symbols:     A    B     C     D     E
Probability: 0.4  0.19  0.16  0.15  0.10
• 202. Numerical
Q3. Encode using Shannon–Fano coding and find the efficiency.
Symbols:     A       B        C       D        E       F     G      H
Probability: 0.0625  0.03125  0.0625  0.03125  0.0625  0.50  0.125  0.125
• 204. Prefix Code: a set P of binary sequences such that no sequence in P is a prefix of any other binary sequence in P.
Q. Check whether each given code is a prefix code:
a) P = {01, 010, 110, 010}
b) P = {011, 101, 0011, 01101}
c) P = {01, 100, 101}
d) P = {001, 0100, 0101, 010101}
• 205. Numerical
Q3. A technique used in constructing a source encoder is to arrange the messages in the order of decreasing probability and divide them into two almost equally probable groups. The messages in the first group are assigned the bit '0' and the messages in the second group are assigned the bit '1'. The procedure is repeated until no further division is possible. Using this algorithm, find the codewords for the six messages with the probabilities given below.
Symbols:     A    B    C    D     E     F
Probability: 1/3  1/3  1/6  1/12  1/24  1/24
  • 206.
• 207. Numerical
Q. Compare the Huffman coding and Shannon–Fano coding algorithms for data compression. For a discrete memoryless source X with six symbols x1, x2, x3, x4, x5 and x6, find the compact code for every symbol if the probability distribution is as follows: P(x1) = 0.3, P(x2) = 0.25, P(x3) = 0.2, P(x4) = 0.12, P(x5) = 0.08, P(x6) = 0.05. Calculate
a. The entropy of the source.
b. The average length of the code.
c. Compare the efficiency and redundancy of the two codings.
• 208. Q. Compare the Huffman coding and Shannon–Fano coding algorithms for data compression. For a discrete memoryless source X with six symbols x1, x2, x3, x4, x5 and x6, find the compact code for every symbol if the probability distribution is as follows: P(x1) = 0.3, P(x2) = 0.25, P(x3) = 0.2, P(x4) = 0.12, P(x5) = 0.08, P(x6) = 0.05. Calculate the entropy of the source and the average length of the code, and compare the efficiency and redundancy of the two codings.
  • 209.
• 210. Numerical
Q. An information source produces a sequence of independent symbols having the following probabilities. Using Huffman encoding, find the efficiency.
Symbol:      S0   S1    S2   S3   S4   S5    S6
Probability: 1/3  1/27  1/3  1/9  1/9  1/27  1/27
  • 211.
• 212. Agenda
Dictionary coding: Lempel–Ziv (LZ) coding — a) basic concept b) numerical questions
Lempel–Ziv–Welch (LZW) coding — a) basic concept b) numerical questions
• 213. Main points
1. A drawback of the Huffman code is that it requires knowledge of a probabilistic model of the source.
2. To overcome this practical limitation, Lempel–Ziv coding is helpful.
3. LZ coding is used for lossless data compression.
4. LZ coding parses the source data stream into segments that are the shortest subsequences not encountered previously.
LZ (Lempel–Ziv) Coding
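A minimal LZ78-style parser in Python; on the message of the next slide it reproduces the phrase list A, AB, ABB, B, ABA, ABAB, BB, ABBA, BB.

def lz78_encode(message):
    # Parse into the shortest phrases not seen before; emit (prefix index, symbol)
    dictionary, output, phrase = {}, [], ""
    for ch in message:
        if phrase + ch in dictionary:
            phrase += ch                 # still matches a known phrase
        else:
            output.append((dictionary.get(phrase, 0), ch))
            dictionary[phrase + ch] = len(dictionary) + 1
            phrase = ""
    if phrase:                           # trailing phrase already in dictionary
        output.append((dictionary[phrase], ""))
    return output

print(lz78_encode("AABABBBABAABABBBABBABB"))
# [(0, 'A'), (1, 'B'), (2, 'B'), (0, 'B'), (2, 'A'), (5, 'B'), (4, 'B'), (3, 'A'), (7, '')]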
  • 214. Numerical Q. Encode Message: AABABBBABAABABBBABBABB using LZ coding Assumption: A denoted by 0 and B denoted by 1
• 215. Numerical
Q. Encode the message AABABBBABAABABBBABBABB using LZ coding. Assumption: A is denoted by 0 and B by 1.
Sol. Parsed phrases and their codes:
Index  Phrase  (Prefix index, symbol)  Code
1      A       (φ, A)                  0
2      AB      (1, B)                  1, 1
3      ABB     (2, B)                  10, 1
4      B       (φ, B)                  00, 1
5      ABA     (2, A)                  10, 0
6      ABAB    (5, B)                  101, 1
7      BB      (4, B)                  100, 1
8      ABBA    (3, A)                  011, 0
9      BB      (7)                     0111
  • 216. Dictionary Coding: LZ Coding Q. Encode message 101011011010101011 using LZ coding
• 217. Dictionary Coding: LZ Coding
Q. Encode the message 101011011010101011 using LZ coding.
Sol. Parsed phrases and their codes:
Index  Phrase  (Prefix index, bit)  Binary code
1      1       (φ, 1)               (000, 1)
2      0       (φ, 0)               (000, 0)
3      10      (1, 0)               (001, 0)
4      11      (1, 1)               (001, 1)
5      01      (2, 1)               (010, 1)
6      101     (3, 1)               (011, 1)
7      010     (5, 0)               (101, 0)
8      1011    (6, 1)               (110, 1)
• 218. Salient Features of LZW Coding
- Greedy approach
- One of the most popular lossless compression techniques
- Reduces the size of files containing repetitive data
- Fast and simple
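A minimal LZW encoder sketch in Python, run on the slide-219 message with initial dictionary a = 1, b = 2, c = 3:

def lzw_encode(message, alphabet):
    # Emit the index of the longest dictionary match, then extend the dictionary
    dictionary = {s: i for i, s in enumerate(alphabet, start=1)}
    output, phrase = [], ""
    for ch in message:
        if phrase + ch in dictionary:
            phrase += ch
        else:
            output.append(dictionary[phrase])
            dictionary[phrase + ch] = len(dictionary) + 1  # new entry
            phrase = ch
    output.append(dictionary[phrase])
    return output

print(lzw_encode("ababbabcababba", ["a", "b", "c"]))
# [1, 2, 4, 5, 2, 3, 4, 6, 1]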
• 219. Dictionary Coding: LZW Coding
Q. Encode and decode the message a b a b b a b c a b a b b a using LZW coding, with the initial dictionary: a = 1, b = 2, c = 3.
• 222. Q. Encode the following using an LZW dictionary, with the initial dictionary given (␢ denotes the space symbol):
Message: wabba␢wabba␢wabba␢wabba␢woo␢woo␢woo
Initial dictionary: ␢ = 1, a = 2, b = 3, o = 4, w = 5
Dictionary Coding: LZW Coding
• 224. Q. Decode this code using an LZW dictionary, with the initial dictionary a = 1, b = 2, r = 3, t = 4 (entries 5-13 are built up during decoding):
Received message: 3, 1, 4, 6, 8, 4, 2, 1, 2, 5, 10, 6, 11, 13, 6
• 225. Sol. Decoding 3, 1, 4, 6, 8, 4, 2, 1, 2, 5, 10, 6, 11, 13, 6 with the initial dictionary a = 1, b = 2, r = 3, t = 4 builds the dictionary entries:
ra = 5, at = 6, ta = 7, ata = 8, atat = 9, tb = 10, ba = 11, ab = 12, br = 13
Decoded output: r a t at ata t b a b ra tb at ba br at
• 226. Kraft Inequality
A necessary and sufficient condition for the existence of an instantaneous binary code is
K = Σi 2^(−ni) ≤ 1
The Kraft inequality assures the existence of an instantaneously decodable code with codeword lengths ni that satisfy the inequality.
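A one-line check of the Kraft sum in Python, applied to the four codes from the table on the next slide:

def kraft_sum(codewords):
    # K = sum over codewords of 2^(-n_i); an instantaneous code needs K <= 1
    return sum(2 ** -len(c) for c in codewords)

codes = {"A": ["00", "01", "10", "11"],
         "B": ["0", "10", "11", "110"],
         "C": ["0", "11", "100", "110"],
         "D": ["0", "100", "110", "111"]}
for name, words in codes.items():
    K = kraft_sum(words)
    print(name, K, "satisfied" if K <= 1 else "violated")  # only B violates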
• 227. Numerical Based on Kraft Inequality
Q. Consider a DMS with symbols xi, i = 1, 2, 3, 4. The table below lists 4 possible binary codes. Show that
a) All codes except code B satisfy the Kraft inequality.
b) Codes A and D are uniquely decodable, but codes B and C are not uniquely decodable.
Symbol  Code A  Code B  Code C  Code D
x1      00      0       0       0
x2      01      10      11      100
x3      10      11      100     110
x4      11      110     110     111
  • 228.
• 229. Code B: the inequality is not satisfied, so it is not uniquely decodable. For example, if the message 110 is received, it decodes either as x4 (110) or as x3x1 (11, 0), so there is confusion while decoding.
Codes A and D are prefix codes; they are uniquely decodable.
Code C: the inequality is satisfied, but the code is not uniquely decodable. For example, if the message 0110110 is received, it decodes as any of:
x1x2x1x2x1 = (0 11 0 11 0)
x1x4x4 = (0 110 110)
x1x4x2x1 = (0 110 11 0)
x1x2x1x4 = (0 11 0 110)
  • 230. Arithmetic Coding Main Points Near Optimal Fast Separates modelling from coding
  • 231. P(Symbol) =Lower Limit + (Upper Limit – Lower Limit) x Probability of the Symbol of reference
  • 232. Q. Using arithmetic coding generate the Tag value of INDIA
  • 233. P(Symbol)=Lower Limit + (Upper Limit – Lower Limit) x Probability of the Symbol of reference
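A minimal sketch of this interval-narrowing step in Python; with the slide-241 probabilities {A: 0.2, B: 0.5, C: 0.3}, the sequence BAC narrows the interval to [0.27, 0.3), so a tag such as 0.28 identifies it.

def arithmetic_tag(sequence, probs):
    # Shrink [low, high) by each symbol's slice; return the final midpoint
    cum, c = {}, 0.0
    for s, p in probs.items():           # cumulative lower bound of each slice
        cum[s] = c
        c += p
    low, high = 0.0, 1.0
    for s in sequence:
        width = high - low
        high = low + width * (cum[s] + probs[s])
        low  = low + width * cum[s]
    return (low + high) / 2

print(arithmetic_tag("BAC", {"A": 0.2, "B": 0.5, "C": 0.3}))  # 0.285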
• 234. Q1. Decode 1 1 2 4 3 5 8 2 3 2 using LZW coding, with the initial dictionary a = 1, b = 2, c = 3.
• 235. Sol. Decoding 1 1 2 4 3 5 8 2 3 2 with the initial dictionary a = 1, b = 2, c = 3 builds the dictionary entries:
aa = 4, ab = 5, ba = 6, aac = 7, ca = 8
Decoded output: a a b aa c ab ca b c b
• 236. Q. Encode the message aabaacabcabcb using LZW coding, with the initial dictionary a = 1, b = 2, c = 3, d = 4.
• 238. Terminology
Code length: L = Σ P(xi) · ni
Code efficiency: η = H(s) / L
Code redundancy: 1 − η
Variance: σ^2 = Σ Pk (nk − L)^2
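These formulas as a small Python helper; the probabilities and lengths below are an assumed example in which the code happens to be 100% efficient.

import math

def code_stats(probs, lengths):
    H = sum(p * math.log2(1 / p) for p in probs)        # entropy H(s)
    L = sum(p * n for p, n in zip(probs, lengths))      # L = sum P(xi).ni
    eff = H / L                                         # efficiency
    var = sum(p * (n - L) ** 2 for p, n in zip(probs, lengths))
    return L, eff, 1 - eff, var   # length, efficiency, redundancy, variance

probs, lengths = [0.5, 0.25, 0.125, 0.125], [1, 2, 3, 3]
print(code_stats(probs, lengths))   # L = 1.75, efficiency = 1.0, redundancy = 0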
  • 239. Question Types 1. Generate the Tag (Decimal value) -------Binary value--------sequence 2. Sequence---------Generate the Tag value (Deciphering Method) Arithmetic Coding
• 241. Q. For the given set of symbols {'A', 'B', 'C'} the probabilities are {0.2, 0.5, 0.3} respectively.
(a) Generate the tag value for the sequence BAC.
(b) From the tag 0.28, generate the sequence.
Arithmetic Coding
• 242. Q. For the given set of symbols {'A', 'B', 'C'} the probabilities are {0.5, 0.3, 0.2} respectively. Compute the word for the given tag value 0.63725 using arithmetic decoding.
Arithmetic Coding
• 243. Q. For the given set of symbols {'A', 'B', 'C'} the probabilities are {0.5, 0.3, 0.2} respectively. Compute the word for the given tag value 0.63725 using arithmetic decoding.
• 244. Q. For the given set of symbols {'A', 'B', 'C'} the probabilities are {0.8, 0.02, 0.18} respectively.
(a) Generate the tag value for the sequence 1321.
Arithmetic Coding
  • 245. Arithmetic Coding Main Points Near Optimal Fast Separates modelling from coding
  • 246. P(Symbol) =Lower Limit + (Upper Limit – Lower Limit) x Probability of the Symbol of reference
• 247. Q. For the given set of symbols {'I', 'N', 'D', 'A'} the probabilities are {0.4, 0.2, 0.2, 0.2} respectively.
(a) Generate the tag value for the sequence INDIA.
(b) From the tag 0.2136, generate the sequence.
  • 249.
  • 250.
  • 252.
• 253. Hamming Weight and Hamming Distance
Hamming weight: the Hamming weight of a codeword is equal to the number of non-zero elements in the codeword.
Hamming distance: the Hamming distance between two codewords is the number of places in which the codewords differ. The Hamming distance between two codewords C1 and C2 is denoted by d(C1, C2).
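Both definitions in a few lines of Python, using the codewords from the slide-255 proof:

def hamming_weight(codeword):
    # number of non-zero symbols in the codeword
    return sum(bit != "0" for bit in codeword)

def hamming_distance(c1, c2):
    # number of positions in which the two codewords differ
    return sum(a != b for a, b in zip(c1, c2))

print(hamming_weight("0010"), hamming_weight("1110"))  # 1, 3
print(hamming_distance("0010", "1110"))                # 2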
• 254. Quiz
Q.1 The channel capacity is measured in terms of:
a. bits per channel b. number of input channels connected c. calls per channel d. number of output channels connected
Q2. The channel capacity of a noise-free channel having m symbols is given by:
a. m^2 b. 2^m c. log(m) d. m
Q3. An ideal power-limited communication channel with additive white Gaussian noise has 4 kHz bandwidth and a signal-to-noise ratio of 255. The channel capacity is:
a. 8 kilobits/sec b. 9.63 kilobits/sec c. 16 kilobits/sec d. 32 kilobits/sec
• 255. Lemma: valid relationship between Hamming distance and Hamming weight
d(C1, C2) = w(C1 − C2) = w(C2 − C1), where the difference is taken modulo 2 (XOR).
Proof: let the codewords be A = {0010, 1110}
w(0010) = w(C1) = 1
w(1110) = w(C2) = 3
C2 − C1 = 1100, so w(C2 − C1) = 2
Hamming distance: 0010 and 1110 differ in 2 places, so d(C1, C2) = 2
Hence proved: d(C1, C2) = w(C1 − C2)
• 256. Q1. Find the Hamming weight of the given codeword 11000110
A. 1 B. 2 C. 3 D. 4
Q3. Find the Hamming distances for the codeword pairs 'work'/'walk', 'point'/'paint', 'lane'/'vale'
A. 3, 1, 2 B. 2, 1, 2 C. 1, 1, 1 D. 3, 2, 1
Quiz
• 257. Hamming Code
Aim: error detection, error correction, and encoding/decoding.
Main points: In this coding, parity bits are used to detect and correct the error. Parity bits are extra bits mixed with the message bits. The Hamming code can detect and correct a single-bit error. The parity bit positions are given by 2ⁿ, where n = 0, 1, 2, 3, …
• 258. For the (7,4) Hamming code the parity bit positions are 2⁰ = 1, 2¹ = 2, 2² = 4, …
The parity bit values are decided as follows:
P1: check 1 bit, skip 1 bit (positions 1, 3, 5, 7, 9, …)
P2: check 2 bits, skip 2 bits (positions 2, 3, 6, 7, …)
P4: check 4 bits, skip 4 bits (positions 4, 5, 6, 7; 12, 13, 14; 20, 21, 22)
Position: 1  2  3  4  5  6  7
Bit:      P1 P2 D3 P4 D5 D6 D7
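A sketch of this parity rule for the (7,4) case, assuming even parity (the convention used in the worked numerical on slide 265); function and variable names are my own.

```python
def hamming74_encode(d3, d5, d6, d7):
    """Data bits go to positions 3, 5, 6, 7; parity bits to positions 1, 2, 4."""
    p1 = d3 ^ d5 ^ d7        # covers positions 1, 3, 5, 7
    p2 = d3 ^ d6 ^ d7        # covers positions 2, 3, 6, 7
    p4 = d5 ^ d6 ^ d7        # covers positions 4, 5, 6, 7
    return [p1, p2, d3, p4, d5, d6, d7]

print(hamming74_encode(1, 0, 0, 1))  # message 1001 -> code word 0011001
```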
• 259. Numerical
Q. Let the transmitted message be 1001. Using the Hamming code:
A. Encode the message and transmit it.
B. Introduce an error in the 6th bit position and find the error.
C. If the received code word has an error in the 6th bit position, correct the error.
• 260. Quick Revision
Hamming codes are used for error detection and correction, and for channel encoding and decoding. Hamming codes are known as linear error-correcting codes. The (7,4) Hamming code encodes four bits of data into seven bits by adding three parity bits. A Hamming code is a set of error-correction codes that can detect and correct single-bit errors that occur when a bit stream is sent through a channel.
• 261. Quiz
Q1. The Hamming code is capable of: 1. only detecting a single-bit error 2. only correcting a single-bit error 3. detecting and correcting a single-bit error 4. none of the above
Q2. The position of a parity bit is decided by: a. 2²ⁿ b. 2ⁿ⁻¹ c. 2ⁿ d. none of the above
• 262. Q3. How is parity P1 decided in the (7,4) Hamming code? a. check 1 bit and skip 1 bit b. check 2 bits and skip 2 bits c. check 3 bits and skip 3 bits d. none
Q4. How is parity P2 decided in the (7,4) Hamming code? a. check 1 bit and skip 1 bit b. check 2 bits and skip 2 bits c. check 3 bits and skip 3 bits d. none
Q5. How is parity P3 decided in the (7,4) Hamming code? a. check 1 bit and skip 1 bit b. check 2 bits and skip 2 bits c. check 4 bits and skip 4 bits d. none
• 263. Q6. At the receiver, the parity bits at the end of the received message are: A. redundant bits, as they contain useful message information B. redundant bits, as they do not contain any message information C. non-redundant bits, as they do not contain any information D. none of these
Q7. Find the Hamming weight of the given code word 11000110: A. 1 B. 2 C. 3 D. 4
• 264. Important Terminology in Hamming Code
For an (n, k) Hamming code, where n is the total length of the code word:
Number of parity bits q = n − k
Number of message bits k = n − q = 2^q − q − 1
Block length n = 2^q − 1
Rate R = k/n = (n − q)/n = 1 − q/n = 1 − q/(2^q − 1)
For example, q = 3 gives n = 7, k = 4 and R = 4/7, i.e. the (7,4) code.
• 265. Numerical related to Hamming Code
Q. A 7-bit Hamming code is received as 1011011. Assume even parity and state whether the received code word is correct or wrong; if wrong, locate the bit in error.
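A sketch of the receiver-side check for this question, assuming even parity; the three checks mirror the parity groups of slide 258, and the binary value s4 s2 s1 points directly at the erroneous position.

```python
def hamming74_correct(r):
    """r: list of 7 bits, positions 1..7 stored at indices 0..6."""
    s1 = r[0] ^ r[2] ^ r[4] ^ r[6]   # parity over positions 1, 3, 5, 7
    s2 = r[1] ^ r[2] ^ r[5] ^ r[6]   # parity over positions 2, 3, 6, 7
    s4 = r[3] ^ r[4] ^ r[5] ^ r[6]   # parity over positions 4, 5, 6, 7
    pos = s1 + 2 * s2 + 4 * s4       # syndrome value = error position (0 = no error)
    if pos:
        r[pos - 1] ^= 1              # flip the erroneous bit
    return pos, r

print(hamming74_correct([1, 0, 1, 1, 0, 1, 1]))  # received 1011011 -> error at position 7
```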
• 266. Error Detection and Correction Capabilities in Hamming Codes
Error detection capability is found from the expression Dmin ≥ s + 1, where s is the number of errors to detect.
E.g. if Dmin = 3, then s ≤ Dmin − 1 = 2: the code can detect up to 2 errors.
Error correction capability is found from the expression Dmin ≥ 2t + 1, where t is the number of errors to correct.
E.g. if Dmin = 3, then t ≤ (Dmin − 1)/2 = 1: the code can correct only 1 error.
• 267. Lemma: Prove that Dmin ≥ 3 if a Hamming code detects double errors and corrects single errors.
Proof: For detecting double (2) errors: Dmin ≥ s + 1 = 2 + 1, so Dmin ≥ 3.
For correcting up to one (1) error: Dmin ≥ 2t + 1 = 2(1) + 1, so Dmin ≥ 3.
• 268. Quiz
Q1. What is the correct expression for error detection in a Hamming code? a. Dmin ≥ 2s+1 b. Dmin ≥ s−1 c. Dmin ≥ s+1 d. none
Q2. What is the correct expression for error correction in a Hamming code? a. Dmin ≥ 2t+1 b. Dmin ≥ t−1 c. Dmin ≥ t+1 d. none
• 269. Q3. What is the correct expression for the rate R of a Hamming code? a. k/2n b. k/3n c. k/n d. none
Q4. What is the correct expression for the number of parity bits q in a Hamming code? a. q = n + k b. q = n − 2k c. q = n − k d. none
  • 270. Block diagram of Communication System
• 271. Parity Check Matrix (H)
Aim: the parity check matrix H is used at the receiver side for channel decoding; it is used for the detection and correction of errors.
For (n, k) Hamming codes, the parity check matrix H is defined as H = [Pᵀ : I], where P is the parity matrix and I is the identity matrix.
• 272. Generator Matrix (G)
Aim: the generator matrix generates the parity bits and mixes them with the message.
For (n, k) Hamming codes, the generator matrix G is defined as G = [I : P], where P is the parity matrix and I is the identity matrix.
• 273. Numerical related to Hamming Code
Q. A generator matrix of size 3×6 (= k × n) is given. Find all possible code vectors.
Hint:
1. Use the formula C = D·G, where D is the data word and G is the generator matrix.
2. G is in the form G = [I_k : P]_{k×n}.
3. The number of data-word combinations is 2^k.
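A sketch of the hint, C = D·G over GF(2). The generator matrix below is an assumed (6,3) example in [I : P] form, since the slide's own matrix is not reproduced in this text.

```python
from itertools import product

G = [[1, 0, 0, 1, 1, 0],     # assumed example, G = [I : P]
     [0, 1, 0, 0, 1, 1],
     [0, 0, 1, 1, 0, 1]]

def encode(d, G):
    """Modulo-2 matrix product of data word d (length k) with G (k x n)."""
    n = len(G[0])
    return [sum(d[i] * G[i][j] for i in range(len(d))) % 2 for j in range(n)]

for d in product([0, 1], repeat=len(G)):      # all 2^k data words
    print(list(d), "->", encode(list(d), G))
```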
• 276. Quiz
Q1. The main purpose of the generator matrix is: a. error correction b. error detection c. to generate the parity bits and include them in the message bits
Q2. The parity check matrix is used for: a. detection and correction of errors b. use at the receiver side c. both a and b d. none
Q3. Which expression is correct for the block length? a. n = 2^q + 1 b. n = 2^q − 1 c. n = 2^q × 1 d. none
• 277. Information Theory and Coding – 06/10/2022
Numerical
Q. For a (5, 2) linear block code the generator matrix is in the form [I_k : P], where the P matrix is given. Find:
(a) the generator matrix (b) the parity check matrix (c) all possible code vectors (d) the minimum Hamming distance
Hint:
1. Use G = [I_k : P].
2. Use the parity check matrix formula H = [Pᵀ : I_{n−k}].
3. Use C = D·G, where D is the data word and G is the generator matrix.
4. The total number of data-word combinations is 2^k.
5. Count the number of 1s in each non-zero code word; the minimum weight gives the minimum Hamming distance.
• 278. Numerical
Q. For a (6, 3) code the generator matrix G is given. Find:
(a) all corresponding code vectors (b) the minimum Hamming distance (c) verify that this code is a single-error-correcting code (d) the parity check matrix (e) the transmitted code word if the received code word is 100011
Hint:
1. Use G = [I_k : P].
2. Use C = D·G, where D is the data word and G is the generator matrix.
3. Count the number of 1s in each non-zero code word; the minimum weight gives d_min.
4. For error correction, Dmin ≥ 2t + 1.
5. Use the parity check matrix formula H = [Pᵀ : I].
6. Find the syndrome S = r·Hᵀ.
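A sketch of hint 6, the syndrome S = r·Hᵀ over GF(2). The H below is an assumed (6,3) parity check matrix in [Pᵀ : I] form, matching the assumed G of the previous sketch, since the slide's own matrix is not reproduced here.

```python
H = [[1, 0, 1, 1, 0, 0],     # assumed example, H = [P^T : I]
     [1, 1, 0, 0, 1, 0],
     [0, 1, 1, 0, 0, 1]]

def syndrome(r, H):
    """Each syndrome bit is the modulo-2 dot product of r with one row of H."""
    return [sum(ri * hi for ri, hi in zip(r, row)) % 2 for row in H]

# Zero syndrome -> accept r as a code word; a non-zero syndrome matches
# a column of H, locating a single-bit error.
print(syndrome([1, 0, 0, 0, 1, 1], H))
```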
• 279. Quiz
Q1. Which expression is correct for the syndrome? a. S = r / Hᵀ b. S = r + Hᵀ c. S = r·Hᵀ d. none
Q2. Which expression is correct for error correction? a. Dmin > t+1 b. Dmin > t−1 c. Dmin > 2t+1 d. none
• 280. Q3. Which one of the following sets of gates is best suited for parity checking and parity generation? a. AND, OR, NOT gates b. EX-NOR or EX-OR gates c. NAND gates d. NOR gates
Q4. The coding efficiency is given by: a. 1 − redundancy b. 1 + redundancy c. 1/redundancy d. none
• 281. Q. If the minimum Hamming distance of a code is 5, then the maximum number of errors that can be corrected is: a. 4 b. 3 c. 2 d. 1
Q. To guarantee the detection of up to 5 errors in all cases, the minimum Hamming distance in a block code must be _______: a. 5 b. 6 c. 11 d. none of the above
• 282. Q. To guarantee correction of up to 5 errors in all cases, the minimum Hamming distance in a block code must be ________: A. 5 B. 6 C. 11 D. none of the above
Q. The _____ of errors is more difficult than the ______: A. correction; detection B. detection; correction C. creation; correction D. creation; detection
• 283. Numerical
Q. A parity check matrix H is given. Suppose that the received code word is 110110; what will be the decoded received word? a. 110010 b. 100110 c. 010110 d. 110111
• 285. Numerical
Q. For a linear (6, 3) code whose parity matrix transpose Pᵀ is given, find out how many errors it can detect and correct.
• 287. Cyclic Codes
Cyclic codes satisfy two basic properties: (a) linearity (b) the cyclic property.
• 288. Cyclic Code for Non-Systematic Code Words
C(x) = m(x) · g(x)  …(1)
where C(x) is the code-word polynomial, m(x) is the message polynomial, and g(x) is the generator polynomial.
Format of a systematic code word: c = [message, parity].
If the message and parity bits are not in this order, the code word is non-systematic.
• 289. Numerical Questions based on Cyclic Codes
Q. Given the code word C = 10111, find the cyclic (code-word) polynomial.
Q. Construct the non-systematic cyclic code (7,4) using generator polynomial g(x) = x³ + x² + 1 with message (1010).
Q. If the generator polynomial is g(x) = x³ + x² + 1, find (a) the cyclic encoder (b) the code words if the message is (1110).
• 290. Q. Construct the non-systematic cyclic code (7,4) using generator polynomial g(x) = x³ + x² + 1 with message (1010).
• 291. Quiz
Q1. Check whether the given code is cyclic or not: c = {0000, 1100, 1001, 1111}. a. yes b. no c. cannot determine d. none
Q2. Convert the given message bits m = (0110) into polynomial form: a. x³ + x b. x³ + x + 1 c. x² + x d. none
• 292. Numerical Question based on Cyclic Codes
Q. If the generator polynomial is g(x) = x³ + x² + 1, find (a) the cyclic encoder (b) the code words if the message is (1110).
• 293. Cyclic Code for Systematic Code Words
In a systematic cyclic code the code word is C = [message, parity]:
C(x) = x^(n−k) m(x) + p(x), where p(x) = Rem[ x^(n−k) m(x) / g(x) ]
m(x) is the message polynomial, g(x) is the generator polynomial, and C(x) is the code-word polynomial.
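A sketch of this systematic construction using bit-level GF(2) division (names are mine); it encodes the (7,4) example with g(x) = x³ + x² + 1 and message 1010 from the next slide.

```python
# Bits are stored in Python ints, MSB = highest power of x.

def gf2_remainder(dividend, divisor):
    """Bitwise long division over GF(2); both arguments are ints."""
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        shift = dividend.bit_length() - dlen
        dividend ^= divisor << shift      # subtract (XOR) the aligned divisor
    return dividend

def cyclic_systematic_encode(m, k, n, g):
    shifted = m << (n - k)                # x^(n-k) * m(x)
    p = gf2_remainder(shifted, g)         # p(x) = Rem[x^(n-k) m(x) / g(x)]
    return shifted | p                    # C(x) = x^(n-k) m(x) + p(x)

g = 0b1101                                # g(x) = x^3 + x^2 + 1
c = cyclic_systematic_encode(0b1010, k=4, n=7, g=g)
print(format(c, '07b'))                   # -> 1010001 = [1010 | 001]
```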
• 294. Q. Construct the systematic cyclic code (7,4) using generator polynomial g(x) = x³ + x² + 1 with message (1010).
• 296. Cyclic Code for Generator Matrix (for the (7,4) code, the i-th row of the parity matrix P is the remainder of x^(n−i) divided by g(x)):
1st row of parity matrix: Rem[x⁶ / g(x)]
2nd row of parity matrix: Rem[x⁵ / g(x)]
3rd row of parity matrix: Rem[x⁴ / g(x)]
4th row of parity matrix: Rem[x³ / g(x)]
• 297. Q. If the generator polynomial of the cyclic code (7,4) is g(x) = x³ + x + 1, construct the generator matrix.
Hint: use the generator matrix format G = [I_k : P], then find the rows of the parity matrix P using:
1st row: Rem[x⁶ / g(x)]; 2nd row: Rem[x⁵ / g(x)]; 3rd row: Rem[x⁴ / g(x)]; 4th row: Rem[x³ / g(x)]
• 298. Q. If the generator polynomial of the cyclic code (7,4) is g(x) = x³ + x + 1, construct the generator matrix.
• 299. Quiz
Q1. The parity check matrix H is represented in the form: 1. H = [I : Pᵀ] 2. H = [Pᵀ : I] 3. H = [−Pᵀ : I] 4. none
Q2. The code vector is defined in the form: A. C = D·G B. C = D/G C. C = D + G D. C = D − G
Q3. Find the minimum Hamming distance for the following code words: 11001100, 01010101, 10101000. A. 4 B. 3 C. 2 D. 1
• 300. Types of Codes
Nonlinear codes: adding two code words does not necessarily produce a third code word of the code.
Linear codes: if two code words of the code are added by modulo-2 arithmetic, the result is another code word of the code.
Convolution codes: the coding operation is a discrete-time convolution of the input sequence; the encoder accepts the message bits continuously and generates the encoded sequence continuously.
Block codes: n = number of code bits, k = number of message bits, n − k = redundant (parity) bits.
• 301. Error Correction Capabilities
Distance requirement → errors detected/corrected:
Dmin ≥ s + 1 → detects up to s errors per word
Dmin ≥ 2t + 1 → corrects up to t errors per word
Dmin ≥ t + s + 1 → corrects up to t errors and detects s > t errors per word
• 303. Numerical
Q. An error control code has a given parity check matrix H.
a) Determine the generator matrix G.
b) Find the code word that begins with 101.
c) Decode the received code word 110110. Comment on the error detection and correction capability of this code.
d) What is the relationship between G and H? Verify the same.
  • 304. Generator matrix and Parity Check matrix
• 306. Dual Codes
For an (n, k) linear block code: G is the generator matrix and H is the parity check matrix.
For the dual (n, n−k) linear block code: H is the generator matrix and G is the parity check matrix.
  • 308. Q. For a certain G matrix Find all the code words of its dual code
• 309. Error Correcting Codes – Two Types
1. Block codes (memory is not present)
2. Convolution codes (memory is present)
Linear block codes: a code is said to be linear if the modulo-2 sum of the corresponding bits of any two code vectors results in another code vector.
• 310. Error Correcting Codes
k = number of message bits; n = number of code bits; (n − k) = q = number of parity bits.
Code rate or code efficiency r = k/n; a code rate close to 1 means high efficiency.
Code word: the encoded block of n bits. Block length: the number of bits n after encoding. Weight of the code: the number of non-zero elements in a code vector.
• 311. Error Correcting Codes
Systematic block code: the code word consists of the k message bits followed by the (n − k) parity bits.
• 312. Error Correcting Codes
The generator matrix of an (n, k) linear block code gives the code word C = m·G.
Size of the code word: (1×n) = (1×k)·(k×n).
G = [I : P]_{k×n} = [(k×k) identity : k×(n−k) parity]_{k×n}.
• 313. Numerical Questions
Q. An information source produces a sequence of independent symbols s1 … s7 with given probabilities. Using the binary Huffman code, find its efficiency.
• 314. Numerical Questions
Q. Apply Huffman coding to the following message and calculate the code length, efficiency, and redundancy: a message has 8 symbols x1, x2, x3, x4, x5, x6, x7, x8 with probabilities 16/32, 4/32, 2/32, 2/32, 1/32, 4/32, 2/32, 1/32 respectively.
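A Python sketch of this question using heapq to build the Huffman tree (names are mine); for these dyadic probabilities the average length equals the entropy, so the efficiency comes out as 1.

```python
import heapq
from math import log2

probs = {'x1': 16/32, 'x2': 4/32, 'x3': 2/32, 'x4': 2/32,
         'x5': 1/32, 'x6': 4/32, 'x7': 2/32, 'x8': 1/32}

def huffman(probs):
    # heap entries: (probability, unique tiebreaker, {symbol: partial code})
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)          # two least probable nodes
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: '0' + code for s, code in c1.items()}
        merged.update({s: '1' + code for s, code in c2.items()})
        heapq.heappush(heap, (p1 + p2, count, merged))
        count += 1
    return heap[0][2]

codes = huffman(probs)
L = sum(probs[s] * len(c) for s, c in codes.items())     # average code length
H = -sum(p * log2(p) for p in probs.values())            # source entropy
print(codes)
print(f"L = {L}, H = {H:.4f}, efficiency = {H/L:.4f}, redundancy = {1 - H/L:.4f}")
```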
• 315. Numerical Problem
Q. A discrete memoryless source has 3 symbols with probabilities P1 = α and P2 = P3 = (1 − α)/2. Determine the entropy of the source and sketch its variation for different values of α.
  • 316. Convolution Codes Concept: A convolutional encoder is a finite state machine and consists of a tapped shift register with (L+1) stages whose outputs are connected to modulo-2 adders through coefficient multipliers
• 317. Convolution Codes
Encoding: code tree, state transition diagram, trellis diagram.
Decoding: Viterbi algorithm, sequential decoding.
• 318. Code Tree: Main Points
1. The code tree is used at both the transmitter and the receiver.
2. Each applied input extends the code tree by a branch.
3. To form the code tree, first generate the convolution code pattern.
4. The convolution code pattern depends on the shift register.
5. In the code tree the input must be in 0/1 form.
6. If the input is 0, the branch goes upward; if the input is 1, the branch goes downward.
7. At each node of the code tree the branch divides into two parts.
• 319. Q. Draw the code tree for the three-bit shift register if the input data is 1010; also draw the states of the encoder.
• 321. Basic Terminology in Convolution Codes
Code rate r = k/n = number of message bits / number of encoded output bits.
Code dimension: (n, k).
• 322. CRC Concept
CRC stands for cyclic redundancy check.
At the transmitter, these steps generate the CRC bits:
1. Take the divisor bits (n bits).
2. Append (n − 1) zero bits to the data bits.
3. Perform modulo-2 division and find the remainder bits.
4. The remainder bits are the CRC; combine (append) the CRC bits with the data.
At the receiver:
1. Divide the received data by the same divisor.
2. If the remainder is zero, there are no errors.
• 323. CRC-based Problem
Q. Find the CRC value, and check whether the received code word is correct or wrong, if the data bits are 1010101010 and the divisor polynomial is P(x) = x⁴ + x³ + 1.
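A sketch of the four transmitter steps and the receiver check for exactly this problem; P(x) = x⁴ + x³ + 1 corresponds to divisor bits 11001, and names are my own.

```python
def xor_div_remainder(bits, divisor):
    """Modulo-2 long division on lists of bits; returns the remainder."""
    bits = bits[:]                       # work on a copy
    for i in range(len(bits) - len(divisor) + 1):
        if bits[i] == 1:                 # leading bit set -> subtract (XOR) divisor
            for j, d in enumerate(divisor):
                bits[i + j] ^= d
    return bits[-(len(divisor) - 1):]    # last n-1 bits are the remainder

data    = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]
divisor = [1, 1, 0, 0, 1]                # x^4 + x^3 + 1

crc = xor_div_remainder(data + [0] * (len(divisor) - 1), divisor)
codeword = data + crc
print("CRC:", crc, "transmitted:", codeword)

# Receiver: the remainder of the received word must be all zeros.
print("check:", xor_div_remainder(codeword, divisor))
```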
• 327. Trellis Diagram for Convolution Codes
- A graphical diagram showing the state transition from one stage to the next.
- A dotted line represents input 1; a dashed line represents input 0.
- m0 and m1 represent the present state of the convolution encoder.
• 329. Error Correcting Coding
Linear block codes: Hamming codes, cyclic codes.
Convolutional codes: turbo codes, trellis codes.
• 330. Principle of Turbo Coding
- A longer code means more randomness; if the randomness is greater, the code is more secure.
- A turbo code is the parallel concatenation of convolution codes.
- There are three types of concatenation: (a) series (b) parallel (c) hybrid.
• 332. Encoder of Turbo Codes
- Parallel concatenation increases the length of the code, which means the randomness is increased and the communication is more secure.
- In the encoder of a turbo code, a feedback path increases the length of the code and the randomness.
- The interleaver is represented by π (pi).
- The interleaver is used to rearrange the message bits.
  • 333. Design of Encoder for Turbo Codes
  • 334. Working Process of Encoder for Turbo Codes
• 335. Types of Turbo Codes
There are two types of turbo codes:
a. Symmetric turbo codes: the constituent encoders are the same.
• 336. Types of Turbo Codes (continued)
b. Asymmetric turbo codes: the constituent encoders are different.
• 337. Concept of Puncturing
- Puncturing reduces the number of transmitted outputs.
- A turbo encoder usually has three outputs; with puncturing the output is reduced and the code rate is increased.
- At even time instants, collect the y1 y2 outputs → code rate = 1/2.
- At odd time instants, collect the y1 y3 outputs → code rate = 1/2.
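A tiny sketch of this puncturing rule (the three streams below are made-up values): starting from rate 1/3, keeping two of the three outputs per time instant gives rate 1/2.

```python
y1 = [1, 0, 1, 1]   # assumed output streams of a rate-1/3 turbo encoder
y2 = [0, 0, 1, 0]
y3 = [1, 1, 0, 0]

punctured = []
for t in range(len(y1)):
    if t % 2 == 0:
        punctured.append((y1[t], y2[t]))   # even time: transmit y1, y2
    else:
        punctured.append((y1[t], y3[t]))   # odd time: transmit y1, y3

print(punctured)   # 2 bits out per message bit -> code rate 1/2
```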
• 338. Multiple Turbo Codes
- More encoders
- More interleavers
- More outputs
• 339. Questions
Q1. What do you understand by convolution codes? How are they different from block codes?
Q2. With a block diagram, explain the operation of any convolution encoder.
Q3. What is the constraint length of a convolution encoder?
Q4. What are the code tree, code trellis, and state diagram for a convolution encoder?
Q5. What is the difference between a code tree and a trellis diagram?
Q6. What do you understand by the state diagram in convolution codes?
Q7. What is the difference between linear block codes and convolutional codes?
Q8. Explain the Viterbi algorithm and sequential decoding of convolution codes.
Q9. Compare linear block codes, cyclic codes, and convolution codes, giving their advantages and disadvantages.
• 340. Difference between linear block codes and convolution codes
Convolution codes: generated by convolution between the message sequence and the generating sequence; coding is bit by bit; Viterbi decoding is used for maximum-likelihood decoding; generating sequences are used to get the code vectors.
Linear block codes: generated block by block from the message block; coding is block by block; syndrome decoding is used for maximum-likelihood decoding; a generating polynomial and a generator matrix are used to get the code vectors.
• 341. Difference between code tree and trellis diagram
Trellis diagram: indicates the transition from the current state to the next state; a shorter, more compact way of representing the coding process; decoding is a little complex; the trellis repeats at every stage, and in steady state it has only one stage pattern.
Code tree: indicates the flow of the coded signal along the nodes of the tree; a lengthy way of representing the coding process; decoding is very simple; the tree repeats after the number of stages used in the encoder.
• 342. Convolutional Codes
Convolution coding encodes the message bits using the previous bits as well as the present bit; the memory elements are flip-flops, and a shift register is present. These blocks help to encode in such a way that the present bit and the previous bits are used to get the encoded signal.
- Memory elements are present.
- Encoding: the present state and the previous bits are used.
- Constraint length: the number of shifts over which a single message bit influences the encoder output; in other words, the constraint length reflects the number of flip-flops present.
• 344. Numerical
Q. Using the tree diagram of the 3-bit shift register, if the received code word is 01 10 11 00 00, find the error in the received code word using the Viterbi algorithm.
• 347. Error Correcting Codes – Two Types
1. Linear block codes (memory is not present)
2. Convolution codes (memory is present)
Linear block codes: a code is said to be linear if the modulo-2 sum of the corresponding bits of any two code vectors results in another code vector.
• 348. Error Correcting Codes
k = number of message bits; n = number of code bits; (n − k) = q = number of parity bits.
Code rate or code efficiency r = k/n; a code rate close to 1 means high efficiency.
Code word: the encoded block of n bits. Block length: the number of bits n after encoding. Weight of the code: the number of non-zero elements in a code vector.
• 349. Difference between convolution and block codes
Block codes: 1. The code word depends only on the block of k message bits within that time unit. 2. Best suited for error detection.
Convolution codes: 1. The code word depends not only on the current block of k message bits but also on the preceding (N − 1) blocks of message bits. 2. Best suited for error correction. 3. The blocks used in convolution codes are shift registers and mod-2 adders. 4. The code word depends on the present and previous message bits. 5. Code rate r = k/n.
• 350. Convolution Codes
Encoding: code tree, state transition diagram, trellis diagram.
Decoding: Viterbi algorithm, sequential decoding.
• 351. Convolution Encoder Methods (for obtaining the output of the encoder): code tree, state diagram, trellis diagram.
  • 352. Q. Find the codewords of the following convolution encoder for the input 10011. Draw the state diagram, tree diagram, and trellis diagram of the encoder.
• 353. The process continues until the shift register returns to its original (all-zero) state.
• 354. Code Tree Diagram: Main Points
1. The code tree is used at both the transmitter and the receiver.
2. Each applied input extends the code tree by a branch.
3. To form the code tree, first generate the convolution code pattern.
4. The convolution code pattern depends on the shift register.
5. In the code tree the input must be in 0/1 form.
6. If the input is 0, the branch goes upward; if the input is 1, the branch goes downward.
7. At each node of the code tree the branch divides into two parts.
• 355. Tree Diagram
- A graphical representation.
- Each branch of the tree represents an input, with the corresponding pair of output bits indicated on the branch.
• 357. Quiz
Q1. Which of the following is not a way to represent a convolution code? a) state diagram b) trellis diagram c) tree diagram d) linear matrix
Q2. The code in convolution coding is generated using: a) EX-OR logic b) AND logic c) OR logic d) none of the above
Q3. For decoding in convolution coding, in a code tree: a) diverge upward when a bit is 0 and downward when the bit is 1 b) diverge downward when a bit is 0 and upward when the bit is 1 c) diverge left when a bit is 0 and right when the bit is 1 d) diverge right when a bit is 0 and left when the bit is 1
Q4. A parity check code can: a. detect a single-bit error b. correct a single-bit error c. detect a two-bit error d. correct a two-bit error
• 358. Q. The Hamming distance between the words 100101101 and 011011001 is: a) 4 b) 5 c) 6 d) 7
Q. Cyclic code is a subclass of convolutional code. True / False
Q. The methods used for representing a convolution encoder are: a. trellis diagram b. state diagram c. code tree diagram d. all of the mentioned
Q. In a tree diagram, the number of nodes ______ at successive branching: a. increases by 1 b. doubles c. triples d. none of the mentioned
• 359. Convolutional Codes
Convolution coding encodes the message bits using the previous bits as well as the present bit; the memory elements are flip-flops, and a shift register is present. These blocks help to encode in such a way that the present bit and the previous bits are used to get the encoded signal.
- Memory elements are present.
- Encoding: the present state and the previous bits are used.
- Constraint length: the number of shifts over which a single message bit influences the encoder output; in other words, the constraint length reflects the number of flip-flops present.
• 360. Difference between linear block codes and convolution codes
Convolution codes: generated by convolution between the message sequence and the generating sequence; coding is bit by bit; Viterbi decoding is used for maximum-likelihood decoding; generating sequences are used to get the code vectors.
Linear block codes: generated block by block from the message block; coding is block by block; syndrome decoding is used for maximum-likelihood decoding; a generating polynomial and a generator matrix are used to get the code vectors.
• 361. Difference between code tree and trellis diagram
Trellis diagram: indicates the transition from the current state to the next state; a shorter, more compact way of representing the coding process; decoding is a little complex; the trellis repeats at every stage, and in steady state it has only one stage pattern.
Code tree: indicates the flow of the coded signal along the nodes of the tree; a lengthy way of representing the coding process; decoding is very simple; the tree repeats after the number of stages used in the encoder.
• 362. State Diagram
The state diagram is a graph of the possible states of the encoder and the possible transitions from one state to another. It provides the relationship between the encoder state, the input, and the output of the convolution encoder.
Number of bits in the state = number of shift-register stages − 1.
Number of possible states = 2ⁿ, where n is the number of bits in the state.
• 364. State Table (columns): Present state (PS) | Input bits | Bits in shift register | Outputs | Next state (NS)
• 366. Quiz
Q1. Which among the below-stated logical circuits are present in the encoder and decoder used for the implementation of cyclic codes? A. shift registers B. modulo-2 adders C. counters D. multiplexers → a. A & B b. C & D c. A & C d. B & D
Q2. The state diagram provides the relationship between: a. the input only b. the output only c. the encoder state, input, and output d. none
Q3. The number of bits in the state is defined as: a. number of shift-register stages + 1 b. number of shift-register stages + 2 c. number of shift-register stages − 1 d. none
• 367. Quiz
Q1. In convolution codes the memory element is not present: a. true b. false
Q2. In convolution codes the encoding depends on: a. only the present state b. only the next state c. both the present and next state d. not defined
Q3. The trellis diagram is a shorter or more compact way of representing the coding process: a. true b. false
Q4. The state diagram represents a graph of: a. only the possible states of the encoder b. only the possible transitions from one state to another c. both the possible states and the possible transitions d. none
• 368. Trellis Diagram
In the tree diagram the number of nodes at each stage increases (like 1, 2, 4, 8, 16). The tree becomes repetitive after the first three branches. To remove this redundancy, a structure called the trellis diagram is used. The trellis diagram is a way to show the transitions between the various states as time evolves; in the trellis diagram the redundancy is removed. All the states are represented on a vertical axis and repeated along the time axis; a transition from one state to another is denoted by a branch connecting the two states on two adjacent vertical axes.
• 373. Quiz
Q1. The best method to reduce the redundancy is: a. code tree b. state diagram c. trellis diagram d. none
Q2. The trellis diagram is a way to show the transitions between various states as ______ evolves: a. weight b. state c. time d. none
Q3. In the trellis diagram all the states are placed on the: a. horizontal axis b. vertical axis c. x-axis d. y-axis
• 374. Quiz
Q. In a code tree the number of nodes grows as: a. 3, 5, 7, 9 b. 0, 1, 3, 5, 7 c. 1, 2, 4, 8, 16 d. 2, 8, 32, 128
Q. In a trellis diagram all nodes are _______: a. connected b. not connected c. out d. none
Q. The convolution encoder methods are: a. code tree b. state diagram c. trellis diagram d. Viterbi algorithm → A. a, b, d B. a, b, c C. b, c, d D. a, c, d
• 375. Viterbi Algorithm
- Used for decoding; the most efficient method of decoding.
- In digital communication two factors matter most: energy and efficiency. Both factors are satisfied by the Viterbi algorithm.
- Main advantages: (a) very good bit-error performance (very few errors); (b) high speed of operation (less delay, so the output is obtained quickly).
- The Viterbi algorithm has low complexity and is fast. It is based on add-compare-select: choose the best path each time at each state.
• 376. Terminology: Viterbi Algorithm
Metric: the Hamming distance between the coded sequence and the received sequence.
Surviving path: the path of the decoded signal with the minimum metric.
Initialization: set the metric of the all-zero state of the trellis to zero.
In the Viterbi algorithm, use the trellis diagram and compare the paths entering each node. The path with the lower metric is retained and the other path is discarded.
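A hard-decision Viterbi sketch over a small trellis, using as an example the (2,1,2) encoder with g(1) = (1,1,1) and g(2) = (1,0,1) that appears in the numerical on slide 385; names are my own. The update step is exactly the add-compare-select rule described above; the test word is the encoding of 1011 with one bit flipped, which the decoder recovers with metric 1.

```python
def outputs(state, bit):
    s1, s0 = state                       # s1 = previous bit, s0 = bit before it
    y1 = bit ^ s1 ^ s0                   # g(1) = 1,1,1
    y2 = bit ^ s0                        # g(2) = 1,0,1
    return (y1, y2), (bit, s1)           # output pair, next state

def viterbi(received):
    INF = float('inf')
    states = [(0, 0), (0, 1), (1, 0), (1, 1)]
    metric = {s: (0 if s == (0, 0) else INF) for s in states}
    paths = {s: [] for s in states}
    for r in received:
        new_metric = {s: INF for s in states}
        new_paths = {}
        for s in states:
            if metric[s] == INF:
                continue                 # state not yet reachable
            for bit in (0, 1):
                out, nxt = outputs(s, bit)
                d = metric[s] + (out[0] != r[0]) + (out[1] != r[1])
                if d < new_metric[nxt]:  # add-compare-select: keep the lower metric
                    new_metric[nxt] = d
                    new_paths[nxt] = paths[s] + [bit]
        metric, paths = new_metric, new_paths
    best = min(states, key=lambda s: metric[s])  # surviving path
    return paths[best], metric[best]

# Encoding of 1 0 1 1 is (1,1) (1,0) (0,0) (0,1); flip one bit of the 2nd pair:
received = [(1, 1), (0, 0), (0, 0), (0, 1)]
print(viterbi(received))                 # -> ([1, 0, 1, 1], 1)
```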
• 377. Q. Find the data word transmitted for the received word 11010111 using the Viterbi decoding algorithm.
• 379. Practice Questions Based on Convolution Codes
Q. A rate-1/3 convolution code is described by g1 = [111], g2 = [100], g3 = [101]. Draw the encoder and the code tree corresponding to this code.
Q. A rate-1/2 convolution code is described by the generator sequences g1 = [101] and g2 = [110].
a. Draw the encoder diagram of this code.
b. What is the constraint length of the code?
c. Find the code word for the message sequence 11001.
Q. Find the data word transmitted for the received word 11011100 using the Viterbi decoding algorithm.
• 380. Syndrome Decoding
Q. Construct the decoding table for the single-error-correcting (7, 4) cyclic code with g(x) = 1 + x² + x³. Determine the data vector transmitted for the following received vectors: (a) 1101101 (b) 0101000 (c) 0001100.
• 381. 1. The received code contains an error if the syndrome vector is: a) zero b) non-zero c) infinity d) none of the mentioned
2. The syndrome is calculated by: a) H/T b) r·Hᵀ c) r·H d) none of the mentioned
3. For a (7, 4) cyclic code, the generator polynomial is g(x) = 1 + x + x³. Find the data word for the code word 0110100. Ans__________________
4. If the constraint length of an (n, k, L) convolutional code is defined as the number of encoder output bits influenced by each message bit, then the constraint length is given by: 1. L(n + 1) 2. n(L + 1) 3. n(L + k) 4. L(n + k)
• 382. Q. ________ codes are special linear block codes with one extra property: if a code word is rotated, the result is another code word. a. nonlinear b. convolution c. cyclic d. none
Q. To guarantee correction of up to 5 errors in all cases, the minimum Hamming distance in a block code must be: a. 5 b. 6 c. 11 d. none
Q. To guarantee detection of up to 5 errors in all cases, the minimum Hamming distance in a block code must be: a. 5 b. 6 c. 11 d. none
Q. The Hamming distance between 100 and 001 is _____: a. 2 b. 0 c. 1 d. none
• 383. Numerical Questions based on Cyclic Codes
Q. If the generator polynomial of a (7, 4) cyclic code is g(x) = 1 + x + x³, find the code words in non-systematic form.
Q. If the generator polynomial of a (7, 4) cyclic code is g(x) = 1 + x + x³, find the code words in systematic form.
• 384. Numerical Question based on Cyclic Codes
Q. If the generator polynomial of the cyclic code (7, 4) is g(x) = 1 + x + x³, construct the generator matrix.
• 385. Numerical
Q. Draw the state table and state diagram for a (2,1,2) convolution encoder with generator sequences g(1) = {1,1,1} and g(2) = {1,0,1}.
• 386. State Table (columns): Present state (PS) | Input bits | Bits in shift register | Outputs | Next state (NS)
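A sketch that generates the rows of this state table for the (2,1,2) encoder of the question above (g(1) = {1,1,1}, g(2) = {1,0,1}); the state is the pair of previous input bits, and names are mine.

```python
def step(state, bit):
    s1, s0 = state                   # the two previous input bits
    y1 = bit ^ s1 ^ s0               # g(1) = 1,1,1: taps on input, s1, s0
    y2 = bit ^ s0                    # g(2) = 1,0,1: taps on input, s0
    return (y1, y2), (bit, s1)       # outputs, next state

print("PS      input  outputs  NS")
for state in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    for bit in (0, 1):
        out, ns = step(state, bit)
        print(state, " ", bit, "  ", out, ns)
```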
  • 387. How to Draw the Convolution Encoder diagram
  • 388. How to Draw the Convolution Encoder diagram
• 389. Q1. Draw the encoder diagram for the (7, 3) cyclic code whose generator polynomial is g(x) = 1 + x + x² + x⁴.
• 390. Q2. Design an encoder for the (7,4) binary cyclic code generated by g(x) = 1 + x + x³ and verify its operation using the message vectors (1001) and (1011).
• 394. Q. If the generator polynomial of the cyclic code (7,4) is g(x) = x³ + x + 1, construct the generator matrix.
Hint: use the generator matrix format G = [I_k : P], then find the rows of the parity matrix P using:
1st row: Rem[x⁶ / g(x)]; 2nd row: Rem[x⁵ / g(x)]; 3rd row: Rem[x⁴ / g(x)]; 4th row: Rem[x³ / g(x)]
• 395. Q. If the generator polynomial of the cyclic code (7,4) is g(x) = x³ + x + 1, construct the generator matrix.
• 396. Cyclic Code for Generator Matrix (for the (7,4) code, the i-th row of the parity matrix P is the remainder of x^(n−i) divided by g(x)):
1st row of parity matrix: Rem[x⁶ / g(x)]
2nd row of parity matrix: Rem[x⁵ / g(x)]
3rd row of parity matrix: Rem[x⁴ / g(x)]
4th row of parity matrix: Rem[x³ / g(x)]