CHANNEL CODING
Introduction
The mutual information I(A,B), with input A and output B,
measures the amount of information that the channel is able to
convey about the source. If the channel is noiseless, then this is
equal to the information content of source A, I(A,B) = H(A). But
if the channel is noisy, there is an uncertainty, the equivocation
H(A|B), that reduces the mutual information to I(A,B) = H(A) − H(A|B).
The presence of noise manifests itself as random errors in the
transmission of the symbols.
The random errors may be tolerable (e.g., occasional bit errors
in the transmission of an image may not yield any noticeable
degradation in picture quality) or fatal (e.g., with Huffman
codes, any bit error in the encoded variable-length output will
create a cascade of errors in the decoded sequence due to the
loss of coding synchronization).
When the errors introduced by the information channel are
unacceptable, then channel coding is needed to reduce the error
rate and improve the reliability of the transmission.
Since the digital communication system uses a binary channel
and the source coder encodes the source to a binary code,
the intervening channel coder will map binary messages to
binary codes.
Our concern is binary block codes, that is, both the channel
coder inputs and outputs are binary and of fixed length.
Definition of channel coding
The channel encoder separates, or segments, the incoming bit
stream into equal-length blocks of L binary digits and maps
each L-bit message block into an N-bit code word, where N > L
and the extra N − L check bits provide the required error
protection.
There are M = 2^L messages and hence 2^L code words of length N bits.
• The channel decoder has the task of detecting that there
has been a bit error and (if possible) correcting it.
• The channel decoder can resolve bit errors by two schemes
of error control:
• ARQ (Automatic Repeat Request): if the channel decoder performs error
detection, then errors can be detected, and a feedback channel from the
channel decoder to the channel encoder can be used to control
retransmission of the code word until it is received without
detectable errors.
• FEC (Forward Error Correction): if the channel decoder
performs error correction, then errors are not only detected but
the bits in error can be identified and corrected (by bit inversion).
Code Rate
• The extra N − L bits require a higher bit rate or bandwidth
of the channel.
• Assume that the source coder generates messages at an
average bit rate of n_s bits per second.
• If the channel encoder maps each L-bit message into an N-bit
code word, then the channel bit rate will be
n_c = (N/L) n_s
• This implies that more bits are transmitted through the channel, so
the channel must transmit bits faster than the source
encoder produces them.
The general expression for the code rate is
R = H(M)/n
where H(M) is the average number of bits transmitted with each
source message and n is the length of the code word.
For the case of channel coding we have M = 2^L, and each
message is transmitted as a code word of length N, so
H(M) = log2 M = L and n = N
R = L/N = n_s/n_c = T_c/T_s
• The code rate measures the relative amount of information
conveyed in each code word.
• A higher code rate implies fewer redundant check bits
relative to the code word length.
• The message rate and the channel rate are fixed design
parameters; the code word length is then selected as
N = L n_c/n_s
Example
A system has a source bit rate of n_s bps and a channel bit rate
of n_c bps with n_c/n_s = 4/3 (the ratio implied by the cases
below). This requires a channel coder with N = L n_c/n_s = 4L/3.
If L = 3, we have to choose N = 4.
If L = 5, the ideal length 20/3 is not an integer, so we have to
choose N = 6, and the channel coder has a code rate of 5/6. The
rate mismatch yields a loss of synchronization, which we have to
compensate for with 2 "dummy" bits every 3 code words (every 15
message bits occupy 20 channel bits, while 3 code words account
for only 18).
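As a quick check of this arithmetic, the sketch below computes the code word length, code rate, and per-code-word slack; the concrete bit rates are hypothetical, chosen only to reproduce the 4/3 ratio above:

```python
from fractions import Fraction

def channel_code_params(L, ns, nc):
    """Code word length, code rate, and slack for an (N, L) block code.

    Uses the relations from the text: n_c = (N/L) n_s, so the ideal
    code word length is N = L * n_c / n_s (not always an integer).
    """
    ideal_N = Fraction(L) * Fraction(nc, ns)
    N = int(ideal_N)               # largest feasible integer length
    return N, Fraction(L, N), ideal_N - N

# Hypothetical rates with the 4/3 ratio, e.g. 6000 bps source, 8000 bps channel.
for L in (3, 5):
    N, R, slack = channel_code_params(L, 6000, 8000)
    print(f"L={L}: N={N}, R={R}, slack={slack} channel bits per code word")
# L=3: N=4, R=3/4, slack=0
# L=5: N=6, R=5/6, slack=2/3  ->  2 dummy bits every 3 code words
```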
Decoding Rules
The form of the decoding rule governs how the channel codes
are used and the level of error protection provided.
Let a_i = a_i1 a_i2 … a_iN be the i-th N-bit code word transmitted
through the channel.
Let b = b_1 b_2 … b_N be the corresponding N-bit word produced at
the output of the channel.
Let B_M represent the set of M valid N-bit code words, and its
complement B_M^C the remaining N-bit words which are not code
words.
Then B^N = B_M ∪ B_M^C is the set of all possible N-bit binary
sequences, and b ∈ B^N whereas a_i ∈ B_M.
The channel decoder has to apply the decoding rule on the
received word to decide which code word was transmitted.
If the decision rule is D(·), then a_i = D(b).
Let P_N(b | a_i) be the probability of receiving b given a_i was
transmitted.
For a discrete memoryless channel this probability factors over
the bit positions:
P_N(b | a_i) = Π_{j=1}^{N} P(b_j | a_ij)
Let P_N(a_i) be the a priori probability of the message
corresponding to the code word a_i.
By Bayes' Theorem we get
P_N(a_i | b) = P_N(b | a_i) P_N(a_i) / P_N(b)
If the decoder decodes b into the code word a_i, then the
probability that this is correct is P_N(a_i | b) and the probability
that it is wrong is 1 − P_N(a_i | b).
If we want to minimize the error, we have to maximize
P_N(a_i | b) (this is the minimum-error decoding rule).
Definition of minimum-error decoding rule:
We choose D_ME(b) = a*, where a* ∈ B_M is such that
P_N(a* | b) ≥ P_N(a_i | b) for all i
By using Bayes' Theorem and noting that P_N(b) is independent
of the code word, the condition becomes
P_N(b | a*) P_N(a*) ≥ P_N(b | a_i) P_N(a_i) for all i
requiring knowledge of both the channel probabilities and the
channel input probabilities.
This decoding rule guarantees minimum error in decoding.
Another rule is the maximum-likelihood decoding rule, which
maximizes P_N(b | a_i), the likelihood that b is received given
a_i was transmitted.
This rule is simpler, as the channel input probabilities are not
required.
Definition of Maximum-Likelihood Decoding:
We choose D_ML(b) = a*, where a* ∈ B_M is such that
P_N(b | a*) ≥ P_N(b | a_i) for all i
requiring only knowledge of the channel probabilities. This
decoding rule does not guarantee minimum error in decoding.
The maximum-likelihood decoding rule is the same as the
minimum-error decoding rule when the channel input
probabilities are equal.
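To make the two rules concrete, here is a minimal brute-force sketch; the two-codeword code, the priors, and the BSC error probability are hypothetical, not taken from the text:

```python
def pN(b, a, q):
    """Per-bit likelihood P_N(b | a) for a BSC with error probability q."""
    prob = 1.0
    for bj, aj in zip(b, a):
        prob *= q if bj != aj else (1.0 - q)
    return prob

def decode_ML(b, codewords, q):
    """Maximum-likelihood rule: maximize P_N(b | a_i)."""
    return max(codewords, key=lambda a: pN(b, a, q))

def decode_ME(b, codewords, priors, q):
    """Minimum-error rule: maximize P_N(b | a_i) P_N(a_i)."""
    return max(codewords, key=lambda a: pN(b, a, q) * priors[a])

codewords, q, b = ["000", "111"], 0.1, "001"
print(decode_ML(b, codewords, q))                              # 000
# With equal priors the two rules agree ...
print(decode_ME(b, codewords, {"000": 0.5, "111": 0.5}, q))    # 000
# ... but a strong enough prior on 111 flips the minimum-error decision.
print(decode_ME(b, codewords, {"000": 0.02, "111": 0.98}, q))  # 111
```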
Assume we have decoding rule D(·), and let B_i be the set of all
possible received words b such that D(b) = a_i.
The probability of decoding error given code word a_i is
transmitted is
P(E | a_i) = Σ_{b ∉ B_i} P_N(b | a_i)
The overall probability of decoding error is
P(E) = Σ_i P(E | a_i) P_N(a_i)
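Both sums can be evaluated exactly by enumerating all 2^N possible received words. Continuing the sketch above (same hypothetical code and channel, with B_i defined implicitly by the decoder):

```python
from itertools import product

def error_probabilities(codewords, priors, q, decode):
    """Exact P(E | a_i) and overall P(E) by enumerating every received word b."""
    N = len(codewords[0])
    per_cw = {}
    for a in codewords:
        # Sum P_N(b | a) over every b that does NOT decode back to a.
        per_cw[a] = sum(pN("".join(bits), a, q)
                        for bits in product("01", repeat=N)
                        if decode("".join(bits), codewords, q) != a)
    overall = sum(per_cw[a] * priors[a] for a in codewords)
    return per_cw, overall

per_cw, Pe = error_probabilities(codewords, {"000": 0.5, "111": 0.5}, q, decode_ML)
print(per_cw, Pe)  # each 0.028 = 3 q^2 (1-q) + q^3 for this repetition code
```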
Hamming Distance
The important parameter in error coding is the number of bit
errors, or differences, between two different code words a_i and
a_j.
This measure is provided by the Hamming distance on the space of
binary words of length N.
The properties of Hamming distance are instrumental in
establishing the operation and performance of channel codes for
both error detection and error correction
Definition of Hamming Distance:
Consider two binary words of length N, a = a_1 a_2 … a_N and
b = b_1 b_2 … b_N.
The Hamming distance between a and b, d(a,b), is defined as the
number of bit positions in which a and b differ.
The Hamming distance is a metric on the space of all binary
words of length N since, for arbitrary words a, b, c, it obeys
the following conditions:
d(a,b) ≥ 0, with equality if and only if a = b
d(a,b) = d(b,a)
d(a,b) + d(b,c) ≥ d(a,c) (triangle inequality)
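A direct implementation is a one-liner, and the three axioms can be spot-checked exhaustively for small N; the names below are illustrative:

```python
from itertools import product

def hamming(a, b):
    """Number of bit positions in which equal-length words a and b differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

# Spot-check the metric axioms over all pairs (and triples) of 3-bit words.
words = ["".join(bits) for bits in product("01", repeat=3)]
for a in words:
    for b in words:
        assert hamming(a, b) >= 0 and (hamming(a, b) == 0) == (a == b)
        assert hamming(a, b) == hamming(b, a)
        for c in words:
            assert hamming(a, b) + hamming(b, c) >= hamming(a, c)
print(hamming("011", "101"))  # 2
```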
Hamming Distance for BSCs
Consider the maximum-likelihood decoding rule, where the code
word a_i which maximizes the conditional probability P_N(b | a_i)
has to be found.
Consider a BSC with bit error probability q.
Let the Hamming distance d(b, a_i) = D represent the number of
bit errors between the transmitted code word and the received
word.
This gives the following expression for the conditional
probability:
P_N(b | a_i) = q^D (1 − q)^(N−D)
If q < 0.5, the expression above is maximized when a_i is chosen
such that d(b, a_i) is minimized.
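A quick numerical check (with a hypothetical q and N) confirms that the likelihood falls strictly as D grows whenever q < 0.5, so minimizing the Hamming distance maximizes the likelihood:

```python
q, N = 0.1, 6  # hypothetical BSC error probability and code word length
likelihoods = [q**D * (1 - q)**(N - D) for D in range(N + 1)]
# Each extra bit error multiplies the likelihood by q/(1-q) < 1 when q < 0.5.
assert all(x > y for x, y in zip(likelihoods, likelihoods[1:]))
print(likelihoods)
```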
Hamming Distance Decoding Rule
The binary word b of length N is received upon transmission
of one of the M possible N-bit binary code words a_i ∈ B_M,
through a BSC.
Assuming the maximum-likelihood decoding rule, we choose
the most likely code word as follows:
• If b = a_i for a particular i, then the code word a_i was sent.
• If b ≠ a_i for every i, we find the code word a* ∈ B_M which
is closest to b in the Hamming sense:
d(a*, b) = min_{a_i ∈ B_M} d(a_i, b)
If there is only one candidate a*, then the t-bit error, where
t = d(a*, b), is corrected and we conclude a* was sent.
If there is more than one candidate a*, then the t-bit error, where
t = d(a*, b), can only be detected.
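The rule translates directly into code. This sketch reuses the hamming helper above; it returns the decoded word when the nearest code word is unique, and signals detection-only on a tie (the example code words come from the even-weight code discussed later):

```python
def hamming_decode(b, codewords):
    """Hamming distance decoding rule.

    Returns (codeword, t, corrected): the nearest code word, the error
    weight t = d(a*, b), and whether the error was corrected (unique
    nearest code word) or merely detected (tie between candidates).
    """
    distances = {a: hamming(a, b) for a in codewords}
    t = min(distances.values())
    candidates = [a for a, d in distances.items() if d == t]
    if len(candidates) == 1:
        return candidates[0], t, True   # unique candidate: error corrected
    return None, t, False               # tie: error only detected

code = ["000", "011", "101", "110"]     # even-weight code, d = 2
print(hamming_decode("000", code))      # ('000', 0, True)   no error
print(hamming_decode("001", code))      # (None, 1, False)   1-bit error detected
```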
Error Detection/Correction Using Hamming Distance
Errors can only be detected when the received word is equidistant
from more than one code word.
Errors are corrected when the received word is closest to exactly
one code word.
Definition of minimum distance of a code:
The minimum distance of a block code K_n, where K_n identifies
the set of length-n code words, is given by
d(K_n) = min { d(a,b) | a, b ∈ K_n and a ≠ b }
that is, the smallest Hamming distance over all pairs of distinct
code words.
A block code K_n detects up to t errors if and only if its
minimum distance is greater than t.
A block code K_n corrects up to t errors if and only if its
minimum distance is greater than 2t.
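Both statements are easy to verify computationally. Reusing the hamming helper, this sketch computes d(K_n) by brute force and reports the guaranteed capabilities for the two codes used as examples later in the text:

```python
from itertools import combinations

def min_distance(code):
    """Smallest Hamming distance over all pairs of distinct code words."""
    return min(hamming(a, b) for a, b in combinations(code, 2))

def capabilities(code):
    d = min_distance(code)
    return d, d - 1, (d - 1) // 2  # d, detectable t (d > t), correctable t (d > 2t)

for code in (["000", "011"], ["000", "011", "101", "110"]):
    d, det, cor = capabilities(code)
    print(f"{code}: d={d}, detects up to {det} errors, corrects up to {cor}")
# Both codes have d = 2: they detect single errors but correct none.
```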
Upper Bounds on M and the Hamming Bound
Definition of Upper Bound on M:
For a block code K_N of length N and minimum distance d(K_N),
the maximum number of code words (and hence messages), M,
for which such a code is guaranteed to exist is given by:
M = B(N, d(K_N))
The following are elementary results for B(N, d(K_N)):
B(N,1) = 2^N
B(N,2) = 2^(N−1)
B(N,2t+1) = B(N+1,2t+2)
B(N,N) = 2
where B(N,2t+1) = B(N+1,2t+2) indicates that if we know the
bound for an odd minimum distance d(K_N) = 2t+1, then we can
obtain the bound for the next even value d(K_N)+1 = 2t+2, and
vice versa.
Theorem: Hamming Bound
If a block code of length N is a t-bit error correcting code, then
the number of code words, M, must satisfy the following
inequality:
M ≤ 2^N / Σ_{k=0}^{t} C(N,k)
where C(N,k) is the binomial coefficient "N choose k". This is an
upper bound (the Hamming bound) on the number of code words.
We already know that for a t-bit error correcting code,
d(K_N) > 2t.
A t-bit error correcting code will therefore have a minimum
distance of d(K_N) = 2t+1 or more.
Consider the problem of finding the upper bound for a code
with d(K_N) = 2t+1.
Since d(K_N) > 2t, this is a t-bit error correcting code and subject
to the Hamming bound.
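The bound itself is one line of code. The (N, t) pairs below are illustrative; the first is met with equality by the (7,4) Hamming code, and codes meeting the bound exactly are called perfect:

```python
from math import comb

def hamming_bound(N, t):
    """Upper bound on M for a t-bit error correcting code of length N."""
    return 2**N // sum(comb(N, k) for k in range(t + 1))

print(hamming_bound(7, 1))   # 16   (achieved by the (7,4) Hamming code)
print(hamming_bound(6, 1))   # 9
print(hamming_bound(23, 3))  # 4096 (achieved by the binary Golay code)
```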
Maximal Codes and Gilbert Bound
The code K_3 = {000, 011} has d(K_3) = 2 and contains M = 2 code
words of length N = 3, implying L = 1 and a code rate R = 1/3.
The code can be made more efficient by augmenting it to the
code K_3 = {000, 011, 101, 110}, which also has d(K_3) = 2
but contains M = 4 code words of length N = 3, implying L = 2
and a code rate R = 2/3.
The second code is more efficient than the first without
sacrificing the minimum distance.
Definition of Maximal Codes:
A code K_N of length N and minimum distance d(K_N) with M
code words is said to be maximal if it is not part of, and cannot be
augmented to, another code of length N and minimum distance
d(K_N) but with M + 1 code words. It can be shown that a code is
maximal if and only if for all words b of length N there is a
code word a_i such that d(b, a_i) < d(K_N).
Example
The code K_3 = {000, 011} with d(K_3) = 2 is not maximal, since for
the word b = 101, d(a_1, b) = d(000, 101) = 2 and d(a_2, b) = d(011, 101) = 2,
and thus there is no code word such that d(b, a_i) < 2.
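For short codes the maximality condition can be checked exhaustively. This sketch reuses hamming, min_distance, and product from the earlier snippets and confirms that {000, 011} is not maximal while the augmented even-weight code is:

```python
def is_maximal(code):
    """Maximal iff every length-N word is within distance d(K_N) - 1 of
    some code word, so nothing can be added without shrinking d(K_N)."""
    N, d = len(code[0]), min_distance(code)
    for bits in product("01", repeat=N):
        b = "".join(bits)
        if all(hamming(b, a) >= d for a in code):
            return False  # b could be added while keeping distance >= d
    return True

print(is_maximal(["000", "011"]))                # False (b = 101 is a witness)
print(is_maximal(["000", "011", "101", "110"]))  # True
```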
If a code K_N is a maximal code, then it satisfies the Gilbert
bound.
Theorem: Gilbert Bound
If a block code K_N of length N is a maximal code with minimum
distance d(K_N), then the number of code words, M, must satisfy
the following:
M ≥ 2^N / Σ_{k=0}^{d(K_N)−1} C(N,k)
which provides a lower bound (the Gilbert bound) on the
number of code words.
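Evaluated for the maximal even-weight code above (N = 3, d = 2, M = 4), the bound is comfortably satisfied:

```python
from math import ceil, comb

def gilbert_bound(N, d):
    """Lower bound on M for a maximal code of length N, minimum distance d."""
    return ceil(2**N / sum(comb(N, k) for k in range(d)))

print(gilbert_bound(3, 2))  # 2, and indeed M = 4 >= 2 for the even-weight code
```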
Error Probabilities
There are two important measures for a channel code:
• The code rate. Codes with a higher code rate are more
desirable.
• The probability of error. Codes with a lower probability of
error are more desirable. For error correcting codes the
block error probability is important; for error detecting
codes the probability of undetected error is important.
There is a tradeoff between the code rate and the probability of
error: to achieve a lower probability of error it may be necessary
to sacrifice code rate.
The bit error probability, P_eb(K_n), is the probability of a bit
error between the message and the decoded message.
For a BSC system without a channel coder: P_eb(K_n) = q.
The block error probability, P_e(K_n), is the probability of a
decoding error by the channel decoder:
• the probability that the decoder picks the wrong code
word when applying the Hamming distance decoding rule.
• For L = 1, the bit error and block error probabilities are the
same.
• For L > 1, the calculation of the block error probability is
straightforward if we know the minimum distance of the
code.
Example:
Consider the code K_6 shown below (the code table itself is not
reproduced here; it maps each 3-bit message to a 6-bit code word,
including 000 → 000000 and 100 → 100011).
Say message 000 is transmitted as code word 000000.
Two bit errors occur in the last two bits, so the received
word is 000011.
• The channel decoder will select the nearest code word: 100011.
• A block error has occurred.
• If this always occurred, then P_e(K_n) = 1.
• The code word 100011 is decoded as message 100, which
has only the first bit wrong and the remaining two bits
correct when compared with the original message.
• Then the bit error probability is P_eb(K_n) = 1/3.
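To see both probabilities emerge, here is a self-contained Monte Carlo sketch. Since the K_6 table is not reproduced above, it uses a hypothetical systematic (6,3) code chosen to be consistent with the example (000 → 000000 and 100 → 100011):

```python
import random

# Hypothetical systematic (6,3) code: each 3-bit message m is followed
# by three parity bits m*P (mod 2), giving 000 -> 000000, 100 -> 100011.
P = [(0, 1, 1), (1, 0, 1), (1, 1, 0)]

def encode(msg):
    parity = [sum(m * p for m, p in zip(msg, col)) % 2 for col in zip(*P)]
    return tuple(msg) + tuple(parity)

messages = [tuple(int(c) for c in f"{i:03b}") for i in range(8)]
codebook = {encode(m): m for m in messages}

def bsc(word, q):
    """Flip each bit independently with probability q."""
    return tuple(bit ^ (random.random() < q) for bit in word)

def nearest(word):
    """Hamming distance decoding: pick the closest code word."""
    return min(codebook, key=lambda c: sum(x != y for x, y in zip(c, word)))

random.seed(0)
q, trials = 0.05, 100_000
block_errs = bit_errs = 0
for _ in range(trials):
    msg = random.choice(messages)
    decoded = codebook[nearest(bsc(encode(msg), q))]
    block_errs += decoded != msg
    bit_errs += sum(a != b for a, b in zip(decoded, msg))
print("block error rate:", block_errs / trials)  # single errors corrected (d = 3)
print("bit error rate:  ", bit_errs / (3 * trials))
```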