A cornerstone of our digital society stems from the explosive growth of multimedia traffic in general and video in particular. Video already accounts for over 50% of internet traffic today, and mobile video traffic is expected to grow by a factor of more than 20 in the next five years. This massive volume of data has created a strong demand for highly efficient approaches to video transmission. Error-correcting codes for reliable communication have been studied for several decades. However, ideal coding techniques for video streaming are fundamentally different from classical error-correcting codes: to be optimal they must operate under low-latency, sequential encoding and decoding constraints, and as such they must inherently have a convolutional structure. These unique constraints lead to fascinating new open problems in the design of error-correcting codes. In this talk we look at these problems from a system-theoretic perspective. In particular, we propose the use of convolutional codes.
Error-Correcting Codes: Application of
Convolutional Codes to Video Streaming
Diego Napp
Department of Mathematics, University of Aveiro, Portugal
July 22, 2016
Overview
Introduction to Error-correcting codes
Two challenges that recently emerged
Block codes vs convolutional codes
Error Correcting Codes

Basic problem:
• We want to store bits on a magnetic storage device,
• or send a message (a sequence of zeros and ones).
• Bits get corrupted, 0 → 1 or 1 → 0, but rarely.

What happens when we store/send information and errors occur?
Can we detect them? Can we correct them?
The International Standard Book Number (ISBN)

It can be proved that any two valid ISBN-10s differ in at least two digits.

An ISBN-10

x1 - x2 x3 x4 - x5 x6 x7 x8 x9 - x10

must satisfy

Σ_{i=1}^{10} i · xi ≡ 0 (mod 11).

Equivalently, one may weight the digits by 10, 9, . . . , 1: the two weighted sums add up to 11 · Σ xi ≡ 0, so one vanishes mod 11 exactly when the other does. For example, for the ISBN-10 0-306-40615-2:

s = (0 × 10) + (3 × 9) + (0 × 8) + (6 × 7) + (4 × 6) + (0 × 5) + (6 × 4) + (1 × 3) + (5 × 2) + (2 × 1)
  = 0 + 27 + 0 + 42 + 24 + 0 + 24 + 3 + 10 + 2
  = 132 = 12 × 11
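The check digit rule can be sketched in a few lines of Python (an illustrative sketch; the weights 10, 9, . . . , 1 match the worked example, and the final character 'X' stands for the value 10):

```python
def isbn10_valid(isbn: str) -> bool:
    """Check an ISBN-10: the weighted digit sum must be 0 mod 11.

    Weights run 10, 9, ..., 1 as in the worked example above.
    """
    chars = [c for c in isbn if c not in "- "]
    if len(chars) != 10:
        return False
    values = [10 if c in "xX" else int(c) for c in chars]
    s = sum(w * v for w, v in zip(range(10, 0, -1), values))
    return s % 11 == 0

print(isbn10_valid("0-306-40615-2"))  # the example above: True
print(isbn10_valid("0-306-40615-3"))  # one digit changed: False
```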
Sending/storing information: Naive solution

Repeat every bit three times.

Message: 1 1 0 1 · · ·
Encoding ⇒ Codeword: 111 111 000 111 · · ·
Received message: 111 111 001 111 · · ·
Decoding ⇒ 1 1 0 1 · · ·

• Good: very easy encoding/decoding
• Bad: rate 1/3

Can we do better?
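The naive scheme above can be sketched directly (an illustrative sketch; majority vote recovers the message as long as at most one bit per block of three is flipped):

```python
def rep_encode(bits):
    """Repetition code: send each bit three times (rate 1/3)."""
    return [b for b in bits for _ in range(3)]

def rep_decode(received):
    """Majority vote over each block of three; corrects one flip per block."""
    return [1 if sum(received[i:i + 3]) >= 2 else 0
            for i in range(0, len(received), 3)]

msg = [1, 1, 0, 1]
codeword = rep_encode(msg)           # [1,1,1, 1,1,1, 0,0,0, 1,1,1]
codeword[7] ^= 1                     # one bit flipped in the third block
print(rep_decode(codeword) == msg)   # True: the error is corrected
```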
Another solution

• Break the message into 2-bit blocks u = (u1, u2) ∈ F²
• Encode each block as u ↦ uG, where

G = ( 1 0 1 )
    ( 0 1 1 )

so that (u1, u2) G = (u1, u2, u1 + u2).

• Better rate: 2/3
• We can detect 1 error... but we cannot correct it.
• Break the message into 3-bit blocks, e.g. m = (1, 1, 0) ∈ F³
• Encode each block as u ↦ uG, where

G = ( 1 0 0 1 1 0 )
    ( 0 1 0 1 0 1 )
    ( 0 0 1 0 1 1 )

For example,

(1, 1, 0) G = (1, 1, 0, 0, 1, 1);
(1, 0, 1) G = (1, 0, 1, 1, 0, 1);

etc.
• Rate 3/6 = 1/2, better than before (1/3).
• Only 2³ codewords in F⁶:

C = {(1, 0, 0, 1, 1, 0), (0, 1, 0, 1, 0, 1), (0, 0, 1, 0, 1, 1), (1, 1, 0, 0, 1, 1),
(1, 0, 1, 1, 0, 1), (0, 1, 1, 1, 1, 0), (1, 1, 1, 0, 0, 0), (0, 0, 0, 0, 0, 0)}

• In F⁶ we have 2⁶ possible vectors.
• Any two codewords differ in at least 3 coordinates: we can detect and correct 1 error!
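The claims above can be checked by brute force (a quick sketch in Python, all arithmetic mod 2):

```python
from itertools import product

# Generator matrix of the (6,3) code above
G = [(1, 0, 0, 1, 1, 0),
     (0, 1, 0, 1, 0, 1),
     (0, 0, 1, 0, 1, 1)]

def block_encode(u):
    return tuple(sum(u[i] * G[i][j] for i in range(3)) % 2 for j in range(6))

C = [block_encode(u) for u in product((0, 1), repeat=3)]   # all 2^3 codewords

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

d = min(hamming(x, y) for x in C for y in C if x != y)
print(len(C), d)   # 8 codewords, minimum distance 3
```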
Hamming distance

The intuitive concept of "closeness" of two words is formalized by the Hamming distance h(x, y): for two words x, y,

h(x, y) = the number of positions in which x and y differ.

A code C is a subset of Fⁿ, F a finite field. An important parameter of C is its minimum distance,

dist(C) = min{h(x, y) | x, y ∈ C, x ≠ y}.

Theorem (Basic error-correcting theorem)
1. A code C can detect up to s errors if dist(C) ≥ s + 1.
2. A code C can correct up to t errors if dist(C) ≥ 2t + 1.
Definition
An (n, k) block code C is a k-dimensional subspace of Fⁿ; the rows of G form a basis of C:

C = Im_F G = {uG : u ∈ Fᵏ}    (1)

Main coding theory problem
1. Construct codes that can correct a maximal number of errors while using a minimal amount of redundancy (rate).
2. Construct codes (as above) with efficient encoding and decoding procedures.
• Coding theory develops methods to protect information against errors.
• Cryptography develops methods to protect information against an adversary (or an unauthorized user).
• Coding theory, the theory of error-correcting codes, is one of the most interesting and applied parts of mathematics and informatics.
• All real systems that work with digitally represented data, such as CD players, TV, fax machines, the internet, satellites and mobile phones, require error-correcting codes, because all real channels are, to some extent, noisy.
• Coding theory methods are often elegant applications of very basic concepts and methods of (abstract) algebra.
Topics we are investigating
• (Convolutional) codes over finite rings (Z_{p^r}).
• Application of convolutional codes to Distributed Storage Systems.
• Application of convolutional codes to Network Coding, in particular to Video Streaming.
Distributed storage systems
• Fast-growing demand for large-scale data storage.
• Failures occur: redundancy is needed to ensure resilience → Coding theory.
• Data is stored over a network of nodes, peer-to-peer or data centers → Distributed storage systems.
Coding Theory
• A file u = (u1, . . . , uk) ∈ F_q^k is redundantly stored across n nodes:

v = (v1, . . . , vn) = uG,

where G is the generator matrix of an (n, k)-code C.
ā¢ What happens when nodes fail? The repair problem.
ā¢ Metrics:
ā¢ Repair bandwidth
ā¢ Storage cost
ā¢ Locality
Locality
• The locality is the number of nodes needed to repair one failed node.

Definition
An (n, k) code has locality r if every symbol in a codeword is a linear combination of at most r other symbols in the codeword.

• Locality ≥ 2 (if the code is not a replication).
• There is a natural trade-off between distance and locality.

Theorem (Gopalan et al., 2012)
Let C be an (n, k) linear code with minimum distance d and locality r. Then

n − k + 1 − d ≥ ⌊(k − 1)/r⌋,

equivalently d ≤ n − k − ⌈k/r⌉ + 2.
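A toy sketch of local repair (a hypothetical layout, not a code from the talk: four data symbols and two local parities over F₂, so each symbol lies in a local group of size 3 and any single loss is repaired from at most r = 2 others):

```python
# Hypothetical layout: data u1..u4, local parities p1 = u1 + u2, p2 = u3 + u4.
nodes = {"u1": 1, "u2": 0, "u3": 1, "u4": 1}
nodes["p1"] = nodes["u1"] ^ nodes["u2"]    # local group {u1, u2, p1}
nodes["p2"] = nodes["u3"] ^ nodes["u4"]    # local group {u3, u4, p2}

lost = nodes.pop("u1")                     # node u1 fails
repaired = nodes["u2"] ^ nodes["p1"]       # read only r = 2 surviving nodes
print(repaired == lost)                    # True
```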
Pyramid Codes are optimal with respect to this bound
• Pyramid codes were implemented in Facebook and Windows Azure Storage (released in 2007).
• But if two erasures occur... how do we repair multiple erasures?
• ... Next time. Today we focus on Video Streaming.
Video Streaming
• Explosive growth of multimedia traffic in general and video in particular.
• Video already accounts for over 50% of internet traffic today, and mobile video traffic is expected to grow by a factor of more than 20 in the next five years [1].

[1] Cisco: Forecast and Methodology, 2012-2017.
• Strong demand for highly efficient approaches to video transmission.
Network Coding
What is the best way to disseminate information over a network?

Linear random network coding
It has been proven that linear coding suffices to achieve the upper bound in multicast problems with one or more sources; it optimizes the throughput.
Linear Network Coding
• During one shot, the transmitter injects a number of packets into the network, each of which may be regarded as a row vector over a finite field F_{q^m}.
• These packets propagate through the network. Each node creates a random linear combination of the packets it has available and transmits this random combination.
• Finally, the receiver collects such randomly generated packets and tries to infer the set of packets injected into the network.
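The three steps can be simulated in miniature over F₂ (an illustrative sketch; real schemes use larger fields, and the combination coefficients travel in packet headers):

```python
import random

def gf2_solve(A, B):
    """Solve A X = B over F_2 (A is k x k); returns rows of X, or None if singular."""
    k = len(A)
    M = [A[i][:] + B[i][:] for i in range(k)]        # augmented rows
    for col in range(k):
        piv = next((r for r in range(col, k) if M[r][col]), None)
        if piv is None:
            return None                              # singular: collect more packets
        M[col], M[piv] = M[piv], M[col]
        for r in range(k):
            if r != col and M[r][col]:
                M[r] = [a ^ b for a, b in zip(M[r], M[col])]
    return [row[k:] for row in M]

random.seed(1)
packets = [[1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 0]]   # k = 3 injected packets
k = len(packets)
recovered = None
while recovered is None:                 # retry until the random combinations
    coeffs = [[random.randint(0, 1) for _ in range(k)] for _ in range(k)]
    mixed = []
    for row in coeffs:                   # each delivered packet is a random
        m = [0] * len(packets[0])        # F_2-combination of the injected ones
        for c, p in zip(row, packets):
            if c:
                m = [a ^ b for a, b in zip(m, p)]
        mixed.append(m)
    recovered = gf2_solve(coeffs, mixed)
print(recovered == packets)   # True
```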
Rank metric codes are used in Network Coding
• Rank metric codes are matrix codes C ⊆ F_q^{m×n}, equipped with the rank distance

d_rank(X, Y) = rank(X − Y), where X, Y ∈ F_q^{m×n}.

• For linear (n, k) rank metric codes over F_{q^m} with m ≥ n, the following analogue of the Singleton bound holds:

d_rank(C) ≤ n − k + 1.

• A code achieving this bound is called Maximum Rank Distance (MRD). Gabidulin codes are a well-known class of MRD codes.
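The rank distance can be sketched over F₂ (illustrative; real rank-metric schemes work over extension fields F_{q^m}, but F₂ keeps the example short):

```python
# d_rank(X, Y) = rank(X - Y), rank computed by Gaussian elimination mod 2.

def gf2_rank(M):
    M = [row[:] for row in M]
    rank = 0
    for col in range(len(M[0])):
        piv = next((r for r in range(rank, len(M)) if M[r][col]), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        for r in range(len(M)):
            if r != rank and M[r][col]:
                M[r] = [a ^ b for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

def rank_distance(X, Y):
    diff = [[a ^ b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]  # X - Y mod 2
    return gf2_rank(diff)

X = [[1, 0, 1],
     [0, 1, 0]]
Y = [[1, 0, 0],
     [0, 1, 1]]
print(rank_distance(X, Y))   # 1: X - Y has two equal nonzero rows
```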
THE IDEA: Multi-shot
• Coding can also be performed over multiple uses of the network, whose internal structure may change at each shot.
• Creating dependencies among the codewords transmitted in different shots can improve the error-correction capabilities.
• Ideal coding techniques for video streaming must operate under low-latency, sequential encoding and decoding constraints, and as such they must inherently have a convolutional structure.
• Although the use of convolutional codes is widespread, their application to video streaming (or with the rank metric) is as yet unexplored.
• We propose a novel scheme that adds complex dependencies to data streams in a quite simple way.
Block codes vs convolutional codes

. . . u2, u1, u0  ↦  . . . , v2 = u2 G, v1 = u1 G, v0 = u0 G,

represented in a polynomial fashion:

· · · + u2 D² + u1 D + u0  ↦  · · · + v2 D² + v1 D + v0,  with vi = ui G.

What if we substitute G by G(D) = G0 + G1 D + · · · + Gs D^s?

· · · + u2 D² + u1 D + u0  ↦  · · · + v2 D² + v1 D + v0,
now with v0 = u0 G0, v1 = u1 G0 + u0 G1, v2 = u2 G0 + u1 G1 + u0 G2.

Block codes: C = {uG} = Im_F G ∼ {u(D) G}
Convolutional codes: C = {u(D) G(D)} = Im_{F((D))} G(D)
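The substitution can be sketched concretely; G0 and G1 below are made-up 2×3 coefficient matrices over F₂ (so G(D) = G0 + G1·D), and the encoder computes the convolution v_i = u_i G0 + u_{i−1} G1, exactly as in the expansion above:

```python
G0 = [(1, 0, 1), (0, 1, 1)]
G1 = [(0, 1, 1), (1, 0, 1)]

def vec_mat(u, M):
    """Row vector times matrix over F_2."""
    return tuple(sum(u[i] * M[i][j] for i in range(len(u))) % 2
                 for j in range(len(M[0])))

def conv_encode(blocks):
    prev = (0, 0)                        # u_{-1} = 0
    out = []
    for u in blocks:
        # v_i = u_i G0 + u_{i-1} G1 over F_2
        vi = tuple(a ^ b for a, b in zip(vec_mat(u, G0), vec_mat(prev, G1)))
        out.append(vi)
        prev = u
    return out

print(conv_encode([(1, 0), (1, 1), (0, 1)]))
```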
Definition
A convolutional code C is an F((D))-subspace of Fⁿ((D)).

A matrix G(D) whose rows form a basis for C is called an encoder. If C has rank k then we say that C has rate k/n.

C = Im_{F((D))} G(D) = {u(D) G(D) : u(D) ∈ Fᵏ((D))}
  = Ker_{F[D]} H(D) = {v(D) ∈ Fⁿ[D] : v(D) H(D) = 0},

where H(D) is called the parity-check matrix of C.

Remark
One can also consider the ring of polynomials F[D] instead of the Laurent series F((D)) and define C as an F[D]-submodule of Fⁿ[D].
A convolutional encoder is also a linear device which maps

u(0), u(1), · · · ↦ v(0), v(1), . . .

In this sense it is the same as a block encoder. The difference is that the convolutional encoder has an internal "storage vector" or "memory": v(i) does not depend only on u(i) but also on the storage vector x(i),

x(i + 1) = A x(i) + B u(i)
v(i) = C x(i) + E u(i)    (2)

with A ∈ F^{δ×δ}, B ∈ F^{δ×k}, C ∈ F^{n×δ}, E ∈ F^{n×k}.
The encoder

G(D) = ( D² + 1   D² + D + 1 )

has the following shift-register implementation.

[Diagram: a two-cell shift register; the animation feeds the input stream . . . 0, 1, 1 through the delay cells, the first output tapping the cells according to D² + 1 and the second according to D² + D + 1.]
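The pictured shift register can be simulated directly (a sketch: two delay cells holding the previous two input bits, with the first output tapping 1 + D² and the second tapping 1 + D + D²):

```python
def sr_encode(bits):
    s1 = s2 = 0                 # delay cells: s1 holds u_{i-1}, s2 holds u_{i-2}
    out = []
    for u in bits:
        v1 = u ^ s2             # coefficient polynomial D^2 + 1
        v2 = u ^ s1 ^ s2        # coefficient polynomial D^2 + D + 1
        out.append((v1, v2))
        s1, s2 = u, s1          # shift the register contents
    return out

print(sr_encode([1, 1, 0, 1]))   # [(1, 1), (1, 0), (1, 0), (0, 0)]
```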
Example
Let the convolutional code be given by the matrices

( x1(i + 1) )   ( 0 1 ) ( x1(i) )   ( 1 )
( x2(i + 1) ) = ( 0 0 ) ( x2(i) ) + ( 0 ) u(i)

( v1(i) )   ( 1 0 ) ( x1(i) )   ( 1 )
( v2(i) ) = ( 1 1 ) ( x2(i) ) + ( 1 ) u(i)

We can compute an encoder

G(D) = E + B (D⁻¹ I − A)⁻¹ C = ( 1 + D + D²   1 + D² )
Example
A physical realization for the encoder G(D) = ( 1 + D + D²   1 + D² ). This encoder has degree 2 and memory 2.

Clearly any matrix which is F(D)-equivalent to G(D) is also an encoder, e.g.

G′(D) = ( 1   (1 + D²)/(1 + D + D²) )
Example
A physical realization for the generator matrix G′(D). This encoder has degree 2 and infinite memory.
Example
A physical realization for the catastrophic encoder G′′(D) = ( 1 + D³   1 + D + D² + D³ ). This encoder has degree 3 and memory 3.
Polynomial encoders
Two encoders G(D), G′(D) generate the same code if there exists an invertible matrix U(D) such that G(D) = U(D) G′(D).

Definition
A generator matrix G(D) is said to be non-catastrophic if for every v(D) = u(D) G(D),

supp(v(D)) is finite ⇒ supp(u(D)) is finite.

Theorem
G(D) is non-catastrophic if and only if G(D) admits a polynomial right inverse.
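For rate-1/n polynomial encoders this reduces to a gcd condition: the encoder is catastrophic iff the gcd of its entries is not a power of D, and for encoders with nonzero constant term (like the examples above) that means non-catastrophic iff gcd = 1. A sketch over F₂, polynomials as coefficient lists (index = degree):

```python
def poly_mod(a, b):
    """Remainder of a divided by b over F_2; b must have leading coefficient 1."""
    a = a[:]
    while any(a):
        while a and a[-1] == 0:         # strip trailing zero coefficients
            a.pop()
        if len(a) < len(b):
            break
        shift = len(a) - len(b)
        for i, c in enumerate(b):       # subtract D^shift * b (XOR over F_2)
            a[i + shift] ^= c
    return a or [0]

def poly_gcd(a, b):
    while any(b):
        a, b = b, poly_mod(a, b)
    while len(a) > 1 and a[-1] == 0:
        a.pop()
    return a

G     = [[1, 1, 1], [1, 0, 1]]          # (1 + D + D^2, 1 + D^2)
G_cat = [[1, 0, 0, 1], [1, 1, 1, 1]]    # (1 + D^3, 1 + D + D^2 + D^3)
print(poly_gcd(*G))                     # [1]: gcd 1, non-catastrophic
print(poly_gcd(*G_cat))                 # [1, 1]: gcd 1 + D, catastrophic
```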
Example
The encoder

G(D) = ( 1 + D   0   1       D )
       ( 1       D   1 + D   0 )

is non-catastrophic, as a right inverse is

H(D) = ( 0     1       )
       ( 0     0       )
       ( 0     0       )
       ( D⁻¹   1 + D⁻¹ )
Historical Remarks
• Convolutional codes were introduced by Elias (1955).
• The theory was imperfectly understood until a series of papers by Forney in the 70s on the algebra of k × n matrices over the field of rational functions in the delay operator D.
• They became widespread in practice with Viterbi decoding; they are among the most widely implemented codes in (wireless) communications.
• The field is typically F₂, but in the last decade a renewed interest has grown in convolutional codes over large fields, trying to fully exploit the potential of convolutional codes.
In Applications
• In block coding, n and k are normally taken to be large.
• Convolutional codes are typically studied for n and k small and fixed (n = 2 and k = 1 is common) and for several values of δ.
• Decoding over the symmetric channel is, in general, difficult.
• The field is typically F₂. The degree cannot be too large if the Viterbi decoding algorithm is to remain efficient.
• Convolutional codes over large alphabets have attracted much attention in recent years.
• In [Tomas, Rosenthal, Smarandache 2012]: decoding over the erasure channel is easy and Viterbi is not needed, just linear algebra.
MDS convolutional codes over F

The Hamming weight of a polynomial vector

v(D) = Σ_{i∈ℕ} vi Dⁱ = v0 + v1 D + v2 D² + · · · + v_ν D^ν ∈ F[D]ⁿ

is defined as wt(v(D)) = Σ_{i=0}^{ν} wt(vi).

The free distance of a convolutional code C is given by

d_free(C) = min{wt(v(D)) | v(D) ∈ C and v(D) ≠ 0}

• We are interested in the maximum possible value of d_free(C).
• For block codes (δ = 0) we know that the maximum value is given by the Singleton bound: n − k + 1.
• This bound can be achieved if |F| ≥ n (e.g. by Reed-Solomon codes).
82. Introduction to Error-correcting codes Two challenges that recently emerged Block codes vs convolutional codes
Theorem
Rosenthal and Smarandache (1999) showed that the free distance of a convolutional code of rate k/n and degree δ is upper bounded by
d_free(C) ≤ (n − k)(⌊δ/k⌋ + 1) + δ + 1. (3)
A code achieving (3) is called Maximum Distance Separable (MDS), and if it achieves it "as fast as possible" it is called strongly MDS.
• Allen conjectured (1999) the existence of convolutional codes that are both sMDS and MDP when k = 1 and n = 2.
• Rosenthal and Smarandache (2001) provided the first concrete construction of MDS convolutional codes.
• Gluesing-Luerssen et al. (2006) provided the first construction of strongly MDS codes when (n − k) | δ.
• Napp and Smarandache (2016) provided the first construction of strongly MDS codes for all rates and degrees.
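As a sanity check of bound (3), one can brute-force the minimum codeword weight of a small binary encoder and compare it with the bound. The (2, 1, 2) encoder G(D) = [1 + D², 1 + D + D²] below is a standard textbook example chosen here for illustration, not one from the talk, and `min_weight` is only a brute-force estimate over inputs of bounded degree.

```python
from itertools import product

def singleton_bound(n, k, delta):
    """Generalized Singleton bound (3): (n-k)(floor(delta/k)+1) + delta + 1."""
    return (n - k) * (delta // k + 1) + delta + 1

# Binary (2, 1, 2) encoder G(D) = [1 + D^2, 1 + D + D^2], as coefficient tuples.
G = [(1, 1), (0, 1), (1, 1)]  # G_0, G_1, G_2

def min_weight(G, max_deg=8):
    """Brute-force estimate of d_free: minimum output weight over all nonzero
    binary inputs u(D) with deg u <= max_deg."""
    mem = len(G) - 1
    best = None
    for u in product([0, 1], repeat=max_deg + 1):
        if not any(u):
            continue
        wt = 0
        for t in range(max_deg + 1 + mem):
            for s in range(2):  # n = 2 output symbols per time step
                wt += sum(u[t - m] * G[m][s] for m in range(mem + 1)
                          if 0 <= t - m <= max_deg) % 2
        best = wt if best is None else min(best, wt)
    return best

print(min_weight(G), singleton_bound(2, 1, 2))  # -> 5 6
```

This encoder attains d_free = 5, strictly below the bound of 6 for (n, k, δ) = (2, 1, 2), so it is not MDS; MDS codes need larger fields and careful constructions, as the slides above note.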
83. Introduction to Error-correcting codes Two challenges that recently emerged Block codes vs convolutional codes
Definition
Another important distance measure for a convolutional code is the j-th column distance d^c_j(C) (introduced by Costello), given by
d^c_j(C) = min { wt(v_[0,j](D)) | v(D) ∈ C and v_0 ≠ 0 },
where v_[0,j](D) = v_0 + v_1 D + ··· + v_j D^j denotes the j-th truncation of the codeword v(D) ∈ C.
86. Introduction to Error-correcting codes Two challenges that recently emerged Block codes vs convolutional codes
The column distances satisfy
d^c_0 ≤ d^c_1 ≤ ··· ≤ lim_{j→∞} d^c_j(C) = d_free(C) ≤ (n − k)(⌊δ/k⌋ + 1) + δ + 1.
The j-th column distance is upper bounded as follows:
d^c_j(C) ≤ (n − k)(j + 1) + 1. (4)
A code whose column distances meet (4) for as long as possible is said to have a Maximum Distance Profile (MDP). How do we construct MDP codes?
The construction of MDP convolutional codes boils down to the construction of superregular matrices.
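Column distances of a toy encoder can be brute-forced directly from the definition and checked against bound (4). The binary (2, 1, 1) encoder G(D) = [1 + D, 1] here is an illustrative choice, not one from the talk.

```python
from itertools import product

# Toy binary (2, 1, 1) encoder G(D) = [1 + D, 1]: G_0 = (1, 1), G_1 = (1, 0).
G = [(1, 1), (1, 0)]

def column_distance(G, j, n=2):
    """Brute-force d^c_j: minimum weight of the truncation v_[0,j]
    over binary inputs u with u_0 != 0."""
    best = None
    for u in product([0, 1], repeat=j + 1):
        if u[0] == 0:  # since G_0 has full rank, v_0 != 0 iff u_0 != 0
            continue
        wt = 0
        for t in range(j + 1):
            for s in range(n):
                wt += sum(u[t - m] * G[m][s] for m in range(len(G))
                          if 0 <= t - m) % 2
        best = wt if best is None else min(best, wt)
    return best

for j in range(3):
    print(j, column_distance(G, j), (2 - 1) * (j + 1) + 1)  # d^c_j vs bound (4)
```

For this encoder d^c_0 = 2 and d^c_1 = 3 meet bound (4) exactly, while d^c_2 = 3 falls below it, illustrating how the column-distance profile eventually saturates at d_free.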
87. Introduction to Error-correcting codes Two challenges that recently emerged Block codes vs convolutional codes
LT-Superregular matrices
Definition [Gluesing-Luerssen, Rosenthal, Smarandache (2006)]
A lower triangular matrix
B =
    [ a_0                     ]
    [ a_1     a_0             ]
    [  ⋮       ⋱     ⋱        ]
    [ a_j   a_{j−1}  ···  a_0 ]   (5)
is LT-superregular if all of its minors with no zeros on the diagonal are nonzero.
Remark
Note that due to such a lower triangular configuration the remaining minors are necessarily zero.
89. Introduction to Error-correcting codes Two challenges that recently emerged Block codes vs convolutional codes
Remarks
• Construction of classes of LT-superregular matrices is very difficult due to their triangular configuration.
• Only two classes exist:
1. Rosenthal et al. (2006) presented the first construction. For any n there exists a prime number p such that the lower triangular Toeplitz matrix of binomial coefficients
    [ C(n,0)                               ]
    [ C(n−1,1)    C(n,0)                   ]
    [    ⋮           ⋱        ⋱            ]
    [ C(n−1,n−1)   ···   C(n−1,1)  C(n,0)  ]  ∈ F_p^{n×n}
(where C(a,b) denotes the binomial coefficient "a choose b", reduced modulo p) is LT-superregular. Bad news: this requires a field with very large characteristic.
90. Introduction to Error-correcting codes Two challenges that recently emerged Block codes vs convolutional codes
Remarks
2. Almeida, Napp and Pinto (2013) gave the first construction over any characteristic: let α be a primitive element of a finite field F of characteristic p. If |F| ≥ p^{2M}, then the lower triangular Toeplitz matrix
    [ α^{2^0}                                  ]
    [ α^{2^1}      α^{2^0}                     ]
    [ α^{2^2}      α^{2^1}     α^{2^0}         ]
    [    ⋮            ⋱           ⋱            ]
    [ α^{2^{M−1}}    ···         ···  α^{2^0}  ]
is LT-superregular. Bad news: |F| is very large.
92. Introduction to Error-correcting codes Two challenges that recently emerged Block codes vs convolutional codes
Performance over the erasure channel
Theorem
Let C be an (n, k, δ) convolutional code and d^c_{j0} its j0-th column distance. If in any sliding window of length (j0 + 1)n at most d^c_{j0} − 1 erasures occur, then the transmitted sequence can be completely recovered.
Example
[Diagram: a received stream with erasure bursts of 60, 80 and 60 symbols.]
A [202, 101] MDS block code can correct 101 erasures in a window of 202 symbols (recovery rate 101/202): it cannot correct this window.
A (2, 1, 50) MDP convolutional code also has a 50% erasure-correcting capability, with (L + 1)n = 101 × 2 = 202. Take a window of 120 symbols, correct it, and continue until the whole sequence is recovered.
We have flexibility in choosing the size and position of the sliding window.
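For an MDP code, which attains bound (4), the condition of the theorem above turns into simple arithmetic. The helper below is hypothetical (not from the talk) and just reproduces the window numbers of the example.

```python
def mdp_window_tolerance(n, k, j):
    """For an MDP code, d^c_j = (n-k)(j+1) + 1, so by the sliding-window theorem
    a window of (j+1)n symbols is recoverable whenever it contains at most
    d^c_j - 1 = (n-k)(j+1) erasures."""
    window = (j + 1) * n
    tolerable = (n - k) * (j + 1)
    return window, tolerable

# The (2, 1, 50) MDP code of the example: L = 100, so (L + 1)n = 202 symbols.
w, t = mdp_window_tolerance(2, 1, 100)
print(w, t, t / w)  # -> 202 101 0.5, i.e. half of any window may be erased
```

The ratio tolerable/window is always (n − k)/n for an MDP code, which is why a rate-1/2 MDP code matches the 50% capability of the [202, 101] MDS block code while keeping the freedom to slide and resize the decoding window.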
93. Introduction to Error-correcting codes Two challenges that recently emerged Block codes vs convolutional codes
Fundamental Open Problems
• Come up with LT-superregular matrices over small fields.
• What is the minimum field size needed to construct LT-superregular matrices?
• Typically convolutional codes are decoded via the Viterbi decoding algorithm, whose complexity grows exponentially with the McMillan degree. New classes of codes that come with more efficient decoding algorithms are needed.
• Good constructions of convolutional codes for the rank metric.
• Good constructions tailor-made to deal with bursts of erasures (in both the Hamming and the rank metric).
94. Introduction to Error-correcting codes Two challenges that recently emerged Block codes vs convolutional codes
References
G.D. Forney, Jr. (1975), "Minimal bases of rational vector spaces, with applications to multivariable linear systems", SIAM J. Control 13, 493–520.
G.D. Forney, Jr. (1970), "Convolutional codes I: algebraic structure", IEEE Trans. Inform. Theory, vol. IT-16, pp. 720–738.
R.J. McEliece (1998), The Algebraic Theory of Convolutional Codes, Handbook of Coding Theory, Vol. 1, North-Holland, Amsterdam.
R. Johannesson and K. Zigangirov (1998), Fundamentals of Convolutional Coding, IEEE Press (IEEE Communications, Information Theory, and Vehicular Technology Societies).
95. Introduction to Error-correcting codes Two challenges that recently emerged Block codes vs convolutional codes
Summary of the basics
• Convolutional codes are block codes with memory; they generalize linear block codes in a natural way.
• Convolutional codes capable of decoding a large number of errors per time interval require a large free distance and a good distance profile.
• In order to construct good convolutional codes we need to construct superregular matrices over F: difficult.
• Convolutional codes treat the information as a stream: great potential for video streaming.
• A lot of mathematics is needed: linear algebra, systems theory, finite fields, rings/module theory, etc.
96. Introduction to Error-correcting codes Two challenges that recently emerged Block codes vs convolutional codes
Thank you very much for the invitation!