Recovering Lost Sensor Data through Compressed Sensing

Zainul Charbiwala

Collaborators: Younghun Kim, Sadaf Zahedi, Supriyo Chakraborty, Ting He (IBM), Chatschik Bisdikian (IBM), Mani Srivastava
The Big Picture

[Diagram: a sensor node generates error-correction bits for its data and sends it over a lossy communication link]

How do we recover from this loss?
       • Retransmit the lost packets
       • Proactively encode the data with some protection bits
       • Can we do something better?
The Big Picture - Using Compressed Sensing

[Diagram: the sensor node generates compressed measurements and sends them over a lossy communication link; the receiver recovers the signal from the received compressed measurements (CSEC)]

How does this work?
       • Use knowledge of the signal model and the channel
       • CS uses randomized sampling/projections
       • Random losses look like additional randomness!

Rest of this talk focuses on describing “how” and “how well” this works.
Talk Outline

‣   A Quick Intro to Compressed Sensing
‣   CS Erasure Coding for Recovering Lost Sensor Data
‣   Evaluating CSEC’s Cost and Performance
‣   Concluding Remarks
Why Compressed Sensing ?

Conventional pipeline:  Physical Signal → Sampling → Compression → Communication → Application

CS pipeline:            Physical Signal → Compressive Sampling → Communication → Decoding → Application

Compressed sensing shifts the computation to a capable server.
Transform Domain Analysis

‣   We usually acquire signals in the time or spatial domain
‣   By looking at the signal in another domain, the signal may be represented more compactly
‣   E.g., a sine wave can be expressed by 3 parameters: frequency, amplitude and phase
‣   Or, in this case, by the index of the FFT coefficient and its complex value
‣   A sine wave is sparse in the frequency domain (see the sketch below)
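Below is a minimal sketch (not from the slides) that samples a sine wave with NumPy and counts its non-negligible Fourier coefficients; the sampling rate, frequency, amplitude and phase are illustrative assumptions.

```python
import numpy as np

n = 1024                      # number of time-domain samples
fs = 1024.0                   # assumed sampling rate (Hz)
f, a, phi = 50.0, 1.0, 0.3    # frequency, amplitude, phase

t = np.arange(n) / fs
x = a * np.sin(2 * np.pi * f * t + phi)   # time-domain signal: n numbers

y = np.fft.fft(x) / n                     # Fourier coefficients
significant = np.sum(np.abs(y) > 1e-6)

# Only two coefficients (at +f and -f) carry energy: the signal is sparse here.
print(f"{significant} of {n} Fourier coefficients are non-negligible")
```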
Lossy Compression

‣   This is known as Transform Domain Compression
‣   The domain in which the signal can be most compactly represented depends on the signal
    ‣   The signal processing world has been coming up with domains for many classes of signals
‣   A necessary property of transforms is invertibility
‣   It would also be nice if there were efficient algorithms for transforming signals between domains
‣   But why is it called lossy compression?
Lossy Compression

‣   When we transform the signal to the right domain, some coefficients stand out but lots will be near zero
‣   The top few coefficients describe the signal “well enough” (a sketch of this follows)
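A minimal sketch (not from the slides) of transform-domain lossy compression: keep only the s largest DCT coefficients of a test signal and measure the reconstruction error. SciPy's DCT is assumed as the sparsifying transform; the signal and the value of s are illustrative choices.

```python
import numpy as np
from scipy.fft import dct, idct

n, s = 1024, 50
t = np.linspace(0, 1, n, endpoint=False)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)

y = dct(x, norm="ortho")                 # transform to a compacting domain
y_s = np.zeros_like(y)
keep = np.argsort(np.abs(y))[-s:]        # indices of the s largest coefficients
y_s[keep] = y[keep]                      # drop everything else (the "loss")

x_hat = idct(y_s, norm="ortho")          # invertibility lets us go back
err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
print(f"kept {s}/{n} coefficients, relative error = {err:.2e}")
```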
Lossy Compression

• JPEG (100%): 407,462 bytes, ~2x gain
• JPEG (10%): 7,544 bytes, ~100x gain
• JPEG (1%): 2,942 bytes, ~260x gain
Compressing a Sine Wave

‣   Assume we’re interested in acquiring a single sine wave x(t) in a noiseless environment
‣   An infinite-duration sine wave can be expressed using three parameters: frequency f, amplitude a and phase φ
‣   Question: What’s the best way to find the parameters?
Compressing a Sine Wave

‣   Technically, to estimate three parameters one needs three good measurements
‣   Questions:
    ‣   What are “good” measurements?
    ‣   How do you estimate f, a, φ from three measurements?
Compressed Sensing

‣   With three samples z1, z2, z3 of the sine wave at times t1, t2, t3
‣   We know that any solution of f, a and φ must meet the three constraints and spans a 3D space:

        z_i = x(t_i) = a sin(2π f t_i + φ),   ∀ i ∈ {1, 2, 3}

    ‣   Feasible solution space is much smaller
‣   As the number of constraints grows (from more measurements), the feasible solution space shrinks
‣   Exhaustive search over this space reveals the right answer (see the sketch below)
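A minimal sketch (not from the slides) of that exhaustive search: a brute-force grid over (f, a, φ) that keeps the triple best matching the three samples. The true parameters, the sample times and the grid resolutions are illustrative assumptions.

```python
import numpy as np

f_true, a_true, phi_true = 7.0, 1.3, 0.8
t = np.array([0.013, 0.121, 0.277])                     # three sample times
z = a_true * np.sin(2 * np.pi * f_true * t + phi_true)  # the three measurements

best, best_err = None, np.inf
for f in np.linspace(1.0, 20.0, 96):                    # coarse 3D grid over (f, a, phi)
    for a in np.linspace(0.1, 2.0, 20):
        for phi in np.linspace(0.0, np.pi, 64):
            err = np.sum((a * np.sin(2 * np.pi * f * t + phi) - z) ** 2)
            if err < best_err:
                best, best_err = (f, a, phi), err

# With only three constraints several triples can fit almost equally well;
# more measurements (and finer grids) shrink the feasible set around the truth.
print("best grid point (f, a, phi):", best, "residual:", best_err)
```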
Formulating the Problem

‣   We could also represent f, a and φ as a very long, but mostly empty, FFT coefficient vector:

        y  =  Ψ (Fourier Transform) · x

    x is the sine wave (amplitude represented by color in the slide); the non-zero entry of y,
    written as a e^(−j2π ft + φ), carries the amplitude and phase.
Sampling Matrix

‣   We could also write out the sampling process in matrix form:

        z  =  Φ · x

    z holds the three measurements (length k), x is the length-n signal, and Φ is the k × n
    sampling matrix with three non-zero entries at some “good” locations.
Exhaustive Search

‣   Objective of exhaustive search:
    ‣   Find an estimate of the vector y that meets the constraints and is the most compact representation of x (also called the sparsest representation)
‣   Our search is now guided by the fact that y is a sparse vector
‣   Rewriting constraints:

        z = Φ x,   y = Ψ x   ⇒   z = Φ Ψ⁻¹ y

    so the sparsest consistent estimate is

        ŷ = arg min_ỹ ||ỹ||_l0   s.t.  z = Φ Ψ⁻¹ ỹ,      where ||y||_l0 ≜ #{i : y_i ≠ 0}

    This optimization problem is NP-Hard! (A brute-force sketch follows.)
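A minimal sketch (not from the slides) of why the l0 problem is combinatorial: brute-force search over all supports of size at most s for the sparsest y satisfying A y = z. A tiny real-valued random A stands in for ΦΨ⁻¹ so the enumeration finishes; sizes are illustrative.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
n, k, s = 12, 6, 2
A = rng.standard_normal((k, n))              # stands in for Phi @ inv(Psi)
y_true = np.zeros(n)
y_true[[3, 9]] = [1.0, -2.0]                 # 2-sparse ground truth
z = A @ y_true

best = None
for size in range(1, s + 1):                 # supports of growing size: C(n, size) candidates each
    for support in combinations(range(n), size):
        cols = list(support)
        coeffs = np.linalg.lstsq(A[:, cols], z, rcond=None)[0]
        if np.linalg.norm(A[:, cols] @ coeffs - z) < 1e-9:
            best = (cols, coeffs)
            break
    if best is not None:
        break

print("recovered support:", best[0])          # [3, 9]; the cost blows up combinatorially in s
```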
l1 Minimization

‣   Approximate the l0 norm by the l1 norm:

        ŷ = arg min_ỹ ||ỹ||_l1   s.t.  z = Φ Ψ⁻¹ ỹ,      where ||y||_l1 = Σ_i |y_i|

‣   This problem can now be solved efficiently using linear programming techniques (see the sketch below)
‣   This approximation was not new
‣   The big leap in Compressed Sensing was a theorem that showed that under the right conditions, this approximation is exact!
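A minimal sketch (not from the slides) of the linear-programming route: split y = u − v with u, v ≥ 0, so minimizing Σ(u + v) subject to A(u − v) = z is a standard LP. SciPy's linprog is assumed as the solver and a real-valued random A stands in for ΦΨ⁻¹; sizes and sparsity are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
n, k, s = 64, 25, 4
A = rng.standard_normal((k, n)) / np.sqrt(k)                  # stands in for Phi @ inv(Psi)
y_true = np.zeros(n)
y_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
z = A @ y_true

c = np.ones(2 * n)                                            # objective: sum(u) + sum(v) = ||y||_1
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=z,             # constraint: A u - A v = z
              bounds=[(0, None)] * (2 * n), method="highs")
y_hat = res.x[:n] - res.x[n:]

print("max recovery error:", np.max(np.abs(y_hat - y_true)))  # ~0 when k is large enough
```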
The Restricted Isometry Property

        ŷ = arg min_ỹ ||ỹ||_l1   s.t.  z = Φ Ψ⁻¹ ỹ          Rewrite as:  z = A ỹ

For any positive integer constant s, find the smallest δs such that

        (1 − δs) ||y||² ≤ ||A y||² ≤ (1 + δs) ||y||²

holds for all s-sparse vectors y.
A vector is said to be s-sparse if it has at most s non-zero entries.

The closer δs(A) is to 0, the better the matrix combination A is at capturing unique features of the signal.
CS Recovery Theorem

Theorem [Candes-Romberg-Tao-05]: Assume that δ2s(A) < √2 − 1 for some matrix A. Then the solution to the l1 minimization problem obeys

        ||ŷ − y||_l1  ≤  C0 ||y − ys||_l1
        ||ŷ − y||_l2  ≤  (C0 / √s) ||y − ys||_l1

for some small positive constant C0.
ys is the approximation of a non-sparse vector y that keeps only its s largest entries.
If y is s-sparse, the reconstruction is exact.
Gaussian Random Projections

‣   Gaussian: entries of Φ are independent realizations of N(0, 1/n)

        z  =  Φ · Ψ⁻¹ (Inverse Fourier Transform) · y
Bernoulli Random Projections

‣   Bernoulli: entries of Φ are realizations of an equiprobable Bernoulli RV taking values {+1/√n, −1/√n}

        z  =  Φ · Ψ⁻¹ (Inverse Fourier Transform) · y
Uniform Random Sampling

‣   Select samples uniformly at random (Φ keeps a random subset of rows of the identity)

        z  =  Φ · Ψ⁻¹ (Inverse Fourier Transform) · y

A sketch constructing all three Φ variants follows.
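A minimal sketch (not from the slides) that builds the three measurement schemes above and applies each to a signal that is sparse in the Fourier domain. Sizes and the sparse support are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 256, 64

# A signal that is sparse in the frequency domain: y has two non-zero entries.
y = np.zeros(n, dtype=complex)
y[[10, 30]] = [3.0, 1.5 - 2.0j]
x = np.fft.ifft(y)                           # x = Psi^{-1} y

phi_gauss = rng.normal(0.0, 1.0 / np.sqrt(n), size=(k, n))       # N(0, 1/n) entries
phi_bern = rng.choice([1.0, -1.0], size=(k, n)) / np.sqrt(n)     # +/- 1/sqrt(n) entries
phi_sample = np.eye(n)[rng.choice(n, size=k, replace=False)]     # k random identity rows

for name, phi in [("gaussian", phi_gauss), ("bernoulli", phi_bern), ("sampling", phi_sample)]:
    z = phi @ x                              # k compressed measurements of x
    print(name, "->", z.shape)
```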
Per-Module Energy Consumption on Mica

[Figure 3: Power and duty cycle costs for Compressive Sensing versus Nyquist Sampling with local FFT, measured per module on a MicaZ node]

‣   FFT computation costs more than transmission
‣   The highest energy consumer in CS is the random number generator
Compressive Sampling

Nyquist:       Physical Signal (x ∈ ℝⁿ) → Sampling (z = Iₙ x) → Time-domain samples

Compressive:   Physical Signal (x ∈ ℝⁿ) → Compressive Sampling (z = Φ x, Φ is k × n with k < n) → Randomized measurements
Compressive Sampling

Nyquist:       Physical Signal (x ∈ ℝⁿ) → Sampling (z = Iₙ x) → Compression (y = Ψ z, Ψ is n × n) → Compressed-domain samples

Compressive:   Physical Signal (x ∈ ℝⁿ) → Compressive Sampling (z = Φ x, Φ is k × n with k < n) → Decoding (ŷ = arg min_ỹ ||ỹ||_l1 s.t. z = Φ Ψ⁻¹ ỹ) → Compressed-domain samples
Handling Missing Data

Physical Signal (x ∈ ℝⁿ) → Sampling (z = Iₙ x) → Compression (y = Ψ z, Ψ is n × n) → Communication → Compressed-domain samples, with missing samples at the receiver

When the communication channel is lossy:
       • Use retransmissions to recover lost data
       • Or, use error (erasure) correcting codes
Handling Missing Data

Physical Signal (x ∈ ℝⁿ) → Sampling (z = Iₙ x) → Compression (y = Ψ z) → Channel Coding (w = Ω y, Ω is m × n with m > n) → Communication (missing samples) → Channel Decoding (ŷ = (C Ω)⁺ w_l, with w_l = C w) → Recovered compressed-domain samples

‣   Compression is done at the application layer
‣   Channel coding is done at the physical layer, so it can’t exploit signal characteristics

A sketch of this traditional pipeline follows.
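A minimal sketch (not from the slides) of the traditional pipeline above: add redundancy with a random m × n coding matrix Ω, erase a few coded symbols (C keeps the survivors), and recover with the pseudo-inverse. The matrix choices and loss count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 64, 80                                 # m > n: 25% redundancy

y = rng.standard_normal(n)                    # compressed-domain samples
omega = rng.standard_normal((m, n))           # channel coding: w = Omega @ y
w = omega @ y

survivors = np.sort(rng.choice(m, size=70, replace=False))   # channel drops 10 symbols
C = np.eye(m)[survivors]                      # selection matrix for received symbols
w_l = C @ w

y_hat = np.linalg.pinv(C @ omega) @ w_l       # y_hat = (C Omega)^+ w_l
print("recovery error:", np.linalg.norm(y_hat - y))          # ~0: redundancy was enough

# If more than m - n symbols are lost, (C Omega) loses full column rank and
# recovery fails - the brittleness to channel misestimates that CSEC avoids.
```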
CS Erasure Coding

Critically sampled: Physical Signal (x ∈ ℝⁿ) → Compressive Sampling (z = Φ x, Φ is k × n with k < n) → Communication (z_l = C z) → Decoding (ŷ = arg min_ỹ ||ỹ||_l1 s.t. z_l = C Φ Ψ⁻¹ ỹ) → Compressed-domain samples

Over-sampled:       Physical Signal (x ∈ ℝⁿ) → Compressive Sampling (z = Φ x, Φ is m × n with k < m < n) → Communication (z_l = C z) → Decoding (ŷ = arg min_ỹ ||ỹ||_l1 s.t. z_l = C Φ Ψ⁻¹ ỹ) → Compressed-domain samples

Over-sampling in CS is Erasure Coding! (A sketch follows.)
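A minimal sketch (not from the slides) of CS erasure coding end to end: oversample with a Gaussian Φ (m rows instead of k), drop random measurements to mimic channel loss, and recover by l1 minimization over the surviving rows. It reuses the LP split from the earlier sketch and assumes a real-valued DCT as the sparsifying basis Ψ to keep the LP real; all sizes are illustrative.

```python
import numpy as np
from scipy.fft import idct
from scipy.optimize import linprog

rng = np.random.default_rng(5)
n, k, m, s = 128, 40, 50, 4                  # m - k extra measurements = redundancy

y = np.zeros(n)
y[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
x = idct(y, norm="ortho")                    # x = Psi^{-1} y, sparse in the DCT domain

phi = rng.standard_normal((m, n)) / np.sqrt(m)
z = phi @ x                                  # m oversampled measurements

survivors = rng.choice(m, size=k, replace=False)                  # channel drops m - k
A = phi[survivors] @ idct(np.eye(n), norm="ortho", axis=0)        # A = C Phi Psi^{-1}
z_l = z[survivors]

# Basis pursuit on the received measurements: min ||y||_1 s.t. A y = z_l.
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=z_l,
              bounds=[(0, None)] * (2 * n), method="highs")
y_hat = res.x[:n] - res.x[n:]
print("max recovery error:", np.max(np.abs(y_hat - y)))           # small when k suffices
```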
Features of CS Erasure Coding

‣   No need for an additional channel coding block
‣   Redundancy is achieved by oversampling
‣   Recovery is resilient to incorrect channel estimates
    ‣   Traditional channel coding fails if redundancy is inadequate
‣   Decoding is free if CS was used for compression anyway
‣   Intuition:
    ‣   Channel coding spreads information out over measurements
    ‣   Compression (source coding) compacts information into few measurements
    ‣   CSEC spreads information while compacting!
Effects of Missing Samples on CS

        z  =  Φ · x

Missing samples at the receiver are the same as missing rows in the sampling matrix.
What happens if we over-sample?
       • Can we recover the lost data?
       • How much over-sampling is needed?
Some CS Results

‣   Theorem [Rudelson06]: If k samples of a length-n signal are acquired uniformly at random (each sample equiprobable) and reconstruction is performed in the Fourier basis, then recovery succeeds w.h.p. provided

        s  ≤  C′ · k / log⁴(n)

‣   where s is the sparsity of the signal
Extending CS Results

‣   Claim [Charbiwala10]: When m > k samples are acquired uniformly at random and communicated through a memoryless binary erasure channel that drops m − k samples, the received k samples are still equiprobable.
    ‣   This implies that the bound on the sparsity condition should still hold.
    ‣   If the bound is tight, the over-sampling rate (m − k) is the same as the loss rate.

A small simulation of this claim follows.
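A minimal sketch (not from the slides) that checks the claim empirically: draw m uniformly random sample positions, erase each independently, and confirm that the surviving positions remain (approximately) uniformly distributed over the signal. The sizes and loss probability are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
n, m, p_loss, trials = 1024, 300, 0.2, 2000

counts = np.zeros(n)
for _ in range(trials):
    positions = rng.choice(n, size=m, replace=False)      # uniform random sampling
    kept = positions[rng.random(m) >= p_loss]              # memoryless erasures
    counts[kept] += 1

# Each position should survive with the same probability (m/n) * (1 - p_loss).
expected = trials * (m / n) * (1 - p_loss)
print("expected hits per position:", expected)
print("empirical mean / std:", counts.mean(), counts.std())   # std is small vs mean
```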
Evaluating the RIP

Create CS sampling + domain matrix  →  Simulate channel  →  Compute RIP constant of the received matrix

        A = Φ · Ψ⁻¹ (Inverse Fourier Transform)              A′ = C · Φ · Ψ⁻¹

10³ instances of size 256 × 1024
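A minimal sketch (not from the slides) of estimating a RIP-style constant. Computing δs exactly is combinatorial, so this samples random s-sized supports and records the worst deviation of the squared singular values of the corresponding column submatrix from 1; it is therefore only a lower-bound estimate. Sizes are scaled down from the deck's 256 × 1024 experiment so it runs quickly.

```python
import numpy as np

def estimate_delta_s(A, s, n_supports=500, rng=None):
    rng = rng or np.random.default_rng()
    n = A.shape[1]
    delta = 0.0
    for _ in range(n_supports):
        cols = rng.choice(n, size=s, replace=False)
        sv = np.linalg.svd(A[:, cols], compute_uv=False)
        # RIP: (1 - delta_s) <= sigma_min^2 <= sigma_max^2 <= (1 + delta_s)
        delta = max(delta, abs(sv[0] ** 2 - 1), abs(sv[-1] ** 2 - 1))
    return delta

rng = np.random.default_rng(7)
k, n, s = 64, 256, 8
A = rng.standard_normal((k, n)) / np.sqrt(k)       # columns have unit norm in expectation

A_lossy = A[rng.random(k) >= 0.2]                  # simulate ~20% measurement loss
print("delta_s estimate, no loss:  ", estimate_delta_s(A, s, rng=rng))
print("delta_s estimate, 20% loss: ", estimate_delta_s(A_lossy, s, rng=rng))
```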
RIP Verification in Memoryless Channels

[Plot: RIP constant versus sparsity for Fourier random sampling]

‣   Baseline performance, no loss (shading: min–max)
‣   20% loss: increase in RIP constant
‣   20% oversampling: RIP constant recovers
RIP Verification in Bursty Channels

[Plot: RIP constant versus sparsity for Fourier random sampling]

‣   Baseline performance, no loss (shading: min–max)
‣   20% loss: increase in RIP constant and large variation
‣   20% oversampling: RIP constant reduces but doesn’t recover
‣   Oversampling + interleaving: RIP constant recovers
Signal Recovery Performance Evaluation

Create Signal → CS Sampling → Interleave Samples → Lossy Channel → CS Recovery → Reconstruction Error?

A sketch of the interleaving and bursty-channel stages follows.
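A minimal sketch (not from the slides) of this evaluation loop's channel stage: a two-state Gilbert-Elliott model for bursty losses plus a simple block interleaver that spreads a loss burst across the measurement stream. The transition probabilities and interleaving depth are illustrative assumptions.

```python
import numpy as np

def gilbert_elliott_mask(m, p_good_to_bad=0.05, p_bad_to_good=0.3, rng=None):
    """Return a boolean mask of length m: True = measurement delivered."""
    rng = rng or np.random.default_rng()
    mask, bad = np.empty(m, dtype=bool), False
    for i in range(m):
        bad = rng.random() < (1 - p_bad_to_good if bad else p_good_to_bad)
        mask[i] = not bad
    return mask

def interleave(indices, depth):
    """Block interleaver: write row-wise into a depth-row grid, read column-wise."""
    pad = (-len(indices)) % depth
    grid = np.concatenate([indices, np.full(pad, -1)]).reshape(depth, -1)
    out = grid.T.ravel()
    return out[out >= 0]

m = 120
order = interleave(np.arange(m), depth=8)        # transmission order of the measurements
delivered = gilbert_elliott_mask(m, rng=np.random.default_rng(8))
received = np.sort(order[delivered])             # measurement indices surviving the bursty channel

# After de-interleaving, a burst on the channel maps to scattered missing
# measurements, which keeps the effective sampling "random enough" for CS recovery.
print(f"received {received.size}/{m} measurements")
```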
In Memoryless Channels
                        Baseline performance - No Loss



                             20 % Loss - Drop in
                             recovery probability


                                        20 % Oversampling -
                                         complete recovery




                                                        Less than 20 %
                                                        Oversampling -
                                                     recovery does not fail
                                                          completely




              zainul@ee.ucla.edu - CSEC - Jan 2010                            36
In Bursty Channels

[Figure: probability of successful recovery; annotations mark regions worse than and better than baseline]
• Baseline performance - no loss
• 20% loss - drop in recovery probability
• 20% oversampling - doesn't recover completely
• Oversampling + interleaving - still incomplete recovery
‣ Recovery incomplete because of low interleaving depth (a block-interleaver sketch follows below)
‣ Recovery better at high sparsity because bursty channels deliver bigger packets on average, but with higher variance
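The interleaving step can be pictured as a simple block interleaver: measurements are written into a grid row by row and read out column by column before packetization, so a burst of consecutive lost packets erases rows of the sampling matrix that are scattered rather than contiguous. The sketch below, assuming NumPy, is a minimal illustration; the depth, packet size, and burst pattern are my own choices, not the parameters used in the deck.

```python
import numpy as np

def block_interleave(samples, depth):
    """Write samples into a depth-row grid row by row, read it out column by column."""
    pad = (-len(samples)) % depth
    grid = np.concatenate([samples, np.full(pad, np.nan)]).reshape(depth, -1)
    return grid.T.reshape(-1)

def block_deinterleave(stream, depth, original_len):
    """Invert block_interleave (erased entries stay NaN)."""
    grid = stream.reshape(-1, depth).T
    return grid.reshape(-1)[:original_len]

m, depth, pkt = 320, 8, 10
z = np.arange(m, dtype=float)                  # stand-in for a block of CS measurements

tx = block_interleave(z, depth)                # interleave before packetisation
tx[4 * pkt : 8 * pkt] = np.nan                 # bursty channel: 4 consecutive packets lost
rx = block_deinterleave(tx, depth, m)

lost = np.flatnonzero(np.isnan(rx))            # erased measurement indices after de-interleaving
print("burst loss lands on scattered measurement indices:", lost[:15], "...")
```

With a larger interleaving depth the erased rows are spread still more thinly across the block, which is why the slide attributes the remaining recovery gap to the low depth used.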
In Real 802.15.4 Channel

[Figure: probability of successful recovery]
• Baseline performance - no loss
• 20% loss - drop in recovery probability
• 20% oversampling - complete recovery
• Less than 20% oversampling - recovery does not fail completely
Cost of CSEC

[Figure: energy per block (mJ, 0-5), stacked by component: Rnd, ADC, FFT, Radio TX, RS, for six schemes]
• m=256 S-n-S: Sense and Send
• m=10 C-n-S: Sense, Compress (FFT) and Send
• m=64 CS: CS and Send (1/4th rate)
• k=320 S-n-S+RS: Sense and Send with Reed-Solomon
• k=16 C-n-S+RS: Sense, Compress and Send with RS
• k=80 CSEC: CSEC and Send
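The comparison above can be read as a stacked energy model: each scheme pays per-sample acquisition (ADC) plus, depending on the scheme, random-number generation, an FFT, Reed-Solomon encoding, and radio transmission proportional to the number of values sent. The sketch below shows how such a model might be assembled. Every per-operation constant is a placeholder I made up for illustration (the deck's bars come from measured MicaZ costs), and the mapping of the m/k labels to sample and transmit counts is my reading of the slide, so only the structure of the comparison, not the magnitudes, should be taken from it.

```python
# Hypothetical per-operation energy costs (uJ); NOT the measured MicaZ values.
E_ADC, E_RND, E_TX = 2.0, 5.0, 10.0        # per sample / per random draw / per value sent
E_FFT, E_RS = 800.0, 600.0                 # per block of n = 256 samples (placeholders)

def energy_uj(samples, sent, rnd=False, fft=False, rs=False):
    """Stacked energy per block for one acquisition/coding/transmission scheme."""
    return (samples * E_ADC + (samples * E_RND if rnd else 0.0)
            + (E_FFT if fft else 0.0) + (E_RS if rs else 0.0) + sent * E_TX)

schemes = {
    "S-n-S (m=256)":    energy_uj(256, 256),
    "C-n-S (m=10)":     energy_uj(256, 10, fft=True),
    "CS (m=64)":        energy_uj(64, 64, rnd=True),
    "S-n-S+RS (k=320)": energy_uj(256, 320, rs=True),
    "C-n-S+RS (k=16)":  energy_uj(256, 16, fft=True, rs=True),
    "CSEC (k=80)":      energy_uj(80, 80, rnd=True),
}
for name, e in schemes.items():
    print(f"{name:18s} {e / 1000:5.2f} mJ/block")
```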
Summary

‣ Oversampling is a valid erasure coding strategy for compressive reconstruction
‣ For binary erasure channels, an oversampling rate equal to the loss rate is sufficient (empirical)
‣ CS erasure coding can be rate-less, like fountain codes
  ‣ Allows adaptation to varying channel conditions (a streaming sketch follows below)
‣ Can be computationally more efficient than traditional erasure codes
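To make the rate-less point concrete, one way to adapt to unknown channel conditions is to keep streaming additional random measurements until the decoder reports success, much like a fountain code. The sketch below, assuming NumPy/SciPy, is an illustrative loop and not the protocol from the deck; the batch size, the hold-out stopping rule, and the channel model are my own choices.

```python
import numpy as np
from scipy.optimize import linprog

def bp_recover(A, z, n):
    """Basis pursuit: min ||y||_1  s.t.  A y = z, as an LP over (u, v) with y = u - v."""
    res = linprog(np.ones(2 * n), A_eq=np.hstack([A, -A]), b_eq=z, bounds=(0, None))
    return res.x[:n] - res.x[n:]

rng = np.random.default_rng(0)
n, s, loss, batch = 256, 8, 0.3, 16            # illustrative sizes and (unknown) loss rate
y = np.zeros(n)
y[rng.choice(n, s, replace=False)] = rng.standard_normal(s)

A_rx, z_rx, sent = np.empty((0, n)), np.empty(0), 0
while True:
    Phi = rng.standard_normal((batch, n)) / np.sqrt(n)    # sender streams another batch
    kept = rng.random(batch) > loss                        # channel erases some of it
    A_rx = np.vstack([A_rx, Phi[kept]])
    z_rx = np.concatenate([z_rx, (Phi @ y)[kept]])
    sent += batch
    if A_rx.shape[0] < 3 * s:
        continue
    hold = A_rx.shape[0] // 5                              # hold out some measurements for a
    y_hat = bp_recover(A_rx[hold:], z_rx[hold:], n)        # cross-validation style success check
    if np.linalg.norm(A_rx[:hold] @ y_hat - z_rx[:hold]) < 1e-4:
        break                                              # decoder is confident; stop streaming

print(f"sent {sent}, received {A_rx.shape[0]}, "
      f"relative error {np.linalg.norm(y_hat - y) / np.linalg.norm(y):.1e}")
```

The hold-out check is one way for the receiver to decide it has enough measurements without knowing the true signal; the feedback-based CS streaming mentioned in the closing remarks plays a similar role.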
Closing Remarks

‣ CSEC spreads information out while compacting
  ‣ No free lunch: the data rate requirement is higher than with good source and channel coding applied independently
  ‣ But, then, the computation cost of that approach is higher too
‣ CSEC requires knowledge of the signal model
  ‣ If the signal is non-stationary, the model needs to be updated during recovery
  ‣ This can be done using over-sampling too
‣ CSEC requires knowledge of channel conditions
  ‣ Can use CS streaming with feedback
Thank You
