Injecting image priors into Learnable
Compressive Subsampling
Martino G. Ferrari
April 30, 2018
Supervisors: Prof. S. Voloshynovskiy
O. Taran
University of Geneva
Faculty of Science
Table of contents
1. Problem formulation
2. My approach
3. Results
4. Applicability
5. Conclusion
Problem formulation
Acquisition

[Figure-only slide.]
Compressive Subsampling

Classical acquisition
Encoder: $b = {\downarrow_M}(h_{lp} * x)$
Decoder: $\hat{x} = h_{lp} * ({\uparrow_R} b)$

Compressive Subsampling (CS)
Encoder: $b = Ax$
Decoder: $\hat{x} = \Delta(b)$

where both $x$ and $\hat{x}$ have dimension $n$ while $b$ has dimension $m$
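To make the contrast concrete, here is a minimal NumPy sketch (not from the slides) of the two encoders, assuming a crude moving-average low-pass filter and a random Gaussian measurement matrix A:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 256, 64                        # signal and measurement dimensions
x = rng.standard_normal(n)            # toy signal

# Classical acquisition: low-pass filter, then decimate by M = n // m.
h_lp = np.ones(8) / 8                 # crude moving-average low-pass filter (assumption)
M = n // m
b_classical = np.convolve(x, h_lp, mode="same")[::M]

# Compressive subsampling: b = A x with a random measurement matrix (assumption).
A = rng.standard_normal((m, n)) / np.sqrt(m)
b_cs = A @ x

print(b_classical.shape, b_cs.shape)  # both yield m = 64 measurements
```

Both encoders produce $m$ measurements, but the CS encoder mixes all of $x$ into every measurement instead of discarding the high frequencies.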
CS Decoder

The typical Compressive Subsampling decoding is performed by solving an
optimisation problem, for example the LASSO [1] minimisation:

$\hat{x} = \arg\min_x \|Ax - b\|_2^2 + \alpha \|x\|_1$

Limitations:
• computationally hard
• over-fits the noise
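As a rough illustration of such a decoder, here is a minimal ISTA (iterative soft-thresholding) sketch for this LASSO problem; the step size, iteration count and α below are arbitrary choices, not values from the presentation:

```python
import numpy as np

def ista_lasso(A, b, alpha=0.1, n_iter=500):
    """Minimise ||Ax - b||_2^2 + alpha * ||x||_1 by iterative soft-thresholding."""
    L = np.linalg.norm(A, 2) ** 2            # squared spectral norm of A
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)             # half the gradient of the quadratic term
        z = x - grad / L                     # gradient step of size 1 / (2L)
        x = np.sign(z) * np.maximum(np.abs(z) - alpha / (2 * L), 0.0)  # soft-threshold
    return x
```

With A and b from the acquisition step the recovery is simply `x_hat = ista_lasso(A, b)`; the iteration count hints at why this decoder is considered computationally hard.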
Sparse Domains

[Figure-only slide.]
Learnable Compressive Subsampling

Both methods sample in a transform domain $\Psi$ with a learned pattern $\Omega$ and
decode linearly; they differ in how $\Omega$ is learned.

Adaptive sampling (k-best) [2] and Learnable CS (favg) [3]
Encoder: $b = P_{\Omega} \Psi x$
Decoder: $\hat{x} = \Psi^* P_{\Omega}^T b$

Learning (k-best, per signal):
$\hat{\Omega} = \arg\max_{\Omega} \|P_{\Omega} \Psi x\|_2^2$

Learning (favg, over the training set $\Psi X$):
$\hat{\Omega} = \arg\max_{\Omega} \sum_{i=1}^{N} \|P_{\Omega} \Psi x_i\|_2^2$
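A minimal sketch of the favg-style learning, under the assumption that $\Psi$ is the 2-D DFT and that $\Omega$ keeps the $m$ coefficients with the largest average energy over the training set (this maximises the stated objective because the energy contributions of the coefficients add up independently):

```python
import numpy as np

def learn_favg_pattern(images, m):
    """Keep the m transform coefficients with the largest average energy."""
    spectra = np.stack([np.fft.fft2(img) for img in images])   # Psi applied to the dataset
    avg_energy = np.mean(np.abs(spectra) ** 2, axis=0)          # (1/N) sum_i |Psi x_i|^2
    keep = np.argsort(avg_energy.ravel())[::-1][:m]             # indices maximising the objective
    omega = np.zeros(avg_energy.size, dtype=bool)
    omega[keep] = True
    return omega.reshape(avg_energy.shape)                      # boolean sampling mask

def encode(x, omega):
    return np.fft.fft2(x)[omega]                                # b = P_Omega Psi x

def decode(b, omega):
    coeffs = np.zeros(omega.shape, dtype=complex)
    coeffs[omega] = b                                            # P_Omega^T b
    return np.real(np.fft.ifft2(coeffs))                        # x_hat = Psi^* P_Omega^T b
```

The k-best variant would apply the same argsort to a single $|\Psi x|^2$ instead of the dataset average.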
My approach
LSC - Overview

[Figure-only slide.]
LSC I - Learning

[Plot: energy versus frequency, non-cumulated and cumulated.]

1. transform dataset: $\Psi X = \{\Psi x_1, \ldots, \Psi x_N\}$
2. cumulative magnitude: $c(k) = \frac{1}{N}\sum_{j=-J}^{k}\sum_{i=1}^{N}|\Psi x_i(j)|^2$
3. sub-band splitting: $\Psi X = \{\Psi X_1, \ldots, \Psi X_L\}$
4. real $C^l_{re}$ and imaginary $C^l_{im}$ codebook generation
5. sampling-pattern computation: $\hat{\Omega}_l = \arg\max_{\Omega_l}\big(P_{\Omega_l}\sigma_{C^l_{re}} + P_{\Omega_l}\sigma_{C^l_{im}}\big)$
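A highly simplified sketch of steps 1-5, with several assumptions the slide leaves open: $\Psi$ is the 1-D DFT, the sub-band edges are placed at equal shares of cumulative energy, the codebooks come from scikit-learn's KMeans (so the training set must contain at least `codebook_size` signals), and the per-band sampling budget is arbitrary:

```python
import numpy as np
from sklearn.cluster import KMeans

def learn_lsc(signals, n_bands=5, codebook_size=16):
    # 1. transform the dataset
    spectra = np.stack([np.fft.fft(x) for x in signals])

    # 2. cumulative magnitude over the dataset
    energy = np.sum(np.abs(spectra) ** 2, axis=0) / len(signals)
    cumulative = np.cumsum(energy) / np.sum(energy)

    # 3. sub-band splitting at equal shares of cumulative energy (assumption)
    edges = np.searchsorted(cumulative, np.linspace(0, 1, n_bands + 1)[1:-1])
    bands = np.split(np.arange(spectra.shape[1]), edges)

    # 4. one real and one imaginary codebook per sub-band
    codebooks = [
        (KMeans(n_clusters=codebook_size, n_init=4).fit(spectra[:, band].real).cluster_centers_,
         KMeans(n_clusters=codebook_size, n_init=4).fit(spectra[:, band].imag).cluster_centers_)
        for band in bands
    ]

    # 5. sampling pattern: keep the positions where the codewords vary the most
    patterns = []
    for band, (c_re, c_im) in zip(bands, codebooks):
        score = c_re.std(axis=0) + c_im.std(axis=0)      # proxy for P_Omega sigma_C
        k = max(1, len(band) // 10)                      # arbitrary per-band budget (assumption)
        patterns.append(np.argsort(score)[::-1][:k])     # positions of Omega_l inside the band
    return bands, codebooks, patterns
```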
LSC II - Sampling and Encoding

The sub-band sampling is expressed as:

$b^l = P_{\Omega_l} (\Psi x)^l$

The code identification is computed in a single step for both the real and the
imaginary part:

$\hat{c}^l_{re}, \hat{c}^l_{im} = \arg\min_{c^l_{re},\, c^l_{im}}
\|b^l_{re} - P_{\Omega_l} c^l_{re}\|_2^2
+ \|b^l_{im} - P_{\Omega_l} c^l_{im}\|_2^2
+ \big\||b^l| - P_{\Omega_l} |c^l_{re} + j c^l_{im}|\big\|_2^2
+ \big\|\arg(b^l) - P_{\Omega_l} \arg(c^l_{re} + j c^l_{im})\big\|_{2\pi}$
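A minimal sketch of the code identification for one sub-band, keeping only the two quadratic terms of the cost above (the magnitude and 2π-wrapped phase terms are dropped, which makes the real and imaginary choices decouple); `b` is the complex measurement vector and `omega` holds the positions of $\Omega_l$ inside the sub-band, as in the learning sketch:

```python
import numpy as np

def identify_code(b, omega, codebook_re, codebook_im):
    """Pick the codewords whose entries on Omega_l best match the measurements b."""
    err_re = np.sum((codebook_re[:, omega] - b.real) ** 2, axis=1)   # ||b_re - P_Omega c_re||^2
    err_im = np.sum((codebook_im[:, omega] - b.imag) ** 2, axis=1)   # ||b_im - P_Omega c_im||^2
    return codebook_re[np.argmin(err_re)], codebook_im[np.argmin(err_im)]
```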
LSC III - Decoding

The decoder is linear and can be expressed as:

$\hat{x} = \Psi^* \begin{bmatrix}
P_{\Omega_1}^T b^1 + P_{\Omega_1^C}^T P_{\Omega_1^C}(\hat{c}^1_{re} + j\hat{c}^1_{im}) \\
\ldots \\
P_{\Omega_L}^T b^L + P_{\Omega_L^C}^T P_{\Omega_L^C}(\hat{c}^L_{re} + j\hat{c}^L_{im})
\end{bmatrix}$

where $\Omega_l^C$ is the complementary set to $\Omega_l$
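In the same simplified 1-D DFT setting, a sketch of this linear decoder: the sampled positions take the measurements directly, the complementary positions keep the identified codeword values, and $\Psi^*$ is the inverse FFT:

```python
import numpy as np

def lsc_decode(n, bands, patterns, measurements, codes):
    """bands[l]: coefficient indices of sub-band l; patterns[l]: positions of Omega_l
    inside that band; measurements[l]: b^l; codes[l]: c_hat^l_re + 1j * c_hat^l_im."""
    coeffs = np.zeros(n, dtype=complex)
    for band, omega, b, c in zip(bands, patterns, measurements, codes):
        filled = np.asarray(c, dtype=complex).copy()
        filled[omega] = b                        # sampled positions: P_Omega^T b
        coeffs[band] = filled                    # unsampled positions keep the codeword values
    return np.real(np.fft.ifft(coeffs))          # x_hat = Psi^* [ ... ]
```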
DIP - Overview

Implement an hourglass network [4] using the Deep Image Prior [5] framework.
The minimisation problem is as follows:

$\hat{\theta} = \arg\min_{\theta} \|b - P_{\Omega}\Psi f_{\theta}(z)\|_2^2 + \beta\, \Omega_{\theta}(\theta)$
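A skeleton of what the DIP fitting loop could look like in PyTorch, leaving the hourglass generator abstract (`net` is a placeholder, not code from this work); $\Psi$ is assumed to be the 2-D DFT and Adam's weight decay stands in for the regulariser $\beta\,\Omega_\theta(\theta)$:

```python
import torch

def fit_dip(net, z, b, mask, n_iter=3000, lr=1e-3, beta=1e-6):
    """Fit f_theta so that P_Omega Psi f_theta(z) matches the measurements b.

    net: hourglass-style generator (placeholder); z: fixed random input tensor;
    b: complex measurements on the sampled positions; mask: boolean pattern Omega.
    """
    opt = torch.optim.Adam(net.parameters(), lr=lr, weight_decay=beta)
    for _ in range(n_iter):
        opt.zero_grad()
        coeffs = torch.fft.fft2(net(z).squeeze())         # Psi f_theta(z), assuming Psi = 2-D DFT
        loss = torch.sum(torch.abs(coeffs[mask] - b) ** 2)
        loss.backward()
        opt.step()
    return net
```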
DIP - Prior Injection

With prior injection, the minimisation problem becomes:

$\hat{\theta} = \arg\min_{\theta} \|b - P_{\Omega}\Psi f_{\theta}(z)\|_2^2 + \alpha \|f_{\theta}(z) - c\|_2^2 + \beta\, \Omega_{\theta}(\theta)$

while the final reconstruction is obtained through the following linear operation:

$\hat{x} = \Psi^* \big(P_{\Omega}^T b + P_{\Omega^C}^T P_{\Omega^C} \Psi f_{\hat{\theta}}(z)\big)$
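Continuing the previous skeleton, a sketch of the two changes prior injection makes: the extra $\alpha\|f_\theta(z) - c\|_2^2$ term in the loss (with `c` the injected prior image, e.g. a codebook reconstruction) and the final linear combination of measured and generated coefficients:

```python
import torch

def dip_prior_loss(net, z, b, mask, c, alpha):
    """Data term plus the injected-prior term of the DIP objective."""
    out = net(z).squeeze()
    coeffs = torch.fft.fft2(out)                           # Psi f_theta(z)
    data_term = torch.sum(torch.abs(coeffs[mask] - b) ** 2)
    prior_term = alpha * torch.sum((out - c) ** 2)         # alpha * ||f_theta(z) - c||_2^2
    return data_term + prior_term

def dip_reconstruct(net, z, b, mask):
    """x_hat = Psi^*(P_Omega^T b + P_{Omega^C}^T P_{Omega^C} Psi f_theta(z))."""
    coeffs = torch.fft.fft2(net(z).squeeze()).detach()
    coeffs[mask] = b                                        # trust the measurements where sampled
    return torch.fft.ifft2(coeffs).real
```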
Results
Datasets

[Figure-only slide.]
LSC and DIP results i

[Figure-only slide.]
LSC and DIP results

[Plots: MSE versus sampling rate (s.r., 10^-2 to 10^-1) on CELEBa, YaleB, NASA SDO, INRIA, OASIS and NASA IRIS, comparing k-Best, favg, LSC and DIP.]
LSC robust signal recovery

[Figure: reconstructions for k-Best, favg and LSC at noise STDs from 1.00e-01 to 3.59e+02, plus plots of MSE versus noise STD (sigma_z) at sampling rate 0.05 and MSE versus sampling rate at noise STD 15.0, comparing k-best, favg and LSC.]
Applicability
Computed tomography scan (CT)

[Figure-only slide.]
Radio interferometer

[Figure-only slide.]
Conclusion
Summary
Strengths
• low-sampling-rate recovery
• few prior data needed
• fast encoder/decoder (LSC)
• robust to noise (LSC)
• no prior-training required (DIP)
Weaknesses
• some dependency on signal
alignment
• complex decoder (DIP)
• prior-training required (LSC)
Future development
• investigate better sub-band splitting (LSC)
• improve coding models (LSC)
• investigate new prior model and cost function (DIP)
• combine deep models within LSC (DIP + LSC)
* This work has been submitted to EUSIPCO 2018
Questions?
Backup slide - Alignment

LSC: [Bar plot: correlation coefficient between alignment and reconstruction error per dataset (OASIS, YaleB, CELEBa, SDO, IRIS, INRIA) for k = 0.01, 0.05, 0.1.]
DIP: [Bar plot: correlation coefficient between alignment and reconstruction error per dataset, D.I.P. versus L.S.C.]
Backup slide - Alignment

LSC: [Plot: average correlation versus sampling rate (0.02-0.10).]
DIP: [Bar plot: correlation coefficient between alignment and reconstruction error per dataset, D.I.P. versus L.S.C.]
Backup slide - LSC Precision

[Plots: error rate versus sampling rate (k) for sub-bands 1-5 on (a) OASIS and (b) SDO.]
Backup slide - DIP Convergence

[Plot: MSE versus iteration (10^0 to 10^4) for the loss error, recovery error and observed error.]
Backup slide - Performances

LSC: [Plot: reconstruction time versus number of samples, measured versus expected.]
DIP: [Plot: reconstruction time (s) versus number of samples, measured versus expected.]
Backup slide - Performances

LSC: [Bar plot: normalised training time and training time (s) per dataset (IRIS, SDO, OASIS, YaleB, CELEBa, INRIA).]
DIP: [Plot: reconstruction time (s) versus number of samples, measured versus expected.]
Backup slide - Performances

LSC: [Bar plot: normalised training time and training time (s) per dataset (IRIS, SDO, OASIS, YaleB, CELEBa, INRIA).]
DIP: [Plot: average reconstruction time (s) versus resolution (log scale), measured versus expected.]
References i

[1] Robert Tibshirani. Regression shrinkage and selection via the lasso.
    Journal of the Royal Statistical Society, Series B (Methodological),
    58(1):267–288, 1996.

[2] Bubacarr Bah, Ali Sadeghian, and Volkan Cevher. Energy-aware adaptive
    bi-Lipschitz embeddings. CoRR, abs/1307.3457, 2013.

[3] Luca Baldassarre, Yen-Huan Li, Jonathan Scarlett, Baran Gözcü, Ilija
    Bogunovic, and Volkan Cevher. Learning-based Compressive Subsampling.
    IEEE Journal of Selected Topics in Signal Processing, 10(4):809–822,
    June 2016. arXiv:1510.06188.

References ii

[4] Alejandro Newell, Kaiyu Yang, and Jia Deng. Stacked hourglass networks
    for human pose estimation. CoRR, abs/1603.06937, 2016.

[5] Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. Deep image prior.
    arXiv preprint arXiv:1711.10925, 2017.