Low Complexity Regularization of Inverse Problems
Gabriel Peyré

Joint works with: Samuel Vaiter, Jalal Fadili, Charles Dossal, Mohammad Golbabaee

www.numerical-tours.com
Overview

• Compressed Sensing and Inverse Problems

• Convex Regularization with Gauges

• Performance Guarantees
Single Pixel Camera (Rice)

Each measurement is an inner product of the scene x0 with a micro-mirror pattern:
    y[i] = ⟨x0, φi⟩
P measures, N micro-mirrors.

[Figure: acquisitions/reconstructions at undersampling ratios P/N = 1, P/N = 0.16, P/N = 0.02.]
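To make the acquisition model concrete, here is a minimal simulation sketch of such measurements, assuming random ±1 mirror patterns φi and a synthetic sparse scene (both illustrative choices, not the actual Rice hardware):

```python
import numpy as np

# Minimal single-pixel-camera sketch (assumptions: random +/-1 mirror
# patterns phi_i and a synthetic scene x0; not the actual Rice hardware).
rng = np.random.default_rng(0)
N = 64 * 64                      # number of micro-mirrors (scene pixels)
P = int(0.16 * N)                # undersampling ratio P/N = 0.16
x0 = np.zeros(N)
x0[rng.choice(N, 50, replace=False)] = rng.standard_normal(50)  # toy scene

Phi = rng.choice([-1.0, 1.0], size=(P, N)) / np.sqrt(P)  # mirror patterns
y = Phi @ x0                     # y[i] = <x0, phi_i>, one photodiode reading each
print(y.shape)                   # (P,)
```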
CS Hardware Model

CS is about designing hardware: input signals f̃ ∈ L²(R²).
Physical hardware resolution limit: target resolution f ∈ R^N.

[Diagram: f̃ → micro-mirror array (array resolution, x0 ∈ R^N) → CS hardware → y ∈ R^P; the measurement operator Φ is given by the mirror patterns (φ1, φ2, …).]
Inverse Problems

Recovering x0 ∈ R^N from noisy observations
    y = Φ x0 + w ∈ R^P.

Examples: inpainting, super-resolution, compressed sensing.
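As an illustration of what Φ can be, here is a small sketch of an inpainting forward operator (a masking operator); the random mask and the toy signal are illustrative assumptions, not taken from the talk:

```python
import numpy as np

# Sketch of the forward operator Phi for inpainting: keep only the observed
# pixels (a masking operator). The mask here is random; a real inpainting
# problem would use the actual pattern of missing pixels.
rng = np.random.default_rng(1)
N = 256                                                      # signal length
keep = np.sort(rng.choice(N, size=N // 2, replace=False))    # observed indices

def Phi(x):
    """Forward operator: sample the observed entries, y = Phi x in R^P."""
    return x[keep]

def Phi_adj(y):
    """Adjoint: put observations back at their positions, zeros elsewhere."""
    x = np.zeros(N)
    x[keep] = y
    return x

x0 = np.cumsum(rng.standard_normal(N))                       # toy signal
y = Phi(x0) + 0.1 * rng.standard_normal(len(keep))           # y = Phi x0 + w
```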
Inverse Problem Regularization

Observations: y = Φ x0 + w ∈ R^P.
Estimator: x(y) depends only on the observations y and a parameter λ.

Example: variational methods
    x(y) ∈ argmin_{x ∈ R^N}  (1/2) ||y − Φ x||²  +  λ J(x)
                             (data fidelity)        (regularity)

Performance analysis:
→ criteria on (x0, ||w||, λ) to ensure
    L2 stability: ||x(y) − x0|| = O(||w||);
    model stability (e.g. spikes location).
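A minimal sketch of this variational estimator for the sparse case J(x) = ||x||₁, solved by ISTA (proximal gradient descent); the matrix Φ, the noise level, λ and the iteration count are illustrative choices, not values from the talk:

```python
import numpy as np

# Sketch of the variational estimator for J(x) = ||x||_1, solved by ISTA
# (proximal gradient). Phi, lambda_ and the iteration count are illustrative.
rng = np.random.default_rng(2)
P, N, s = 60, 200, 8
Phi = rng.standard_normal((P, N)) / np.sqrt(P)
x0 = np.zeros(N); x0[rng.choice(N, s, replace=False)] = rng.standard_normal(s)
y = Phi @ x0 + 0.02 * rng.standard_normal(P)           # y = Phi x0 + w

lambda_ = 0.05
tau = 1.0 / np.linalg.norm(Phi, 2) ** 2                # step size <= 1/||Phi||^2
x = np.zeros(N)
for _ in range(2000):
    grad = Phi.T @ (Phi @ x - y)                       # gradient of the data fidelity
    u = x - tau * grad
    x = np.sign(u) * np.maximum(np.abs(u) - tau * lambda_, 0.0)  # soft threshold
print(np.linalg.norm(x - x0))                          # recovery error
```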
Union of Linear Models for Data Processing

Union of models: T ∈ 𝒯, linear spaces.

Synthesis sparsity: sparse coefficients x synthesizing the image.
Structured sparsity: coefficients x that are sparse by blocks.
Analysis sparsity: image x whose gradient D* x is sparse.
Low-rank: multi-spectral imaging, x_{i,·} = Σ_{j=1}^r A_{i,j} S_{j,·}.
Gauges for Union of Linear Models

Gauge: J : R^N → R⁺ convex with ∀ α ∈ R⁺, J(α x) = α J(x).

Piecewise regular ball  ⟺  union of linear models (T)_{T ∈ 𝒯}.

Examples (gauge, associated models T):
    J(x) = ||x||₁               T = sparse vectors
    J(x) = |x₁| + ||x_{2,3}||   T = block sparse vectors
    J(x) = ||x||_*              T = low-rank matrices
    J(x) = ||x||_∞              T = antisparse vectors

[Figure: unit balls of these gauges, with the model spaces T at their singular points.]
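A short sketch evaluating these four gauges in NumPy on toy inputs; the 3-D block structure {1}, {2,3} mirrors the slide's example, and all inputs are illustrative:

```python
import numpy as np

# Evaluating the four gauges from the slide on toy inputs. The block
# structure {1}, {2,3} for the group gauge is the slide's 3-D example.
def l1(x):      return np.sum(np.abs(x))                          # sparse model
def block(x):   return np.abs(x[0]) + np.linalg.norm(x[1:3])      # |x1| + ||x_{2,3}||
def nuclear(X): return np.sum(np.linalg.svd(X, compute_uv=False)) # low-rank model
def linf(x):    return np.max(np.abs(x))                          # antisparse model

x = np.array([2.0, -1.0, 0.5])
X = np.outer([1.0, 2.0], [3.0, -1.0])                             # rank-1 matrix
print(l1(x), block(x), nuclear(X), linf(x))

# Positive homogeneity J(a x) = a J(x), a >= 0, holds for each of them:
assert np.isclose(l1(3 * x), 3 * l1(x))
```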
Subdifferentials and Models

∂J(x) = {η : ∀ y, J(y) ≥ J(x) + ⟨η, y − x⟩}.

Example: J(x) = ||x||₁, with I = supp(x) = {i : x_i ≠ 0}:
    ∂||x||₁ = {η : η_I = sign(x_I) and ∀ j ∉ I, |η_j| ≤ 1},
    T_x = {η : supp(η) ⊆ I},   e_x = sign(x).

Definition: T_x = VectHull(∂J(x))^⊥.

    η ∈ ∂J(x)   ⟹   Proj_{T_x}(η) = e_x.

[Figure: ∂J(x) and T_x for J = |·| and J = ||·||₁.]
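A quick numerical sanity check of this implication for J = ||·||₁; the vector x and the random subgradients are illustrative:

```python
import numpy as np

# For J = ||.||_1: T_x = {z : supp(z) in supp(x)} and e_x = sign(x).
# Sketch: draw subgradients eta (eta_I = sign(x_I), |eta_j| <= 1 off I)
# and check that their projection onto T_x is always e_x.
rng = np.random.default_rng(3)
x = np.array([0.0, 2.0, 0.0, -1.5, 0.0])
I = np.flatnonzero(x)                                 # support of x

e_x = np.sign(x)                                      # model vector e_x
for _ in range(5):
    eta = rng.uniform(-1.0, 1.0, size=x.size)         # |eta_j| <= 1 off the support
    eta[I] = np.sign(x[I])                            # eta_I = sign(x_I)
    proj = np.where(np.isin(np.arange(x.size), I), eta, 0.0)  # Proj_{T_x}(eta)
    assert np.allclose(proj, e_x)
```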
Examples

ℓ1 sparsity: J(x) = ||x||₁
    e_x = sign(x),   T_x = {z : supp(z) ⊆ supp(x)}.

Structured sparsity: J(x) = Σ_b ||x_b||, with N(a) = a/||a||
    e_x = (N(x_b))_{b ∈ B},   T_x = {z : supp(z) ⊆ supp(x)}.

Nuclear norm: J(x) = ||x||_*, with SVD x = U Λ V*
    e_x = U V*,   T_x = {z : U⊥* z V⊥ = 0}.

Anti-sparsity: J(x) = ||x||_∞, with I = {i : |x_i| = ||x||_∞}
    e_x = |I|⁻¹ sign(x_I),   T_x = {y : y_I ∝ sign(x_I)}.
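A numerical sketch of the nuclear-norm case: build a rank-r matrix, form e_x = U V* from its reduced SVD, and check that a subgradient η = U V* + W (with U* W = 0, W V = 0, ||W||_op ≤ 1, the standard characterization of ∂||·||_*) projects onto e_x:

```python
import numpy as np

# Nuclear-norm model sketch: for x with reduced SVD x = U S V*, the slide gives
# e_x = U V* and T_x = {z : U_perp* z V_perp = 0}. Check numerically that a
# subgradient eta = U V* + W (U* W = 0, W V = 0, ||W||_op <= 1) projects onto e_x.
rng = np.random.default_rng(4)
n1, n2, r = 6, 5, 2
x = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))   # rank-r matrix
U, S, Vt = np.linalg.svd(x, full_matrices=False)
U, Vt = U[:, :r], Vt[:r, :]                                       # reduced SVD
e_x = U @ Vt

def proj_T(z):
    """Projection onto T_x = {z : U_perp* z V_perp = 0}."""
    Pu, Pv = U @ U.T, Vt.T @ Vt
    return Pu @ z + z @ Pv - Pu @ z @ Pv

W = (np.eye(n1) - U @ U.T) @ rng.standard_normal((n1, n2)) @ (np.eye(n2) - Vt.T @ Vt)
W *= 0.5 / np.linalg.norm(W, 2)                  # ||W||_op <= 1
eta = e_x + W                                    # a subgradient of ||.||_* at x
assert np.allclose(proj_T(eta), e_x)
```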
Dual Certificate and L2 Stability

Noiseless recovery:   min_{Φ x = Φ x0} J(x)    (P0)

Proposition: x0 is a solution of (P0)  ⟺  ∃ η ∈ D(x0).

Dual certificates:        D(x0) = Im(Φ*) ∩ ∂J(x0).
Tight dual certificates:  D̄(x0) = Im(Φ*) ∩ ri(∂J(x0)).

[Figure: the level set of J, the affine set {x : Φ x = Φ x0}, the solution x* and a certificate η ∈ ∂J(x0).]

Theorem [Fadili et al. 2013]:
If ∃ η ∈ D̄(x0), then for λ ∼ ||w|| one has ||x* − x0|| = O(||w||).

[Grasmair, Haltmeier, Scherzer 2010]: J = || · ||₁.
[Grasmair 2012]: J(x* − x0) = O(||w||).
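For J = ||·||₁ the noiseless problem (P0) is a linear program; here is a small sketch solving it with scipy.optimize.linprog and checking exact recovery on a random Gaussian instance (the dimensions are illustrative):

```python
import numpy as np
from scipy.optimize import linprog

# Noiseless l1 recovery (P0): min ||x||_1 s.t. Phi x = Phi x0, written as the LP
#   min 1'(xp + xm)  s.t.  Phi (xp - xm) = y,  xp, xm >= 0.
rng = np.random.default_rng(5)
P, N, s = 40, 100, 5
Phi = rng.standard_normal((P, N)) / np.sqrt(P)
x0 = np.zeros(N); x0[rng.choice(N, s, replace=False)] = rng.standard_normal(s)
y = Phi @ x0

c = np.ones(2 * N)
A_eq = np.hstack([Phi, -Phi])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
x_star = res.x[:N] - res.x[N:]
print(np.linalg.norm(x_star - x0))        # ~0 when a tight certificate exists
```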
Compressed Sensing Setting

Random matrix: Φ ∈ R^{P×N}, Φ_{i,j} ∼ N(0, 1), i.i.d.

Sparse vectors: J = || · ||₁.      [Rudelson, Vershynin 2006] [Chandrasekaran et al. 2011]
Theorem: Let s = ||x0||₀. If
    P ≥ 2 s log(N/s),
then ∃ η ∈ D̄(x0) with high probability on Φ.

Low-rank matrices: J = || · ||_*, x0 ∈ R^{N1×N2}.      [Chandrasekaran et al. 2011]
Theorem: Let r = rank(x0). If
    P ≥ 3 r (N1 + N2 − r),
then ∃ η ∈ D̄(x0) with high probability on Φ.

→ Similar results for || · ||_{1,2}, || · ||_∞.
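The two bounds are easy to evaluate; a small helper, with the constants as on the slide and ignoring the unstated "high probability" factors:

```python
import numpy as np

# Evaluating the two sample-complexity bounds from the slide.
def cs_bound_sparse(s, N):
    """P >= 2 s log(N/s) for s-sparse vectors with J = ||.||_1."""
    return 2 * s * np.log(N / s)

def cs_bound_lowrank(r, N1, N2):
    """P >= 3 r (N1 + N2 - r) for rank-r matrices with J = ||.||_*."""
    return 3 * r * (N1 + N2 - r)

print(cs_bound_sparse(s=10, N=10_000))        # ~138 measurements
print(cs_bound_lowrank(r=5, N1=100, N2=100))  # 2925 measurements
```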
Minimal-norm Certificate

η ∈ D(x0)   ⟹   η = Φ* q  and  Proj_T(η) = e,   where T = T_{x0}, e = e_{x0}.

Minimal-norm pre-certificate:
    η0 = argmin_{η = Φ* q, η_T = e} ||q||.

Proposition: one has η0 = Φ* (Φ_T⁺)* e.

Theorem [Vaiter et al. 2013]:
If η0 ∈ D̄(x0) and λ ∼ ||w||, the unique solution x* of P_λ(y) for y = Φ x0 + w satisfies
    T_{x*} = T_{x0}   and   ||x* − x0|| = O(||w||).

[Fuchs 2004]: J = || · ||₁.
[Vaiter et al. 2011]: J = ||D* · ||₁.
[Bach 2008]: J = || · ||_{1,2} and J = || · ||_*.
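A sketch of this pre-certificate for J = ||·||₁, where T = {z : supp(z) ⊆ I}, e = sign(x0), and the proposition's formula reduces to the [Fuchs 2004] vector η0 = Φᵀ Φ_I (Φ_Iᵀ Φ_I)⁻¹ sign(x0,I); the Gaussian Φ and the sizes are illustrative:

```python
import numpy as np

# Minimal-norm pre-certificate for J = ||.||_1 (a sketch): with I = supp(x0),
# eta0 = Phi^T Phi_I (Phi_I^T Phi_I)^{-1} sign(x0_I), the [Fuchs 2004] vector.
# eta0 is a tight certificate iff its off-support entries are < 1 in magnitude.
rng = np.random.default_rng(6)
P, N, s = 50, 120, 6
Phi = rng.standard_normal((P, N)) / np.sqrt(P)
x0 = np.zeros(N); I = rng.choice(N, s, replace=False)
x0[I] = rng.standard_normal(s)

Phi_I = Phi[:, I]
q0 = Phi_I @ np.linalg.solve(Phi_I.T @ Phi_I, np.sign(x0[I]))  # min-norm q with Phi_I^T q = sign(x0_I)
eta0 = Phi.T @ q0

off_I = np.setdiff1d(np.arange(N), I)
print(np.abs(eta0[I] - np.sign(x0[I])).max())    # ~0: eta0 agrees with e on T
print(np.abs(eta0[off_I]).max())                 # < 1 => model (support) stability
```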
Compressed Sensing Setting

Random matrix: Φ ∈ R^{P×N}, Φ_{i,j} ∼ N(0, 1), i.i.d.

Sparse vectors: J = || · ||₁.      [Wainwright 2009] [Dossal et al. 2011]
Theorem: Let s = ||x0||₀. If
    P ≥ 2 s log(N),
then η0 ∈ D̄(x0) with high probability on Φ.

Phase transitions:
    L2 stability:     P ∼ 2 s log(N/s)
    Model stability:  P ∼ 2 s log(N)

→ Similar results for || · ||_{1,2}, || · ||_*, || · ||_∞.
→ Not using RIP techniques (non-uniform result on x0).
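The model-stability transition can be probed empirically: the sketch below estimates, for a few values of P, the probability that the pre-certificate η0 is tight (||η0,I^c||_∞ < 1) over random draws of Φ and of the signed support; the sizes and trial counts are illustrative, and the transition should appear roughly around P ≈ 2 s log(N):

```python
import numpy as np

# Empirical sketch of the model-stability transition: probability that the
# pre-certificate eta0 is tight for an s-sparse x0 and a Gaussian Phi.
rng = np.random.default_rng(7)
N, s, trials = 400, 5, 50

def tight_cert_prob(P):
    ok = 0
    for _ in range(trials):
        Phi = rng.standard_normal((P, N))
        I = rng.choice(N, s, replace=False)
        sgn = rng.choice([-1.0, 1.0], size=s)
        Phi_I = Phi[:, I]
        q0 = Phi_I @ np.linalg.solve(Phi_I.T @ Phi_I, sgn)
        eta0 = Phi.T @ q0
        eta0[I] = 0.0                      # only the off-support part matters
        ok += np.abs(eta0).max() < 1.0
    return ok / trials

for P in (20, 40, 60, 80, 100):
    print(P, tight_cert_prob(P))           # 2 s log(N) is about 60 here
```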
1-D Sparse Spikes Deconvolution

Φ x = Σ_i x_i φ(· − Δ i),   J(x) = ||x||₁.

Increasing Δ:
    reduces correlation,
    reduces resolution.

[Figure: ||η0,I^c||_∞ as a function of Δ.]

I = {j : x0(j) ≠ 0}.
||η0,I^c||_∞ < 1   ⟺   η0 ∈ D̄(x0)   ⟺   support recovery.
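A sketch of this criterion: build a convolution operator with a Gaussian kernel (an illustrative choice of φ and of its width, not necessarily the one used in the talk), place two opposite spikes a given number of samples apart, and evaluate ||η0,I^c||_∞; values below 1 predict support recovery by ℓ1 regularization:

```python
import numpy as np

# Support-recovery criterion for 1-D deconvolution: Gaussian kernel phi
# (illustrative), two opposite spikes at distance delta, evaluate the
# off-support sup-norm of the pre-certificate eta0.
N, sigma = 256, 4.0
t = np.arange(N)
Phi = np.exp(-(t[:, None] - t[None, :]) ** 2 / (2 * sigma ** 2))  # Phi[:, j] = phi(. - j)

def cert_off_support(delta):
    I = np.array([N // 2, N // 2 + delta])
    sgn = np.array([1.0, -1.0])
    Phi_I = Phi[:, I]
    q0 = Phi_I @ np.linalg.solve(Phi_I.T @ Phi_I, sgn)
    eta0 = Phi.T @ q0
    eta0[I] = 0.0
    return np.abs(eta0).max()

for delta in (2, 4, 8, 16):
    print(delta, cert_off_support(delta))   # criterion as a function of the spacing
```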
Conclusion

Gauges: encode linear models as singular points.

Performance measures: L2 error / model  →  different CS guarantees.

Specific certificate η0.

Open problems:
– Approximate model recovery T_{x*} ≈ T_{x0}.
– CS performance with complicated gauges (e.g. TV).