Tensor completion for PDEs with uncertain
coefficients and Bayesian Update
Alexander Litvinenko
(joint work with E. Zander, B. Rosic, O. Pajonk, H. Matthies)
Center for Uncertainty Quantification, http://sri-uq.kaust.edu.sa/
Extreme Computing Research Center, KAUST
The structure of the talk
Part I (Stochastic forward problem):
1. Motivation
2. Elliptic PDE with uncertain coefficients
3. Discretization and low-rank tensor approximations
Part II (Bayesian update):
1. Bayesian update surrogate
2. Examples
Part III (Tensor completion):
1. Problem setup
2. Tensor completion for Bayesian Update
Motivation to do Uncertainty Quantification (UQ)
Motivation: there is an urgent need to quantify and reduce the
uncertainty in output quantities of computer simulations within
complex (multiscale-multiphysics) applications.
Typical challenges: classical sampling methods are often very
inefficient, whereas straightforward functional representations
are subject to the well-known Curse of Dimensionality.
Nowadays computational predictions are used in critical
engineering decisions and thanks to modern computers we are
able to simulate very complex phenomena. But, how reliable
are these predictions? Can they be trusted?
Example: Saudi Aramco currently has a simulator,
GigaPOWERS, which runs with 9 billion cells. How sensitive
are the simulation results with respect to the unknown reservoir
properties?
Part I: Stochastic forward problem
Stochastic Galerkin method to solve an elliptic PDE with uncertain coefficients
PDE with uncertain coefficient and RHS
Consider
− div(κ(x, ω) ∇u(x, ω)) = f(x, ω) in G × Ω, G ⊂ R², 
u = 0 on ∂G, (1)
where κ(x, ω) is an uncertain diffusion coefficient. Since κ is positive,
usually κ(x, ω) = e^{γ(x,ω)}.
For well-posedness see [Sarkis 09, Gittelson 10, H. J. Starkloff 11, Ullmann 10].
Further we assume that covκ(x, y) is given.
My previous work
After applying the stochastic Galerkin method, obtain:
Ku = f, where all ingredients are represented in a tensor format
Compute max{u}, var(u), level sets of u, sign(u)
[1] Efficient Analysis of High Dimensional Data in Tensor Formats,
Espig, Hackbusch, A.L., Matthies and Zander, 2012.
Studied which ingredients influence the tensor rank of K
[2] Efficient low-rank approximation of the stochastic Galerkin matrix in tensor formats,
Wähnert, Espig, Hackbusch, A.L., Matthies, 2013.
Approximate κ(x, ω), stochastic Galerkin operator K in Tensor
Train (TT) format, solve for u, postprocessing
[3] Polynomial Chaos Expansion of random coefficients and the solution of stochastic
partial differential equations in the Tensor Train format, Dolgov, Litvinenko, Khoromskij, Matthies, 2016.
Canonical and Tucker tensor formats
Definition and Examples of tensors
Canonical and Tucker tensor formats
[Pictures are taken from B. Khoromskij and A. Auer lecture course]
Storage: O(n^d) → O(dRn) (canonical) and O(R^d + dRn) (Tucker).
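To make these storage counts concrete, here is a minimal Python sketch (an illustration added to this write-up, with made-up sizes): it stores a random order-d tensor in canonical (CP) form and compares the number of stored entries with the full array.

import numpy as np

# Minimal sketch: rank-R canonical (CP) representation of an order-d tensor.
# Full storage needs n**d entries; the CP factors need only d*R*n entries.
d, n, R = 5, 20, 4
factors = [np.random.rand(n, R) for _ in range(d)]   # one n x R factor per dimension

def cp_to_full(factors):
    """Expand a CP representation into the full tensor (feasible only for small n, d)."""
    full = 0
    for r in range(factors[0].shape[1]):
        rank1 = factors[0][:, r]
        for f in factors[1:]:
            rank1 = np.multiply.outer(rank1, f[:, r])
        full = full + rank1
    return full

A = cp_to_full(factors)
print("full storage:", A.size)                        # n**d = 3,200,000 entries
print("CP storage  :", sum(f.size for f in factors))  # d*R*n = 400 entries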
Definition of tensor of order d
A tensor of order d is a multidimensional array over a d-tuple index set
I = I1 × · · · × Id,
A = [a_{i1...id} : iµ ∈ Iµ] ∈ R^I, Iµ = {1, ..., nµ}, µ = 1, ..., d.
A is an element of the linear space
Vn = ⊗_{µ=1}^{d} Vµ, Vµ = R^{Iµ},
equipped with the Euclidean scalar product ⟨·, ·⟩ : Vn × Vn → R, defined as
⟨A, B⟩ := Σ_{(i1,...,id)∈I} a_{i1...id} b_{i1...id}, for A, B ∈ Vn.
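A small Python sketch of this scalar product (an added illustration, not from the slides): the Euclidean inner product of two order-d tensors is the sum of the entrywise products over all multi-indices.

import numpy as np

# Minimal sketch of the Euclidean scalar product <A, B> of two order-d tensors:
# sum over all multi-indices (i1, ..., id) of a_{i1...id} * b_{i1...id}.
n, d = 4, 3
A = np.random.rand(*([n] * d))
B = np.random.rand(*([n] * d))

inner = np.sum(A * B)                      # entrywise product, summed over all indices
assert np.isclose(inner, np.vdot(A, B))    # same value via flattened vectors
print(inner)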
Discretization of elliptic PDE
Now let us discretize our diffusion equation with
uncertain coefficients
Karhunen-Loève and Polynomial Chaos Expansions
Apply both:
Karhunen-Loève Expansion (KLE):
κ(x, ω) = κ0(x) + Σ_{j=1}^{∞} κj gj(x) ξj(θ(ω)), where θ = θ(ω) = (θ1(ω), θ2(ω), ...),
ξj(θ) = (1/κj) ∫_G (κ(x, ω) − κ0(x)) gj(x) dx.
Polynomial Chaos Expansion (PCE):
κ(x, ω) = Σ_α κ^(α)(x) Hα(θ); compute ξj(θ) = Σ_{α∈J} ξj^(α) Hα(θ),
where ξj^(α) = (1/κj) ∫_G κ^(α)(x) gj(x) dx.
Further compute ξj^(α) ≈ Σ_{ℓ=1}^{s} (ξℓ)j Π_{k=1}^{∞} (ξℓ,k)^{αk}.
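As a minimal numerical illustration of a truncated KLE (the grid, covariance length and truncation level below are illustrative assumptions, not the values of the talk), the modes gj and their weights come from the eigendecomposition of the covariance matrix; this sketch uses the common normalization κ0 + Σj √λj gj ξj with unit-variance ξj.

import numpy as np

# Minimal sketch: truncated Karhunen-Loeve expansion of a random field kappa(x, omega)
# on a 1D grid, assuming a Gaussian covariance (illustrative parameters only).
n, cov_length, n_kle = 100, 0.1, 35
x = np.linspace(0.0, 1.0, n)

# Covariance matrix C_ij = exp(-|x_i - x_j|^2 / l^2)
C = np.exp(-((x[:, None] - x[None, :]) ** 2) / cov_length**2)

# Eigenpairs of C give the KLE weights lambda_j and spatial modes g_j(x).
eigval, eigvec = np.linalg.eigh(C)
idx = np.argsort(eigval)[::-1][:n_kle]             # keep the n_kle largest eigenvalues
lam = np.clip(eigval[idx], 0.0, None)              # clip tiny negative round-off
g = eigvec[:, idx]

# One realisation: kappa(x) = kappa_0(x) + sum_j sqrt(lambda_j) g_j(x) xi_j
kappa_0 = np.ones(n)
xi = np.random.randn(n_kle)                        # unit-variance germ (Gaussian case)
kappa_realisation = kappa_0 + g @ (np.sqrt(lam) * xi)
print(kappa_realisation.shape)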
Final discretized stochastic PDE
Ku = f, where
K := Σ_{ℓ=1}^{s} Kℓ ⊗ ⊗_{µ=1}^{M} ∆ℓµ, Kℓ ∈ R^{N×N}, ∆ℓµ ∈ R^{Rµ×Rµ},
u := Σ_{j=1}^{r} uj ⊗ ⊗_{µ=1}^{M} ujµ, uj ∈ R^N, ujµ ∈ R^{Rµ},
f := Σ_{k=1}^{R} fk ⊗ ⊗_{µ=1}^{M} gkµ, fk ∈ R^N and gkµ ∈ R^{Rµ}.
(Wähnert, Espig, Hackbusch, Litvinenko, Matthies, 2011)
[Figure: examples of stochastic Galerkin matrices.]
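To make the Kronecker structure of K tangible, here is a minimal sketch with small made-up dimensions (not those of the talk): the stochastic Galerkin operator is assembled as a sum of Kronecker products and applied to a vector; for the tiny sizes below the dense operator can be formed explicitly, which one would of course avoid in practice.

import numpy as np
from functools import reduce

# Minimal sketch: K = sum_l  K_l (x) Delta_{l,1} (x) ... (x) Delta_{l,M},
# with small illustrative dimensions (spatial dofs N, stochastic dims M, rank R_mu).
N, M, R_mu, s = 10, 3, 4, 2

K_l = [np.random.rand(N, N) for _ in range(s)]
Delta = [[np.random.rand(R_mu, R_mu) for _ in range(M)] for _ in range(s)]

kron_all = lambda mats: reduce(np.kron, mats)

# Dense reference operator (size (N * R_mu**M)^2; feasible only for tiny sizes).
K_full = sum(kron_all([K_l[l]] + Delta[l]) for l in range(s))

u = np.random.rand(N * R_mu**M)
print(K_full.shape, np.linalg.norm(K_full @ u))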
Part II: Bayesian update
We will speak about the Gauss-Markov-Kalman filter for Bayesian updating of parameters in a computational model.
Mathematical setup
Consider
K(u; q) = f ⇒ u = S(f; q),
where S is the solution operator. The operator depends on parameters q ∈ Q,
hence the state u ∈ U is also a function of q.
Measurement operator Y with values in Y:
y = Y(q; u) = Y(q, S(f; q)).
Examples of measurements:
y(ω) = ∫_{D0} u(ω, x) dx, or u at a few points.
Random QoI
With the state u a random variable, the quantity to be measured,
y(ω) = Y(q(ω), u(ω)),
is also uncertain, a random variable.
Noisy data: ŷ + ε(ω), where ŷ is the “true” value and ε a random error.
Forecast of the measurement: z(ω) = y(ω) + ε(ω).
Conditional probability and expectation
Classically, Bayes's theorem gives the conditional probability
P(Iq | Mz) = (P(Mz | Iq) / P(Mz)) P(Iq)   (or πq(q|z) = (p(z|q)/Zs) pq(q)).
Expectation with this posterior measure is the conditional expectation.
Kolmogorov starts from the conditional expectation E(·|Mz) and obtains from it the
conditional probability via P(Iq | Mz) = E(χ_{Iq} | Mz).
Conditional expectation
The conditional expectation is defined as the
orthogonal projection onto the closed subspace L2(Ω, P, σ(z)):
E(q | σ(z)) := P_{Q∞} q = argmin_{q̃ ∈ L2(Ω,P,σ(z))} ‖q − q̃‖²_{L2}.
The subspace Q∞ := L2(Ω, P, σ(z)) represents the available information.
The update, also called the assimilated value,
qa(ω) := P_{Q∞} q = E(q | σ(z)), is a Q-valued RV
and represents the new state of knowledge after the measurement.
Doob-Dynkin: Q∞ = {ϕ ∈ Q : ϕ = φ ◦ z, φ measurable}.
Numerical computation of NLBU
Look for ϕ such that q(ξ) = ϕ(z(ξ)), with z(ξ) = y(ξ) + ε(ω):
ϕ ≈ ϕ̃ = Σ_{α∈Jp} ϕα Φα(z(ξ)),
and minimize ‖q(ξ) − ϕ̃(z(ξ))‖²_{L2}, where the Φα are polynomials
(e.g. Hermite, Laguerre, Chebyshev or something else).
Taking derivatives with respect to ϕα:
∂/∂ϕα ⟨q(ξ) − ϕ̃(z(ξ)), q(ξ) − ϕ̃(z(ξ))⟩ = 0  ∀α ∈ Jp.
Inserting the representation for ϕ̃, obtain:
Numerical computation of NLBU
∂/∂ϕα E[ q²(ξ) − 2 Σ_{β∈J} q ϕβ Φβ(z) + Σ_{β,γ∈J} ϕβ ϕγ Φβ(z) Φγ(z) ]
= 2 E[ −q Φα(z) + Σ_{β∈J} ϕβ Φβ(z) Φα(z) ]
= 2 ( Σ_{β∈J} E[Φβ(z) Φα(z)] ϕβ − E[q Φα(z)] ) = 0  ∀α ∈ J.
Numerical computation of NLBU
Now, rewriting the last sum in matrix form, we obtain the linear system of
equations (=: A) for the coefficients ϕβ:
[ E[Φα(z(ξ)) Φβ(z(ξ))] ]_{α,β∈J} (..., ϕβ, ...)ᵀ = (..., E[q(ξ) Φα(z(ξ))], ...)ᵀ,
where α, β ∈ J and A is of size |J| × |J|.
Numerical computation of NLBU
We can rewrite the system above in the compact form:
[Φ] [diag(...wi...)] [Φ]ᵀ (..., ϕβ, ...)ᵀ = [Φ] (w0 q(ξ0), ..., wN q(ξN))ᵀ,
where [Φ] ∈ R^{|J|×N} and [diag(...wi...)] ∈ R^{N×N}.
Solving this system, we obtain the vector of coefficients (..., ϕβ, ...)ᵀ for all β.
Finally, the assimilated parameter qa will be
qa = qf + ϕ̃(ŷ) − ϕ̃(z), (2)
z(ξ) = y(ξ) + ε(ω), ϕ̃ = Σ_{β∈Jp} ϕβ Φβ(z(ξ)).
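The following Python sketch illustrates this construction for a scalar parameter q and a scalar measurement forecast z (a simplified, self-contained illustration: the measurement model, noise level and observed value ŷ are made up, and probabilists' Hermite polynomials with equal weights wi = 1/N stand in for the Φα and the integration rule of the talk): build the system E[Φα Φβ] ϕβ = E[q Φα] from samples, then apply qa = qf + ϕ̃(ŷ) − ϕ̃(z).

import numpy as np
from numpy.polynomial.hermite_e import hermevander

# Minimal sketch of the Bayesian-update surrogate for a scalar parameter q
# and a scalar measurement forecast z, using probabilists' Hermite polynomials.
rng = np.random.default_rng(0)
n_samples, p_phi = 2000, 3

# Prior samples of q and the corresponding measurement forecast z = M(q) + noise.
q_f = rng.normal(1.0, 0.5, n_samples)            # prior (forecast) parameter samples
y = q_f**2                                       # assumed measurement model y = M(q)
z = y + rng.normal(0.0, 0.1, n_samples)          # forecast of the noisy measurement

# Design matrix with columns Phi_alpha(z_i), alpha = 0..p_phi.
Phi = hermevander(z, p_phi)                      # shape (n_samples, p_phi + 1)

# Normal equations  E[Phi_a Phi_b] phi_b = E[q Phi_a]  (equal weights w_i = 1/N).
G = Phi.T @ Phi / n_samples
rhs = Phi.T @ q_f / n_samples
phi = np.linalg.solve(G, rhs)

phi_tilde = lambda s: hermevander(np.atleast_1d(s), p_phi) @ phi

# Update with an actual measurement y_hat:  q_a = q_f + phi_tilde(y_hat) - phi_tilde(z).
y_hat = 1.44                                     # hypothetical observed value
q_a = q_f + (phi_tilde(y_hat) - phi_tilde(z))
print("prior mean %.3f -> assimilated mean %.3f" % (q_f.mean(), q_a.mean()))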
Explanation of the “Bayesian update surrogate” from E. Zander
Let the stochastic model of the measurement be given by
y = M(q) + ε, with ε the measurement noise. (3)
The best estimator ϕ̃ for q given z is
ϕ̃ = argmin_ϕ E[ ‖q(·) − ϕ(z(·))‖²₂ ]. (4)
The best estimate (or predictor) of q given the measurement model is
qM(ξ) = ϕ̃(z(ξ)). (5)
The remainder, i.e. the difference between q and qM, is given by
q⊥M(ξ) = q(ξ) − qM(ξ). (6)
Due to the minimisation property of the MMSE estimator, the remainder is
orthogonal to qM(ξ), i.e. cov(q⊥M, qM) = 0.
In other words,
q(ξ) = qM(ξ) + q⊥M(ξ) (7)
yields an orthogonal decomposition of q.
Given an actual measurement ŷ, the prediction is q̂ = ϕ̃(ŷ). The part qM of q
can be “collapsed” to q̂. The updated stochastic model q′ is thus given by
q′(ξ) = q̂ + q⊥M(ξ), (8)
and, substituting qM(ξ) = ϕ̃(z(ξ)) and q̂ = ϕ̃(ŷ) into (8),
q′(ξ) = q(ξ) + (ϕ̃(ŷ) − ϕ̃(z(ξ))). (9)
Example: 1D elliptic PDE with uncertain coefficients
− d/dx (κ(x, ξ) du/dx(x, ξ)) = f(x, ξ), x ∈ [0, 1],
plus Dirichlet random b.c. g(0, ξ) and g(1, ξ).
3 measurements: u(0.3) = 22, s.d. 0.2; u(0.5) = 28, s.d. 0.3; u(0.8) = 18, s.d. 0.3.
κ(x, ξ): N = 100 dofs, M = 5, 35 KLE terms, beta distribution for κ, Gaussian covκ, cov. length 0.1, multivariate Hermite polynomials of order pκ = 2;
RHS f(x, ξ): Mf = 5, 40 KLE terms, beta distribution for f, exponential covf, cov. length 0.03, multivariate Hermite polynomials of order pf = 2;
b.c. g(x, ξ): Mg = 2, 2 KLE terms, normal distribution for g, Gaussian covg, cov. length 10, multivariate Hermite polynomials of order pg = 1;
pφ = 3 and pu = 3.
Example: updating of the solution u
[Figure: original and updated solutions, mean value plus/minus 1, 2, 3 standard deviations.]
[Graphics produced with the stochastic Galerkin library sglib, written by E. Zander at TU Braunschweig.]
Example: Updating of the parameter
[Figure: original and updated parameter κ.]
Part III. Tensor completion
Now we consider how to apply tensor completion techniques to the Bayesian update.
In the Bayesian update surrogate, the assimilated PCE coefficients of the parameter qa will be
NEW gPCE coeffs = OLD gPCE coeffs + gPCE coeffs of the update.
ALL INGREDIENTS ARE TENSORS!
qa = qf + ϕ̃(ŷ) − ϕ̃(z), (10)
z(ξ) = y(ξ) + ε(ω), qa ∈ R^{N×#Ja}, N = 1..10⁷, #Ja > 1000, #Jf < #Ja.
Problem setup: Tensor completion
Problem of fitting a low-rank tensor A ∈ R^I, I := I1 × ... × Id,
Iµ = {1, ..., nµ}, µ ∈ D := {1, ..., d}, to given data points
{Mi ∈ R | i ∈ P}, P ⊂ I, #P ≥ Σ_{µ=1}^{d} nµ, (11)
by minimizing the distance between the given values (Mi)_{i∈P}
and the approximations (Ai)_{i∈P}:
A = argmin_{Ã∈T} Σ_{i∈P} (Mi − Ãi)². (12)
Remark: here we assume that the target tensor M allows a low-rank approximation
‖M − M̃‖ ≤ ε, ε ≥ 0, where M̃ fulfills certain rank bounds and T is the low-rank
format under consideration.
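As a minimal illustration of the minimization (12), the sketch below (an added illustration with made-up sizes, not the ALS/ADF codes cited on the next slide) fits a rank-R model to a set of observed entries P by alternating least squares in the matrix case d = 2; the ALS completion methods mentioned below apply the same idea dimension by dimension in higher-order formats.

import numpy as np

# Minimal sketch: rank-R completion of a matrix (the d = 2 case of (12)) by
# alternating least squares on the observed entries P. Illustrative sizes only.
rng = np.random.default_rng(1)
n1, n2, R = 40, 30, 3

# Ground-truth low-rank data M and a random sample set P of its entries.
M = rng.standard_normal((n1, R)) @ rng.standard_normal((R, n2))
mask = rng.random((n1, n2)) < 0.3                 # observed entries (the set P)

U, V = rng.standard_normal((n1, R)), rng.standard_normal((n2, R))
lam = 1e-8                                        # tiny regularization for stability

for sweep in range(30):
    # Fix V and solve a small least-squares problem per row of U, then vice versa.
    for i in range(n1):
        Vi = V[mask[i]]
        U[i] = np.linalg.solve(Vi.T @ Vi + lam * np.eye(R), Vi.T @ M[i, mask[i]])
    for j in range(n2):
        Uj = U[mask[:, j]]
        V[j] = np.linalg.solve(Uj.T @ Uj + lam * np.eye(R), Uj.T @ M[mask[:, j], j])

A = U @ V.T
print("relative reconstruction error: %.2e" % (np.linalg.norm(A - M) / np.linalg.norm(M)))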
Problem setup: Tensor completion
L. Grasedyck et al., 2016: hierarchical and tensor train formats.
W. Austin, T. Kolda, D. Kressner, M. Steinlechner et al.: CP format.
Goal: reconstruct the tensor with O(log N) samples.
Methods:
1. ALS, inspired by the LMaFit method for matrix completion, complexity O(r⁴ d #P).
2. Alternating directions fitting (ADF), complexity O(r² d #P).
Numerical experiments for SPDEs: Tensor completion
[L. Grasedyck, M. Kluge, S. Kraemer, SIAM J. Sci. Comput., Vol 37/5, 2016]
Applied the ALS and ADF methods to
− div(κ(x, ω) ∇u(x, ω)) = 1 in D × Ω,
u(x, ω) = 0 on ∂D × Ω, (13)
D = [−1, 1]. The goal is to determine u(ω) := ∫_D u(x, ω) dx.
FE with 50 dofs, KLE with d terms, d stochastically independent RVs.
This yields the tensor A_{i1...id} := u(i1, ..., id),
n = 100, d = 5, slice density CSD = 6.
Software (MATLAB) is available.
Example: updating of the solution u
[Figure: original and updated solutions, mean value plus/minus 1, 2, 3 standard deviations, for {0, 1, 2, 3, 5} available measurements.]
[Graphics produced with the stochastic Galerkin library sglib, written by E. Zander at TU Braunschweig.]
Conclusion
Introduced low-rank tensor methods to solve elliptic PDEs with uncertain coefficients.
Explained how to compute the maximum and the mean in a low-rank tensor format.
Derived a Bayesian update surrogate ϕ (as a linear, quadratic, cubic, etc.
approximation), i.e. computed the conditional expectation of q given the measurement y.
Applied the tensor completion method to the sparse measurement tensor in the likelihood.
Center for Uncertainty
Quantification
ation Logo Lock-up
30 / 30

Mais conteúdo relacionado

Mais procurados

Divergence clustering
Divergence clusteringDivergence clustering
Divergence clusteringFrank Nielsen
 
Bregman divergences from comparative convexity
Bregman divergences from comparative convexityBregman divergences from comparative convexity
Bregman divergences from comparative convexityFrank Nielsen
 
Mesh Processing Course : Active Contours
Mesh Processing Course : Active ContoursMesh Processing Course : Active Contours
Mesh Processing Course : Active ContoursGabriel Peyré
 
Efficient Analysis of high-dimensional data in tensor formats
Efficient Analysis of high-dimensional data in tensor formatsEfficient Analysis of high-dimensional data in tensor formats
Efficient Analysis of high-dimensional data in tensor formatsAlexander Litvinenko
 
Small updates of matrix functions used for network centrality
Small updates of matrix functions used for network centralitySmall updates of matrix functions used for network centrality
Small updates of matrix functions used for network centralityFrancesco Tudisco
 
Nodal Domain Theorem for the p-Laplacian on Graphs and the Related Multiway C...
Nodal Domain Theorem for the p-Laplacian on Graphs and the Related Multiway C...Nodal Domain Theorem for the p-Laplacian on Graphs and the Related Multiway C...
Nodal Domain Theorem for the p-Laplacian on Graphs and the Related Multiway C...Francesco Tudisco
 
Optimal L-shaped matrix reordering, aka graph's core-periphery
Optimal L-shaped matrix reordering, aka graph's core-peripheryOptimal L-shaped matrix reordering, aka graph's core-periphery
Optimal L-shaped matrix reordering, aka graph's core-peripheryFrancesco Tudisco
 
Computational Information Geometry: A quick review (ICMS)
Computational Information Geometry: A quick review (ICMS)Computational Information Geometry: A quick review (ICMS)
Computational Information Geometry: A quick review (ICMS)Frank Nielsen
 
A new Perron-Frobenius theorem for nonnegative tensors
A new Perron-Frobenius theorem for nonnegative tensorsA new Perron-Frobenius theorem for nonnegative tensors
A new Perron-Frobenius theorem for nonnegative tensorsFrancesco Tudisco
 
Tailored Bregman Ball Trees for Effective Nearest Neighbors
Tailored Bregman Ball Trees for Effective Nearest NeighborsTailored Bregman Ball Trees for Effective Nearest Neighbors
Tailored Bregman Ball Trees for Effective Nearest NeighborsFrank Nielsen
 
Hybrid dynamics in large-scale logistics networks
Hybrid dynamics in large-scale logistics networksHybrid dynamics in large-scale logistics networks
Hybrid dynamics in large-scale logistics networksMKosmykov
 
Lecture3 linear svm_with_slack
Lecture3 linear svm_with_slackLecture3 linear svm_with_slack
Lecture3 linear svm_with_slackStéphane Canu
 
Tutorial on Belief Propagation in Bayesian Networks
Tutorial on Belief Propagation in Bayesian NetworksTutorial on Belief Propagation in Bayesian Networks
Tutorial on Belief Propagation in Bayesian NetworksAnmol Dwivedi
 
Linear Discriminant Analysis (LDA) Under f-Divergence Measures
Linear Discriminant Analysis (LDA) Under f-Divergence MeasuresLinear Discriminant Analysis (LDA) Under f-Divergence Measures
Linear Discriminant Analysis (LDA) Under f-Divergence MeasuresAnmol Dwivedi
 
Tensor Train data format for uncertainty quantification
Tensor Train data format for uncertainty quantificationTensor Train data format for uncertainty quantification
Tensor Train data format for uncertainty quantificationAlexander Litvinenko
 
Murphy: Machine learning A probabilistic perspective: Ch.9
Murphy: Machine learning A probabilistic perspective: Ch.9Murphy: Machine learning A probabilistic perspective: Ch.9
Murphy: Machine learning A probabilistic perspective: Ch.9Daisuke Yoneoka
 

Mais procurados (20)

Divergence clustering
Divergence clusteringDivergence clustering
Divergence clustering
 
Bregman divergences from comparative convexity
Bregman divergences from comparative convexityBregman divergences from comparative convexity
Bregman divergences from comparative convexity
 
Mesh Processing Course : Active Contours
Mesh Processing Course : Active ContoursMesh Processing Course : Active Contours
Mesh Processing Course : Active Contours
 
Efficient Analysis of high-dimensional data in tensor formats
Efficient Analysis of high-dimensional data in tensor formatsEfficient Analysis of high-dimensional data in tensor formats
Efficient Analysis of high-dimensional data in tensor formats
 
Small updates of matrix functions used for network centrality
Small updates of matrix functions used for network centralitySmall updates of matrix functions used for network centrality
Small updates of matrix functions used for network centrality
 
Nodal Domain Theorem for the p-Laplacian on Graphs and the Related Multiway C...
Nodal Domain Theorem for the p-Laplacian on Graphs and the Related Multiway C...Nodal Domain Theorem for the p-Laplacian on Graphs and the Related Multiway C...
Nodal Domain Theorem for the p-Laplacian on Graphs and the Related Multiway C...
 
Optimal L-shaped matrix reordering, aka graph's core-periphery
Optimal L-shaped matrix reordering, aka graph's core-peripheryOptimal L-shaped matrix reordering, aka graph's core-periphery
Optimal L-shaped matrix reordering, aka graph's core-periphery
 
Computational Information Geometry: A quick review (ICMS)
Computational Information Geometry: A quick review (ICMS)Computational Information Geometry: A quick review (ICMS)
Computational Information Geometry: A quick review (ICMS)
 
A new Perron-Frobenius theorem for nonnegative tensors
A new Perron-Frobenius theorem for nonnegative tensorsA new Perron-Frobenius theorem for nonnegative tensors
A new Perron-Frobenius theorem for nonnegative tensors
 
Lecture5 kernel svm
Lecture5 kernel svmLecture5 kernel svm
Lecture5 kernel svm
 
Tailored Bregman Ball Trees for Effective Nearest Neighbors
Tailored Bregman Ball Trees for Effective Nearest NeighborsTailored Bregman Ball Trees for Effective Nearest Neighbors
Tailored Bregman Ball Trees for Effective Nearest Neighbors
 
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
 
Hybrid dynamics in large-scale logistics networks
Hybrid dynamics in large-scale logistics networksHybrid dynamics in large-scale logistics networks
Hybrid dynamics in large-scale logistics networks
 
Lecture3 linear svm_with_slack
Lecture3 linear svm_with_slackLecture3 linear svm_with_slack
Lecture3 linear svm_with_slack
 
Tutorial on Belief Propagation in Bayesian Networks
Tutorial on Belief Propagation in Bayesian NetworksTutorial on Belief Propagation in Bayesian Networks
Tutorial on Belief Propagation in Bayesian Networks
 
Linear Discriminant Analysis (LDA) Under f-Divergence Measures
Linear Discriminant Analysis (LDA) Under f-Divergence MeasuresLinear Discriminant Analysis (LDA) Under f-Divergence Measures
Linear Discriminant Analysis (LDA) Under f-Divergence Measures
 
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Appli...
 Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Appli... Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Appli...
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Appli...
 
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
 
Tensor Train data format for uncertainty quantification
Tensor Train data format for uncertainty quantificationTensor Train data format for uncertainty quantification
Tensor Train data format for uncertainty quantification
 
Murphy: Machine learning A probabilistic perspective: Ch.9
Murphy: Machine learning A probabilistic perspective: Ch.9Murphy: Machine learning A probabilistic perspective: Ch.9
Murphy: Machine learning A probabilistic perspective: Ch.9
 

Destaque

VRとわたし
VRとわたしVRとわたし
VRとわたしJun Iio
 
Definitive casts and dies
Definitive casts and diesDefinitive casts and dies
Definitive casts and dieshesham1964
 
4 organizacija-i-vlasnicka-struktura
4 organizacija-i-vlasnicka-struktura4 organizacija-i-vlasnicka-struktura
4 organizacija-i-vlasnicka-strukturaVladimir Stanković
 
How to find User Intent Changes for Google SEO
How to find User Intent Changes for Google SEOHow to find User Intent Changes for Google SEO
How to find User Intent Changes for Google SEOJameson (Jack) Treseler
 
Azure sql database escalabilidad
Azure sql database escalabilidadAzure sql database escalabilidad
Azure sql database escalabilidadEduardo Castro
 
Response Surface in Tensor Train format for Uncertainty Quantification
Response Surface in Tensor Train format for Uncertainty QuantificationResponse Surface in Tensor Train format for Uncertainty Quantification
Response Surface in Tensor Train format for Uncertainty QuantificationAlexander Litvinenko
 
Hierarchical matrix approximation of large covariance matrices
Hierarchical matrix approximation of large covariance matricesHierarchical matrix approximation of large covariance matrices
Hierarchical matrix approximation of large covariance matricesAlexander Litvinenko
 
Data sparse approximation of the Karhunen-Loeve expansion
Data sparse approximation of the Karhunen-Loeve expansionData sparse approximation of the Karhunen-Loeve expansion
Data sparse approximation of the Karhunen-Loeve expansionAlexander Litvinenko
 
CURRICULUM_VITAE_SubhasreeMondal.PDF
CURRICULUM_VITAE_SubhasreeMondal.PDFCURRICULUM_VITAE_SubhasreeMondal.PDF
CURRICULUM_VITAE_SubhasreeMondal.PDFSubhasree Mondal
 
Pulpotomiateraputica pptparablog-111201182605-phpapp01
Pulpotomiateraputica pptparablog-111201182605-phpapp01Pulpotomiateraputica pptparablog-111201182605-phpapp01
Pulpotomiateraputica pptparablog-111201182605-phpapp01Yolanda Charre
 
社会科学研究者からみた機械学習
社会科学研究者からみた機械学習社会科学研究者からみた機械学習
社会科学研究者からみた機械学習Jun Iio
 
El universo, la tierra, la vida y sus origenes.
El universo, la tierra, la vida y sus origenes.El universo, la tierra, la vida y sus origenes.
El universo, la tierra, la vida y sus origenes.seduca
 
IT in Healthcare
IT in HealthcareIT in Healthcare
IT in HealthcareNetApp
 
Aula 01-filosofia-parte 02 - v3
Aula 01-filosofia-parte 02 - v3Aula 01-filosofia-parte 02 - v3
Aula 01-filosofia-parte 02 - v3Anderson Favaro
 
Codemotion - Modern Branding en SharePoint desde todos los ángulos
Codemotion - Modern Branding en SharePoint desde todos los ángulosCodemotion - Modern Branding en SharePoint desde todos los ángulos
Codemotion - Modern Branding en SharePoint desde todos los ángulosSantiago Porras Rodríguez
 

Destaque (20)

VRとわたし
VRとわたしVRとわたし
VRとわたし
 
Why cuba trade delegation
Why cuba trade delegationWhy cuba trade delegation
Why cuba trade delegation
 
Definitive casts and dies
Definitive casts and diesDefinitive casts and dies
Definitive casts and dies
 
4 organizacija-i-vlasnicka-struktura
4 organizacija-i-vlasnicka-struktura4 organizacija-i-vlasnicka-struktura
4 organizacija-i-vlasnicka-struktura
 
02 brojni sistemi
02 brojni sistemi02 brojni sistemi
02 brojni sistemi
 
How to find User Intent Changes for Google SEO
How to find User Intent Changes for Google SEOHow to find User Intent Changes for Google SEO
How to find User Intent Changes for Google SEO
 
Azure sql database escalabilidad
Azure sql database escalabilidadAzure sql database escalabilidad
Azure sql database escalabilidad
 
Response Surface in Tensor Train format for Uncertainty Quantification
Response Surface in Tensor Train format for Uncertainty QuantificationResponse Surface in Tensor Train format for Uncertainty Quantification
Response Surface in Tensor Train format for Uncertainty Quantification
 
Hierarchical matrix approximation of large covariance matrices
Hierarchical matrix approximation of large covariance matricesHierarchical matrix approximation of large covariance matrices
Hierarchical matrix approximation of large covariance matrices
 
Data sparse approximation of the Karhunen-Loeve expansion
Data sparse approximation of the Karhunen-Loeve expansionData sparse approximation of the Karhunen-Loeve expansion
Data sparse approximation of the Karhunen-Loeve expansion
 
CURRICULUM_VITAE_SubhasreeMondal.PDF
CURRICULUM_VITAE_SubhasreeMondal.PDFCURRICULUM_VITAE_SubhasreeMondal.PDF
CURRICULUM_VITAE_SubhasreeMondal.PDF
 
Tecnologia
TecnologiaTecnologia
Tecnologia
 
Pulpotomiateraputica pptparablog-111201182605-phpapp01
Pulpotomiateraputica pptparablog-111201182605-phpapp01Pulpotomiateraputica pptparablog-111201182605-phpapp01
Pulpotomiateraputica pptparablog-111201182605-phpapp01
 
Modulador AM DSBFC
Modulador AM DSBFCModulador AM DSBFC
Modulador AM DSBFC
 
社会科学研究者からみた機械学習
社会科学研究者からみた機械学習社会科学研究者からみた機械学習
社会科学研究者からみた機械学習
 
El universo, la tierra, la vida y sus origenes.
El universo, la tierra, la vida y sus origenes.El universo, la tierra, la vida y sus origenes.
El universo, la tierra, la vida y sus origenes.
 
IT in Healthcare
IT in HealthcareIT in Healthcare
IT in Healthcare
 
TRABAJO UNIVERSIDAD
TRABAJO UNIVERSIDADTRABAJO UNIVERSIDAD
TRABAJO UNIVERSIDAD
 
Aula 01-filosofia-parte 02 - v3
Aula 01-filosofia-parte 02 - v3Aula 01-filosofia-parte 02 - v3
Aula 01-filosofia-parte 02 - v3
 
Codemotion - Modern Branding en SharePoint desde todos los ángulos
Codemotion - Modern Branding en SharePoint desde todos los ángulosCodemotion - Modern Branding en SharePoint desde todos los ángulos
Codemotion - Modern Branding en SharePoint desde todos los ángulos
 

Semelhante a Tensor completion for PDEs with uncertain coefficients and Bayesian update surrogate

A nonlinear approximation of the Bayesian Update formula
A nonlinear approximation of the Bayesian Update formulaA nonlinear approximation of the Bayesian Update formula
A nonlinear approximation of the Bayesian Update formulaAlexander Litvinenko
 
Connection between inverse problems and uncertainty quantification problems
Connection between inverse problems and uncertainty quantification problemsConnection between inverse problems and uncertainty quantification problems
Connection between inverse problems and uncertainty quantification problemsAlexander Litvinenko
 
Minimum mean square error estimation and approximation of the Bayesian update
Minimum mean square error estimation and approximation of the Bayesian updateMinimum mean square error estimation and approximation of the Bayesian update
Minimum mean square error estimation and approximation of the Bayesian updateAlexander Litvinenko
 
Linear Bayesian update surrogate for updating PCE coefficients
Linear Bayesian update surrogate for updating PCE coefficientsLinear Bayesian update surrogate for updating PCE coefficients
Linear Bayesian update surrogate for updating PCE coefficientsAlexander Litvinenko
 
Bayesian inference on mixtures
Bayesian inference on mixturesBayesian inference on mixtures
Bayesian inference on mixturesChristian Robert
 
Non-sampling functional approximation of linear and non-linear Bayesian Update
Non-sampling functional approximation of linear and non-linear Bayesian UpdateNon-sampling functional approximation of linear and non-linear Bayesian Update
Non-sampling functional approximation of linear and non-linear Bayesian UpdateAlexander Litvinenko
 
How to find a cheap surrogate to approximate Bayesian Update Formula and to a...
How to find a cheap surrogate to approximate Bayesian Update Formula and to a...How to find a cheap surrogate to approximate Bayesian Update Formula and to a...
How to find a cheap surrogate to approximate Bayesian Update Formula and to a...Alexander Litvinenko
 
Litv_Denmark_Weak_Supervised_Learning.pdf
Litv_Denmark_Weak_Supervised_Learning.pdfLitv_Denmark_Weak_Supervised_Learning.pdf
Litv_Denmark_Weak_Supervised_Learning.pdfAlexander Litvinenko
 
Tensor train to solve stochastic PDEs
Tensor train to solve stochastic PDEsTensor train to solve stochastic PDEs
Tensor train to solve stochastic PDEsAlexander Litvinenko
 
Solving inverse problems via non-linear Bayesian Update of PCE coefficients
Solving inverse problems via non-linear Bayesian Update of PCE coefficientsSolving inverse problems via non-linear Bayesian Update of PCE coefficients
Solving inverse problems via non-linear Bayesian Update of PCE coefficientsAlexander Litvinenko
 
Developing fast low-rank tensor methods for solving PDEs with uncertain coef...
Developing fast  low-rank tensor methods for solving PDEs with uncertain coef...Developing fast  low-rank tensor methods for solving PDEs with uncertain coef...
Developing fast low-rank tensor methods for solving PDEs with uncertain coef...Alexander Litvinenko
 
My presentation at University of Nottingham "Fast low-rank methods for solvin...
My presentation at University of Nottingham "Fast low-rank methods for solvin...My presentation at University of Nottingham "Fast low-rank methods for solvin...
My presentation at University of Nottingham "Fast low-rank methods for solvin...Alexander Litvinenko
 
Introduction to modern Variational Inference.
Introduction to modern Variational Inference.Introduction to modern Variational Inference.
Introduction to modern Variational Inference.Tomasz Kusmierczyk
 
Automatic Bayesian method for Numerical Integration
Automatic Bayesian method for Numerical Integration Automatic Bayesian method for Numerical Integration
Automatic Bayesian method for Numerical Integration Jagadeeswaran Rathinavel
 
Approximate Bayesian computation for the Ising/Potts model
Approximate Bayesian computation for the Ising/Potts modelApproximate Bayesian computation for the Ising/Potts model
Approximate Bayesian computation for the Ising/Potts modelMatt Moores
 

Semelhante a Tensor completion for PDEs with uncertain coefficients and Bayesian update surrogate (20)

A nonlinear approximation of the Bayesian Update formula
A nonlinear approximation of the Bayesian Update formulaA nonlinear approximation of the Bayesian Update formula
A nonlinear approximation of the Bayesian Update formula
 
Connection between inverse problems and uncertainty quantification problems
Connection between inverse problems and uncertainty quantification problemsConnection between inverse problems and uncertainty quantification problems
Connection between inverse problems and uncertainty quantification problems
 
Minimum mean square error estimation and approximation of the Bayesian update
Minimum mean square error estimation and approximation of the Bayesian updateMinimum mean square error estimation and approximation of the Bayesian update
Minimum mean square error estimation and approximation of the Bayesian update
 
Linear Bayesian update surrogate for updating PCE coefficients
Linear Bayesian update surrogate for updating PCE coefficientsLinear Bayesian update surrogate for updating PCE coefficients
Linear Bayesian update surrogate for updating PCE coefficients
 
Bayesian inference on mixtures
Bayesian inference on mixturesBayesian inference on mixtures
Bayesian inference on mixtures
 
Non-sampling functional approximation of linear and non-linear Bayesian Update
Non-sampling functional approximation of linear and non-linear Bayesian UpdateNon-sampling functional approximation of linear and non-linear Bayesian Update
Non-sampling functional approximation of linear and non-linear Bayesian Update
 
How to find a cheap surrogate to approximate Bayesian Update Formula and to a...
How to find a cheap surrogate to approximate Bayesian Update Formula and to a...How to find a cheap surrogate to approximate Bayesian Update Formula and to a...
How to find a cheap surrogate to approximate Bayesian Update Formula and to a...
 
Litv_Denmark_Weak_Supervised_Learning.pdf
Litv_Denmark_Weak_Supervised_Learning.pdfLitv_Denmark_Weak_Supervised_Learning.pdf
Litv_Denmark_Weak_Supervised_Learning.pdf
 
Tensor train to solve stochastic PDEs
Tensor train to solve stochastic PDEsTensor train to solve stochastic PDEs
Tensor train to solve stochastic PDEs
 
Litvinenko nlbu2016
Litvinenko nlbu2016Litvinenko nlbu2016
Litvinenko nlbu2016
 
Solving inverse problems via non-linear Bayesian Update of PCE coefficients
Solving inverse problems via non-linear Bayesian Update of PCE coefficientsSolving inverse problems via non-linear Bayesian Update of PCE coefficients
Solving inverse problems via non-linear Bayesian Update of PCE coefficients
 
QMC: Transition Workshop - Applying Quasi-Monte Carlo Methods to a Stochastic...
QMC: Transition Workshop - Applying Quasi-Monte Carlo Methods to a Stochastic...QMC: Transition Workshop - Applying Quasi-Monte Carlo Methods to a Stochastic...
QMC: Transition Workshop - Applying Quasi-Monte Carlo Methods to a Stochastic...
 
Developing fast low-rank tensor methods for solving PDEs with uncertain coef...
Developing fast  low-rank tensor methods for solving PDEs with uncertain coef...Developing fast  low-rank tensor methods for solving PDEs with uncertain coef...
Developing fast low-rank tensor methods for solving PDEs with uncertain coef...
 
Automatic bayesian cubature
Automatic bayesian cubatureAutomatic bayesian cubature
Automatic bayesian cubature
 
My presentation at University of Nottingham "Fast low-rank methods for solvin...
My presentation at University of Nottingham "Fast low-rank methods for solvin...My presentation at University of Nottingham "Fast low-rank methods for solvin...
My presentation at University of Nottingham "Fast low-rank methods for solvin...
 
Introduction to modern Variational Inference.
Introduction to modern Variational Inference.Introduction to modern Variational Inference.
Introduction to modern Variational Inference.
 
Automatic Bayesian method for Numerical Integration
Automatic Bayesian method for Numerical Integration Automatic Bayesian method for Numerical Integration
Automatic Bayesian method for Numerical Integration
 
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
 
Approximate Bayesian computation for the Ising/Potts model
Approximate Bayesian computation for the Ising/Potts modelApproximate Bayesian computation for the Ising/Potts model
Approximate Bayesian computation for the Ising/Potts model
 
MUMS Opening Workshop - Panel Discussion: Facts About Some Statisitcal Models...
MUMS Opening Workshop - Panel Discussion: Facts About Some Statisitcal Models...MUMS Opening Workshop - Panel Discussion: Facts About Some Statisitcal Models...
MUMS Opening Workshop - Panel Discussion: Facts About Some Statisitcal Models...
 

Mais de Alexander Litvinenko

litvinenko_Intrusion_Bari_2023.pdf
litvinenko_Intrusion_Bari_2023.pdflitvinenko_Intrusion_Bari_2023.pdf
litvinenko_Intrusion_Bari_2023.pdfAlexander Litvinenko
 
Density Driven Groundwater Flow with Uncertain Porosity and Permeability
Density Driven Groundwater Flow with Uncertain Porosity and PermeabilityDensity Driven Groundwater Flow with Uncertain Porosity and Permeability
Density Driven Groundwater Flow with Uncertain Porosity and PermeabilityAlexander Litvinenko
 
Uncertain_Henry_problem-poster.pdf
Uncertain_Henry_problem-poster.pdfUncertain_Henry_problem-poster.pdf
Uncertain_Henry_problem-poster.pdfAlexander Litvinenko
 
Litvinenko_RWTH_UQ_Seminar_talk.pdf
Litvinenko_RWTH_UQ_Seminar_talk.pdfLitvinenko_RWTH_UQ_Seminar_talk.pdf
Litvinenko_RWTH_UQ_Seminar_talk.pdfAlexander Litvinenko
 
Computing f-Divergences and Distances of High-Dimensional Probability Density...
Computing f-Divergences and Distances of High-Dimensional Probability Density...Computing f-Divergences and Distances of High-Dimensional Probability Density...
Computing f-Divergences and Distances of High-Dimensional Probability Density...Alexander Litvinenko
 
Computing f-Divergences and Distances of\\ High-Dimensional Probability Densi...
Computing f-Divergences and Distances of\\ High-Dimensional Probability Densi...Computing f-Divergences and Distances of\\ High-Dimensional Probability Densi...
Computing f-Divergences and Distances of\\ High-Dimensional Probability Densi...Alexander Litvinenko
 
Low rank tensor approximation of probability density and characteristic funct...
Low rank tensor approximation of probability density and characteristic funct...Low rank tensor approximation of probability density and characteristic funct...
Low rank tensor approximation of probability density and characteristic funct...Alexander Litvinenko
 
Identification of unknown parameters and prediction of missing values. Compar...
Identification of unknown parameters and prediction of missing values. Compar...Identification of unknown parameters and prediction of missing values. Compar...
Identification of unknown parameters and prediction of missing values. Compar...Alexander Litvinenko
 
Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...Alexander Litvinenko
 
Identification of unknown parameters and prediction with hierarchical matrice...
Identification of unknown parameters and prediction with hierarchical matrice...Identification of unknown parameters and prediction with hierarchical matrice...
Identification of unknown parameters and prediction with hierarchical matrice...Alexander Litvinenko
 
Low-rank tensor approximation (Introduction)
Low-rank tensor approximation (Introduction)Low-rank tensor approximation (Introduction)
Low-rank tensor approximation (Introduction)Alexander Litvinenko
 
Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...Alexander Litvinenko
 
Application of parallel hierarchical matrices for parameter inference and pre...
Application of parallel hierarchical matrices for parameter inference and pre...Application of parallel hierarchical matrices for parameter inference and pre...
Application of parallel hierarchical matrices for parameter inference and pre...Alexander Litvinenko
 
Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...Alexander Litvinenko
 
Propagation of Uncertainties in Density Driven Groundwater Flow
Propagation of Uncertainties in Density Driven Groundwater FlowPropagation of Uncertainties in Density Driven Groundwater Flow
Propagation of Uncertainties in Density Driven Groundwater FlowAlexander Litvinenko
 
Simulation of propagation of uncertainties in density-driven groundwater flow
Simulation of propagation of uncertainties in density-driven groundwater flowSimulation of propagation of uncertainties in density-driven groundwater flow
Simulation of propagation of uncertainties in density-driven groundwater flowAlexander Litvinenko
 
Approximation of large covariance matrices in statistics
Approximation of large covariance matrices in statisticsApproximation of large covariance matrices in statistics
Approximation of large covariance matrices in statisticsAlexander Litvinenko
 
Semi-Supervised Regression using Cluster Ensemble
Semi-Supervised Regression using Cluster EnsembleSemi-Supervised Regression using Cluster Ensemble
Semi-Supervised Regression using Cluster EnsembleAlexander Litvinenko
 

Mais de Alexander Litvinenko (20)

litvinenko_Intrusion_Bari_2023.pdf
litvinenko_Intrusion_Bari_2023.pdflitvinenko_Intrusion_Bari_2023.pdf
litvinenko_Intrusion_Bari_2023.pdf
 
Density Driven Groundwater Flow with Uncertain Porosity and Permeability
Density Driven Groundwater Flow with Uncertain Porosity and PermeabilityDensity Driven Groundwater Flow with Uncertain Porosity and Permeability
Density Driven Groundwater Flow with Uncertain Porosity and Permeability
 
litvinenko_Gamm2023.pdf
litvinenko_Gamm2023.pdflitvinenko_Gamm2023.pdf
litvinenko_Gamm2023.pdf
 
Litvinenko_Poster_Henry_22May.pdf
Litvinenko_Poster_Henry_22May.pdfLitvinenko_Poster_Henry_22May.pdf
Litvinenko_Poster_Henry_22May.pdf
 
Uncertain_Henry_problem-poster.pdf
Uncertain_Henry_problem-poster.pdfUncertain_Henry_problem-poster.pdf
Uncertain_Henry_problem-poster.pdf
 
Litvinenko_RWTH_UQ_Seminar_talk.pdf
Litvinenko_RWTH_UQ_Seminar_talk.pdfLitvinenko_RWTH_UQ_Seminar_talk.pdf
Litvinenko_RWTH_UQ_Seminar_talk.pdf
 
Computing f-Divergences and Distances of High-Dimensional Probability Density...
Computing f-Divergences and Distances of High-Dimensional Probability Density...Computing f-Divergences and Distances of High-Dimensional Probability Density...
Computing f-Divergences and Distances of High-Dimensional Probability Density...
 
Computing f-Divergences and Distances of\\ High-Dimensional Probability Densi...
Computing f-Divergences and Distances of\\ High-Dimensional Probability Densi...Computing f-Divergences and Distances of\\ High-Dimensional Probability Densi...
Computing f-Divergences and Distances of\\ High-Dimensional Probability Densi...
 
Low rank tensor approximation of probability density and characteristic funct...
Low rank tensor approximation of probability density and characteristic funct...Low rank tensor approximation of probability density and characteristic funct...
Low rank tensor approximation of probability density and characteristic funct...
 
Identification of unknown parameters and prediction of missing values. Compar...
Identification of unknown parameters and prediction of missing values. Compar...Identification of unknown parameters and prediction of missing values. Compar...
Identification of unknown parameters and prediction of missing values. Compar...
 
Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...
 
Identification of unknown parameters and prediction with hierarchical matrice...
Identification of unknown parameters and prediction with hierarchical matrice...Identification of unknown parameters and prediction with hierarchical matrice...
Identification of unknown parameters and prediction with hierarchical matrice...
 
Low-rank tensor approximation (Introduction)
Low-rank tensor approximation (Introduction)Low-rank tensor approximation (Introduction)
Low-rank tensor approximation (Introduction)
 
Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...
 
Application of parallel hierarchical matrices for parameter inference and pre...
Application of parallel hierarchical matrices for parameter inference and pre...Application of parallel hierarchical matrices for parameter inference and pre...
Application of parallel hierarchical matrices for parameter inference and pre...
 
Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...
 
Propagation of Uncertainties in Density Driven Groundwater Flow
Propagation of Uncertainties in Density Driven Groundwater FlowPropagation of Uncertainties in Density Driven Groundwater Flow
Propagation of Uncertainties in Density Driven Groundwater Flow
 
Simulation of propagation of uncertainties in density-driven groundwater flow
Simulation of propagation of uncertainties in density-driven groundwater flowSimulation of propagation of uncertainties in density-driven groundwater flow
Simulation of propagation of uncertainties in density-driven groundwater flow
 
Approximation of large covariance matrices in statistics
Approximation of large covariance matrices in statisticsApproximation of large covariance matrices in statistics
Approximation of large covariance matrices in statistics
 
Semi-Supervised Regression using Cluster Ensemble
Semi-Supervised Regression using Cluster EnsembleSemi-Supervised Regression using Cluster Ensemble
Semi-Supervised Regression using Cluster Ensemble
 

Último

1029-Danh muc Sach Giao Khoa khoi 6.pdf
1029-Danh muc Sach Giao Khoa khoi  6.pdf1029-Danh muc Sach Giao Khoa khoi  6.pdf
1029-Danh muc Sach Giao Khoa khoi 6.pdfQucHHunhnh
 
A Critique of the Proposed National Education Policy Reform
A Critique of the Proposed National Education Policy ReformA Critique of the Proposed National Education Policy Reform
A Critique of the Proposed National Education Policy ReformChameera Dedduwage
 
Accessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impactAccessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impactdawncurless
 
IGNOU MSCCFT and PGDCFT Exam Question Pattern: MCFT003 Counselling and Family...
IGNOU MSCCFT and PGDCFT Exam Question Pattern: MCFT003 Counselling and Family...IGNOU MSCCFT and PGDCFT Exam Question Pattern: MCFT003 Counselling and Family...
IGNOU MSCCFT and PGDCFT Exam Question Pattern: MCFT003 Counselling and Family...PsychoTech Services
 
Key note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdfKey note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdfAdmir Softic
 
9548086042 for call girls in Indira Nagar with room service
9548086042  for call girls in Indira Nagar  with room service9548086042  for call girls in Indira Nagar  with room service
9548086042 for call girls in Indira Nagar with room servicediscovermytutordmt
 
The basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptxThe basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptxheathfieldcps1
 
Interactive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationInteractive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationnomboosow
 
Measures of Dispersion and Variability: Range, QD, AD and SD
Measures of Dispersion and Variability: Range, QD, AD and SDMeasures of Dispersion and Variability: Range, QD, AD and SD
Measures of Dispersion and Variability: Range, QD, AD and SDThiyagu K
 
Web & Social Media Analytics Previous Year Question Paper.pdf
Web & Social Media Analytics Previous Year Question Paper.pdfWeb & Social Media Analytics Previous Year Question Paper.pdf
Web & Social Media Analytics Previous Year Question Paper.pdfJayanti Pande
 
BAG TECHNIQUE Bag technique-a tool making use of public health bag through wh...
BAG TECHNIQUE Bag technique-a tool making use of public health bag through wh...BAG TECHNIQUE Bag technique-a tool making use of public health bag through wh...
BAG TECHNIQUE Bag technique-a tool making use of public health bag through wh...Sapna Thakur
 
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...Krashi Coaching
 
Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...
Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...
Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...fonyou31
 
fourth grading exam for kindergarten in writing
fourth grading exam for kindergarten in writingfourth grading exam for kindergarten in writing
fourth grading exam for kindergarten in writingTeacherCyreneCayanan
 
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxSOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxiammrhaywood
 
Arihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdfArihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdfchloefrazer622
 
General AI for Medical Educators April 2024
General AI for Medical Educators April 2024General AI for Medical Educators April 2024
General AI for Medical Educators April 2024Janet Corral
 

Último (20)

1029-Danh muc Sach Giao Khoa khoi 6.pdf
1029-Danh muc Sach Giao Khoa khoi  6.pdf1029-Danh muc Sach Giao Khoa khoi  6.pdf
1029-Danh muc Sach Giao Khoa khoi 6.pdf
 
A Critique of the Proposed National Education Policy Reform
A Critique of the Proposed National Education Policy ReformA Critique of the Proposed National Education Policy Reform
A Critique of the Proposed National Education Policy Reform
 
Accessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impactAccessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impact
 
IGNOU MSCCFT and PGDCFT Exam Question Pattern: MCFT003 Counselling and Family...
IGNOU MSCCFT and PGDCFT Exam Question Pattern: MCFT003 Counselling and Family...IGNOU MSCCFT and PGDCFT Exam Question Pattern: MCFT003 Counselling and Family...
IGNOU MSCCFT and PGDCFT Exam Question Pattern: MCFT003 Counselling and Family...
 
Key note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdfKey note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdf
 
9548086042 for call girls in Indira Nagar with room service
9548086042  for call girls in Indira Nagar  with room service9548086042  for call girls in Indira Nagar  with room service
9548086042 for call girls in Indira Nagar with room service
 
The basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptxThe basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptx
 
Interactive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationInteractive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communication
 
Measures of Dispersion and Variability: Range, QD, AD and SD
Measures of Dispersion and Variability: Range, QD, AD and SDMeasures of Dispersion and Variability: Range, QD, AD and SD
Measures of Dispersion and Variability: Range, QD, AD and SD
 
Web & Social Media Analytics Previous Year Question Paper.pdf
Web & Social Media Analytics Previous Year Question Paper.pdfWeb & Social Media Analytics Previous Year Question Paper.pdf
Web & Social Media Analytics Previous Year Question Paper.pdf
 
BAG TECHNIQUE Bag technique-a tool making use of public health bag through wh...
BAG TECHNIQUE Bag technique-a tool making use of public health bag through wh...BAG TECHNIQUE Bag technique-a tool making use of public health bag through wh...
BAG TECHNIQUE Bag technique-a tool making use of public health bag through wh...
 
Mattingly "AI & Prompt Design: Structured Data, Assistants, & RAG"
Mattingly "AI & Prompt Design: Structured Data, Assistants, & RAG"Mattingly "AI & Prompt Design: Structured Data, Assistants, & RAG"
Mattingly "AI & Prompt Design: Structured Data, Assistants, & RAG"
 
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
 
Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...
Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...
Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...
 
fourth grading exam for kindergarten in writing
fourth grading exam for kindergarten in writingfourth grading exam for kindergarten in writing
fourth grading exam for kindergarten in writing
 
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxSOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
 
INDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptx
INDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptxINDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptx
INDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptx
 
Mattingly "AI & Prompt Design: The Basics of Prompt Design"
Mattingly "AI & Prompt Design: The Basics of Prompt Design"Mattingly "AI & Prompt Design: The Basics of Prompt Design"
Mattingly "AI & Prompt Design: The Basics of Prompt Design"
 
Arihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdfArihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdf
 
General AI for Medical Educators April 2024
General AI for Medical Educators April 2024General AI for Medical Educators April 2024
General AI for Medical Educators April 2024
 

Tensor completion for PDEs with uncertain coefficients and Bayesian update surrogate

  • 9. Definition of a tensor of order d. A tensor of order d is a multidimensional array over a d-tuple index set I = I_1 × · · · × I_d: A = [a_{i_1...i_d} : i_µ ∈ I_µ] ∈ R^I, I_µ = {1, ..., n_µ}, µ = 1, ..., d. A is an element of the linear space V_n = ⊗_{µ=1}^d V_µ, V_µ = R^{I_µ}, equipped with the Euclidean scalar product ⟨·, ·⟩ : V_n × V_n → R, defined as ⟨A, B⟩ := Σ_{(i_1...i_d)∈I} a_{i_1...i_d} b_{i_1...i_d} for A, B ∈ V_n.
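To make the storage figures on the previous slide concrete, here is a minimal NumPy sketch (not from the talk; the name cp_entry and the sizes are illustrative) that stores an order-d tensor in the canonical (CP) format as d factor matrices and evaluates a single entry on demand:

```python
import numpy as np

def cp_entry(factors, idx):
    """Entry A[i1,...,id] of a canonical rank-R tensor.

    factors: list of d matrices U_mu of shape (n_mu, R);
    the tensor is A = sum_{r=1}^R  U_1[:, r] x ... x U_d[:, r].
    """
    # elementwise product over modes of the selected rows, then sum over the rank index
    prod = np.ones(factors[0].shape[1])
    for mu, U in enumerate(factors):
        prod = prod * U[idx[mu], :]
    return prod.sum()

# storage comparison for n = 100, d = 5, rank R = 10
n, d, R = 100, 5, 10
factors = [np.random.rand(n, R) for _ in range(d)]
print("full tensor entries:", n**d)        # 10^10 entries
print("CP format entries  :", d * n * R)   # 5 * 100 * 10 = 5000 entries
print("A[3,1,4,1,5] =", cp_entry(factors, (3, 1, 4, 1, 5)))
```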
  • 10. Discretization of the elliptic PDE. We now discretize the diffusion equation with uncertain coefficients.
  • 11. Karhunen-Loève and Polynomial Chaos Expansions. Apply the Karhunen-Loève Expansion (KLE): κ(x, ω) = κ_0(x) + Σ_{j=1}^∞ κ_j g_j(x) ξ_j(θ(ω)), where θ = θ(ω) = (θ_1(ω), θ_2(ω), ...) and ξ_j(θ) = (1/κ_j) ∫_G (κ(x, ω) − κ_0(x)) g_j(x) dx. Apply the Polynomial Chaos Expansion (PCE) κ(x, ω) = Σ_α κ^(α)(x) H_α(θ) and compute ξ_j(θ) = Σ_{α∈J} ξ_j^(α) H_α(θ), where ξ_j^(α) = (1/κ_j) ∫_G κ^(α)(x) g_j(x) dx. Further compute the factorization ξ_j^(α) ≈ Σ_{ℓ=1}^s (ξ_ℓ)_j Π_{k=1}^∞ (ξ_{ℓ,k})_{α_k}.
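For illustration of the KLE truncation step, the following is a small sketch under simplified assumptions (Gaussian field, arbitrary grid size and covariance length; not the code used for the experiments): discretize a Gaussian covariance on a 1D grid, take its leading eigenpairs, and keep enough modes to capture most of the variance.

```python
import numpy as np

# 1D grid on G = [0, 1] and a Gaussian covariance cov(x, y) = sigma^2 exp(-|x-y|^2 / l^2)
N, sigma, corr_len = 200, 1.0, 0.1
x = np.linspace(0.0, 1.0, N)
C = sigma**2 * np.exp(-(x[:, None] - x[None, :])**2 / corr_len**2)

# discrete KLE: eigenpairs of the covariance matrix, sorted by decreasing eigenvalue
lam, g = np.linalg.eigh(C)
lam, g = lam[::-1], g[:, ::-1]

# truncate: keep enough modes to capture, say, 99% of the total variance
M = int(np.searchsorted(np.cumsum(lam) / lam.sum(), 0.99)) + 1
print("KLE terms kept:", M)

# one realization of the truncated Gaussian field gamma and a positive coefficient kappa
theta = np.random.randn(M)
gamma = g[:, :M] @ (np.sqrt(lam[:M]) * theta)   # mean-zero Gaussian field on the grid
kappa = np.exp(gamma)                           # positivity via the exponential
```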
  • 12. Final discretized stochastic PDE. Ku = f, where
    K := Σ_{ℓ=1}^s K_ℓ ⊗ ⊗_{µ=1}^M Δ_{ℓµ}, with K_ℓ ∈ R^{N×N}, Δ_{ℓµ} ∈ R^{R_µ×R_µ},
    u := Σ_{j=1}^r u_j ⊗ ⊗_{µ=1}^M u_{jµ}, with u_j ∈ R^N, u_{jµ} ∈ R^{R_µ},
    f := Σ_{k=1}^R f_k ⊗ ⊗_{µ=1}^M g_{kµ}, with f_k ∈ R^N and g_{kµ} ∈ R^{R_µ}.
    (Wähnert, Espig, Hackbusch, Litvinenko, Matthies, 2011). Examples of stochastic Galerkin matrices: [figures on slide].
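The point of this Kronecker structure is that the Galerkin operator is never formed entry-wise; each summand is applied mode by mode. A toy NumPy sketch of such a term-by-term matrix-vector product (not the authors' code; the function name apply_galerkin and the sizes are illustrative):

```python
import numpy as np

def apply_galerkin(K_terms, u_rank1):
    """Apply K = sum_l kron(K_l, D_{l,1}, ..., D_{l,M}) to a rank-one vector.

    K_terms : list of tuples (K_l, [D_{l,1}, ..., D_{l,M}])
    u_rank1 : tuple (u0, [u1, ..., uM]) representing u0 x u1 x ... x uM
    Returns K u as a list of rank-one terms, one per summand l.
    """
    u0, u_stoch = u_rank1
    result = []
    for K_l, D_list in K_terms:
        factors = [D @ u for D, u in zip(D_list, u_stoch)]
        result.append((K_l @ u0, factors))   # (K_l u0) x (D_{l,1} u1) x ... x (D_{l,M} uM)
    return result

# small random example: N = 4 spatial dofs, M = 2 stochastic modes, s = 3 terms
N, R1, R2, s = 4, 3, 2, 3
K_terms = [(np.random.rand(N, N), [np.random.rand(R1, R1), np.random.rand(R2, R2)])
           for _ in range(s)]
u = (np.random.rand(N), [np.random.rand(R1), np.random.rand(R2)])
Ku = apply_galerkin(K_terms, u)
print("number of rank-one terms in K u:", len(Ku))
```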
  • 13. Part II: Bayesian update. We discuss the Gauss-Markov-Kalman filter for the Bayesian updating of parameters in a computational model.
  • 14. Mathematical setup. Consider K(u; q) = f ⇒ u = S(f; q), where S is the solution operator. The operator depends on parameters q ∈ Q, hence the state u ∈ U is also a function of q. Measurement operator Y with values in Y: y = Y(q; u) = Y(q, S(f; q)). Examples of measurements: y(ω) = ∫_{D_0} u(ω, x) dx, or u at a few points.
  • 15. Random QoI. With the state u a random variable, the quantity to be measured, y(ω) = Y(q(ω), u(ω)), is also uncertain, a random variable. Noisy data: ŷ + ε(ω), where ŷ is the "true" value and ε(ω) a random error. Forecast of the measurement: z(ω) = y(ω) + ε(ω).
  • 16. Conditional probability and expectation. Classically, Bayes' theorem gives the conditional probability P(I_q|M_z) = P(M_z|I_q) P(I_q) / P(M_z) (or π_q(q|z) = (p(z|q)/Z_s) p_q(q)); the expectation with this posterior measure is the conditional expectation. Kolmogorov starts from the conditional expectation E(·|M_z), and from it obtains the conditional probability via P(I_q|M_z) = E(χ_{I_q}|M_z).
  • 17. Conditional expectation. The conditional expectation is defined as the orthogonal projection onto the closed subspace L2(Ω, P, σ(z)): E(q|σ(z)) := P_{Q∞} q = argmin_{q̃ ∈ L2(Ω,P,σ(z))} ‖q − q̃‖²_{L2}. The subspace Q∞ := L2(Ω, P, σ(z)) represents the available information. The update, also called the assimilated value, q_a(ω) := P_{Q∞} q = E(q|σ(z)), is a Q-valued RV and represents the new state of knowledge after the measurement. Doob-Dynkin: Q∞ = {ϕ ∈ Q : ϕ = φ ◦ z, φ measurable}.
  • 18. Numerical computation of the NLBU. Look for ϕ such that q(ξ) = ϕ(z(ξ)), with z(ξ) = y(ξ) + ε(ω): approximate ϕ ≈ ϕ̃ = Σ_{α∈J_p} ϕ_α Φ_α(z(ξ)) and minimize ‖q(ξ) − ϕ̃(z(ξ))‖²_{L2}, where the Φ_α are polynomials (e.g. Hermite, Laguerre, Chebyshev or others). Taking derivatives with respect to ϕ_α: ∂/∂ϕ_α ⟨q(ξ) − ϕ̃(z(ξ)), q(ξ) − ϕ̃(z(ξ))⟩ = 0 ∀α ∈ J_p. Inserting the representation of ϕ̃, we obtain:
  • 19. Numerical computation of the NLBU.
    ∂/∂ϕ_α E[ q²(ξ) − 2 Σ_{β∈J} q ϕ_β Φ_β(z) + Σ_{β,γ∈J} ϕ_β ϕ_γ Φ_β(z) Φ_γ(z) ]
    = 2 E[ −q Φ_α(z) + Σ_{β∈J} ϕ_β Φ_β(z) Φ_α(z) ]
    = 2 ( Σ_{β∈J} E[Φ_β(z) Φ_α(z)] ϕ_β − E[q Φ_α(z)] ) = 0 ∀α ∈ J.
  • 20. Numerical computation of the NLBU. Rewriting the last sum in matrix form gives the linear system A ϕ = b for the coefficients ϕ_β, with entries A_{αβ} = E[Φ_α(z(ξ)) Φ_β(z(ξ))] and right-hand side b_α = E[q(ξ) Φ_α(z(ξ))], where α, β ∈ J and A is of size |J| × |J|.
  • 21. Numerical computation of the NLBU. The system above can be written in the compact form [Φ] diag(w_0, ..., w_N) [Φ]^T (ϕ_β)_β = [Φ] (w_0 q(ξ_0), ..., w_N q(ξ_N))^T, where [Φ] ∈ R^{|J|×N} and diag(w_0, ..., w_N) ∈ R^{N×N}. Solving this system yields the vector of coefficients (ϕ_β)_β for all β. Finally, the assimilated parameter q_a is
    q_a = q_f + ϕ̃(ŷ) − ϕ̃(z),   (2)
    with z(ξ) = y(ξ) + ε(ω) and ϕ̃ = Σ_{β∈J_p} ϕ_β Φ_β(z(ξ)).
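A minimal sampling-based sketch of this construction (not sglib; the toy prior, the monomial basis in place of Hermite polynomials, and all names are illustrative assumptions): draw samples of q and of the noisy forecast z, assemble the normal equations E[Φ_α Φ_β] ϕ = E[q Φ_α] by Monte Carlo, solve for the coefficients, and apply the update (2).

```python
import numpy as np

rng = np.random.default_rng(0)

# toy prior and measurement model: q ~ N(1, 0.5^2), y = q^2, noise eps ~ N(0, 0.1^2)
Ns = 5000
q = 1.0 + 0.5 * rng.standard_normal(Ns)      # prior (forecast) samples q_f
z = q**2 + 0.1 * rng.standard_normal(Ns)     # forecast of the measurement z = y + eps

# polynomial basis Phi_alpha(z); here simple monomials up to degree p
p = 3
Phi = np.vander(z, p + 1, increasing=True)   # shape (Ns, |J|)

# normal equations  E[Phi_a Phi_b] phi = E[q Phi_a]  (Monte Carlo, equal weights)
A = Phi.T @ Phi / Ns
b = Phi.T @ q / Ns
phi = np.linalg.solve(A, b)

def phi_tilde(zz):
    """Evaluate the surrogate at measurement values zz."""
    return np.vander(np.atleast_1d(zz), p + 1, increasing=True) @ phi

# update (2): q_a = q_f + phi_tilde(y_hat) - phi_tilde(z), for an actual measurement y_hat
y_hat = 1.3
q_a = q + (phi_tilde(y_hat) - phi_tilde(z))
print("prior mean %.3f -> assimilated mean %.3f" % (q.mean(), q_a.mean()))
```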
  • 22. Explanation of the "Bayesian update surrogate" from E. Zander. Let the stochastic model of the measurement be given by y = M(q) + ε, with ε the measurement noise (3). The best estimator ϕ̃ for q given z is ϕ̃ = argmin_ϕ E[‖q(·) − ϕ(z(·))‖²_2] (4). The best estimate (or predictor) of q given the measurement model is q_M(ξ) = ϕ̃(z(ξ)) (5). The remainder, i.e. the difference between q and q_M, is q⊥_M(ξ) = q(ξ) − q_M(ξ) (6). Due to the minimisation property of the MMSE estimator, the remainder is orthogonal to q_M(ξ), i.e. cov(q⊥_M, q_M) = 0.
  • 23. In other words, q(ξ) = q_M(ξ) + q⊥_M(ξ) (7) yields an orthogonal decomposition of q. Given an actual measurement ŷ, the prediction is q̂ = ϕ̃(ŷ). The part q_M of q can be "collapsed" to q̂. The updated stochastic model q′ is thus given by q′(ξ) = q̂ + q⊥_M(ξ) (8), i.e. q′(ξ) = q(ξ) + (ϕ̃(ŷ) − ϕ̃(z(ξ))) (9).
  • 24. Example: 1D elliptic PDE with uncertain coefficients. −div(κ(x, ξ) ∇u(x, ξ)) = f(x, ξ), x ∈ [0, 1], with random Dirichlet b.c. g(0, ξ) and g(1, ξ). 3 measurements: u(0.3) = 22 with s.d. 0.2, u(0.5) = 28 with s.d. 0.3, u(0.8) = 18 with s.d. 0.3. κ(x, ξ): N = 100 dofs, M = 5, 35 KLE terms, beta distribution for κ, Gaussian cov_κ, cov. length 0.1, multivariate Hermite polynomials of order p_κ = 2. RHS f(x, ξ): M_f = 5, 40 KLE terms, beta distribution for f, exponential cov_f, cov. length 0.03, multivariate Hermite polynomials of order p_f = 2. B.c. g(x, ξ): M_g = 2, 2 KLE terms, normal distribution for g, Gaussian cov_g, cov. length 10, multivariate Hermite polynomials of order p_g = 1; p_ϕ = 3 and p_u = 3.
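For orientation only, here is a minimal deterministic solver for a single coefficient sample of the 1D problem above, using finite differences and homogeneous Dirichlet data instead of the random boundary conditions of the experiment; the coefficient realization and all parameters are illustrative.

```python
import numpy as np

def solve_1d_diffusion(kappa, f, h):
    """Solve -(kappa u')' = f on a uniform grid with u = 0 at both ends.

    kappa : coefficient at the N+1 grid points, f : right-hand side at interior points,
    h : mesh width. Standard 3-point flux discretization.
    """
    n = len(f)                                  # number of interior points
    k_half = 0.5 * (kappa[:-1] + kappa[1:])     # kappa at cell midpoints, length n+1
    main = (k_half[:-1] + k_half[1:]) / h**2
    off = -k_half[1:-1] / h**2
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.solve(A, f)

N = 100
x = np.linspace(0.0, 1.0, N + 1)
kappa = np.exp(0.3 * np.sin(2 * np.pi * x))     # one fixed, positive coefficient realization
u = solve_1d_diffusion(kappa, np.ones(N - 1), 1.0 / N)
print("max of u:", u.max())
```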
  • 25. Example: updating of the solution u. Figure: original and updated solutions, mean value plus/minus 1, 2, 3 standard deviations [graphics are built in the stochastic Galerkin library sglib, written by E. Zander at TU Braunschweig].
  • 26. Example: updating of the parameter. Figure: original and updated parameter κ.
  • 27. Part III: Tensor completion. We now consider how to apply tensor completion techniques to the Bayesian update. In the Bayesian update surrogate, the assimilated PCE coefficients of the parameter q_a are obtained as NEW gPCE coeffs = OLD gPCE coeffs + gPCE of the update; all ingredients are tensors: q_a = q_f + ϕ̃(ŷ) − ϕ̃(z), (10) with z(ξ) = y(ξ) + ε(ω), q_a ∈ R^{N×#J_a}, N up to 10^7, #J_a > 1000, #J_f < #J_a.
  • 28. Problem setup: tensor completion. The problem is to fit a low-rank tensor A ∈ R^I, I := I_1 × ... × I_d, I_µ = {1, ..., n_µ}, µ ∈ D := {1, ..., d}, to given data points {M_i ∈ R | i ∈ P}, P ⊂ I, #P ≥ Σ_{µ=1}^d n_µ, (11) by minimizing the distance between the given values (M_i)_{i∈P} and the approximations (A_i)_{i∈P}: A = argmin_{Ã∈T} Σ_{i∈P} (M_i − Ã_i)². (12) Remark: here we assume that the target tensor M admits a low-rank approximation ‖M − M̃‖ ≤ ε, ε ≥ 0, where M̃ fulfills certain rank bounds and T is the low-rank format under consideration.
  • 29. Problem setup: tensor completion. L. Grasedyck et al., 2016: hierarchical and tensor train formats; W. Austin, T. Kolda, D. Kressner, M. Steinlechner et al.: CP format. Goal: reconstruct the tensor from O(log N) samples. Methods (a toy sketch of the ALS idea follows below): 1. ALS, inspired by the LMaFit method for matrix completion, complexity O(r^4 d #P). 2. Alternating directions fitting (ADF), complexity O(r^2 d #P).
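The following is a toy NumPy sketch of the alternating-least-squares idea for completion in the CP format (illustrative only; the cited ALS and ADF implementations for hierarchical and TT formats are considerably more elaborate): fix all factor matrices except one, and fit that factor to the observed entries by small regularized least-squares problems.

```python
import numpy as np

def als_completion(shape, idx, vals, R=2, n_sweeps=50, reg=1e-8, seed=0):
    """Fit a rank-R CP tensor to the observed entries vals at positions idx.

    shape : (n1, ..., nd), idx : integer array (#P, d), vals : array (#P,).
    Returns the list of factor matrices U_mu of shape (n_mu, R).
    """
    d = len(shape)
    rng = np.random.default_rng(seed)
    U = [rng.standard_normal((n, R)) for n in shape]
    for _ in range(n_sweeps):
        for mu in range(d):
            # design rows: Hadamard product of the other factors at the observed indices
            W = np.ones((len(vals), R))
            for nu in range(d):
                if nu != mu:
                    W *= U[nu][idx[:, nu], :]
            # one small regularized LS problem per slice i of mode mu
            for i in range(shape[mu]):
                rows = idx[:, mu] == i
                if not rows.any():
                    continue
                Wi, vi = W[rows], vals[rows]
                U[mu][i] = np.linalg.solve(Wi.T @ Wi + reg * np.eye(R), Wi.T @ vi)
    return U

# small test: complete a random rank-2 tensor of shape 10 x 10 x 10 from ~20% of its entries
shape, R = (10, 10, 10), 2
G = [np.random.rand(n, R) for n in shape]
full = np.einsum('ir,jr,kr->ijk', *G)
mask = np.random.rand(*shape) < 0.2
idx = np.argwhere(mask)
U = als_completion(shape, idx, full[mask], R=R)
approx = np.einsum('ir,jr,kr->ijk', *U)
print("relative error on all entries:", np.linalg.norm(approx - full) / np.linalg.norm(full))
```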
  • 30. Numerical experiments for SPDEs: tensor completion [L. Grasedyck, M. Kluge, S. Kraemer, SIAM J. Sci. Comput., Vol. 37/5, 2016]. The ALS and ADF methods were applied to −div(κ(x, ω) ∇u(x, ω)) = 1 in D × Ω, u(x, ω) = 0 on ∂D × Ω, (13) with D = [−1, 1]. The goal is to determine u(ω) := ∫_D u(x, ω) dx. FE with 50 dofs, KLE with d terms, d stochastically independent RVs. This yields the tensor A_{i_1...i_d} := u(i_1, ..., i_d), n = 100, d = 5, slice density C_SD = 6. Software (MATLAB) is available.
  • 31. Example: updating of the solution u. Figure: original and updated solutions, mean value plus/minus 1, 2, 3 standard deviations, for {0, 1, 2, 3, 5} available measurements [graphics are built in the stochastic Galerkin library sglib, written by E. Zander at TU Braunschweig].
  • 32. Conclusion. Introduced low-rank tensor methods to solve elliptic PDEs with uncertain coefficients; explained how to compute the maximum and the mean in a low-rank tensor format; derived the Bayesian update surrogate ϕ (as a linear, quadratic, cubic, etc. approximation), i.e. computed the conditional expectation of q given the measurement y; applied the tensor completion method to the sparse measurement tensor in the likelihood.