On the generation of random fields
Ian Sloan
i.sloan@unsw.edu.au
University of New South Wales, Sydney, Australia
Joint with I Graham & R Scheichl (Bath),
F Kuo (UNSW) & D Nuyens (KU Leuven)
SAMSI, September 1, 2017
Suppose we want to generate a Gaussian random field over this
L-shaped region with a hole:
The question is: how to generate the random field?
Application: PDE with random field as input.
Then QMC to find expected value of linear functionals
Gaussian random fields
Z(x) = Z(x, ω), for x ∈ D ⊂ Rᵈ, is a Gaussian random field if
for each x ∈ D, Z(x) is a normally distributed random variable;
the field is fully determined by knowing its mean
Z̄(x) := E[Z(x)]
and its covariance function
r(x, y) := E[(Z(x) − Z̄(x))(Z(y) − Z̄(y))].
For simplicity we will consider throughout mean-zero fields, that is
Z̄(x) = 0, x ∈ D  ⇒  r(x, y) = E[Z(x)Z(y)].
Examples of 2d covariance functions
r(x, y) = σ² exp( −(|x₁ − y₁|² + |x₂ − y₂|²)/λ² ),
– very smooth (in 1-d this is σ² exp(−|x − y|²/λ²) – “Gaussian”).
Here σ² is the variance, and λ is the correlation length.
r(x, y) = σ² exp( −√(|x₁ − y₁|² + |x₂ − y₂|²)/λ ),
– not smooth at x = y (in 1-d this is σ² exp(−|x − y|/λ) – “exponential”).
More general is the Matérn class r_ν(x, y), ν ∈ [1/2, ∞), which
contains the examples above at the two ends of its parameter range:
ν = 1/2 gives the exponential case, ν = ∞ gives the Gaussian case.
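As a concrete aside, here is a minimal Python sketch (ours, not from the talk) for evaluating a Matérn covariance, using the common √(2ν) parameterisation; conventions for how λ enters differ between references, so treat the exact scaling as an assumption. At ν = 1/2 it reduces exactly to the exponential covariance, and as ν → ∞ it approaches a Gaussian kernel with a rescaled correlation length.

```python
import numpy as np
from scipy.special import gamma, kv  # kv: modified Bessel function of the second kind

def matern_covariance(r, sigma2=1.0, lam=1.0, nu=0.5):
    """Matern covariance rho_nu(r) in the common sqrt(2 nu) parameterisation.

    nu = 1/2 gives sigma2 * exp(-r/lam); nu -> infinity approaches a
    Gaussian kernel (with a rescaled correlation length)."""
    r = np.asarray(r, dtype=float)
    scaled = np.sqrt(2.0 * nu) * r / lam
    rho = np.full_like(r, sigma2)        # rho(0) = sigma2, the variance
    nz = scaled > 0
    rho[nz] = sigma2 * (2.0 ** (1.0 - nu) / gamma(nu)) \
              * scaled[nz] ** nu * kv(nu, scaled[nz])
    return rho

# Sanity check at nu = 1/2 against the exponential covariance:
r = np.linspace(0.0, 2.0, 5)
print(np.allclose(matern_covariance(r, nu=0.5), np.exp(-r)))  # True
```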
How to compute realisations of the input field? One way is:
Karhunen-Loève expansion
Z(x, ω) = ∑_{j=1}^∞ √µ_j Y_j(ω) φ_j(x),
where the Y_j are independent standard normal random variables, and the
pairs (µ_j, φ_j) satisfy
∫_D r(x, x′) φ_j(x′) dx′ = µ_j φ_j(x),   ∫_D φ_i(x) φ_j(x) dx = δ_{ij}.
Why does it work?
Because if
Z(x) = ∑_{j=1}^∞ √µ_j Y_j(ω) φ_j(x),
then formally
E[Z(x)Z(x′)] = E[ ( ∑_{j=1}^∞ √µ_j Y_j φ_j(x) ) ( ∑_{k=1}^∞ √µ_k Y_k φ_k(x′) ) ]
= ∑_{j=1}^∞ ∑_{k=1}^∞ √µ_j √µ_k φ_j(x) φ_k(x′) E[Y_j Y_k]
= ∑_{j=1}^∞ ∑_{k=1}^∞ √µ_j √µ_k φ_j(x) φ_k(x′) δ_{j,k}
= ∑_{j=1}^∞ µ_j φ_j(x) φ_j(x′) = r(x, x′)
by Mercer’s theorem.
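Here is a minimal sketch (ours, not from the talk) of sampling a truncated KL expansion on D = [0, 1]: the eigenpairs (µ_j, φ_j) are approximated by a simple midpoint-rule Nyström discretisation of the covariance operator, and all function names are ours.

```python
import numpy as np

def kl_eigenpairs(rho, n=200, n_terms=20):
    """Approximate the leading KL eigenpairs on D = [0, 1] via a
    midpoint-rule Nystrom discretisation of the covariance operator."""
    x = (np.arange(n) + 0.5) / n                 # midpoint quadrature nodes
    w = 1.0 / n                                  # equal quadrature weights
    K = rho(np.abs(x[:, None] - x[None, :]))     # K_ij = r(x_i, x_j)
    mu, phi = np.linalg.eigh(w * K)              # symmetric eigenproblem
    idx = np.argsort(mu)[::-1][:n_terms]         # largest eigenvalues first
    return x, mu[idx], phi[:, idx] / np.sqrt(w)  # L2(D)-normalised eigenfunctions

def kl_sample(mu, phi, rng):
    """One realisation Z = sum_j sqrt(mu_j) Y_j phi_j at the nodes."""
    Y = rng.standard_normal(len(mu))
    return phi @ (np.sqrt(np.maximum(mu, 0.0)) * Y)

rho = lambda r: np.exp(-r / 0.5)                 # exponential covariance, lambda = 0.5
x, mu, phi = kl_eigenpairs(rho, n_terms=50)
Z = kl_sample(mu, phi, np.random.default_rng(0))
```

The truncation to n_terms terms is exactly where the error discussed on the next slide comes from.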
There’s a problem if KL convergence is slow
The KL convergence can be very slow if
the field is rough (that is, if ν is small),
or if the correlation length λ is small,
or the variance σ² is large.
And if the convergence is slow for a 1-dimensional physical domain D
then it is MUCH slower for a 2-dimensional domain, and worse still for
a 3-dimensional domain D.
This means that the truncation errors can be VERY large.
And another thing:
Also, the eigenvalue problem becomes non-trivial for a 3-dimensional
domain D if thousands of eigenvalues and eigenfunction pairs are
needed:
Recall
∫_D r(x, x′) φ_j(x′) dx′ = µ_j φ_j(x),   ∫_D φ_i(x) φ_j(x) dx = δ_{ij}.
The discrete alternative
If we are going to use, for example, piecewise linear finite elements,
then we don’t need the field everywhere: we only need it at points
related to the finite element grid - for example, at the triangle centroids,
as shown. This is “the discrete alternative”.
As input we now need r(x, x′) only at the discrete x, x′ – we need
only “standard information” in the language of information-based
complexity (IBC).
How to generate the field at grid points?
Suppose we want the field only at a set of points x₁, . . . , x_M ∈ D.
Now the field is a vector of length M:
Z(ω) := (Z(x₁, ω), . . . , Z(x_M, ω))⊤.
This is a Gaussian random vector with mean zero and a positive
definite covariance matrix
R = [R_{i,j}]_{i,j=1}^M,  where  R_{i,j} = E[Z(x_i)Z(x_j)] = r(x_i, x_j).
So if r(x, y) is known, then so is the covariance matrix.
How to generate a random field with prescribed covariance matrix R?
Suppose we can factorise the matrix in some way
R = BB⊤.
Because R is positive definite, we can, for example, take B to be the
symmetric square root of R; a Cholesky factor also works, among other choices.
Once B is known we can generate the field by
Z(ω) = B Y(ω), where Y(ω) = (Y₁(ω), . . . , Y_M(ω))⊤
and Y₁(ω), . . . , Y_M(ω) are iid standard normal variables.
Why does it work?
Simply note that, because
Z(ω) = BY(ω),
with Y a vector of iid standard normal random variables, we have
E[ZZ⊤] = E[BYY⊤B⊤] = B E[YY⊤] B⊤ = BB⊤ = R.
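A minimal sketch of this recipe (ours, not the authors' code), taking an isotropic covariance ρ and a Cholesky factor for B; the diagonal jitter is a standard numerical safeguard against rounding, not part of the slides.

```python
import numpy as np

def sample_via_factorisation(points, rho, n_samples=1, seed=0):
    """Generate field samples Z = B Y at arbitrary points, with R = B B^T
    obtained here from a Cholesky factorisation."""
    diff = points[:, None, :] - points[None, :, :]
    R = rho(np.linalg.norm(diff, axis=-1))       # R_ij = rho(||x_i - x_j||)
    B = np.linalg.cholesky(R + 1e-12 * np.eye(len(points)))  # R = B B^T (+ jitter)
    Y = np.random.default_rng(seed).standard_normal((len(points), n_samples))
    return B @ Y                                 # each column is one realisation

pts = np.random.default_rng(1).random((100, 2))  # 100 points in the unit square
Z = sample_via_factorisation(pts, lambda r: np.exp(-r / 0.2), n_samples=5000)
# Empirical check: the sample covariance of the columns of Z approaches R.
```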
Now there is no truncation error!
Now there is no truncation error because all we need is to factorise the
covariance matrix R. The problem has turned into one of linear
algebra.
But now suppose that M is very large, say in the tens of thousands, or
even millions. The matrix R is typically dense, so a Cholesky
factorisation takes of order M³ operations. This is not generally feasible
when M is large.
Let’s specialise the covariance function
In practice most covariance functions have this form:
r(x, y) = ρ(‖x − y‖),
that is, the covariance function is stationary and isotropic.
In this situation there is great benefit in taking the grid to be UNIFORM.
The benefits of uniformity
When the covariance function is isotropic there are great benefits to
computing the field on a UNIFORM grid, because then the matrix is
typically block Toeplitz. This is the path we follow: we initially
compute the field only at the red crosses in this image.
After that we use bilinear interpolation to find the field at the blue
points. (The resulting error is of the same order as the finite element error.)
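The interpolation step might look as follows; this is our sketch using scipy's RegularGridInterpolator, whose "linear" method on a 2-d grid is exactly bilinear interpolation. The random array here is only a placeholder for a field actually generated on the uniform grid.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

m0 = 32
grid_1d = np.linspace(0.0, 1.0, m0 + 1)          # uniform grid, spacing h = 1/m0
Z_grid = np.random.default_rng(0).standard_normal((m0 + 1, m0 + 1))  # placeholder

# method="linear" on a 2-d grid performs bilinear interpolation:
interp = RegularGridInterpolator((grid_1d, grid_1d), Z_grid, method="linear")

centroids = np.random.default_rng(1).random((1000, 2))  # e.g. triangle centroids
Z_at_centroids = interp(centroids)
```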
The uniform grid
We cover the original domain D by the unit cube in Rᵈ, and on it define
a uniform grid with m₀ + 1 points on each edge. So the spacing is
h = 1/m₀, and in total there are M = (m₀ + 1)ᵈ points in the grid.
The 1-dimensional case
For a 1-dimensional domain D covered by the unit interval, with a
uniform grid of spacing h on it, the first row of the covariance matrix
R = (ρ(|x_i − x_j|))_{i,j=0}^{m₀} is
ρ(0), ρ(h), ρ(2h), . . . , ρ(m₀h),
and the second row is
ρ(h), ρ(0), ρ(h), . . . , ρ((m₀ − 1)h),
etc. This is a Toeplitz matrix.
Extending the matrix
It can be made into a circulant matrix R^ext of almost double the
number of rows and columns (i.e. 2m₀) by reflecting the top row, to
obtain, in the 1-dimensional case, for the first row
ρ(0), ρ(h), ρ(2h), . . . , ρ((m₀ − 1)h), ρ(m₀h), ρ((m₀ − 1)h), . . . , ρ(h)
and then, by “wrapping it around”, the second row becomes
ρ(h), ρ(0), ρ(h), . . . , ρ((m₀ − 2)h), ρ((m₀ − 1)h), ρ(m₀h), . . . , ρ(2h)
etc. – a CIRCULANT matrix.
The point is that a circulant matrix of size M × M can be
factorised by FFT in a time of order M log M.
More precisely, write
R^ext = XΛX⊤,
where Λ is the diagonal matrix of eigenvalues of R^ext, and the rows of
X are the normalised eigenvectors (which are just complex exponentials).
Note that the eigenvalues of R^ext are real.
And we use an all-real version of the FFT, which makes for an efficient
implementation.
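Putting the last few slides together, here is a minimal 1-d sketch (ours, not the authors' code): build the reflected first row, read off the circulant eigenvalues with one FFT, and sample by the classical device of Dietrich and Newsam, drawing a complex Gaussian vector so that the real and imaginary parts give two independent samples. The parameter ell anticipates the extra padding discussed on the later slides.

```python
import numpy as np

def circulant_embedding_sample(rho, m0, ell=1, seed=0):
    """Sample a 1-d stationary Gaussian field at the grid 0, h, ..., 1
    (h = 1/m0) by circulant embedding; ell = m/m0 > 1 adds padding."""
    h, m = 1.0 / m0, ell * m0
    c = rho(h * np.arange(m + 1))                # rho(0), rho(h), ..., rho(mh)
    c = np.concatenate([c, c[-2:0:-1]])          # reflected first row, length 2m
    lam = np.fft.fft(c).real                     # eigenvalues of the circulant
    if lam.min() < -1e-12 * lam.max():
        raise ValueError("extended matrix not positive definite; increase ell")
    rng = np.random.default_rng(seed)
    xi = rng.standard_normal(2 * m) + 1j * rng.standard_normal(2 * m)
    w = np.fft.fft(np.sqrt(np.maximum(lam, 0.0)) * xi) / np.sqrt(2 * m)
    # Real and imaginary parts are two independent N(0, R^ext) samples;
    # the first m0 + 1 entries live on the original grid (rows of R).
    return w.real[: m0 + 1], w.imag[: m0 + 1]

Z1, Z2 = circulant_embedding_sample(lambda r: np.exp(-r / 0.1), m0=1024)
```

The cost is two FFTs of length 2m, i.e. of order M log M, rather than the M³ of a dense factorisation.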
If D is a 2 or 3-dimensional region then the matrix is BLOCK circulant,
and again FFT can be used.
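In 2-d the same sketch goes through with fft2 applied to the nested (block-) circulant generated by the wrapped grid distances. Again this is our illustration, and the parameters in the last line are only illustrative; increase ell if the positive-definiteness check fails.

```python
import numpy as np

def circulant_embedding_sample_2d(rho, m0, ell=1, seed=0):
    """Sample an isotropic Gaussian field on the (m0+1) x (m0+1) uniform
    grid over the unit square, via nested (block-)circulant embedding."""
    h, m = 1.0 / m0, ell * m0
    k = np.arange(m + 1)
    k = np.concatenate([k, k[-2:0:-1]])          # wrapped 1-d distances, length 2m
    dist = h * np.sqrt(k[:, None] ** 2 + k[None, :] ** 2)
    lam = np.fft.fft2(rho(dist)).real            # eigenvalues of the nested circulant
    if lam.min() < -1e-12 * lam.max():
        raise ValueError("extended matrix not positive definite; increase ell")
    rng = np.random.default_rng(seed)
    xi = rng.standard_normal(lam.shape) + 1j * rng.standard_normal(lam.shape)
    w = np.fft.fft2(np.sqrt(np.maximum(lam, 0.0)) * xi) / np.sqrt(lam.size)
    return w.real[: m0 + 1, : m0 + 1], w.imag[: m0 + 1, : m0 + 1]

Z1, Z2 = circulant_embedding_sample_2d(lambda r: np.exp(-r / 0.1), m0=128, ell=2)
```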
History
In the 1-dimensional case there is a substantial literature on circulant
embedding for the efficient generation of Gaussian random fields:
Dietrich and Newsam, 1997
Chan and Wood, 1997
What’s the catch?
The catch is that the extended matrix R^ext may not be positive
definite – because some of the eigenvalues of R^ext may be negative.
Before we fix the non-p.d. problem:
Let’s assume for the moment that all eigenvalues of R^ext are
non-negative. Then we can write
R^ext = XΛX⊤ = (XΛ^{1/2})(XΛ^{1/2})⊤.
How does this help with the factorisation of R? Answer: R is a submatrix
of R^ext. By selecting the appropriate rows and columns of the
factorisation above we obtain
R = BB⊤,
with B consisting of just the appropriate rows of XΛ^{1/2}.
Fixing the non-p.d. problem
We extend the matrix R before reflection: keeping the same grid
spacing h, we now cover the unit cube by a larger cube of side
ℓ = mh, with m > m₀, and hence ℓ = m/m₀ > 1.
Again this is not new, but the way we do the extension might be:
sometimes the extension is done by “padding by zeros”.
Theorem (GKNSS, 2017)
Assume that the covariance function satisfies r(x, y) = ρ(|x − y|),
with ρ ∈ L¹(Rᵈ) and ρ̂ ∈ L¹(Rᵈ) (where ρ̂ is the Fourier transform of
ρ), and satisfies also
∑_{k∈Zᵈ} |ρ(hk)| < ∞.
Then for ℓ = m/m₀ sufficiently large the resulting extended matrix is
positive definite.
But how large does ℓ need to be?
Theorem (GKNSS 2017) For the exponential covariance function, and
for h → 0, positive definiteness of R^ext is guaranteed if
ℓ/λ ≥ C₁ + C₂ log(λ/h).
So, for fixed λ, ℓ needs to grow like log(1/h) = log(m₀).
Remark 1. Note that the condition is easily satisfied when λ is small.
That’s good news, because that’s the hard case!
Remark 2. For the whole Matérn class the result is similar, but more
extension is needed as ν increases:
ℓ/λ ≥ C₁ + C₂ ν^{1/2} log( max(λ/h, ν^{1/2}) ).
Experiments agree
[Figure: smallest sufficient extension ℓ = m/m₀ against log₂ m₀ (1 to 6),
for d = 3 and λ = 0.5, with one curve for each of
ν = 4, √8, 2, √2, 1, 1/√2, 0.5.]
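An experiment in this spirit is easy to run in 1-d (our sketch, much simpler than the d = 3 study of the figure): search for the smallest integer ℓ that makes the extended circulant positive semi-definite, and compare a rough kernel with a smooth one.

```python
import numpy as np

def min_extension(rho, m0, ell_max=64):
    """Smallest integer ell = m/m0 for which the 1-d extended circulant
    built from rho (grid spacing h = 1/m0) is positive semi-definite;
    None if no ell <= ell_max works."""
    h = 1.0 / m0
    for ell in range(1, ell_max + 1):
        c = rho(h * np.arange(ell * m0 + 1))
        c = np.concatenate([c, c[-2:0:-1]])      # first row of the circulant
        lam = np.fft.fft(c).real                 # its eigenvalues
        if lam.min() >= -1e-12 * np.abs(lam).max():
            return ell
    return None

corr_len = 0.5
rough = lambda r: np.exp(-r / corr_len)          # exponential: nu = 1/2
smooth = lambda r: np.exp(-(r / corr_len) ** 2)  # Gaussian: nu = infinity
for m0 in [16, 32, 64, 128]:
    print(m0, min_extension(rough, m0), min_extension(smooth, m0))
```

As the theory predicts, the rough (hard) kernel typically needs little or no extension, while the smooth (easy) kernel needs more; compare the remark on the later slides.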
The power of the technology
Some numbers to ponder:
For d = 3 and m₀ = 2⁵ = 32:
the number of grid points in the unit cube is 33³ = 35,937
the covariance matrix R has 35,937 rows and columns, and hence has
33⁶ ≈ 1.3 × 10⁹ elements
if we can take ℓ = 1 then R^ext has 64⁶ ≈ 7 × 10¹⁰ elements
if ℓ = m/m₀ = 6 then R^ext has 6³ × 64⁶ ≈ 10¹³ elements, all non-zero
And if m₀ = 200, say, then ....
There’s also something interesting
There is something interesting about the theory and the experiments:
It is the EASY cases that require a lot of extension (and hence a very
big matrix R^ext). The difficult cases, by contrast, are those with small
correlation length λ or low smoothness ν.
So the circulant embedding technique, while perhaps not so useful for
easy problems, might be very useful for really hard problems.
I Graham, F Kuo, D Nuyens, R Scheichl and I Sloan, “Analysis of
circulant embedding methods for sampling stationary random fields”, in
late stage of preparation.