Part 4b:
NUMERICAL LINEAR ALGEBRA

– LU Decomposition
– Matrix Inverse
– System Condition
– Special Matrices
– Gauss-Seidel
LU Decomposition:
• In Gauss elimination, both the coefficients and the constants are
manipulated until an upper-triangular system is obtained:

  a11 x1 + a12 x2 + a13 x3 + ... + a1n xn = b1
           a'22 x2 + a'23 x3 + ... + a'2n xn = b'2
                     a''33 x3 + ... + a''3n xn = b''3
                                             ...
                             ann^(n-1) xn = bn^(n-1)

• In some applications, the coefficient matrix [A] stays constant while
the right-hand-side constants vector (b) changes.
• [L][U] decomposition does not require repeated eliminations. Once
the [L][U] decomposition of matrix [A] is computed, it can be repeatedly
used for different values of the (b) vector.
Decomposition methodology:
Solution to the linear system

  [A](x) = (b)    or    [A](x) - (b) = 0

The system can also be stated in an upper triangular form:

  [U](x) = (d)    or    [U](x) - (d) = 0

Now, suppose there exists a lower triangular matrix [L] such that

  [L]{[U](x) - (d)} = [A](x) - (b)

Then, it follows that

  [L][U] = [A]    and    [L](d) = (b)

• Solution for (x) can be obtained by a two-step strategy (explained
next).
Decomposition strategy:

  [A](x) = (b)
    > Decomposition: factor [A] into [L] and [U]
    > Forward substitution: solve [L](d) = (b) to calculate (d)
    > Backward substitution: solve [U](x) = (d) to calculate (x)

• The process involves one decomposition, one forward
substitution, and one backward substitution.
• Once matrices L and U are computed, the manipulated constant
vector (d) can be repeatedly recalculated from [L] for each new (b),
and hence the vector (x).
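This decompose-once, solve-many workflow can be sketched with SciPy's LU routines (a minimal sketch; the 3x3 system is the one solved in EX 10.2 below, and `lu_factor`/`lu_solve` are SciPy's standard LU interface):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[3.0, -0.1, -0.2],
              [0.1,  7.0, -0.3],
              [0.3, -0.2, 10.0]])

# Step 1: decompose [A] once (lu_factor stores L and U compactly,
# plus a pivot vector).
lu_piv = lu_factor(A)

# Step 2: reuse the same factors for as many (b) vectors as needed.
b1 = np.array([7.85, -19.3, 71.4])
x1 = lu_solve(lu_piv, b1)

b2 = np.array([1.0, 0.0, 0.0])   # a different right-hand side, same factors
x2 = lu_solve(lu_piv, b2)
```

Only the two cheap substitution sweeps are repeated per right-hand side; the expensive elimination work happens once.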
LU Decomposition and Gauss Elimination:

• Gauss elimination involves an LU decomposition in itself.
• Forward elimination produces an upper triangular matrix:

        | ..  ..  .. |
  [U] = | 0   ..  .. |
        | 0   0   .. |

• In fact, while U is formed during elimination, an L matrix is formed
such that (for 3x3)

        | 1    0    0 |
  [L] = | f21  1    0 |        [A] = [L][U]
        | f31  f32  1 |

where

  f21 = a21/a11,   f31 = a31/a11,   f32 = a'32/a'22, ...

This decomposition is unique when
the diagonals of L are ones.
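The factor bookkeeping described above can be written out as a small Doolittle decomposition (a sketch without pivoting, so it assumes the pivots a11, a'22, ... are nonzero; the test matrix is the one from EX 10.1):

```python
import numpy as np

def doolittle_lu(A):
    """LU decomposition via Gauss elimination, recording the
    elimination factors f_ik below the unit diagonal of L."""
    n = len(A)
    U = np.array(A, dtype=float)
    L = np.eye(n)
    for k in range(n - 1):
        for i in range(k + 1, n):
            f = U[i, k] / U[k, k]     # elimination factor f_ik
            L[i, k] = f               # stored below the diagonal of L
            U[i, k:] -= f * U[k, k:]  # eliminate row i against pivot row k
    return L, U

A = [[3.0, -0.1, -0.2],
     [0.1,  7.0, -0.3],
     [0.3, -0.2, 10.0]]
L, U = doolittle_lu(A)   # L holds f21, f31, f32; U is the eliminated matrix
```

The factors land exactly where the slide places them: f21 in L[1,0], f31 in L[2,0], f32 in L[2,1].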
EX 10.1 : Apply LU decomposition based on the Gauss elimination for Example
9.5 (using 6 S.D.):
Coefficient matrix:

        | 3    -0.1   -0.2 |
  [A] = | 0.1   7     -0.3 |
        | 0.3  -0.2   10   |

Forward elimination resulted in the following upper triangular form:

        | 3   -0.1       -0.2      |
  [U] = | 0    7.00333   -0.293333 |
        | 0    0         10.0120   |

The lower triangular matrix will have

        | 1    0    0 |   | 1          0           0 |
  [L] = | f21  1    0 | = | a21/a11    1           0 |
        | f31  f32  1 |   | a31/a11    a'32/a'22   1 |

        | 1          0           0 |
      = | 0.0333333  1           0 |
        | 0.100000  -0.0271300   1 |
Check the result:

           | 1          0           0 |   | 3   -0.1       -0.2      |
  [L][U] = | 0.0333333  1           0 | x | 0    7.00333   -0.293333 |
           | 0.100000  -0.0271300   1 |   | 0    0         10.0120   |

We obtain:

           | 3          -0.1   -0.2     |
  [L][U] = | 0.0999999   7     -0.3     |
           | 0.3        -0.2    9.99996 |

compare to:

        | 3    -0.1   -0.2 |
  [A] = | 0.1   7     -0.3 |
        | 0.3  -0.2   10   |

Some round-off error is introduced.

To find the solution:
Calculate (d) by applying one forward substitution:

  [L](d) = (b)

Calculate (x) by applying one back substitution:

  [U](x) = (d)

[L] facilitates obtaining the modified constant vector each time (b)
changes during the calculations.
EX 10.2: Solve the system in the previous example using LU decomposition:

We have:

                 | 1          0           0 |   | 3   -0.1       -0.2      |
  [A] = [L][U] = | 0.0333333  1           0 | x | 0    7.00333   -0.293333 |
                 | 0.100000  -0.0271300   1 |   | 0    0         10.0120   |

> Apply the forward substitution, [L](d) = (b):

  | 1          0           0 | | d1 |   |  7.85 |      d1 =   7.85
  | 0.0333333  1           0 | | d2 | = | -19.3 |  =>  d2 = -19.5617
  | 0.100000  -0.0271300   1 | | d3 |   |  71.4 |      d3 =  70.0843

> Apply the backward substitution, [U](x) = (d):

  | 3   -0.1       -0.2      | | x1 |   |   7.85    |      x1 =  3
  | 0    7.00333   -0.293333 | | x2 | = | -19.5617  |  =>  x2 = -2.5
  | 0    0         10.0120   | | x3 |   |  70.0843  |      x3 =  7.00003
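The two substitution sweeps can be sketched directly; this assumes the unit-diagonal [L] and the [U] computed in EX 10.1:

```python
import numpy as np

def forward_substitution(L, b):
    # Solve L d = b for lower-triangular L with unit diagonal.
    n = len(b)
    d = np.zeros(n)
    for i in range(n):
        d[i] = b[i] - L[i, :i] @ d[:i]
    return d

def back_substitution(U, d):
    # Solve U x = d for upper-triangular U, working bottom-up.
    n = len(d)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (d[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

L = np.array([[1.0,       0.0,        0.0],
              [0.0333333, 1.0,        0.0],
              [0.1,      -0.0271300,  1.0]])
U = np.array([[3.0, -0.1,     -0.2],
              [0.0,  7.00333, -0.293333],
              [0.0,  0.0,     10.0120]])

d = forward_substitution(L, np.array([7.85, -19.3, 71.4]))
x = back_substitution(U, d)
```

Running this reproduces the example's intermediate vector d and the final solution x to the printed precision.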
Total FLOPs with LU decomposition:

  n^3/3 + O(n^2)    (same as Gauss elimination)

Crout Decomposition:
• Doolittle decomposition/factorization: [A] = [L][U], where L has 1's on
its diagonal (formed by row operations).
• Crout decomposition: [A] = [L][U], where U has 1's on its diagonal
(formed by column operations).
• They have comparable performances.
• Crout decomposition can be implemented by a
concise series of formulas (see the book).
• Storage can be minimized:
> No need to store the 1's in U.
> No need to store the 0's in L and U.
> Elements of U can be stored in the zeros of L.
Matrix Inverse
• If A is a square matrix, there exists an A^-1 such that

  [A][A]^-1 = [A]^-1[A] = [I]

• LU decomposition offers an efficient way to find A^-1:
> Decompose [A] into [L] and [U] once.
> For the constant vector, enter (I:,i), the ith column of the
identity matrix.
> Forward substitution:  [L](d) = (I:,i)
> Backward substitution: [U](A^-1:,i) = (d)
The solution gives the ith column of A^-1.
EX 10.3 : Use LU decomposition to determine the inverse of the system in EX 10.1

        | 3    -0.1   -0.2 |
  [A] = | 0.1   7     -0.3 |
        | 0.3  -0.2   10   |

The corresponding upper and lower triangular matrices are

        | 3   -0.1       -0.2      |         | 1          0           0 |
  [U] = | 0    7.00333   -0.293333 |   [L] = | 0.0333333  1           0 |
        | 0    0         10.0120   |         | 0.100000  -0.0271300   1 |

To calculate the first column of A^-1:
> Forward substitution, [L](d) = (1, 0, 0)^T:

  | 1          0           0 | | d1 |   | 1 |      d1 =  1
  | 0.0333333  1           0 | | d2 | = | 0 |  =>  d2 = -0.03333
  | 0.100000  -0.0271300   1 | | d3 |   | 0 |      d3 = -0.1009

> Back substitution, [U](x) = (d):

  | 3   -0.1       -0.2      | | x1 |   |  1       |      x1 =  0.33249
  | 0    7.00333   -0.293333 | | x2 | = | -0.03333 |  =>  x2 = -0.00518
  | 0    0         10.0120   | | x3 |   | -0.1009  |      x3 = -0.01008

This gives the first column of A^-1.

To calculate the second column:        To calculate the third column:

  b1 = 0      x1 = 0.004944              b1 = 0      x1 = 0.006798
  b2 = 1  =>  x2 = 0.142903              b2 = 0  =>  x2 = 0.004183
  b3 = 0      x3 = 0.00271               b3 = 1      x3 = 0.09988

We finally get

           |  0.33249   0.004944   0.006798 |
  [A]^-1 = | -0.00518   0.142903   0.004183 |
           | -0.01008   0.00271    0.09988  |
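The column-by-column construction can be checked numerically (a sketch using `numpy.linalg.solve`, which performs its own LU factorization internally, in place of the explicit forward/back substitutions above):

```python
import numpy as np

A = np.array([[3.0, -0.1, -0.2],
              [0.1,  7.0, -0.3],
              [0.3, -0.2, 10.0]])

# One solve per column of the identity matrix: the ith solve
# yields the ith column of A^-1.
n = A.shape[0]
Ainv = np.column_stack([np.linalg.solve(A, e) for e in np.eye(n)])
```

The result matches the tabulated inverse to the 5-digit precision used in the example.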
Importance of Inverse in Engineering Applications:
• Many engineering problems can be represented by a linear
equation:

  [A] (x) = (b)

  [A]: system design matrix
  (x): response (e.g., deformation)
  (b): stimulus (e.g., force)

• The formal solution to this equation is

  (x) = [A]^-1 (b)

For a 3x3 system we can write explicitly

  x1 = a^-1_11 b1 + a^-1_12 b2 + a^-1_13 b3
  x2 = a^-1_21 b1 + a^-1_22 b2 + a^-1_23 b3
  x3 = a^-1_31 b1 + a^-1_32 b2 + a^-1_33 b3

There is a linear relationship between stimulus and response.
The proportionality constants are the coefficients of A^-1.
System Condition
• The condition number indicates the ill-conditioning of a system.
• We will determine the condition number using matrix norms.
Matrix norms:
• A norm is a measure of the size of a multi-component entity
(e.g., a vector in (x1, x2, x3) space):

  ||x||1 = sum_{i=1..n} |xi|                        1-norm

  ||x||2 = ||x||e = ( sum_{i=1..n} xi^2 )^(1/2)     2-norm (Euclidean norm)

  ||x||p = ( sum_{i=1..n} |xi|^p )^(1/p)            p-norm

• We can extend the Euclidean norm to matrices:

  ||A||e = ( sum_{i=1..n} sum_{j=1..n} a_ij^2 )^(1/2)   (Frobenius norm)

• There are other norms too, e.g.,

  ||A||inf = max_{1<=i<=n} sum_{j=1..n} |aij|   (row-sum norm)

  ||A||1   = max_{1<=j<=n} sum_{i=1..n} |aij|   (column-sum norm)

• Each of these norms returns a single (positive) value
characterizing the matrix.
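These norms are available directly in NumPy (a sketch; the matrix is the coefficient matrix from EX 10.1):

```python
import numpy as np

A = np.array([[3.0, -0.1, -0.2],
              [0.1,  7.0, -0.3],
              [0.3, -0.2, 10.0]])

fro     = np.linalg.norm(A, 'fro')    # Frobenius (Euclidean) norm
row_sum = np.linalg.norm(A, np.inf)   # row-sum norm
col_sum = np.linalg.norm(A, 1)        # column-sum norm
```

For this matrix the row-sum and column-sum norms happen to coincide (both 10.5, from the last row and last column).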
Matrix Condition Number:
• The matrix condition number can be defined as

  Cond[A] = ||A|| ||A^-1||    ( Cond[A] >= 1 )

• If Cond[A] >> 1, the matrix is ill-conditioned.
• It can be shown that

  ||Δx|| / ||x||  <=  Cond[A] ||ΔA|| / ||A||

i.e., the relative error of the computed solution cannot be larger
than the relative error of the coefficients of [A] multiplied by the
condition number.
For example, if [A] contains elements of t S.F. (precision of 10^-t)
and Cond[A] = 10^c, then (x) will contain elements of (t-c) S.F.
(precision of 10^(c-t)).
EX 10.4 : Estimate the condition number of the 3x3 Hilbert matrix using the
row-sum norm.

The Hilbert matrix is inherently ill-conditioned:

        | 1     1/2   1/3 |
  [A] = | 1/2   1/3   1/4 |
        | 1/3   1/4   1/5 |

First normalize the matrix by dividing each row by its largest coefficient:

        | 1   1/2   1/3 |
  [A] = | 1   2/3   1/2 |
        | 1   3/4   3/5 |

Row-sum norm:

  row 1:  1 + 1/2 + 1/3 = 1.833
  row 2:  1 + 2/3 + 1/2 = 2.1667
  row 3:  1 + 3/4 + 3/5 = 2.35

  ||A||inf = 2.35

Inverse of the scaled matrix (this part takes the longest computation time):

           |  9    -18    10 |
  [A]^-1 = | -36    96   -60 |
           |  30   -90    60 |

Row-sum norm:

  ||A^-1||inf = |-36| + 96 + |-60| = 192

Condition number:

  Cond[A] = (2.35)(192) = 451.2    =>  the matrix is ill-conditioned.

e.g., for a single-precision (7.2-digit) computation:

  c = log(451.2) = 2.65
  (7.2 - 2.65) = 4.55 ~ 4 S.F. in the solution!
  (precision of ~10^-4)
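The estimate can be reproduced numerically (a sketch using NumPy's `inf`-norm, which is exactly the row-sum norm used above):

```python
import numpy as np

# Row-scaled 3x3 Hilbert matrix from the example.
A = np.array([[1.0, 1/2, 1/3],
              [1.0, 2/3, 1/2],
              [1.0, 3/4, 3/5]])

norm_A    = np.linalg.norm(A, np.inf)                 # row-sum norm: 2.35
norm_Ainv = np.linalg.norm(np.linalg.inv(A), np.inf)  # 192
cond      = norm_A * norm_Ainv                        # about 451.2
```

For larger Hilbert matrices the condition number grows explosively, which is why they are the standard stress test for ill-conditioning.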
Iterative refinement:
• This technique is especially useful for reducing round-off errors.
• Consider a system:

  a11 x1 + a12 x2 + a13 x3 = b1
  a21 x1 + a22 x2 + a23 x3 = b2
  a31 x1 + a32 x2 + a33 x3 = b3

• Assume an approximate solution (x1°, x2°, x3°) that satisfies

  a11 x1° + a12 x2° + a13 x3° = b1°
  a21 x1° + a22 x2° + a23 x3° = b2°
  a31 x1° + a32 x2° + a33 x3° = b3°

• We can write a relationship between the exact and approximate
solutions:

  x1 = x1° + Δx1
  x2 = x2° + Δx2
  x3 = x3° + Δx3

• Insert these into the original equations:

  a11 (x1° + Δx1) + a12 (x2° + Δx2) + a13 (x3° + Δx3) = b1
  a21 (x1° + Δx1) + a22 (x2° + Δx2) + a23 (x3° + Δx3) = b2
  a31 (x1° + Δx1) + a32 (x2° + Δx2) + a33 (x3° + Δx3) = b3

• Now subtract the approximate system from the above to get

  a11 Δx1 + a12 Δx2 + a13 Δx3 = b1 - b1° = e1
  a21 Δx1 + a22 Δx2 + a23 Δx3 = b2 - b2° = e2
  a31 Δx1 + a32 Δx2 + a33 Δx3 = b3 - b3° = e3

• This is a new set of simultaneous linear equations that can be
solved for the correction factors Δxi.
• The solution can be improved by applying the corrections to the
previous solution (the iterative refinement procedure).
• It is especially suitable for LU decomposition, since only the
constant vector (b) changes between solves.
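The refinement loop pairs naturally with a stored LU factorization (a sketch; the fixed number of correction sweeps is an assumption — a practical solver would test the residual against a tolerance — and `lu_factor`/`lu_solve` are SciPy's LU interface):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def refine(A, b, sweeps=2):
    """One LU factorization, then repeated correction solves:
    [A](dx) = (b) - [A](x°), i.e. the error system derived above."""
    lu_piv = lu_factor(A)
    x = lu_solve(lu_piv, b)          # initial (approximate) solution x°
    for _ in range(sweeps):
        e = b - A @ x                # residual vector (e1, e2, e3)
        x = x + lu_solve(lu_piv, e)  # apply the corrections dx
    return x

A = np.array([[3.0, -0.1, -0.2],
              [0.1,  7.0, -0.3],
              [0.3, -0.2, 10.0]])
b = np.array([7.85, -19.3, 71.4])
x = refine(A, b)
```

Each sweep only costs two substitution passes, since the factors of [A] are reused.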
Special Matrices
• In engineering applications, special matrices are very common:
> Banded matrices:

  aij = 0   if   |i - j| > (BW - 1)/2

  (BW = bandwidth; only the banded areas are non-zero;
  BW = 3 gives a tridiagonal system)

> Symmetric matrices:

  aij = aji   or   [A] = [A]^T

> Sparse matrices (most elements are zero)
• Applying elimination methods to sparse matrices is not
efficient (e.g., we would need to deal with many zeros unnecessarily).
• We employ special methods in working with these systems.
Cholesky Decomposition:
• This method is applicable to symmetric (positive definite) matrices.
• A symmetric matrix can be decomposed as

  [A] = [L][L]^T

where

  lki = ( aki - sum_{j=1..i-1} lij lkj ) / lii     for i = 1, 2, ..., k-1

  lkk = ( akk - sum_{j=1..k-1} lkj^2 )^(1/2)

Symmetric matrices are very common in engineering applications, so this
method has wide applications.
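The recursion can be transcribed almost directly (a sketch that assumes [A] is symmetric positive definite, so all the square roots are real; it is checked against the numbers in EX 11.2 below):

```python
import numpy as np

def cholesky(A):
    """Cholesky factor L (A = L L^T) via the recursion in the text."""
    n = len(A)
    L = np.zeros((n, n))
    for k in range(n):
        for i in range(k):                    # off-diagonal terms l_ki, i < k
            L[k, i] = (A[k, i] - L[i, :i] @ L[k, :i]) / L[i, i]
        # diagonal term l_kk
        L[k, k] = np.sqrt(A[k, k] - L[k, :k] @ L[k, :k])
    return L

A = np.array([[ 6.0,  15.0,  55.0],
              [15.0,  55.0, 225.0],
              [55.0, 225.0, 979.0]])
L = cholesky(A)
```

Note that only the lower triangle of [A] is ever read, which is where the storage savings for symmetric systems come from.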
EX 11.2 : Apply Cholesky decomposition to

        |  6    15    55  |
  [A] = | 15    55   225  |
        | 55   225   979  |

Apply the recursion relation:

  k=1:        l11 = sqrt(a11) = sqrt(6) = 2.4495
  (k=2, i=1): l21 = a21/l11 = 15/2.4495 = 6.1237
  k=2:        l22 = sqrt(a22 - l21^2) = sqrt(55 - (6.1237)^2) = 4.1833
  (k=3, i=1): l31 = a31/l11 = 55/2.4495 = 22.454
  (k=3, i=2): l32 = (a32 - l21 l31)/l22 = (225 - 6.1237(22.454))/4.1833 = 20.916
  k=3:        l33 = sqrt(a33 - l31^2 - l32^2)
                  = sqrt(979 - (22.454)^2 - (20.916)^2) = 6.1106

        | 2.4495                  |
  [L] = | 6.1237  4.1833          |
        | 22.454  20.916   6.1106 |
Gauss-Seidel
• Iterative methods are strong alternatives to elimination methods.
• In iterative methods the solution is progressively improved, so
round-off error is not a concern.
• As we did in root finding,
> start with an initial guess,
> iterate for refined estimates of the solution.
• Gauss-Seidel is one of the most commonly used iterative methods.
• For the solution of [A](x) = (b), we write each unknown on the
diagonal in terms of the other unknowns. In case of a 3x3 system:

  x1 = (b1 - a12 x2 - a13 x3) / a11   start with initial guesses x2 and x3;
                                      calculate new x1
  x2 = (b2 - a21 x1 - a23 x3) / a22   use new x1 and old x3;
                                      calculate new x2
  x3 = (b3 - a31 x1 - a32 x2) / a33   use new x1 and x2;
                                      calculate new x3
  iterate...

• In Gauss-Seidel, new estimates are immediately used in
subsequent calculations.
• Alternatively, the old values (x1, x2, x3) can be used collectively to
calculate all the new values (x1, x2, x3): Jacobi iteration (not
commonly used).
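The sweep described above can be sketched as follows (a minimal implementation; the fixed iteration count is an assumption — a practical solver would test a convergence tolerance):

```python
import numpy as np

def gauss_seidel(A, b, x0=None, iters=10):
    """Gauss-Seidel iteration: each new estimate is used immediately
    within the same sweep (Jacobi would use only the old values)."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    for _ in range(iters):
        for i in range(n):
            s = A[i, :] @ x - A[i, i] * x[i]   # sum of a_ij x_j over j != i
            x[i] = (b[i] - s) / A[i, i]        # updated in place: used at once
    return x

A = np.array([[3.0, -0.1, -0.2],
              [0.1,  7.0, -0.3],
              [0.3, -0.2, 10.0]])
b = np.array([7.85, -19.3, 71.4])
x = gauss_seidel(A, b)   # converges toward (3, -2.5, 7)
```

Because `x[i]` is overwritten inside the inner loop, later rows of the same sweep already see the fresh values — the defining difference from Jacobi.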
EX 11.2 : Use the Gauss-Seidel method to obtain the solution of

  3 x1   - 0.1 x2 - 0.2 x3 =  7.85        true results:
  0.1 x1 + 7 x2   - 0.3 x3 = -19.3          x1 =  3
  0.3 x1 - 0.2 x2 + 10 x3  =  71.4          x2 = -2.5
                                            x3 =  7

Gauss-Seidel iteration:

  x1 = (7.85 + 0.1 x2 + 0.2 x3) / 3
  x2 = (-19.3 - 0.1 x1 + 0.3 x3) / 7
  x3 = (71.4 - 0.3 x1 + 0.2 x2) / 10

Assume all the initial guesses are 0:

  x1 = (7.85 + 0 + 0) / 3 = 2.616667
  x2 = (-19.3 - 0.1(2.616667) + 0) / 7 = -2.794524
  x3 = (71.4 - 0.3(2.616667) + 0.2(-2.794524)) / 10 = 7.005610

For the second iteration, we repeat the process:

  x1 = (7.85 + 0.1(-2.794524) + 0.2(7.005610)) / 3 = 2.990557    (εt = 0.31%)
  x2 = (-19.3 - 0.1(2.990557) + 0.3(7.005610)) / 7 = -2.499625   (εt = 0.015%)
  x3 = (71.4 - 0.3(2.990557) + 0.2(-2.499625)) / 10 = 7.000291   (εt = 0.0042%)

The solution is rapidly converging to the true solution.
Convergence in Gauss-Seidel:
• Gauss-Seidel is similar to the fixed-point iteration method in root
finding. As in fixed-point iteration, Gauss-Seidel
is also prone to
> divergence
> slow convergence
• Convergence of the method can be checked by the following
criterion:

  |aii| > sum_{j=1..n, j!=i} |aij|

that is, the absolute value of the diagonal
coefficient in each row must be
larger than the sum of the absolute values of
all the other coefficients in the same row
(a diagonally dominant system).
• Fortunately, many engineering applications fulfill this
requirement.
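The criterion is easy to check programmatically (a sketch):

```python
import numpy as np

def is_diagonally_dominant(A):
    # |a_ii| > sum of |a_ij| over j != i, for every row i
    A = np.abs(np.asarray(A, dtype=float))
    diag = np.diag(A)
    off  = A.sum(axis=1) - diag
    return bool(np.all(diag > off))

ok = is_diagonally_dominant([[3.0, -0.1, -0.2],
                             [0.1,  7.0, -0.3],
                             [0.3, -0.2, 10.0]])   # the EX 11.2 system
```

The system from EX 11.2 is strongly diagonally dominant, which is why its Gauss-Seidel iteration converges in just a few sweeps.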
Improvement of convergence by relaxation:
• After each value of x is computed using the Gauss-Seidel equations,
the value is modified as a weighted average of the old and new
values:

  xi_new = λ xi_new + (1 - λ) xi_old ,    0 < λ < 2

• If 0 < λ < 1: underrelaxation (to make a diverging system converge)
• If 1 < λ < 2: overrelaxation (to accelerate the convergence)
• The choice of λ is empirical and depends on the problem.
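Relaxation adds one blending line to the Gauss-Seidel sweep (a sketch; λ is written `lam`, the fixed sweep count is an assumption, and the example system is the one from EX 11.2):

```python
import numpy as np

def gauss_seidel_relaxed(A, b, lam=1.0, iters=50):
    """Gauss-Seidel with relaxation: blend each new value with the
    old one using the weight lam (0 < lam < 2; lam = 1 is plain GS)."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            s = A[i, :] @ x - A[i, i] * x[i]   # sum over j != i
            x_new = (b[i] - s) / A[i, i]       # plain Gauss-Seidel update
            x[i] = lam * x_new + (1.0 - lam) * x[i]   # relaxation step
    return x

A = np.array([[3.0, -0.1, -0.2],
              [0.1,  7.0, -0.3],
              [0.3, -0.2, 10.0]])
b = np.array([7.85, -19.3, 71.4])
x = gauss_seidel_relaxed(A, b, lam=1.2)   # overrelaxation
```

With λ = 1 this reduces exactly to the Gauss-Seidel sweep shown earlier; tuning λ away from 1 trades per-sweep cost (unchanged) against the number of sweeps needed.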

Mais conteúdo relacionado

Mais procurados

systems of linear equations & matrices
systems of linear equations & matricessystems of linear equations & matrices
systems of linear equations & matrices
Student
 
Tensor algebra and tensor analysis for engineers
Tensor algebra and tensor analysis for engineersTensor algebra and tensor analysis for engineers
Tensor algebra and tensor analysis for engineers
Springer
 
Linear Systems Gauss Seidel
Linear Systems   Gauss SeidelLinear Systems   Gauss Seidel
Linear Systems Gauss Seidel
Eric Davishahl
 
Lesson 9: Gaussian Elimination
Lesson 9: Gaussian EliminationLesson 9: Gaussian Elimination
Lesson 9: Gaussian Elimination
Matthew Leingang
 
14.6 triple integrals in cylindrical and spherical coordinates
14.6 triple integrals in cylindrical and spherical coordinates14.6 triple integrals in cylindrical and spherical coordinates
14.6 triple integrals in cylindrical and spherical coordinates
Emiey Shaari
 

Mais procurados (20)

systems of linear equations & matrices
systems of linear equations & matricessystems of linear equations & matrices
systems of linear equations & matrices
 
Tensor algebra and tensor analysis for engineers
Tensor algebra and tensor analysis for engineersTensor algebra and tensor analysis for engineers
Tensor algebra and tensor analysis for engineers
 
Partial Differential Equation - Notes
Partial Differential Equation - NotesPartial Differential Equation - Notes
Partial Differential Equation - Notes
 
Numerical method
Numerical methodNumerical method
Numerical method
 
Linear Systems Gauss Seidel
Linear Systems   Gauss SeidelLinear Systems   Gauss Seidel
Linear Systems Gauss Seidel
 
A brief introduction to finite difference method
A brief introduction to finite difference methodA brief introduction to finite difference method
A brief introduction to finite difference method
 
Gauss Elimination & Gauss Jordan Methods in Numerical & Statistical Methods
Gauss Elimination & Gauss Jordan Methods in Numerical & Statistical MethodsGauss Elimination & Gauss Jordan Methods in Numerical & Statistical Methods
Gauss Elimination & Gauss Jordan Methods in Numerical & Statistical Methods
 
MATLAB : Numerical Differention and Integration
MATLAB : Numerical Differention and IntegrationMATLAB : Numerical Differention and Integration
MATLAB : Numerical Differention and Integration
 
Solving system of linear equations
Solving system of linear equationsSolving system of linear equations
Solving system of linear equations
 
Matrices and determinants
Matrices and determinantsMatrices and determinants
Matrices and determinants
 
Section 5.4 logarithmic functions
Section 5.4 logarithmic functions Section 5.4 logarithmic functions
Section 5.4 logarithmic functions
 
Lecture 04 newton-raphson, secant method etc
Lecture 04 newton-raphson, secant method etcLecture 04 newton-raphson, secant method etc
Lecture 04 newton-raphson, secant method etc
 
NUMERICAL METHOD
NUMERICAL METHODNUMERICAL METHOD
NUMERICAL METHOD
 
Lesson 9: Gaussian Elimination
Lesson 9: Gaussian EliminationLesson 9: Gaussian Elimination
Lesson 9: Gaussian Elimination
 
CRAMER’S RULE
CRAMER’S RULECRAMER’S RULE
CRAMER’S RULE
 
Matrix presentation By DHEERAJ KATARIA
Matrix presentation By DHEERAJ KATARIAMatrix presentation By DHEERAJ KATARIA
Matrix presentation By DHEERAJ KATARIA
 
Sor
SorSor
Sor
 
14.6 triple integrals in cylindrical and spherical coordinates
14.6 triple integrals in cylindrical and spherical coordinates14.6 triple integrals in cylindrical and spherical coordinates
14.6 triple integrals in cylindrical and spherical coordinates
 
Gauss Elimination Method.pptx
Gauss Elimination Method.pptxGauss Elimination Method.pptx
Gauss Elimination Method.pptx
 
Unit4
Unit4Unit4
Unit4
 

Destaque (20)

Unit5
Unit5Unit5
Unit5
 
Gauss sediel
Gauss sedielGauss sediel
Gauss sediel
 
LU Decomposition Fula
LU Decomposition FulaLU Decomposition Fula
LU Decomposition Fula
 
Lu decomposition
Lu decompositionLu decomposition
Lu decomposition
 
Cholesky method and Thomas
Cholesky method and ThomasCholesky method and Thomas
Cholesky method and Thomas
 
Gauss seidel
Gauss seidelGauss seidel
Gauss seidel
 
Convergence Criteria
Convergence CriteriaConvergence Criteria
Convergence Criteria
 
Meeting13
Meeting13Meeting13
Meeting13
 
Lecture 6 lu factorization & determinants - section 2-5 2-7 3-1 and 3-2
Lecture 6   lu factorization & determinants - section 2-5 2-7 3-1 and 3-2Lecture 6   lu factorization & determinants - section 2-5 2-7 3-1 and 3-2
Lecture 6 lu factorization & determinants - section 2-5 2-7 3-1 and 3-2
 
Choleskymethod
CholeskymethodCholeskymethod
Choleskymethod
 
Specials Methods
Specials MethodsSpecials Methods
Specials Methods
 
Matrix factorization
Matrix factorizationMatrix factorization
Matrix factorization
 
Lu decomposition
Lu decompositionLu decomposition
Lu decomposition
 
Admission in india 2015
Admission in india 2015Admission in india 2015
Admission in india 2015
 
Oop
OopOop
Oop
 
Andrealozada
AndrealozadaAndrealozada
Andrealozada
 
Inheritance
InheritanceInheritance
Inheritance
 
Hierarchical inheritance
Hierarchical inheritanceHierarchical inheritance
Hierarchical inheritance
 
Factorization from Gaussian Elimination
  Factorization from Gaussian Elimination  Factorization from Gaussian Elimination
Factorization from Gaussian Elimination
 
inheritance in C++
inheritance in C++inheritance in C++
inheritance in C++
 

Semelhante a Es272 ch4b

lecture0003-numerical-methods-topic-3-solution-of-systems-of-linear-equations...
lecture0003-numerical-methods-topic-3-solution-of-systems-of-linear-equations...lecture0003-numerical-methods-topic-3-solution-of-systems-of-linear-equations...
lecture0003-numerical-methods-topic-3-solution-of-systems-of-linear-equations...
wafahop
 
Matrices and determinants
Matrices and determinantsMatrices and determinants
Matrices and determinants
oscar
 
Solution of System of Linear Equations
Solution of System of Linear EquationsSolution of System of Linear Equations
Solution of System of Linear Equations
mofassair
 

Semelhante a Es272 ch4b (20)

Es272 ch4a
Es272 ch4aEs272 ch4a
Es272 ch4a
 
Chapter 3: Linear Systems and Matrices - Part 3/Slides
Chapter 3: Linear Systems and Matrices - Part 3/SlidesChapter 3: Linear Systems and Matrices - Part 3/Slides
Chapter 3: Linear Systems and Matrices - Part 3/Slides
 
Mat 223_Ch3-Determinants.ppt
Mat 223_Ch3-Determinants.pptMat 223_Ch3-Determinants.ppt
Mat 223_Ch3-Determinants.ppt
 
Determinants - Mathematics
Determinants - MathematicsDeterminants - Mathematics
Determinants - Mathematics
 
lecture0003-numerical-methods-topic-3-solution-of-systems-of-linear-equations...
lecture0003-numerical-methods-topic-3-solution-of-systems-of-linear-equations...lecture0003-numerical-methods-topic-3-solution-of-systems-of-linear-equations...
lecture0003-numerical-methods-topic-3-solution-of-systems-of-linear-equations...
 
FinalReport
FinalReportFinalReport
FinalReport
 
Linear Algebra- Gauss Elim-converted.pptx
Linear Algebra- Gauss Elim-converted.pptxLinear Algebra- Gauss Elim-converted.pptx
Linear Algebra- Gauss Elim-converted.pptx
 
Determinants. Cramer’s Rule
Determinants. Cramer’s RuleDeterminants. Cramer’s Rule
Determinants. Cramer’s Rule
 
Ch 01 MATLAB Applications in Chemical Engineering_陳奇中教授教學投影片
Ch 01 MATLAB Applications in Chemical Engineering_陳奇中教授教學投影片Ch 01 MATLAB Applications in Chemical Engineering_陳奇中教授教學投影片
Ch 01 MATLAB Applications in Chemical Engineering_陳奇中教授教學投影片
 
Matrices ppt
Matrices pptMatrices ppt
Matrices ppt
 
Bba i-bm-u-2- matrix -
Bba i-bm-u-2- matrix -Bba i-bm-u-2- matrix -
Bba i-bm-u-2- matrix -
 
Determinants, crammers law, Inverse by adjoint and the applications
Determinants, crammers law,  Inverse by adjoint and the applicationsDeterminants, crammers law,  Inverse by adjoint and the applications
Determinants, crammers law, Inverse by adjoint and the applications
 
presentation
presentationpresentation
presentation
 
Gauss
GaussGauss
Gauss
 
Matrices and determinants
Matrices and determinantsMatrices and determinants
Matrices and determinants
 
Matrices and determinants
Matrices and determinantsMatrices and determinants
Matrices and determinants
 
Applied numerical methods lec6
Applied numerical methods lec6Applied numerical methods lec6
Applied numerical methods lec6
 
Solution of System of Linear Equations
Solution of System of Linear EquationsSolution of System of Linear Equations
Solution of System of Linear Equations
 
ALA Solution.pdf
ALA Solution.pdfALA Solution.pdf
ALA Solution.pdf
 
Lesson 7
Lesson 7Lesson 7
Lesson 7
 

Mais de Batuhan Yıldırım (9)

Es272 ch7
Es272 ch7Es272 ch7
Es272 ch7
 
Es272 ch6
Es272 ch6Es272 ch6
Es272 ch6
 
Es272 ch5b
Es272 ch5bEs272 ch5b
Es272 ch5b
 
Es272 ch5a
Es272 ch5aEs272 ch5a
Es272 ch5a
 
Es272 ch1
Es272 ch1Es272 ch1
Es272 ch1
 
Es272 ch0
Es272 ch0Es272 ch0
Es272 ch0
 
Es272 ch3b
Es272 ch3bEs272 ch3b
Es272 ch3b
 
Es272 ch3a
Es272 ch3aEs272 ch3a
Es272 ch3a
 
Es272 ch2
Es272 ch2Es272 ch2
Es272 ch2
 

Último

Histor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slideHistor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slide
vu2urc
 
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
?#DUbAI#??##{{(☎️+971_581248768%)**%*]'#abortion pills for sale in dubai@
 

Último (20)

Strategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a FresherStrategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a Fresher
 
presentation ICT roal in 21st century education
presentation ICT roal in 21st century educationpresentation ICT roal in 21st century education
presentation ICT roal in 21st century education
 
Histor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slideHistor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slide
 
Scaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationScaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organization
 
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
 
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, AdobeApidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
 
Developing An App To Navigate The Roads of Brazil
Developing An App To Navigate The Roads of BrazilDeveloping An App To Navigate The Roads of Brazil
Developing An App To Navigate The Roads of Brazil
 
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemkeProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
 
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
 
GenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdfGenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdf
 
Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024
 
GenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationGenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day Presentation
 
Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024
 
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)
 
Advantages of Hiring UIUX Design Service Providers for Your Business
Advantages of Hiring UIUX Design Service Providers for Your BusinessAdvantages of Hiring UIUX Design Service Providers for Your Business
Advantages of Hiring UIUX Design Service Providers for Your Business
 
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
 
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
 
Apidays New York 2024 - The value of a flexible API Management solution for O...
Apidays New York 2024 - The value of a flexible API Management solution for O...Apidays New York 2024 - The value of a flexible API Management solution for O...
Apidays New York 2024 - The value of a flexible API Management solution for O...
 

Es272 ch4b

  • 1. Part 4b: NUMERICAL LINEAR ALGEBRA – – – – – LU Decomposition Matrix Inverse System Condition Special Matrices Gauss-Seidel
  • 2. LU Decomposition:  In Gauss elimination, both coefficieints and constants are munipulated until an upper-triangular matrix is obtained. a11 x1 a12 x2 b1 ' ' a23 x3 ... a2 n xn ' b2 '' ' a33 x3 ... a3' n xn ' a22 x2 a13 x3 ... a1n xn b3'' ... (n ann 1) xn ( bnn 1)  In some applications, the coefficient matrix [A] stays constant while the right-hand-side constants vector (b) changes.  [L][U] decomposition does not require repeated eliminations. Once [L][U] decomposition is applied to matrix [A], it can be repeteadly used for different values of (b) vector.
  • 3. Decomposition methodology: Solution to the linear system A x or b A x b 0 The system can also be stated in an upper triangular form: U x d or U x d 0 Now, suppose there exist a lower triangular matrix (L) such that L U x d A x b Then, it follows that L U A and L d b  Solution for (x) can be obtained by a two-step strategy (explained next).
  • 4. Decomposition strategy: A x b Decomposition U L L (d ) Apply backward substitution to calculate (x) U ( x) (b) Apply forward substitution to calculate (d) (d )  The process involves one decomposition, one forward substitution, and one backward substitution processes.  Once matrices L and U are computed once; manipulated constant vector (d) is repeatedly calculated from matrix L; hence vector (x).
  • 5. LU Decomposition and Gauss Elimination:  Gauss elimination processes involves an LU decomposition in itself.  Forward elimination produces an upper triangular matrix: .. .. .. U 0 .. .. 0 0 ..  In fact, while U is formed during elimination, an L matrix is formed such that (for 3x3) 1 0 f 21 1 0 f 31 L 0 f 32 1 where f 21 a21 a11 a31 a11 f 31 A f 32 L U ' a32 a22 … This decomposition is unique when the diagonals of L are ones.
  • 6. EX 10.1 : Apply LU decomposition based on the Gauss elimination for Example 9.5 (using 6 S.D.): Coefficient matrix: 3 0.2 0.1 7 0.3 0.3 A 0.1 0.2 10 Forward elimination resulted in the following upper triangular form: 3 0.1 0.2 U 0 7.00333 0.293333 0 0 Lower triangular matrix will have L 1 f 21 f 31 0 1 f 32 0 0 1 1 a21 a11 a31 a11 10.0120 0 0 1 0 ' a32 ' a22 1 1 0.0333333 0.100000 0 0 1 0 0.0271300 1
  • 7. Check the result: 1 A 0 3 0.0333333 L U 0 1 0 0 7.00333 0.100000 0.0271300 1 0 We obtain: 0 0.2 0.293333 10.0120 compare to: 3 A 0.1 0.1 7 0.0999999 0.3 0.2 0 .3 0.2 9.99996 3 0.2 0.1 7 0.3 0.3 A 0.1 0.2 10 Some round-off error is introduced To find the solution: Calculate (d) by applying one forward substitution. L (d ) (b) Calculate (x) by applying one back substitution. U ( x) (d ) [L] facilitates obtaining modified (b) each time (b) has been changed during calculations.
  • 8. EX 10.2: Solve the system in the previous example using LU decomposition: We have: A 1 L U 0 0 3 0.0333333 1 0 0 7.00333 0.100000 0.0271300 1 0 0.1 0 0.2 0.293333 10.0120 > Apply the forward substitution: 1 0 0 d1 0.0333333 1 0 d2 0.100000 0.0271300 1 d 3 7.85 19.3 71.4 d1 7.85 d2 19.5617 d3 70.0843 > Apply the backward substitution: 3 0. 1 0.2 x1 0 7.00333 0.293333 x2 0 10.0120 0 x3 7.85 19.3 71.4 x1 3 x2 2.5 x3 7.00003
  • 9. Total FLOPs with LU decomposition n3 3 O n2 same as Gauss elimination Crout Decomposition: 1 A L U (Doolittle decomposition/factorizaton) .. 1 1 A L U U (forming) (Crout decomposition) .. 1 row operation Column ro operation  They have comperable performances.  Crout decompositon can be implemented by a concise series of formulas. (see the book).  Storage can be minimized: L > No need to store 1’s in U. (forming) > No need to store 0’s in L and U. > Elements of U can be stored in zeros of L. A (remaining)
  • 10. Matrix Inverse  If A is a square matrix,there exist an A-1,s.t. A A 1 A 1 A I  LU decomposition offers an efficient way to find A-1. A x b decomposition U L forward substitution Backward substitution For constant vector, enter (I:,i ) (ith column of the identity matrix.) L (d ) 1 :, j U ( A ) (d ) ( I: , j ) Solution gives ith column of A-1.
  • 11. EX 10.3 : Use LU decomposition to determine the inverse of the system in EX 10.1 3 0.2 0.1 7 0.3 0. 3 A 0.1 0.2 10 Corresponding upper and lower triangular matrices are 3 U 0.1 1 0 7.00333 0.293333 0 L 10.0120 0 0 0 0.0333333 0.2 1 0 0.100000 0.0271300 1 To calculate the first column of A-1 : > Forward substitution: 1 0 0.0333333 0.100000 0 d1 1 d1 0 d2 0 d2 0.03333 0.0271300 1 d 3 0 d3 0.1009 1 1
  • 12. > Back substitution: 3 0.1 0.2 x1 1 x1 0.33249 0 7.00333 0.293333 x2 0.03333 x2 0.00518 0 10.0120 0.1009 x3 0.01008 0 x3 To calculate the second column First column of A-1 To calculate the third column b1 0 x1 0.004944 b1 0 x1 0.006798 b2 1 x2 0.142903 b2 0 x2 0.004183 b3 0 x3 0.00271 b3 1 x3 0.09988 We finally get 0.33249 A 1 0.004944 0.006798 0.00518 0.142903 0.004183 0.01008 0.00271 0.09988
• 13. Importance of the Inverse in Engineering Applications:
- Many engineering problems can be represented by a linear equation

  [A] (x) = (b)

  where [A] is the system design matrix, (x) the response (e.g., deformation), and (b) the stimulus (e.g., force).
- The formal solution to this equation is x = A^-1 b. For a 3x3 system we can write explicitly:

  x1 = a^-1_11 b1 + a^-1_12 b2 + a^-1_13 b3
  x2 = a^-1_21 b1 + a^-1_22 b2 + a^-1_23 b3
  x3 = a^-1_31 b1 + a^-1_32 b2 + a^-1_33 b3

  There is a linear relationship between stimulus and response; the proportionality constants are the coefficients of A^-1.
• 14. System Condition
- The condition number indicates the ill-conditioning of a system.
- We will determine the condition number using matrix norms.

Matrix norms:
- A norm is a measure of the size of a multi-component entity (e.g., a vector):

  ||x||_1 = sum_{i=1..n} |x_i|                  (1-norm)
  ||x||_e = ( sum_{i=1..n} x_i^2 )^(1/2)        (2-norm, Euclidean norm)
  ||x||_p = ( sum_{i=1..n} |x_i|^p )^(1/p)      (p-norm)
• 15. - We can extend the Euclidean norm to matrices:

  ||A||_e = ( sum_{i=1..n} sum_{j=1..n} a_ij^2 )^(1/2)     (Frobenius norm)

- There are other norms too, e.g.,

  ||A||_inf = max_{1<=i<=n} sum_{j=1..n} |a_ij|    (row-sum norm)
  ||A||_1   = max_{1<=j<=n} sum_{i=1..n} |a_ij|    (column-sum norm)

- Each of these norms returns a single (positive) value characterizing the matrix.
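All three matrix norms are short sums to compute (a sketch; the function names are mine):

```python
def frobenius_norm(A):
    # square root of the sum of squares of all elements
    return sum(a * a for row in A for a in row) ** 0.5

def row_sum_norm(A):
    # largest row sum of absolute values (infinity norm)
    return max(sum(abs(a) for a in row) for row in A)

def col_sum_norm(A):
    # largest column sum of absolute values (1-norm)
    return max(sum(abs(row[j]) for row in A) for j in range(len(A[0])))
```

For example, the row-sum norm of the scaled Hilbert matrix of EX 10.4 comes out to 2.35, matching the hand computation.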
• 16. Matrix Condition Number:
- The matrix condition number can be defined as

  Cond[A] = ||A|| * ||A^-1||        (Cond[A] >= 1)

- If Cond[A] >> 1, the matrix is ill-conditioned.
- It can be shown that

  ||dx|| / ||x|| <= Cond[A] * ||dA|| / ||A||

  i.e., the relative error of the computed solution cannot be larger than the relative error of the coefficients of [A] multiplied by the condition number.
- For example: if [A] contains elements with t significant figures (precision 10^-t) and Cond[A] = 10^c, then (x) will contain elements with (t - c) significant figures (precision 10^(c-t)).
• 17. EX 10.4: Estimate the condition number of the 3x3 Hilbert matrix using the row-sum norm. The Hilbert matrix is inherently ill-conditioned:

A = [ 1    1/2  1/3 ]
    [ 1/2  1/3  1/4 ]
    [ 1/3  1/4  1/5 ]

First normalize the matrix by dividing each row by its largest coefficient:

A = [ 1  1/2  1/3 ]
    [ 1  2/3  1/2 ]
    [ 1  3/4  3/5 ]

Row-sum norm:

  1 + 1/2 + 1/3 = 1.8333
  1 + 2/3 + 1/2 = 2.1667
  1 + 3/4 + 3/5 = 2.35      =>  ||A|| = 2.35
• 18. Inverse of the scaled matrix (computing this part takes the longest):

A^-1 = [   9   -18   10 ]
       [ -36    96  -60 ]
       [  30   -90   60 ]

Row-sum norm:

   9 + 18 + 10 =  37
  36 + 96 + 60 = 192      =>  ||A^-1|| = 192
  30 + 90 + 60 = 180

Condition number:

  Cond[A] = (2.35)(192) = 451.2      =>  the matrix is ill-conditioned.

E.g., for a single-precision (7.2 digit) computation: c = log(451.2) = 2.65, so (7.2 - 2.65) = 4.55 ~ 4 S.F. in the solution (precision of ~10^-4)!
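The arithmetic of EX 10.4 can be checked directly (a sketch: the inverse is copied from the slide rather than computed, and the row-sum helper is inlined for self-containment):

```python
import math

# row-scaled 3x3 Hilbert matrix and its inverse, as quoted on the slides
A = [[1.0, 1 / 2, 1 / 3],
     [1.0, 2 / 3, 1 / 2],
     [1.0, 3 / 4, 3 / 5]]
A_inv = [[  9.0, -18.0,  10.0],
         [-36.0,  96.0, -60.0],
         [ 30.0, -90.0,  60.0]]

def row_sum_norm(M):
    return max(sum(abs(a) for a in row) for row in M)

cond = row_sum_norm(A) * row_sum_norm(A_inv)  # 2.35 * 192 = 451.2
digits_lost = math.log10(cond)                # c = 2.65 decimal digits lost
```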
• 19. Iterative refinement:
- This technique is especially useful for reducing round-off errors.
- Consider a system:

  a11 x1 + a12 x2 + a13 x3 = b1
  a21 x1 + a22 x2 + a23 x3 = b2
  a31 x1 + a32 x2 + a33 x3 = b3

- Assume an approximate solution (x1°, x2°, x3°) satisfying

  a11 x1° + a12 x2° + a13 x3° = b1°
  a21 x1° + a22 x2° + a23 x3° = b2°
  a31 x1° + a32 x2° + a33 x3° = b3°

- We can write a relationship between the exact and approximate solutions:

  x1 = x1° + Δx1
  x2 = x2° + Δx2
  x3 = x3° + Δx3
• 20. - Insert these into the original equations:

  a11 (x1° + Δx1) + a12 (x2° + Δx2) + a13 (x3° + Δx3) = b1
  a21 (x1° + Δx1) + a22 (x2° + Δx2) + a23 (x3° + Δx3) = b2
  a31 (x1° + Δx1) + a32 (x2° + Δx2) + a33 (x3° + Δx3) = b3

- Now subtract the approximate-solution equations from the above to get

  a11 Δx1 + a12 Δx2 + a13 Δx3 = b1 - b1° = e1
  a21 Δx1 + a22 Δx2 + a23 Δx3 = b2 - b2° = e2
  a31 Δx1 + a32 Δx2 + a33 Δx3 = b3 - b3° = e3

- This is a new set of simultaneous linear equations which can be solved for the correction factors.
- The solution can be improved by applying the corrections to the previous solution (iterative refinement procedure).
- It is especially suitable for LU decomposition, since the constant vector (b) continuously changes.
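A sketch of one refinement pass on the EX 10.2 system (all names are mine; plain Gauss elimination stands in for reusing the stored L and U factors, which is what would be done in practice):

```python
def solve(A, b):
    """Plain Gauss elimination helper (in practice the stored L, U
    factors would be reused for each new right-hand side)."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for k in range(n):
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j]
                              for j in range(i + 1, n))) / M[i][i]
    return x

def refine(A, b, x_approx):
    """One refinement pass: residual e = b - A x°, correction from A Δx = e."""
    n = len(b)
    e = [b[i] - sum(A[i][j] * x_approx[j] for j in range(n)) for i in range(n)]
    dx = solve(A, e)
    return [x_approx[i] + dx[i] for i in range(n)]

A = [[3, -0.1, -0.2], [0.1, 7, -0.3], [0.3, -0.2, 10]]
b = [7.85, -19.3, 71.4]
x_approx = [2.9, -2.4, 7.1]        # a deliberately rough solution
x_better = refine(A, b, x_approx)  # one pass recovers (3, -2.5, 7)
```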
• 21. Special Matrices
- In engineering applications, special matrices are very common:
  > Banded matrices: a_ij = 0 if |i - j| > (BW - 1)/2, where BW is the band width. BW = 3 gives a tridiagonal system.
  > Symmetric matrices: a_ij = a_ji, or [A] = [A]^T.
  > Sparse matrices (most elements are zero; only the black areas in the sketch are non-zero).
• 22. - Applying elimination methods to sparse matrices is not efficient (e.g., they deal with many zeros unnecessarily).
- We employ special methods when working with these systems.

Cholesky Decomposition:
- This method is applicable to symmetric matrices. A symmetric matrix can be decomposed as [A] = [L][L]^T, with

  l_ki = ( a_ki - sum_{j=1..i-1} l_ij l_kj ) / l_ii      for i = 1, 2, ..., k-1

  l_kk = sqrt( a_kk - sum_{j=1..k-1} l_kj^2 )

- Symmetric matrices are very common in engineering applications, so this method has wide application.
• 23. EX 11.2: Apply Cholesky decomposition to

A = [  6   15   55 ]
    [ 15   55  225 ]
    [ 55  225  979 ]

Apply the recursion relations:

  k=1:       l11 = sqrt(a11) = sqrt(6) = 2.4495
  k=2, i=1:  l21 = a21 / l11 = 15 / 2.4495 = 6.1237
  k=2:       l22 = sqrt(a22 - l21^2) = sqrt(55 - (6.1237)^2) = 4.1833
  k=3, i=1:  l31 = a31 / l11 = 55 / 2.4495 = 22.454
  k=3, i=2:  l32 = (a32 - l21 l31) / l22 = (225 - (6.1237)(22.454)) / 4.1833 = 20.916
  k=3:       l33 = sqrt(a33 - l31^2 - l32^2) = sqrt(979 - (22.454)^2 - (20.916)^2) = 6.1106

L = [  2.4495   0        0      ]
    [  6.1237   4.1833   0      ]
    [ 22.454   20.916    6.1106 ]
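The recursion relations translate almost line-for-line into code (a sketch assuming a symmetric positive-definite input; the function name is mine):

```python
import math

def cholesky(A):
    """Cholesky factor L with A = L L^T, following the recursion:
    off-diagonal entries first within each row k, then the diagonal."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for k in range(n):
        for i in range(k):
            L[k][i] = (A[k][i] - sum(L[i][j] * L[k][j]
                                     for j in range(i))) / L[i][i]
        L[k][k] = math.sqrt(A[k][k] - sum(L[k][j] ** 2 for j in range(k)))
    return L

# EX 11.2 matrix: reproduces the hand computation above
A = [[6, 15, 55], [15, 55, 225], [55, 225, 979]]
L = cholesky(A)
```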
• 24. Gauss-Seidel
- Iterative methods are strong alternatives to elimination methods.
- In iterative methods the solution is constantly improved, so there is no concern of round-off errors.
- As we did in root finding:
  > start with an initial guess,
  > iterate for refined estimates of the solution.
- Gauss-Seidel is one of the most commonly used iterative methods.
- For the solution of [A](x) = (b), we write each unknown on the diagonal in terms of the other unknowns:
• 25. In the case of a 3x3 system:

  x1 = ( b1 - a12 x2 - a13 x3 ) / a11    (start with initial guesses for x2 and x3; calculate new x1)
  x2 = ( b2 - a21 x1 - a23 x3 ) / a22    (use new x1 and old x3; calculate new x2)
  x3 = ( b3 - a31 x1 - a32 x2 ) / a33    (use new x1 and x2; calculate new x3)
  iterate...

- In Gauss-Seidel, new estimates are immediately used in subsequent calculations.
- Alternatively, the old values (x1, x2, x3) can be collectively used to calculate the new values (x1, x2, x3) — this is Jacobi iteration (not commonly used).
• 26. EX 11.3: Use the Gauss-Seidel method to obtain the solution of

  3 x1  - 0.1 x2 - 0.2 x3 =  7.85
  0.1 x1 + 7 x2  - 0.3 x3 = -19.3
  0.3 x1 - 0.2 x2 + 10 x3 =  71.4

(true results: x1 = 3, x2 = -2.5, x3 = 7)

Gauss-Seidel iteration:

  x1 = (  7.85 + 0.1 x2 + 0.2 x3 ) / 3
  x2 = ( -19.3 - 0.1 x1 + 0.3 x3 ) / 7
  x3 = (  71.4 - 0.3 x1 + 0.2 x2 ) / 10

Assume all initial guesses are 0:

  x1 = (  7.85 + 0 + 0 ) / 3 = 2.616667
  x2 = ( -19.3 - 0.1(2.616667) + 0 ) / 7 = -2.794524
  x3 = (  71.4 - 0.3(2.616667) + 0.2(-2.794524) ) / 10 = 7.005610
• 27. For the second iteration, we repeat the process:

  x1 = (  7.85 + 0.1(-2.794524) + 0.2(7.005610) ) / 3 =  2.990557    (εt = 0.31%)
  x2 = ( -19.3 - 0.1(2.990557) + 0.3(7.005610) ) / 7  = -2.499625    (εt = 0.015%)
  x3 = (  71.4 - 0.3(2.990557) + 0.2(-2.499625) ) / 10 = 7.000291    (εt = 0.0042%)

The solution is rapidly converging to the true solution.
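The iteration on these two slides can be sketched as follows (the tolerance, iteration cap, and function name are my additions):

```python
def gauss_seidel(A, b, x0=None, tol=1e-6, max_iter=100):
    """Gauss-Seidel iteration: each new value is used immediately in
    the same sweep. Assumes a diagonally dominant A (see next slide)."""
    n = len(b)
    x = list(x0) if x0 is not None else [0.0] * n
    for _ in range(max_iter):
        x_old = x[:]
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
        if max(abs(x[i] - x_old[i]) for i in range(n)) < tol:
            break
    return x

# the example system: converges rapidly to (3, -2.5, 7)
A = [[3, -0.1, -0.2], [0.1, 7, -0.3], [0.3, -0.2, 10]]
b = [7.85, -19.3, 71.4]
x = gauss_seidel(A, b)
```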
• 28. Convergence in Gauss-Seidel:
- Gauss-Seidel is similar to the fixed-point iteration method in root finding. As in fixed-point iteration, Gauss-Seidel is also prone to
  > divergence
  > slow convergence
- Convergence of the method can be checked by the following criterion:

  |a_ii| > sum_{j=1..n, j != i} |a_ij|

  i.e., the absolute value of the diagonal coefficient in each row must be larger than the sum of the absolute values of all the other coefficients in the same row (a diagonally dominant system).
- Fortunately, many engineering applications fulfill this requirement.
• 29. Improvement of convergence by relaxation:
- After each value of x is computed using the Gauss-Seidel equations, the value is modified by a weighted average of the old and new values:

  x_i^new = λ x_i^new + (1 - λ) x_i^old,      0 <= λ <= 2

  If 0 < λ < 1: underrelaxation (to make a system converge)
  If 1 < λ < 2: overrelaxation (to accelerate convergence)

- The choice of λ is empirical and depends on the problem.
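The weighted-average modification slots directly into the Gauss-Seidel sweep (a sketch; `lam` plays the role of λ, and the stopping rule and names are mine):

```python
def gauss_seidel_relaxed(A, b, lam=1.0, tol=1e-6, max_iter=200):
    """Gauss-Seidel with relaxation: after each Gauss-Seidel update,
    blend new and old values as lam * x_gs + (1 - lam) * x_old."""
    n = len(b)
    x = [0.0] * n
    for _ in range(max_iter):
        diff = 0.0
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_gs = (b[i] - s) / A[i][i]          # plain Gauss-Seidel value
            new = lam * x_gs + (1.0 - lam) * x[i]  # relaxed value
            diff = max(diff, abs(new - x[i]))
            x[i] = new
        if diff < tol:
            break
    return x

# underrelaxation (lam = 0.9) on the example system still reaches (3, -2.5, 7)
A = [[3, -0.1, -0.2], [0.1, 7, -0.3], [0.3, -0.2, 10]]
b = [7.85, -19.3, 71.4]
x = gauss_seidel_relaxed(A, b, lam=0.9)
```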