FIXED POINT ITERATION


We begin with a computational example. Consider
solving the two equations
                E1: x = 1 + .5 sin x
                E2: x = 3 + 2 sin x
Graphs of these two equations are shown on the accompanying
graphs, with the solutions being
            E1: α = 1.49870113351785
            E2: α = 3.09438341304928
We are going to use a numerical scheme called ‘fixed
point iteration’. It amounts to making an initial guess
of x0 and substituting this into the right side of the
equation. The resulting value is denoted by x1; and
then the process is repeated, this time substituting x1
into the right side. This is repeated until convergence
occurs or until the iteration is terminated.

In the above cases, we show the results of the first 10
iterations in the accompanying table. Clearly convergence
is occurring with E1, but not with E2. Why?
[Figure: graph of y = 1 + .5 sin x together with the line y = x; they intersect at x = α]

[Figure: graph of y = 3 + 2 sin x together with the line y = x; they intersect at x = α]
E1: x = 1 + .5 sin x        E2: x = 3 + 2 sin x

 n      E1: xn               E2: xn
0  0.00000000000000   3.00000000000000
1  1.00000000000000   3.28224001611973
2  1.42073549240395   2.71963177181556
3  1.49438099256432   3.81910025488514
4  1.49854088439917   1.74629389651652
5  1.49869535552190   4.96927957214762
6  1.49870092540704   1.06563065299216
7  1.49870112602244   4.75018861639465
8  1.49870113324789   1.00142864236516
9  1.49870113350813   4.68448404916097
10 1.49870113351750   1.00077863465869
The above iterations can be written symbolically as
              E1: xn+1 = 1 + .5 sin xn
              E2: xn+1 = 3 + 2 sin xn
for n = 0, 1, 2, ... Why does one of these iterations
converge, but not the other? The graphs show similar
behaviour, so why the difference?

As another example, note that the Newton method
                xn+1 = xn − f(xn)/f′(xn)
is also a fixed point iteration, for the equation
                    x = x − f(x)/f′(x)
In general, we are interested in solving equations
                      x = g(x)
by means of fixed point iteration:
          xn+1 = g(xn),       n = 0, 1, 2, ...
It is called ‘fixed point iteration’ because the root α
is a fixed point of the function g(x), meaning that α
is a number for which g(α) = α.
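The scheme is easy to state in code. The following Python sketch (an illustration added here, not part of the original notes) runs xn+1 = g(xn) for E1 and E2 with the same starting values as the table; in standard double precision it shows the same behaviour, with E1 settling down and E2 wandering.

```python
import math

def fixed_point_iteration(g, x0, n_iter=10):
    """Iterate x_{n+1} = g(x_n) and return the iterates x_0, ..., x_{n_iter}."""
    xs = [x0]
    for _ in range(n_iter):
        xs.append(g(xs[-1]))
    return xs

# E1: x = 1 + .5 sin x, starting from x0 = 0
e1 = fixed_point_iteration(lambda x: 1 + 0.5 * math.sin(x), x0=0.0)
# E2: x = 3 + 2 sin x, starting from x0 = 3
e2 = fixed_point_iteration(lambda x: 3 + 2.0 * math.sin(x), x0=3.0)

for n, (a, b) in enumerate(zip(e1, e2)):
    print(f"{n:2d}  {a:.14f}  {b:.14f}")
```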
EXISTENCE THEOREM

We begin by asking whether the equation x = g(x)
has a solution. For this to occur, the graphs of y = x
and y = g(x) must intersect, as seen on the earlier
graphs. The lemmas and theorems in the book give
conditions under which we are guaranteed there is a
fixed point α.

Lemma: Let g(x) be a continuous function on the
interval [a, b], and suppose it satisfies the property
           a ≤ x ≤ b     ⇒   a ≤ g(x) ≤ b        (#)
Then the equation x = g(x) has at least one solution
α in the interval [a, b]. See the graphs for examples.

The proof of this is fairly intuitive. Look at the function
          f(x) = x − g(x),       a ≤ x ≤ b
Evaluating at the endpoints and using (#),
              f(a) ≤ 0,        f(b) ≥ 0
The function f(x) is continuous on [a, b], and therefore, by
the intermediate value theorem, it has a zero in the interval.
Theorem: Assume g(x) and g′(x) exist and are continuous
on the interval [a, b]; and further, assume

          a ≤ x ≤ b     ⇒     a ≤ g(x) ≤ b

          λ ≡ max_{a≤x≤b} |g′(x)| < 1
Then:
S1. The equation x = g(x) has a unique solution α
in [a, b].
S2. For any initial guess x0 in [a, b], the iteration

          xn+1 = g(xn),      n = 0, 1, 2, ...
will converge to α.
S3.
        |α − xn| ≤ (λ^n / (1 − λ)) |x1 − x0|,      n ≥ 0
S4.
               lim_{n→∞} (α − xn+1)/(α − xn) = g′(α)
Thus for xn close to α,

             α − xn+1 ≈ g′(α) (α − xn)
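As a sanity check of S3 (our own illustration, not from the notes): for E1 take [a, b] = [0, 2]; then g maps [0, 2] into [1, 1.5] ⊂ [0, 2] and |g′(x)| = .5 |cos x| ≤ .5, so λ = .5 works, and the computed errors should sit below the bound (λ^n / (1 − λ)) |x1 − x0|.

```python
import math

g = lambda x: 1 + 0.5 * math.sin(x)
alpha, lam = 1.49870113351785, 0.5     # alpha from the earlier table, lambda = max |g'| on [0, 2]

xs = [0.0]
for _ in range(10):
    xs.append(g(xs[-1]))

for n, xn in enumerate(xs):
    bound = lam**n / (1 - lam) * abs(xs[1] - xs[0])
    print(f"n={n:2d}   |alpha - xn| = {abs(alpha - xn):.2e}   S3 bound = {bound:.2e}")
```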
The proof is given in the text, and I go over only a
portion of it here. For S2, note that from (#), if x0
is in [a, b], then
                     x1 = g(x0)
is also in [a, b]. Repeat the argument to show that

                     x2 = g(x1)
belongs to [a, b]. This can be continued by induction
to show that every xn belongs to [a, b].

We need the following general result, a consequence of the
mean value theorem. For any two points w and z in [a, b],

            g(w) − g(z) = g′(c) (w − z)
for some unknown point c between w and z. Therefore,
             |g(w) − g(z)| ≤ λ |w − z|
for any a ≤ w, z ≤ b.
For S3, subtract xn+1 = g(xn) from α = g(α) to get

           α − xn+1 = g(α) − g(xn)
                    = g′(cn) (α − xn)        ($)
          |α − xn+1| ≤ λ |α − xn|            (*)
with cn between α and xn. From (*), we have that the error
is guaranteed to decrease by at least a factor of λ with each
iteration. This leads to

          |α − xn| ≤ λ^n |α − x0|,     n ≥ 0
With some extra manipulation, shown below, we can obtain
the error bound in S3.
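The manipulation in question is short (filled in here for completeness; it uses only the triangle inequality together with (*)):

          |α − x0| ≤ |α − x1| + |x1 − x0| ≤ λ |α − x0| + |x1 − x0|
so that
          |α − x0| ≤ |x1 − x0| / (1 − λ)
and therefore
          |α − xn| ≤ λ^n |α − x0| ≤ (λ^n / (1 − λ)) |x1 − x0|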

For S4, use ($) to write

                  (α − xn+1)/(α − xn) = g′(cn)

Since xn → α and cn is between α and xn, we have
g′(cn) → g′(α).
The statement

             α − xn+1 ≈ g′(α) (α − xn)
tells us that when the iterates are near the root α, the errors
decrease by roughly the constant factor g′(α) in each step. If
this factor is negative, then the errors will alternate in sign,
and the iterates will approach α from both sides. When g′(α)
is positive, the iterates will approach α from only one side.

The statements

             α − xn+1 = g′(cn) (α − xn)

             α − xn+1 ≈ g′(α) (α − xn)
also tell us a bit more of what happens when

                      |g′(α)| > 1

Then the errors will grow from one iterate to the next, rather
than decrease, and the iteration will not converge to α.
Look at the earlier examples
                 E1: x = 1 + .5 sin x
                 E2: x = 3 + 2 sin x
In the first case E1,
                  g(x) = 1 + .5 sin x
                 g′(x) = .5 cos x
                |g′(α)| ≤ 1/2
Therefore the fixed point iteration

                 xn+1 = 1 + .5 sin xn
will converge for E1.

For the second case E2,
                  g(x) = 3 + 2 sin x
                 g′(x) = 2 cos x
                 g′(α) = 2 cos(3.09438341304928) ≈ −1.998
Therefore the fixed point iteration

                 xn+1 = 3 + 2 sin xn
will diverge for E2.
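This test is easy to confirm numerically. A small sketch (added here as an illustration; the α values are the ones quoted earlier):

```python
import math

def check_derivative(dg, alpha, label):
    """Report |g'(alpha)|; the iteration converges locally when this is < 1."""
    value = dg(alpha)
    verdict = "converges" if abs(value) < 1 else "diverges"
    print(f"{label}: g'(alpha) = {value:+.4f}  ->  {verdict}")

check_derivative(lambda x: 0.5 * math.cos(x), 1.49870113351785, "E1")
check_derivative(lambda x: 2.0 * math.cos(x), 3.09438341304928, "E2")
```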
Corollary: Assume x = g(x) has a solution α, and
further assume that both g(x) and g′(x) are continuous
for all x in some interval about α. In addition, assume

                      |g′(α)| < 1                  (**)

Then for any sufficiently small number ε > 0, the interval
[a, b] = [α − ε, α + ε] will satisfy the hypotheses of
the preceding theorem.

This means that if (**) is true, and if we choose x0
sufficiently close to α, then the fixed point iteration
xn+1 = g(xn) will converge and the earlier results
S1-S4 will all hold. The corollary does not tell us how
close we need to be to α in order to have convergence.
NEWTON’S METHOD


For Newton’s method
                  xn+1 = xn − f(xn)/f′(xn)
we have that it is a fixed point iteration with
                   g(x) = x − f(x)/f′(x)
Check its convergence by checking the condition (**):

        g′(x) = 1 − f′(x)/f′(x) + f(x)f′′(x)/[f′(x)]^2
              = f(x)f′′(x)/[f′(x)]^2
        g′(α) = 0     (since f(α) = 0)
Therefore the Newton method will converge if x0 is
chosen sufficiently close to α.
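As a concrete sketch (our own illustration; the choice of f below, taken from E1, is just an example), Newton’s method written as the fixed point iteration of g(x) = x − f(x)/f′(x):

```python
import math

def newton_as_fixed_point(f, df, x0, n_iter=8):
    """Iterate g(x) = x - f(x)/f'(x), i.e. Newton's method."""
    g = lambda x: x - f(x) / df(x)
    x = x0
    for n in range(n_iter):
        x = g(x)
        print(f"x_{n+1} = {x:.15f}")
    return x

# Example: solve x = 1 + .5 sin x by applying Newton to f(x) = x - 1 - .5 sin x
newton_as_fixed_point(lambda x: x - 1 - 0.5 * math.sin(x),
                      lambda x: 1 - 0.5 * math.cos(x),
                      x0=0.0)
```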
HIGHER ORDER METHODS


What happens when g′(α) = 0? We use Taylor’s
theorem to answer this question.
Begin by writing
    g(x) = g(α) + g′(α) (x − α) + (1/2) g′′(c) (x − α)^2
with c between x and α. Substitute x = xn and
recall that g(xn) = xn+1 and g(α) = α. Also assume
g′(α) = 0.
Then
             xn+1 = α + (1/2) g′′(cn) (xn − α)^2
         α − xn+1 = −(1/2) g′′(cn) (xn − α)^2
with cn between α and xn. Thus if g′(α) = 0, the
fixed point iteration is quadratically convergent or better.
In fact, if g′′(α) ≠ 0, then the iteration is exactly
quadratically convergent.
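A quick numerical check of this quadratic behaviour (again our own illustration, using the Newton/E1 setup from the previous sketch): once the iterates are close to α, the ratio |α − xn+1| / (α − xn)^2 should settle near |g′′(α)| / 2.

```python
import math

f  = lambda x: x - 1 - 0.5 * math.sin(x)   # E1 written as f(x) = 0
df = lambda x: 1 - 0.5 * math.cos(x)
g  = lambda x: x - f(x) / df(x)            # Newton iteration, so g'(alpha) = 0

alpha = 1.49870113351785
x = 0.0
for n in range(5):
    x_next = g(x)
    ratio = abs(alpha - x_next) / (alpha - x) ** 2
    print(f"n={n}:  |alpha - x_(n+1)| = {abs(alpha - x_next):.3e}   ratio = {ratio:.3f}")
    x = x_next
```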
ANOTHER RAPID ITERATION


Newton’s method is rapid, but requires use of the
derivative f′(x). Can we get by without it? The
answer is yes! Consider the method

            Dn = [f(xn + f(xn)) − f(xn)] / f(xn)
          xn+1 = xn − f(xn)/Dn
This is an approximation to Newton’s method, with
f′(xn) ≈ Dn. To analyze its convergence, regard it
as a fixed point iteration with

            D(x) = [f(x + f(x)) − f(x)] / f(x)
            g(x) = x − f(x)/D(x)
Then we can, with some difficulty, show g′(α) = 0
and g′′(α) ≠ 0. This will prove this new iteration is
quadratically convergent.
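A minimal sketch of this derivative-free scheme (it is often known as Steffensen’s method; the tolerance-based stopping test and the example f are our own additions):

```python
import math

def derivative_free_newton(f, x0, tol=1e-12, max_iter=50):
    """Newton-like iteration with f'(x_n) replaced by the divided difference D_n."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0.0:
            return x
        D = (f(x + fx) - fx) / fx          # D_n, an approximation to f'(x_n)
        x_new = x - fx / D
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: solve x = 1 + .5 sin x via f(x) = x - 1 - .5 sin x
root = derivative_free_newton(lambda x: x - 1 - 0.5 * math.sin(x), x0=0.0)
print(root)   # should be close to 1.49870113351785
```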
FIXED POINT ITERATION: ERROR


Recall the result

                lim_{n→∞} (α − xn)/(α − xn−1) = g′(α)

for the iteration

             xn = g(xn−1),     n = 1, 2, ...
Thus
               α − xn ≈ λ (α − xn−1)           (***)
with λ = g′(α) and |λ| < 1.

If we were to know λ, then we could solve (***) for α:

                 α ≈ (xn − λxn−1) / (1 − λ)

Usually, we write this as a modification of the currently
computed iterate xn:

          α ≈ (xn − λxn−1) / (1 − λ)
            = (xn − λxn) / (1 − λ) + (λxn − λxn−1) / (1 − λ)
            = xn + [λ / (1 − λ)] [xn − xn−1]

The formula

               xn + [λ / (1 − λ)] [xn − xn−1]

is said to be an extrapolation of the numbers xn−1
and xn. But what is λ?

From

                lim_{n→∞} (α − xn)/(α − xn−1) = g′(α)

we have
                      λ ≈ (α − xn)/(α − xn−1)
Unfortunately this also involves the unknown root α
which we seek; and we must find some other way of
estimating λ.

To calculate λ, consider the ratio

                 λn = (xn − xn−1) / (xn−1 − xn−2)

To see this is approximately λ as xn approaches α, write

     (xn − xn−1) / (xn−1 − xn−2) = [g(xn−1) − g(xn−2)] / (xn−1 − xn−2) = g′(cn)

with cn between xn−1 and xn−2. As the iterates approach
α, the number cn must also approach α. Thus
λn approaches λ as xn → α.
We combine these results to obtain the estimate

      x̂n = xn + [λn / (1 − λn)] [xn − xn−1],      λn = (xn − xn−1) / (xn−1 − xn−2)

We call x̂n the Aitken extrapolate of {xn−2, xn−1, xn};
and α ≈ x̂n.

We can also rewrite this as

      α − xn ≈ x̂n − xn = [λn / (1 − λn)] [xn − xn−1]
This is called Aitken’s error estimation formula.

The accuracy of these procedures is tied directly to
the accuracy of the formulas

      α − xn ≈ λ (α − xn−1),        α − xn−1 ≈ λ (α − xn−2)
If these approximations are accurate, then so are the above
extrapolation and error estimation formulas.
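In code, the extrapolate and the error estimate are a couple of lines (our sketch; the names are ours):

```python
def aitken_extrapolate(x_nm2, x_nm1, x_n):
    """Aitken extrapolate of three successive fixed point iterates."""
    lam = (x_n - x_nm1) / (x_nm1 - x_nm2)        # estimate of g'(alpha)
    correction = lam / (1.0 - lam) * (x_n - x_nm1)
    x_hat = x_n + correction                      # estimate of alpha
    error_estimate = correction                   # estimate of alpha - x_n
    return x_hat, error_estimate
```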
EXAMPLE


Consider the iteration

     xn+1 = 6.28 + sin(xn),       n = 0, 1, 2, ...
for solving
                  x = 6.28 + sin x
Iterates are shown on the accompanying sheet, including
calculations of λn and the error estimate

      α − xn ≈ x̂n − xn = [λn / (1 − λn)] [xn − xn−1]      (Estimate)

The latter is called “Estimate” in the table. In this
instance,
                    g′(α) ≈ .9644
and therefore the convergence is very slow. This is
apparent in the table.
AITKEN’S ALGORITHM


Step 1: Select x0
Step 2: Calculate

             x1 = g(x0),   x2 = g(x1)
Step 3: Calculate

   x3 = x2 + [λ2 / (1 − λ2)] [x2 − x1],    λ2 = (x2 − x1) / (x1 − x0)
Step 4: Calculate

             x4 = g(x3),   x5 = g(x4)
and calculate x6 as the extrapolate of {x3, x4, x5}.
Continue this procedure, ad infinitum.

Of course in practice we will have some kind of error
test to stop this procedure when we believe we have
sufficient accuracy.
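The procedure translates directly into code. A sketch (the stopping test on the Aitken error estimate stands in for the error test mentioned above, and the starting value in the usage line is our own choice):

```python
import math

def aitken_algorithm(g, x0, tol=1e-10, max_cycles=20):
    """Fixed point iteration with an Aitken extrapolation every third step."""
    x = x0
    for _ in range(max_cycles):
        x1 = g(x)                                   # two ordinary fixed point steps
        x2 = g(x1)
        lam = (x2 - x1) / (x1 - x)                  # estimate of g'(alpha)
        correction = lam / (1.0 - lam) * (x2 - x1)
        x = x2 + correction                         # Aitken extrapolate
        if abs(correction) < tol:                   # error estimate used as stopping test
            break
    return x

# The slowly convergent example from the notes: x = 6.28 + sin x
print(aitken_algorithm(lambda x: 6.28 + math.sin(x), x0=6.0))
```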
EXAMPLE


Consider again the iteration

     xn+1 = 6.28 + sin(xn),         n = 0, 1, 2, ...
for solving
                  x = 6.28 + sin x
Now we use the Aitken method, and the results are
shown in the accompanying table. With this we have

  α − x3 = 7.98 × 10^−4,        α − x6 = 2.27 × 10^−6
In comparison, the original iteration had

               α − x6 = 1.23 × 10^−2
GENERAL COMMENTS


Aitken extrapolation can greatly accelerate the convergence
of a linearly convergent iteration

                    xn+1 = g(xn)
This shows the power of understanding the behaviour
of the error in a numerical process. From that understanding,
we can often improve the accuracy, through
extrapolation or some other procedure.

This is a justification for using mathematical analyses
to understand numerical methods. We will see this
repeated at later points in the course, and it holds
with many different types of problems and numerical
methods for their solution.
