1. 9-Month Program, Intake40, CEI Track
SCIENTIFIC COMPUTING II
NUMERICAL TOOLS & ALGORITHMS
Ahmed Gamal Abdel Gawad
2. CONTENTS
ABOUT ME
Bisection Method using C#
False Position Method using C#
Gauss Seidel Method using MATLAB
Secant Mod Method using MATLAB
Report on Numerical Errors
Optimization using Golden-Section Algorithm
with Application on MATLAB
3. ABOUT ME
TEACHING ASSISTANT AT MENOUFIYA UNIVERSITY.
GRADE: EXCELLENT WITH HONORS.
BEST MEMBER AT ‘UTW-7 PROGRAM’, ECG.
ITI 9-MONTH PROGRAM, INT40, CEI TRACK STUDENT.
AUTODESK REVIT CERTIFIED PROFESSIONAL.
BACHELOR OF CIVIL ENGINEERING, 2016.
LECTURER OF ‘DESIGN OF R.C.’ COURSE, YOUTUBE.
4. Bisection Method C#
static double Bisection(double x1, double x2, int maxIterations, double tolerance, out int count)
{
    double f1 = Function(x1);
    double f2 = Function(x2);
    double xm = 0.0;
    double fm;
    if (f1 * f2 > 0.0) throw new InvalidOperationException("No Bracket");
    count = 0;
    for (int i = 0; i < maxIterations; i++)
    {
        count++;
        xm = (x1 + x2) / 2;          // midpoint of the current bracket
        fm = Function(xm);
        if (Math.Abs(fm) <= tolerance) break;
        if (f1 * fm > 0)             // root lies in [xm, x2]
        {
            x1 = xm;
            f1 = fm;
        }
        else                         // root lies in [x1, xm]
        {
            x2 = xm;
            f2 = fm;
        }
    }
    return xm;
}
5. False Position Method C#
static double FalsePosition(double x1, double x2, int maxIterations, double tolerance, out int count)
{
    double f1 = Function(x1);
    double f2 = Function(x2);
    double xp = 0.0;
    double fp;
    double s;
    if (f1 * f2 > 0.0) throw new InvalidOperationException("No Bracket");
    count = 0;
    for (int i = 0; i < maxIterations; i++)
    {
        count++;
        s = (f2 - f1) / (x2 - x1);   // slope of the chord through the bracket
        xp = x1 - f1 / s;            // x-intercept of the chord
        fp = Function(xp);
        if (Math.Abs(fp) <= tolerance) break;
        if (f1 * fp > 0)             // root lies in [xp, x2]
        {
            x1 = xp;
            f1 = fp;
        }
        else                         // root lies in [x1, xp]
        {
            x2 = xp;
            f2 = fp;
        }
    }
    return xp;
}
6. Gauss Seidel
function [x,nit] = gseidel(A,b,nmax,tol)
% Function to run the Gauss-Seidel method
[nr, nc] = size(A);
if (nc ~= nr), error('A is NOT Square'); end  % check square matrix
x = zeros(nr,1);                    % vector of initial values of x
for k = 1:nr
    x(k) = b(k)/A(k,k);             % initial values of x
end
err = zeros(nr,1);                  % vector of errors
errmax = 1;                         % initial value for errmax > tolerance
nit = 0;                            % number of iterations
while (errmax > tol && nit < nmax)
    xold = x;                       % keep the previous values of x
    nit = nit + 1;                  % increase number of iterations by 1
    for k = 1:nr
        s = A(k,:)*x;               % calculate the sum term
        s = s - A(k,k)*x(k);        % exclude akk and xk from the sum
        x(k) = (b(k) - s)/A(k,k);   % calculate the new value of x(k)
        err(k) = x(k) - xold(k);    % record the change in x(k)
    end
    errmax = max(abs(err));
end
end
7. Secant Mod
function [xr,nit] = secantmod(func,xo,deltax,kmax,etol)
% Modified secant method to find a root of function "func" using
% one starting point xo and a small perturbation deltax, for at
% most kmax iterations
xv1 = xo;
xv2 = xo + deltax;
nit = 0;
xr = xv1;                           % fallback if tolerance is never met
for k = 1:kmax
    nit = nit + 1;
    vf1 = func(xv1);
    vf2 = func(xv2);
    vsec = (vf2 - vf1) / deltax;    % finite-difference estimate of the slope
    if (abs(vsec) <= 1e-15), error('Zero Secant Slope'); end
    xnew = xv1 - vf1/vsec;
    vfnew = func(xnew);
    xr = xnew;
    if abs(vfnew) <= etol
        break
    end
    xv1 = xnew;
    xv2 = xnew + deltax;
end
end
8. Report on Numerical Error
Truncation Error
The word 'truncate' means 'to shorten'. Truncation error refers to
an error in a method that occurs because some number or series
of steps (finite or infinite) is truncated (shortened) to a smaller
number. Such errors are essentially algorithmic errors, and we can
predict the extent of the error that will occur in the method. For
instance, if we approximate the sine function by the first two non-
zero terms of its Taylor series, as in sin x ≈ x − x³/6 for small x,
the resulting error is a truncation error. It is present even with
infinite-precision arithmetic, because it is caused by truncating
the infinite Taylor series to form the algorithm.
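As a sanity check on the Taylor-series example above, a short sketch (Python, illustrative only; not part of the original slides) compares sin(x) with its two-term approximation:

```python
import math

# Two-term Taylor approximation of sin(x); for an alternating series,
# the truncation error is bounded by the first omitted term, x**5/120.
x = 0.1
approx = x - x**3 / 6
trunc_error = abs(math.sin(x) - approx)
bound = x**5 / 120
print(trunc_error <= bound)  # → True
```

No amount of extra arithmetic precision removes this error; only keeping more terms of the series does.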
9. Report on Numerical Error
Roundoff Error
A roundoff error, also called rounding error, is the difference
between the result produced by a given algorithm using exact
arithmetic and the result produced by the same algorithm using
finite-precision, rounded arithmetic. Rounding errors are due to
inexactness in the representation of real numbers and the
arithmetic operations done with them. This is a form of
quantization error. When using approximation equations or
algorithms, especially when using finitely many digits to
represent real numbers (which in theory have infinitely many
digits), one of the goals of numerical analysis is to estimate
computation errors. Computation errors, also called numerical
errors, include both truncation errors and roundoff errors.
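A minimal illustration of roundoff (Python, not from the slides): decimal fractions such as 0.1 have no exact binary representation, so even a single addition is inexact:

```python
# 0.1 and 0.2 are rounded when stored as binary doubles, so their sum
# differs from 0.3 by an amount on the order of machine epsilon.
a = 0.1 + 0.2
print(a)                 # 0.30000000000000004
roundoff = abs(a - 0.3)  # tiny, but nonzero: pure roundoff error
```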
11. Report on Numerical Error
Accuracy and Precision
Measurements and calculations can be characterized with regard
to their accuracy and precision. Accuracy refers to how closely a
value agrees with the true value. Precision refers to how closely
values agree with each other. The following figures illustrate the
difference between accuracy and precision. In the first figure, the
given values (black dots) are more accurate, whereas in the second
figure the given values are more precise. The term error represents
the imprecision and inaccuracy of a numerical computation.
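The distinction can also be quantified numerically. The sketch below (Python, with made-up measurement values) treats inaccuracy as the distance of the sample mean from the true value and imprecision as the spread of the samples:

```python
import statistics

# Hypothetical measurements of a quantity whose true value is 10.0.
true_value = 10.0
accurate_not_precise = [9.2, 10.9, 9.5, 10.4]    # centered on 10, wide spread
precise_not_accurate = [11.1, 11.2, 11.1, 11.2]  # tight spread, biased high

def inaccuracy(values):
    # How far the mean lies from the true value (systematic error).
    return abs(statistics.mean(values) - true_value)

def imprecision(values):
    # How much the values scatter about their own mean (random error).
    return statistics.stdev(values)
```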
13. Report on Numerical Error
Real-world example: Patriot missile failure due to
magnification of roundoff error
On 25 February 1991, during the Gulf War, an American Patriot
missile battery in Dhahran, Saudi Arabia, failed to intercept an
incoming Iraqi Scud missile. The Scud struck an American Army
barracks and killed 28 soldiers. The cause turned out to be an
inaccurate calculation of the time since boot, due to computer
arithmetic errors.
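The widely cited figures behind this failure (a 24-bit register chopping 0.1 after 23 fractional bits, and roughly 100 hours of uptime) can be reproduced in a few lines. This is an illustrative reconstruction, not from the slides:

```python
# The Patriot clock counted tenths of a second in a 24-bit fixed-point
# register; 0.1 is non-terminating in binary, so chopping it after 23
# fractional bits loses about 9.5e-8 s on every tick.
chopped = int(0.1 * 2**23) / 2**23  # 0.1 chopped to 23 fractional bits
error_per_tick = 0.1 - chopped      # ~9.5e-8 s lost per 0.1 s tick
ticks = 100 * 3600 * 10             # ticks in ~100 hours of uptime
drift = error_per_tick * ticks      # ~0.34 s of accumulated clock error
# At a Scud closing speed of roughly 1676 m/s, a 0.34 s timing error
# shifts the predicted intercept point by over 500 m.
```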
14. Optimization using Golden-Section Algorithm
Euclid’s definition of the golden ratio is based
on dividing a line into two segments so that
the ratio of the whole line to the larger
segment is equal to the ratio of the larger
segment to the smaller segment. This ratio is
called the golden ratio.
15. Optimization using Golden-Section Algorithm
The actual value of the golden ratio can be
derived by expressing Euclid's definition as
(l1 + l2)/l1 = l1/l2
Multiplying by l1/l2 and collecting terms yields
φ² − φ − 1 = 0
where φ = l1/l2. The positive root of this
equation is the golden ratio:
φ = (1 + √5)/2 = 1.61803398874989
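A quick numerical check of this root (Python, illustrative only):

```python
import math

# phi solves phi**2 - phi - 1 = 0; equivalently, 1/phi = phi - 1,
# which is what makes the golden-section search reuse points.
phi = (1 + math.sqrt(5)) / 2
residual = phi**2 - phi - 1  # ~0 up to roundoff
```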
16. Optimization using Golden-Section Algorithm
The golden-section search is similar in spirit to
the bisection approach for locating roots. Recall
that bisection hinged on defining an interval,
specified by a lower guess (xl) and an upper
guess (xu), that bracketed a single root. The
presence of a root between these bounds was
verified by determining that f(xl) and f(xu) had
different signs. The root was then estimated as
the midpoint of this interval:
xr = (xl + xu)/2
17. Optimization using Golden-Section Algorithm
The key to making this approach efficient is the
wise choice of the intermediate points. As in
bisection, the goal is to minimize function
evaluations by replacing old values with new
values. For the golden-section search, the two
intermediate points are chosen according to the
golden ratio:
x1 = xl + d
x2 = xu − d
where
d = (φ − 1)(xu − xl)
18. Optimization using Golden-Section Algorithm
The function is evaluated at these two interior
points. Two results can occur:
1. If, as in Fig. 7.6a, f(x1) < f(x2), then f(x1) is
the minimum, and the domain of x to the
left of x2, from xl to x2, can be eliminated
because it does not contain the minimum.
For this case, x2 becomes the new xl for the
next round.
2. If f(x2) < f(x1), then f(x2) is the minimum,
and the domain of x to the right of x1, from
x1 to xu, would be eliminated. For this case,
x1 becomes the new xu for the next round.
20. Optimization using Golden-Section Algorithm
MATLAB Function
function [x,fx,ea,iter] = goldmin(f,xl,xu,es,maxit)
% goldmin: minimization by golden-section search
%   uses the golden-section search to find the minimum of f
if nargin < 3, error('at least 3 input arguments required'), end
if nargin < 4 || isempty(es), es = 0.0001; end
if nargin < 5 || isempty(maxit), maxit = 50; end
phi = (1+sqrt(5))/2;
iter = 0;
while(1)
    d = (phi-1)*(xu - xl);
    x1 = xl + d;
    x2 = xu - d;
    if f(x1) < f(x2)
        xopt = x1;              % minimum lies in [x2, xu]
        xl = x2;
    else
        xopt = x2;              % minimum lies in [xl, x1]
        xu = x1;
    end
    iter = iter + 1;
    if xopt ~= 0, ea = (2 - phi)*abs((xu - xl)/xopt)*100; end
    if ea <= es || iter >= maxit, break, end
end
x = xopt; fx = f(xopt);
21. Optimization using Golden-Section Algorithm
Example
Use the following parameter values for your calculation: g =
9.81 m/s², z0 = 100 m, v0 = 55 m/s, m = 80 kg, and c = 15 kg/s.
22. Optimization using Golden-Section Algorithm
Command Window
>> g=9.81;v0=55;m=80;c=15;z0=100;
>> z=@(t) -(z0+m/c*(v0+m*g/c)*(1-exp(-c/m*t))-m*g/c*t);
>> [xmin,fmin,ea,iter]=goldmin(z,0,8)
xmin =
3.8317
fmin =
-192.8609
ea =
6.9356e-05
iter =
29
Because this is a maximization problem, we entered the negative of
the height equation; consequently, fmin corresponds to a maximum
height of 192.8609 m.
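The session above can be cross-checked with a Python port of the goldmin logic (an illustrative sketch; variable names follow the MATLAB function):

```python
import math

def goldmin(f, xl, xu, es=1e-4, maxit=50):
    """Golden-section minimization, mirroring the MATLAB goldmin above."""
    phi = (1 + math.sqrt(5)) / 2
    for it in range(1, maxit + 1):
        d = (phi - 1) * (xu - xl)
        x1, x2 = xl + d, xu - d      # interior points per the golden ratio
        if f(x1) < f(x2):
            xopt, xl = x1, x2        # minimum lies in [x2, xu]
        else:
            xopt, xu = x2, x1        # minimum lies in [xl, x1]
        ea = (2 - phi) * abs((xu - xl) / xopt) * 100 if xopt != 0 else float('inf')
        if ea <= es:
            break
    return xopt, f(xopt), ea, it

# Negated height of the projectile from the example slide.
g, v0, m, c, z0 = 9.81, 55.0, 80.0, 15.0, 100.0
z = lambda t: -(z0 + m/c*(v0 + m*g/c)*(1 - math.exp(-c/m*t)) - m*g/c*t)
xmin, fmin, ea, iters = goldmin(z, 0, 8)
# xmin ≈ 3.8317 and -fmin ≈ 192.8609, matching the Command Window output
```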