Compiled ver 1.0 – Dr. C.V. Suresh Babu
Revision ver 1.1
Revision ver 1.2
Revision ver 1.3
Revision ver 1.4
Revision ver 1.5
Edited ver 2.0 – Dr. C.V. Suresh Babu
UNIT I ITERATIVE AND RECURSIVE ALGORITHMS 9

1.1. Iterative Algorithms
1.1.1. Measures of Progress and Loop Invariants
1.2. Paradigm Shift
1.2.1. Sequence of Actions versus Sequence of Assertions
1.2.2. Steps to Develop an Iterative Algorithm
1.2.3. Different Types of Iterative Algorithms
1.2.4. Typical Errors
1.3. Recursion
1.3.1. Forward versus Backward
1.3.2. Towers of Hanoi
1.3.3. Checklist for Recursive Algorithms
1.3.4. The Stack Frame
1.4. Proving Correctness with Strong Induction
1.4.1. Examples of Recursive Algorithms
1.4.2. Sorting and Selecting Algorithms
1.5. Operations on Integers
1.6. Ackermann's Function
1.7. Recursion on Trees
1.8. Tree Traversals
1.8.1. Examples
1.9. Generalizing the Problem
1.10. Heap Sort and Priority Queues
1.11. Representing Expressions

UNIT 1
ITERATIVE AND RECURSIVE ALGORITHMS
1.1. ITERATIVE ALGORITHMS:
An iterative algorithm executes its steps in iterations, computing successive approximations that approach a solution. Iterative methods are most commonly used for linear problems in which large numbers of variables are involved.

1.1.1. MEASURES OF PROGRESS AND LOOP INVARIANTS:
An invariant of a loop is a property that holds before (and after) each repetition. It is a logical assertion,
sometimes programmed as an assertion. Knowing its invariant(s) is essential for understanding the
effect of a loop.
A loop invariant is a boolean expression that is true each time the loop guard is evaluated. Typically the
boolean expression is composed of variables used in the loop. The invariant is true every time the loop
guard is evaluated. In particular, the initial establishment of the truth of the invariant helps determine
the proper initial values of variables used in the loop guard and body. In the loop body, some statements
make the invariant false, and other statements must then re-establish the invariant so that it is true
before the loop guard is evaluated again.

Invariants can serve as both aids in recalling the details of an implementation of a particular
algorithm and in the construction of an algorithm to meet a specification.
For example, the partition phase of Quicksort is a classic example where an invariant can help in
developing the code and in reconstructing the code. An invariant is shown pictorially and as the
postcondition of the function partition below. The initial value of pivot is left. This ensures
that the invariant is true the first time the loop test is evaluated. When pivot is incremented in
the loop, the invariant is false, but the swap re-establishes the invariant.

int partition(vector<string> & a, int left, int right)
// precondition: left <= right
// postcondition: rearranges entries in vector a and
//                returns pivot such that
//                forall k, left <= k <= pivot, a[k] <= a[pivot] and
//                forall k, pivot <  k <= right, a[pivot] < a[k]
{
    int k, pivot = left;
    string center = a[left];
    for (k = left + 1; k <= right; k++)
    {
        if (a[k] <= center)
        {
            pivot++;
            swap(a[k], a[pivot]);
        }
    }
    swap(a[left], a[pivot]);
    return pivot;
}

The invariant for this code can be shown pictorially.
DESCRIPTION: The Challenge of the Sequence-of-Actions View: Suppose one is designing a new algorithm or explaining an algorithm to a friend. If one is thinking of it as a sequence of actions, then one will likely start at the beginning: do this, do that. Shortly one can get lost and not know where one is. To handle this, one simultaneously needs to keep track of how the state of the computer changes with each new action. In order to know what action to take next, one needs a global plan of where the computation is to go. To make it worse, the computation has many IFs and LOOPs, so one has to consider all the various paths that the computation may take.
1.2. PARADIGM SHIFT:
1.2.1. SEQUENCE OF ACTIONS VERSUS SEQUENCE OF ASSERTIONS
Understanding iterative algorithms requires understanding the difference between a loop
invariant, which is an assertion or picture of the computation at a particular point in time, and
the actions that are required to maintain such a loop invariant. Hence, we will start with trying to
understand this difference.
One of the first important paradigm shifts that programmers struggle to make is from viewing an
algorithm as a sequence of actions to viewing it as a sequence of snapshots of the state of the
computer. Programmers tend to fixate on the first view, because code is a sequence of
instructions for action and a computation is a sequence of actions. Though this is an important
view, there is another. Imagine stopping time at key points during the computation and taking
still pictures of the state of the computer. Then a computation can equally be viewed as a
sequence of such snapshots. Having two ways of viewing the same thing gives one both more
tools to handle it and a deeper understanding of it.
1.2.2. STEPS TO DEVELOP AN ITERATIVE ALGORITHM:
Iterative Algorithms: A good way to structure many computer programs is to store the key
information you currently know in some data structure and then have each iteration of the main
loop take a step towards your destination by making a simple change to this data.
Loop Invariant: A loop invariant expresses important relationships among the variables that
must be true at the start of every iteration and when the loop terminates. If it is true, then the
computation is still on the road. If it is false, then the algorithm has failed.
The Code Structure:
The basic structure of the code is as follows.
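The structure itself is not reproduced in these notes, so the following is a minimal sketch of the pattern on a concrete task (summing an array); the function name sum_array and the example values are illustrative, not from the original text. The loop invariant is recorded in the comments: it is established before the loop, momentarily broken and then re-established inside the body, and combined with the exit condition to give the postcondition.

#include <stdio.h>

/* Sum the first n elements of a[].
 * Loop invariant: each time the loop guard is evaluated,
 *     sum == a[0] + a[1] + ... + a[i-1]
 * (the measure of progress is i). */
int sum_array(const int a[], int n)
{
    int i = 0, sum = 0;          /* establish the invariant: the sum of 0 elements is 0 */
    while (i < n) {              /* loop guard */
        sum += a[i];             /* the invariant is momentarily broken ...             */
        i++;                     /* ... and re-established for the new value of i       */
    }
    /* invariant + exit condition (i == n) give the postcondition: sum of all n elements */
    return sum;
}

int main(void)
{
    int a[] = {3, 1, 4, 1, 5};
    printf("%d\n", sum_array(a, 5));   /* prints 14 */
    return 0;
}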
1.2.3. DIFFERENT TYPES OF ITERATIVE ALGORITHMS:
There are two main classes of iterative methods:
i. stationary iterative methods, and
ii. Krylov subspace methods.
i. Stationary iterative methods

Stationary iterative methods solve a linear system with an operator approximating the original one and, based on a measurement of the error in the result (the residual), form a "correction equation" for which this process is repeated. While these methods are simple to derive, implement, and analyze, convergence is only guaranteed for a limited class of matrices.
Examples of stationary iterative methods are:
i. the Jacobi method,
ii. the Gauss–Seidel method, and
iii. the successive over-relaxation (SOR) method.

Linear stationary iterative methods are also called relaxation methods.
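As a concrete illustration of a stationary (relaxation) method, the sketch below runs the Jacobi iteration on a small diagonally dominant 3×3 system; the matrix, right-hand side, tolerance and iteration limit are arbitrary choices for the example rather than values from these notes.

#include <math.h>
#include <stdio.h>

#define N 3

int main(void)
{
    /* A small diagonally dominant system A x = b (illustrative values). */
    double A[N][N] = {{4, -1, 0}, {-1, 4, -1}, {0, -1, 4}};
    double b[N]    = {15, 10, 10};
    double x[N]    = {0, 0, 0}, x_new[N];

    for (int iter = 0; iter < 100; iter++) {
        double change = 0.0;
        for (int i = 0; i < N; i++) {
            /* Jacobi sweep: x_new[i] = (b[i] - sum_{j != i} A[i][j]*x[j]) / A[i][i] */
            double s = b[i];
            for (int j = 0; j < N; j++)
                if (j != i) s -= A[i][j] * x[j];
            x_new[i] = s / A[i][i];
            change += fabs(x_new[i] - x[i]);
        }
        for (int i = 0; i < N; i++) x[i] = x_new[i];
        if (change < 1e-10) break;     /* successive corrections have converged */
    }
    printf("x = (%f, %f, %f)\n", x[0], x[1], x[2]);
    return 0;
}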
ii. Krylov subspace methods

Krylov subspace methods work by forming a basis of the sequence of successive matrix powers times the initial residual (the Krylov sequence). The approximations to the solution are then formed by minimizing the residual over the subspace formed.
Examples of prototypical methods in this class are:
i. the conjugate gradient method (CG),
ii. the generalized minimal residual method (GMRES), and
iii. the biconjugate gradient method (BiCG).
1.2.4. TYPICAL ERRORS:
Many errors made in analysing the problem, developing an algorithm, and/or coding the
algorithm, only become apparent when you try to compile, run, and test the resulting program.
The earlier in the development process an error is made, or the later it is discovered, the more
serious the consequences.
Sources of errors:
Understanding the problem to solve. An error here may be obvious e.g. your program does
nothing useful at all. Or it may be more subtle, and only become apparent when some
exceptional condition occurs e.g. a leap year, incompetent user
Algorithm design. Mistakes here result in logic errors. The program will run, but will not
perform the intended task e.g. a program to add numbers which returns 6 when given 3+2 has a
logic error.
Coding of the algorithm. Often the compiler will complain, but messages from the compiler can
be cryptic. These errors are usually simple to correct e.g. spelling errors, misplaced punctuation
Runtime. Errors may appear at run time e.g. divide some number by zero. These errors may be
coding errors or logic errors.
Sources of errors
Programs rarely run correctly the first time (!)
Errors are of four types:
syntax errors
run-time errors
logic errors
latent errors
i. SYNTAX ERRORS:

Any violation of the rules of the language is a syntax error. The compiler can detect and isolate these types of errors; when syntax errors are present in the program, compilation fails.
Examples:
 undeclared variable
 missing semicolon at end of statement; comment not closed
 Often one mistake leads to multiple error messages – can be confusing!

ii. RUN-TIME ERRORS:

A program having this type of error will compile and run but produce erroneous results or crash; therefore the name run-time error is given to such errors. Generally it is a difficult task to isolate run-time errors.
Example:
int x = y / 0;
will stop program execution and display an error message.
iii. LOGIC ERRORS:
As the name implies, these errors are related to the logic of the program. They cause wrong results, and they are primarily due to a poor understanding of the problem, an inaccurate translation of the algorithm into the program, or a lack of attention to operator precedence.
For example:
if (x = y)
{
printf("they are equal");
}
Here the assignment x = y was written instead of the equality test x == y, so the condition tests the assigned value rather than whether x and y are equal, which creates a logic error.
Another example:
float f; scanf("%d", &f);   /* wrong format specifier: %f is needed for a float */
Logic errors are difficult to detect for large and complex programs.
Sign of error: incorrect program output.
Cure: thorough testing and comparison with expected results.
iv. LATENT ERRORS:
These types of errors are also known as 'hidden' errors and show up only when a particular set of data is used.
For example:
ratio = (x + y) / (p - q);
This statement will generate an error (division by zero) only when p and q are equal.
This type of error can be detected only by using all possible combinations of test data.
Sources of errors & how to reduce them
 Errors made in understanding the problem may well require you to restart from scratch.
 You may be tempted to start coding, making up the solution and algorithm as you go along. This may work for trivial problems, but is a certain way to waste time and effort for realistic problems.
 A program can be tested by giving it inputs and comparing the observed outputs to the expected ones.
 Testing is very important but (in general) cannot prove that a program works correctly.
 For small programs, formal (mathematical) methods can be used to prove that a program will produce the desired outputs for all possible inputs. These methods are rarely applicable to large programs.
 A logic error is referred to as a bug, so finding logic errors is called debugging.
 Debugging is a continuous process, leading to an edit-compile-debug cycle. General idea: insert extra printf() statements showing the values of variables affected by each major step. When you're satisfied the program is correct, comment out these printf()'s.
Debugging tips – common errors in C
 In a for loop, initialisation and condition end with semicolons.
 Wrong – for(initialisation, condition; update)
 Must use braces in for and while loops to repeat more than one statement.
 Nested structure: the first closing brace is associated with the innermost structure.
Example of "unexpected" behaviour (the brace closing the if block is missing, so the else cannot pair with the if as intended):
if(c=='y') {
scanf("%d", &d);
while(d!=0) {
sum+=d;
scanf("%d", &d);
}                   /* closes the while; the if block is never closed */
else printf("end"); /* meant to run only when c is not 'y' */
• Inequality test on floating-point numbers. Example of the wrong way:
while(value!=0.0) { /* should test e.g. fabs(value) > 1e-9 instead */
...
value/=2.8; /* value becomes tiny but may never compare equal to 0.0 */
}
Debugging tips – common errors in C (contd.)
• Should ensure that loop repetition condition will eventually become false.
Example where this doesn’t happen:
do {
...
printf("one more time?");
scanf("%d", &again);
} while(again=1); /* assignment, not equality test */
• loop count off by one, either too many or too few iterations. Or infinite loop,
or loop body never executed. Can be hard to discover!
1.3. RECURSION:
A recursive algorithm is an algorithm which calls itself with "smaller (or simpler)" input values, and
which obtains the result for the current input by applying simple operations to the returned value for
the smaller (or simpler) input. More generally if a problem can be solved utilizing solutions to smaller
versions of the same problem, and the smaller versions reduce to easily solvable cases, then one can use
a recursive algorithm to solve that problem. For example, the elements of a recursively defined set, or
the value of a recursively defined function can be obtained by a recursive algorithm.
If a set or a function is defined recursively, then a recursive algorithm to compute its members or values
mirrors the definition. Initial steps of the recursive algorithm correspond to the basis clause of the
recursive definition and they identify the basis elements. They are then followed by steps corresponding
to the inductive clause, which reduce the computation for an element of one generation to that of
elements of the immediately preceding generation.
In general, recursive computer programs require more memory and computation compared with
iterative algorithms, but they are simpler and for many cases a natural way of thinking about the
problem.
Example 1: Algorithm for finding the k-th even natural number
Note here that this can be solved very easily by simply outputting 2*(k - 1) for a given k . The purpose
here, however, is to illustrate the basic idea of recursion rather than solving the problem.
Algorithm 1: Even(positive integer k)
Input: k , a positive integer
Output: k-th even natural number (the first even being 0)
Algorithm:
if k = 1, then return 0;
else return Even(k-1) + 2 .
Here the computation of Even(k) is reduced to that of Even for a smaller input value, that is Even(k-1).
Even(k) eventually becomes Even(1) which is 0 by the first line. For example, to compute Even(3),
Algorithm Even(k) is called with k = 2. In the computation of Even(2), Algorithm Even(k) is called with k
= 1. Since Even(1) = 0, 0 is returned for the computation of Even(2), and Even(2) = Even(1) + 2 = 2 is
obtained. This value 2 for Even(2) is now returned to the computation of Even(3), and Even(3) = Even(2)
+ 2 = 4 is obtained.
As can be seen by comparing this algorithm with the recursive definition of the set of nonnegative even
numbers, the first line of the algorithm corresponds to the basis clause of the definition, and the second
line corresponds to the inductive clause.
By way of comparison, let us see how the same problem can be solved by an iterative algorithm.
Algorithm 1-a: Even(positive integer k)
Input: k, a positive integer
Output: k-th even natural number (the first even being 0)
Algorithm:
int i, even;
i := 1;
even := 0;
while( i < k ) {
even := even + 2;
i := i + 1;
}
return even .
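Both pseudocode versions translate directly into C; the sketch below is one such translation (the names EvenRecursive and EvenIterative are mine).

#include <stdio.h>

/* Recursive version: Even(1) = 0, Even(k) = Even(k-1) + 2 */
int EvenRecursive(int k)
{
    if (k == 1) return 0;              /* base case */
    return EvenRecursive(k - 1) + 2;   /* inductive case */
}

/* Iterative version of the same algorithm */
int EvenIterative(int k)
{
    int even = 0;
    for (int i = 1; i < k; i++)
        even += 2;
    return even;
}

int main(void)
{
    printf("%d %d\n", EvenRecursive(3), EvenIterative(3));   /* prints 4 4 */
    return 0;
}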

Example 2: Algorithm for computing the k-th power of 2
Algorithm 2 Power_of_2(natural number k)
Input: k , a natural number
Output: k-th power of 2
Algorithm:
if k = 0, then return 1;
else return 2*Power_of_2(k - 1) .
By way of comparison, let us see how the same problem can be solved by an iterative algorithm.
Algorithm 2-a Power_of_2(natural number k)
Input: k , a natural number
Output: k-th power of 2
Algorithm:
int i, power;
i := 0;
power := 1;
while( i < k ) {
power := power * 2;
i := i + 1;
}
return power .

The next example does not have any corresponding recursive definition. It shows a recursive way of
solving a problem.
Example 3: Recursive Algorithm for Sequential Search
Algorithm 3 SeqSearch(L, i, j, x)
Input: L is an array, i and j are positive integers, i ≤ j, and x is the key to be searched for in L.
Output: If x is in L between indexes i and j, then output its index, else output 0.
Algorithm:
if i ≤ j, then
{
if L(i) = x, then return i ;
else return SeqSearch(L, i+1, j, x)
}
else return 0.
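A possible C rendering of this recursive sequential search is sketched below; as in the pseudocode, 0 is the "not found" value, so the data is assumed to start at index 1.

#include <stdio.h>

/* Recursive sequential search, following the pseudocode above:
 * searches L[i..j] for x and returns its index, or 0 if x is not present.
 * Returning 0 for "not found" assumes the data starts at index 1 (L[0] unused). */
int SeqSearch(const int L[], int i, int j, int x)
{
    if (i <= j) {
        if (L[i] == x) return i;
        return SeqSearch(L, i + 1, j, x);
    }
    return 0;
}

int main(void)
{
    int L[] = {0, 7, 3, 9, 4};                 /* L[1..4] holds the data */
    printf("%d\n", SeqSearch(L, 1, 4, 9));     /* prints 3 */
    return 0;
}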

Recursive algorithms can also be used to test objects for membership in a set.
Example 4: Algorithm for testing whether or not a number x is a natural number
Algorithm 4 Natural(a number x)
Input: A number x
Output: "Yes" if x is a natural number, else "No"
Algorithm:
if x < 0, then return "No"
else
if x = 0, then return "Yes"
else return Natural( x - 1 )
Example 5: Algorithm for testing whether or not an expression w is a proposition(propositional form)
Algorithm 5 Proposition( a string w )
Input: A string w
Output: "Yes" if w is a proposition, else "No"
Algorithm:
if w is 1(true), 0(false), or a propositional variable, then return "Yes"
else if w = ~w1, then return Proposition(w1)
else
if ( w = w1 ∧ w2, or w1 ∨ w2, or w1 → w2, or w1 ↔ w2 ) and
Proposition(w1) = Yes and Proposition(w2) = Yes
then return Yes
else return No
end
Recursive vs. Iterative Algorithms
Comparison between Iterative and Recursive Approaches from Performance Considerations
Factorial
//recursive function calculates n!
static int FactorialRecursive(int n)
{
if (n <= 1) return 1;
return n * FactorialRecursive(n - 1);
}
//iterative function calculates n!
static int FactorialIterative(int n)
{
int sum = 1;
if (n <= 1) return sum;
while (n > 1)
{
sum *= n;
n--;
}
return sum;
}

N        Recursive        Iterative
10       334 ticks        11 ticks
100      846 ticks        23 ticks
1000     3368 ticks       110 ticks
10000    9990 ticks       975 ticks
100000   stack overflow   9767 ticks

As we can clearly see, the recursive version is considerably slower than the iterative one, and it is also limiting (stack overflow for large n). The reason for the poor performance is the heavy pushing and popping of registers and stack frames at each level of the recursive calls.
Fibonacci
//--------------- iterative version ---------------------
static int FibonacciIterative(int n)
{
    if (n == 0) return 0;
    if (n == 1) return 1;
    int prevPrev = 0;
    int prev = 1;
    int result = 0;
    for (int i = 2; i <= n; i++)
    {
        result = prev + prevPrev;
        prevPrev = prev;
        prev = result;
    }
    return result;
}
//--------------- naive recursive version ---------------------
static int FibonacciRecursive(int n)
{
    if (n == 0) return 0;
    if (n == 1) return 1;
    return FibonacciRecursive(n - 1) + FibonacciRecursive(n - 2);
}
//--------------- optimized recursive version ---------------------
static Dictionary<int, int> resultHistory = new Dictionary<int, int>();
static int FibonacciRecursiveOpt(int n)
{
    if (n == 0) return 0;
    if (n == 1) return 1;
    if (resultHistory.ContainsKey(n))
        return resultHistory[n];

    int result = FibonacciRecursiveOpt(n - 1) + FibonacciRecursiveOpt(n - 2);
    resultHistory[n] = result;
    return result;
}

N        Recursive                 Recursive opt.            Iterative
5        5 ticks                   22 ticks                  9 ticks
10       36 ticks                  49 ticks                  10 ticks
20       2315 ticks                61 ticks                  10 ticks
30       180254 ticks              65 ticks                  10 ticks
100      too long/stack overflow   158 ticks                 11 ticks
1000     too long/stack overflow   1470 ticks                27 ticks
10000    too long/stack overflow   13873 ticks               190 ticks
100000   too long/stack overflow   too long/stack overflow   3952 ticks

As before, the recursive approach is worse than the iterative one. However, we can apply the memoization pattern (saving previous results in a dictionary for quick key-based access); although this pattern still does not match the iterative approach, it is definitely an improvement over the simple recursion.
1.3.1. FORWARD VERSUS BACKWARD:
In a recursive function, instructions can be executed "forward", on the way into the recursion (before the recursive call), or "backward", on the way out (after the recursive call returns); many recursive algorithms do work in both phases.
In general a recursive object looks something like:
Function xxx(parameters)
list of instructions 1
If condition Then Call xxx
list of instructions 2
Return
The first list of instructions is obeyed on the way up the spiral i.e. as the instances of the
function are being built and the second list is obeyed on the way back down the spiral as the
instances are being destroyed. You can also see the sense in which first list occurs in the order
you would expect but the second is obeyed in the reverse order. Sometimes a recursive function
has only the first or the second list of instructions and these are easier to analyse and implement.
For example
A recursive function that doesn't have a second list doesn't need the copies created on the
way down the recursion to be kept because they aren't made any use of on the way up!
This is usually called tail end recursion because the recursive call to the function is the
very last instruction in the function i.e. in the tail end of the function. Many languages can
implement tail end recursion more efficiently than general recursion because they can throw
away the state of the function before the recursive call so saving stack space.
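A small C illustration of this difference (the function names are mine): in sum_after the addition happens after the recursive call returns, i.e. "on the way back", while sum_tail carries an accumulator so that the recursive call is the very last instruction, the tail-recursive form that many compilers can turn into a loop.

#include <stdio.h>

/* Work done AFTER the recursive call returns (the "second list" of instructions). */
long sum_after(int n)
{
    if (n == 0) return 0;
    return n + sum_after(n - 1);       /* the addition happens on the way back */
}

/* Tail-recursive form: the recursive call is the very last instruction,
 * so nothing remains to be done when it returns. */
long sum_tail(int n, long acc)
{
    if (n == 0) return acc;
    return sum_tail(n - 1, acc + n);
}

int main(void)
{
    printf("%ld %ld\n", sum_after(10), sum_tail(10, 0));   /* prints 55 55 */
    return 0;
}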
1.3.2. TOWERS OF HANOI:
The Tower of Hanoi is a mathematical game or puzzle. It consists of three rods, and a number of disks of different sizes which can slide onto any rod. The puzzle starts with the disks in a neat stack in ascending order of size on one rod, the smallest at the top, thus making a conical shape.
The objective of the puzzle is to move the entire stack to another rod, obeying the
following rules:
 Only one disk may be moved at a time.
 Each move consists of taking the upper disk from one of the rods and sliding it onto
another rod, on top of the other disks that may already be present on that rod.
 No disk may be placed on top of a smaller disk.

How to solve the Towers of Hanoi puzzle

The Classical Towers of Hanoi - an initial position of all disks is on post 'A'.
Fig. 1

The solution of the puzzle is to build the tower on post 'C'.

Fig. 2

The Arbitrary Towers of Hanoi - at start, disks can be in any position provided that a bigger disk is never on top of a smaller one (see Fig. 3). At the end, the disks should be in another arbitrary position.

Fig. 3

Solving the Tower of Hanoi
'Solution' ≡ shortest path
Recursive Solution:

1. Identify biggest discrepancy (=disk N)
2. If moveable to goal peg Then move
Else
3. Subgoal: set-up (N−1)-disk tower on non-goal peg.
4. Go to 1. ...
Solving the Tower of Hanoi - 'regular' to 'perfect'

Let's start thinking how to solve it.
Let's, for the sake of clarity, assume that our goal is to set a 4 disk-high tower on peg 'C' - just
like in the classical Towers of Hanoi (see Fig. 2).
Let's assume we 'know' how to move a 'perfect' 3 disk-high tower.
Then on the way of solving there is one special setup. Disk 4 is on peg 'A' and the 3 disk-high
tower is on peg 'B' and target peg 'C' is empty.

Fig. 4

From that position we have to move disk 4 from 'A' to 'C' and move, by some magic, the 3 disk-high tower from 'B' to 'C'.
So think back. Forget the disks bigger than 3.
Disk 3 is on peg 'C'. We need disk 3 on peg 'B'. To obtain that, we need disk 3 in place where it
is now, free peg 'B' and disks 2 and 1 stacked on peg 'A'. So our goal now is to put disk 2 on peg
'A'.

Fig. 5

Forget for the moment disk 3 (see Fig. 6). To be able to put disk 2 on peg 'A' we need peg 'A' to be empty (above the thin blue line) and the disks smaller than disk 2 stacked on peg 'B'. So, our goal now is to put disk 1 on peg 'B'.
As we can see, this is an easy task because disk 1 has no disk above it and peg 'B' is free.
Fig. 6

So let's move it.

Fig. 7

The steps above are made by the algorithm implemented in Towers of Hanoi when one clicks the
"Help me" button. This button-function makes analysis of the current position and generates only
one single move which leads to the solution. It is by design.
When the 'Help me' button is clicked again, the algorithm repeats all steps of the analysis starting
from the position of the biggest disk - in this example disk 4 - and generates the next move - disk
2 from peg 'C' to peg 'A'.

Fig. 8

If one needs a recursive or iterative algorithm which generates the series of moves for solving arbitrary Towers of Hanoi, then one should use a kind of backtracking, that is, remember the previous steps of the analysis rather than repeating the analysis of the towers from scratch.
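For the classical puzzle, where all disks start on one peg, the recursive strategy described above can be written as the following C sketch (the peg labels and the function name hanoi are mine); the arbitrary-start variant additionally needs the position analysis discussed above.

#include <stdio.h>

/* Move n disks from peg 'from' to peg 'to', using peg 'via' as a spare. */
void hanoi(int n, char from, char to, char via)
{
    if (n == 0) return;                /* no disks left: nothing to do */
    hanoi(n - 1, from, via, to);       /* clear the n-1 smaller disks out of the way */
    printf("move disk %d from %c to %c\n", n, from, to);
    hanoi(n - 1, via, to, from);       /* put the n-1 smaller disks back on top */
}

int main(void)
{
    hanoi(3, 'A', 'C', 'B');           /* solves the classical 3-disk puzzle in 7 moves */
    return 0;
}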
1.3.3. Checklist for Recursive Algorithms
The following is a checklist that will help you identify the most common sources of error.
Does your recursive implementation begin by checking for simple cases?
Before you attempt to solve a problem by transforming it into a recursive
subproblem, you must first check to see if the problem is so simple that such
decomposition is unnecessary. In almost all cases, recursive functions begin with
the keyword if. If your function doesn’t, you should look carefully at your
program and make sure that you know what you’re doing.
Have you solved the simple cases correctly?
A surprising number of bugs in recursive programs arise from having incorrect
solutions to the simple cases. If the simple cases are wrong, the recursive
solutions to more complicated problems will inherit the same mistake. For
example, if you had mistakenly defined Fact(0) as 0 instead of 1, calling Fact
on any argument would end up returning 0.
Does your recursive decomposition make the problem simpler?
For recursion to work, the problems have to get simpler as you go along. More
formally, there must be some metric—a standard of measurement that assigns a
numeric difficulty rating to the problem—that gets smaller as the computation
proceeds. For mathematical functions like Fact and Fib, the value of the integer
argument serves as a metric. On each recursive call, the value of the argument
gets smaller. For the IsPalindrome function, the appropriate metric is the length
of the argument string, because the string gets shorter on each recursive call. If
the problem instances do not get simpler, the decomposition process will just
keep making more and more calls, giving rise to the recursive analogue of the
infinite loop, which is called nonterminating recursion.
Does the simplification process eventually reach the simple cases, or have
you left out some of the possibilities?
A common source of error is failing to include simple case tests for all the cases
that can arise as the result of the recursive decomposition. For example, in the
IsPalindrome implementation presented in Figure 5-3, it is critically important for
the function to check the zero-character case as well as the one-character case,
even if the client never intends to call IsPalindrome on the empty string. As the
recursive decomposition proceeds, the string arguments get shorter by two
characters at each level of the recursive call. If the original argument string is
even in length, the recursive decomposition will never get to the one-character
case.
Do the recursive calls in your function represent sub problems that are
truly identical in form to the original?
When you use recursion to break down a problem, it is essential that the
subproblems be of the same form. If the recursive calls change the nature of the
problem or violate one of the initial assumptions, the entire process can break
down. As several of the examples in this chapter illustrate, it is often useful to
define the publicly exported function as a simple wrapper that calls a more
general recursive function which is private to the implementation. Because the
private function has a more general form, it is usually easier to decompose the
original problem and still have it fit within the recursive structure.
 When you apply the recursive leap of faith, do the solutions to the
recursive subproblems provide a complete solution to the original
problem?
Breaking a problem down into recursive subinstances is only part of the recursive
process. Once you get the solutions, you must also be able to reassemble them
to generate the complete solution. The way to check whether this process in fact
generates the solution is to walk through the decomposition, religiously applying
the recursive leap of faith. Work through all the steps in the current function call,
but assume that every recursive call generates the correct answer. If following
this process yields the right solution, your program should work.
Factorial coding:
int Factorial(int n)
{
if (n==0) // base case
return 1;
else
return n * Factorial(n-1);
}

Now let us consider another example for the checklist of a recursive algorithm:
algorithm Alg(a, b, c)
pre-cond: a is a tuple, b an integer and c a binary tree.
post-cond: outputs x, y and z, which are useful objects
begin
if (⟨a,b,c⟩ is a sufficiently small instance) return ⟨0,0,0⟩
⟨asub1,bsub1,csub1⟩ = a part of ⟨a,b,c⟩
⟨xsub1,ysub1,zsub1⟩ = Alg(⟨asub1,bsub1,csub1⟩)
⟨asub2,bsub2,csub2⟩ = a different part of ⟨a,b,c⟩
⟨xsub2,ysub2,zsub2⟩ = Alg(⟨asub2,bsub2,csub2⟩)
⟨x,y,z⟩ = combine ⟨xsub1,ysub1,zsub1⟩ and ⟨xsub2,ysub2,zsub2⟩
return ⟨x,y,z⟩
end algorithm
Some of the things to be checked are:
0. The code structure: the code need not be overly complex.
1. Specifications: define what the algorithm is about.
2. Variables: the variables used in the code should be well documented and have meaningful names. In the example, the variables a, b and c are clearly explained.
2.1 Your input: the inputs should be stated clearly; in our example, Alg(a,b,c) specifies both the name Alg of the routine and the names of the inputs.
2.2 Your output: the postcondition must be stated clearly; here ⟨x,y,z⟩ is a solution.
2.3 Every path: every path through the code, including every IF and LOOP branch, must end with a return statement.
2.4 Type of output: each return statement must return a correct solution of the right type. If the postcondition leaves open the possibility that no solution exists, then some paths through the code may end with the statement return("no solution exists").
2.5 Your friend's input: construct subinstances such as ⟨asub,bsub,csub⟩, provided they meet the precondition of your problem.
2.6 Your friend's output: the friend's call returns its output, which is saved, e.g. ⟨xsub,ysub,zsub⟩ = Alg(⟨asub,bsub,csub⟩).
2.7 Rarely need new inputs or outputs: only add them if absolutely necessary; if you do, mention them clearly in the pre- and postconditions of the algorithm.
3. Tasks to complete: the goal is, given an arbitrary instance ⟨a,b,c⟩ meeting the precondition, to return a solution ⟨x,y,z⟩.
3.1 Accept your mission: know the full range of instances your algorithm might be given; e.g. it must accept all three parts of the instance (a tuple, an integer and a binary tree).
3.2 Construct subinstances: e.g. ⟨xsub,ysub,zsub⟩ = Alg(⟨a1, a2, ..., an-1⟩, b-5, leftSub(c)).
3.2.1 Valid subinstances: ⟨asub,bsub,csub⟩ should meet the precondition.
3.2.2 Smaller instances: your subinstances must be smaller than the given instance.
4. Base cases: if the input instance is sufficiently small according to your definition of size, then you must solve it yourself as a base case.
1.4. PROVING CORRECTNESS WITH STRONG INDUCTION
OBJECTIVE:
To prove that this suffices to produce an algorithm that successfully solves the problem
for every input instance.
INTRODUCTION:
When proving this, it is tempting to talk about stack frames: this stack frame calls this one, which calls that one, until the base case is hit; then the solutions bubble back up to the surface. Such proofs tend to be hard to follow. Instead, we use strong induction to prove correctness formally.

Strong Induction:
Strong induction is similar to ordinary induction, except that instead of assuming only S(n − 1) to prove S(n), you may assume all of S(0), S(1), S(2), . . . , S(n − 1).

A Statement for Each n:
For each value of n ≥ 0, let S(n) represent a Boolean statement. For some values of n this
statement may be true, and for others it may be false.

Goal:

Our goal is to prove that it is true for every value of n, namely that ∀n ≥ 0, S(n).
EXAMPLE:
Fibonacci(n):
if n = 0 then
// base case
return 0
elseif n = 1 then // base case
return 1
else
return Fibonacci(n - 1) + Fibonacci(n - 2)
endif

Proof by Strong Induction
Base Case: for inputs 0 and 1, the algorithm returns 0 and 1 respectively, which are F0 and F1, so these cases are correct.
Induction Hypothesis: Fibonacci(j) correctly computes Fj for all values j ≤ k, where j, k ∈ N.
Inductive Step:
1. Assume Fibonacci(j) is correct for all values of j up to k.
2. From the IH, we know Fibonacci(k) correctly computes Fk and Fibonacci(k-1) correctly computes Fk−1.
3. So,
Fibonacci(k+1) = Fibonacci(k) + Fibonacci(k-1)   (by definition of the algorithm)
               = Fk + Fk−1
               = Fk+1                            (by definition of the Fibonacci numbers)
4. Thus, by strong induction, Fibonacci(n) returns the correct result for all values of n.

1.4.1. Examples of Recursive Algorithms
1.4.2. Sorting and Selecting Algorithms
Sorting and searching are among the most common programming processes.
We want to keep information in a sensible order.
alphabetical order
ascending/descending order
order according to names, ids, years, departments etc.
•The aim of sorting algorithms is to put unordered information in an ordered form.
•There are many sorting algorithms, such as:
Selection Sort
Bubble Sort
Insertion Sort
Merge Sort
Quick Sort
•The first three are the foundations for faster and more efficient algorithms.
Selection Sort
The list is divided into two sublists, sorted and unsorted, which are divided by an
imaginary wall.
We find the smallest element from the unsorted sublist and swap it with the element at
the beginning of the unsorted data.
After each selection and swapping, the imaginary wall between the two sublists moves one element ahead, increasing the number of sorted elements and decreasing the number of unsorted ones.
Each time we move one element from the unsorted sublist to the sorted sublist, we say
that we have completed a sort pass.
A list of n elements requires n-1 passes to completely rearrange the data.
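A direct C sketch of the selection sort just described (ascending order assumed):

/* Selection sort: after pass i, a[0..i] holds the i+1 smallest elements in order. */
void selection_sort(int a[], int n)
{
    for (int i = 0; i < n - 1; i++) {             /* n-1 passes */
        int min = i;
        for (int j = i + 1; j < n; j++)           /* find the smallest in the unsorted part */
            if (a[j] < a[min]) min = j;
        int tmp = a[i]; a[i] = a[min]; a[min] = tmp;   /* swap: the wall moves one element ahead */
    }
}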
Bubble Sort
The list is divided into two sublists: sorted and unsorted.
The smallest element is bubbled from the unsorted list and moved to the sorted sublist.
After that, the wall moves one element ahead, increasing the number of sorted elements
and decreasing the number of unsorted ones.
Each time an element moves from the unsorted part to the sorted part one sort pass is
completed.
Given a list of n elements, bubble sort requires up to n-1 passes to sort the data.
Bubble sort was originally written to “bubble up” the highest element in the list. From an
efficiency point of view it makes no difference whether the high element is bubbled or
the low element is bubbled.
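Similarly, a C sketch of the bubble sort variant described above, in which the smallest element is bubbled toward the front of the unsorted part:

/* Bubble sort (smallest-to-front variant): after pass i, a[0..i] is sorted. */
void bubble_sort(int a[], int n)
{
    for (int i = 0; i < n - 1; i++)               /* up to n-1 passes */
        for (int j = n - 1; j > i; j--)           /* bubble the smallest toward the front */
            if (a[j] < a[j - 1]) {
                int tmp = a[j]; a[j] = a[j - 1]; a[j - 1] = tmp;
            }
}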
Insertion Sort
Most common sorting technique used by card players.
Again, the list is divided into two parts: sorted and unsorted.
In each pass, the first element of the unsorted part is picked up, transferred to the sorted
sublist, and inserted at the appropriate place.
A list of n elements will take at most n-1 passes to sort the data.
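And a C sketch of insertion sort, where each pass inserts the first unsorted element into its proper place in the sorted part:

/* Insertion sort: a[0..i-1] is already sorted; insert a[i] at its proper place. */
void insertion_sort(int a[], int n)
{
    for (int i = 1; i < n; i++) {
        int key = a[i], j = i - 1;
        while (j >= 0 && a[j] > key) {            /* shift larger elements one slot right */
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = key;                           /* insert the picked-up element */
    }
}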
1.6. ACKERMANN'S FUNCTION
Definition:
In computability theory, the Ackermann function, named after Wilhelm
Ackermann, is one of the simplest and earliest-discovered examples of
a total computable function that is not primitive recursive.
It is a function of two parameters whose value grows very fast.

Function:

Algorithm:
algorithm A(k, n)
if( k = 0) then
return( n +1 + 1 )
else
if( n = 0) then
if( k = 1) then
return( 0 )
else
return( 1 )
else
return( A(k − 1, A(k, n − 1)))
end if
end if
end algorithm
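A direct C transcription of the algorithm above (the same two-parameter variant A(k, n) used in these notes); as the 'Crashing' paragraph below explains, even tiny arguments overflow the integers or exhaust the stack, so the printed values are limited to very small k and n.

#include <stdio.h>

/* The A(k, n) defined in the pseudocode above. */
long A(int k, long n)
{
    if (k == 0) return n + 2;               /* T0(n) = n + 2 */
    if (n == 0) return (k == 1) ? 0 : 1;    /* T1(0) = 0, Tk(0) = 1 for k >= 2 */
    return A(k - 1, A(k, n - 1));
}

int main(void)
{
    printf("%ld %ld %ld\n", A(1, 5), A(2, 5), A(3, 4));   /* prints 10 32 65536 */
    return 0;
}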
Example:
Recurrence Relation:
Let Tk (n) denote the value returned by A(k, n). This gives
T0(n) = 2 +n, T1(0) = 0, Tk (0) = 1 for k ≥ 2, and Tk (n) = Tk−1(Tk (n − 1)) for k
> 0 and n > 0.

STEPS AND EXPLANATION

1. T0(n) = 2 + n.
(Substitute k = 0: by the definition of the function, A(0, n) returns n + 2.)

2. T1(n) = T0(T1(n-1))
         = 2 + T1(n-1)
         = 2 + (2 + T1(n-2))
         = 2 + 2 + (2 + T1(n-3))
         = ...
         = 2n + T1(n-n) = 2n + T1(0)
   T1(n) = 2n
(Substitute k = 1 and apply Tk(n) = Tk-1(Tk(n-1)). Since T0(m) = 2 + m, each unfolding adds 2; after n unfoldings we reach T1(0) = 0, so T1(n) = 2n.)

3. T2(n) = T1(T2(n-1))
         = 2·T2(n-1)
         = 2·(2·T2(n-2))
         = 2·(2·(2·T2(n-3)))
         = ...
         = 2^n · T2(n-n) = 2^n · T2(0)
   T2(n) = 2^n
(Substitute k = 2 and use T1(m) = 2m. Each unfolding multiplies by 2; T2(0) = 1, so T2(n) = 2^n.)

4. T3(n) = T2(T3(n-1))
         = 2^(T3(n-1))
         = 2^(2^(T3(n-2)))
         = ...
   T3(n) = 2^2^...^2, a tower of n 2's (T3(0) = 1, T3(1) = 2, T3(2) = 4, T3(3) = 16, T3(4) = 65,536, ...).
(Substitute k = 3 and use T2(m) = 2^m. Each unfolding adds one more level to the tower of exponents.)

5. T4(n) = T3(T4(n-1)), with T4(0) = 1 (since Tk(0) = 1 for k ≥ 2).
Substituting n = 1, 2, 3, ... in turn:
   T4(1) = T3(T4(0)) = T3(1) = 2
   T4(2) = T3(T4(1)) = T3(2) = 2^2 = 4
   T4(3) = T3(T4(2)) = T3(4) = 2^2^2^2 = 2^16 = 65,536
   T4(4) = T3(T4(3)) = T3(65,536) = a tower of 65,536 2's
   T4(5) = T3(T4(4)), which is larger still beyond all description.
(We have substituted the values of k up to 3; now we substitute the n values one at a time, each time using the formula for T3 obtained in step 4.)
Ackermann’s function is defined to be A(n) = Tn(n). We see that A(4) is bigger
than any number in the natural world. A(5) is unimaginable.

Running Time:
The only way that the program builds up a big number is by
continually incrementing it by one. Hence, the number of times one is added is at
least as huge as the value Tk (n) returned.
Crashing:
Programs can stop at run time because of:
(1) overflow in an integer value;
(2) running out of memory;
(3) running out of time.
Which is likely to happen first? If the machine's integers are 32 bits, then they hold a value that is about 10^10. Incrementing up to this value will take a long time. However, much worse than this, every two increments need another recursive call, creating a stack of about this many recursive stack frames. The machine is bound to run out of memory first.
Use as a benchmark:
The Ackermann function, due to its definition in terms of extremely deep recursion, can be used as a benchmark of a compiler's ability to optimize recursion. The first use of Ackermann's function in this way was by Yngve Sundblad in The Ackermann function: a theoretical, computational and formula manipulative study.
1.7. Recursion on Trees
A recursion tree is useful for visualizing what happens when a recurrence is iterated. It diagrams
the tree of recursive calls and the amount of work done at each call.
For instance, consider the recurrence
T(n) = 2T(n/2) + n².
The recursion tree for this recurrence has the following form:
In this case, it is straightforward to sum across each row of the tree to obtain the total work done
at a given level:

This is a geometric series; thus in the limit the sum is O(n²). The depth of the tree in this case does not really matter; the amount of work at each level is decreasing so quickly that the total is only a constant factor more than the root.
Recursion trees can be useful for gaining intuition about the closed form of a recurrence, but they
are not a proof (and in fact it is easy to get the wrong answer with a recursion tree, as is the case
with any method that includes ''...'' kinds of reasoning). As we saw last time, a good way of
establishing a closed form for a recurrence is to make an educated guess and then prove by
induction that your guess is indeed a solution. Recurrence trees can be a good method of
guessing.
Let's consider another example,
T(n) = T(n/3) + T(2n/3) + n.
Expanding out the first few levels, the recurrence tree is:
Note that the tree here is not balanced: the longest path is the rightmost one, and its length is log_{3/2} n. Hence our guess for the closed form of this recurrence is O(n log n).

1.8. Tree Traversals
Tree traversal (also known as tree search) refers to the process of visiting (examining and/or updating)
each node in a tree data structure, exactly once, in a systematic way. Such traversals are classified by
the order in which the nodes are visited. The following algorithms are described for a binary tree, but
they may be generalized to other trees as well.

There are three types of depth-first traversal: pre-order, in-order, and post-order. For a binary tree, they are defined recursively as operations at each node, starting with the root node, as follows:
Pre-order
1. Visit the root.
2. Traverse the left subtree.
3. Traverse the right subtree.
In-order (symmetric)
1. Traverse the left subtree.
2. Visit the root.
3. Traverse the right subtree.
Post-order
1. Traverse the left subtree.
2. Traverse the right subtree.
3. Visit the root.
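These three traversals translate directly into short recursive C functions on a simple node structure; the struct layout and the printf "visit" are illustrative choices, not part of the original notes.

#include <stdio.h>

struct node {
    int value;
    struct node *left, *right;
};

void preorder(const struct node *t)    /* visit root, then left, then right */
{
    if (t == NULL) return;
    printf("%d ", t->value);
    preorder(t->left);
    preorder(t->right);
}

void inorder(const struct node *t)     /* left, root, right */
{
    if (t == NULL) return;
    inorder(t->left);
    printf("%d ", t->value);
    inorder(t->right);
}

void postorder(const struct node *t)   /* left, right, root */
{
    if (t == NULL) return;
    postorder(t->left);
    postorder(t->right);
    printf("%d ", t->value);
}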

The trace of a traversal is called a sequentialisation of the tree. No one sequentialisation
according to pre-, in- or post-order describes the underlying tree uniquely. Given a tree with
distinct elements, either pre-order or post-order paired with in-order is sufficient to describe the
tree uniquely. However, pre-order with post-order leaves some ambiguity in the tree structure.
Generic tree

To traverse any tree in depth-first order, perform the following operations recursively at each
node:
1. Perform pre-order operation
2. For each i (with i = 1 to n − 1) do:
1. Visit the i-th child, if present
2. Perform in-order operation
3. Visit n-th (last) child, if present
4. Perform post-order operation

where n is the number of child nodes. Depending on the problem at hand, the pre-order, in-order
or post-order operations may be void, or you may only want to visit a specific child node, so
these operations are optional. Also, in practice more than one of pre-order, in-order and postorder operations may be required. For example, when inserting into a ternary tree, a pre-order
operation is performed by comparing items. A post-order operation may be needed afterwards to
re-balance the tree.
1.8.1. Example
Binary tree traversal: Preorder, Inorder, and Postorder
In order to illustrate few of the binary tree traversals, let us consider the below binary tree:

Preorder traversal: To traverse a binary tree in preorder, the following operations are carried out: (i) visit the root, (ii) traverse the left subtree, and (iii) traverse the right subtree.
Therefore, the preorder traversal of the above tree outputs:
7, 1, 0, 3, 2, 5, 4, 6, 9, 8, 10
Inorder traversal: To traverse a binary tree in inorder, the following operations are carried out: (i) traverse the left subtree, (ii) visit the root, and (iii) traverse the right subtree.
Therefore, the inorder traversal of the above tree outputs:
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
Postorder traversal: To traverse a binary tree in postorder, the following operations are carried out: (i) traverse the left subtree, (ii) traverse the right subtree, and (iii) visit the root.
Therefore, the postorder traversal of the above tree outputs:
0, 2, 4, 6, 5, 3, 1, 8, 10, 9, 7

1.10. Heap Sort and Priority Queues

Heaps: A data structure and associated algorithms, NOT GARBAGE COLLECTION
A heap data structure is an array of objects that can be viewed as a complete binary tree such that:
1. Each tree node corresponds to elements of the array
2. The tree is complete except possibly the lowest level, filled from left to right
The heap property is defined as an ordering relation R between each node and its descendants. For
example, R could be smaller than or bigger than. In the examples that follow, we will use the bigger
than relation.
Example: Given array [22 13 10 8 7 6 2 4 3 5]
Note that the elements are not sorted, only max element at root of tree.
The height of a node in the tree is the number of edges on the longest simple downward path from
the node to a leaf; e.g. height of node 6 is 0, height of node 4 is 1, height of node 1 is 3.
The height of the tree is the height from the root. As in any complete binary tree of size n, this is lg n.
There are 2^h nodes at level h and 2^(h+1) − 1 total nodes in a complete binary tree.
We can represent a heap as an array A that has two attributes:
1. Length(A) – Size of the array
2. HeapSize(A) - Size of the heap
The property Length(A) ≥ HeapSize(A) must be maintained.
The heap property is stated as A[Parent(i)] ≥ A[i].
The root of the tree is A[1]. Formulas to compute parents and children in the array:
Parent(i) = A[⌊i/2⌋]
Left Child(i) = A[2i]
Right Child(i) = A[2i+1]
Where might we want to use heaps? Consider the Priority Queue problem: Given a sequence of
objects with varying degrees of priority, and we want to deal with the highest-priority item first.
Managing air traffic control - want to do most important tasks first.
Jobs placed in queue with priority, controllers take off queue from top
Scheduling jobs on a processor - critical applications need high priority
Event-driven simulator with time of occurrence as key. Use min-heap, which keeps smallest
element on top, get next occurring event.
To support these operations we need to extract the maximum element from the heap:
HEAP-EXTRACT-MAX(A)
remove A[1]
A[1] = A[n]    ; n is HeapSize(A), the length of the heap, not the array
n = n - 1      ; decrease size of heap
Heapify(A,1,n) ; remake heap to conform to heap properties
Runtime: Θ(1) + Heapify time
Note: successive removals will result in items in reverse sorted order!
We will look at:
Heapify : Maintain the heap property
Build Heap : How to initially build a heap
Heapsort : Sorting using a heap
Heapify: Maintain the heap property by "floating" a value down the heap, starting at i, until it is in the right position.
Heapify(A,i,n) ; Array A, heapify node i, heap size is n
; Note that the left and right subtrees of i are also heaps.
; Make i's subtree be a heap.
If (2i ≤ n) and A[2i] > A[i] ; see which is largest of the current node and its children
    largest = 2i
else
    largest = i
If (2i+1 ≤ n) and A[2i+1] > A[largest]
    largest = 2i+1
If largest ≠ i
    swap(A[i], A[largest])
    Heapify(A,largest,n)
Example: Heapify(A,1,10). A=[1 13 10 8 7 6 2 4 3 5] Find largest of children and swap. All subtrees
are valid heaps so we know the children are the maximums.
Find largest of children and swap. All subtrees are valid heaps so we know the children are the
maximums.
Next is Heapify(A,2,10). A=[13 1 10 8 7 6 2 4 3 5]

Next is Heapify(A,4,10). A=[13 8 10 1 7 6 2 4 3 5]
Next is Heapify(A,8,10). A=[13 8 10 4 7 6 2 1 3 5]

On this iteration we have reached a leaf and are finished. (Consider if started at node 3, n=7)
Runtime: Intuitively this is Θ(lg n), since we make one trip down a path of the tree and we have an almost complete binary tree.
Building the Heap:
Given an array A, we want to build this array into a heap. Note: Leaves are already a heap! Start
from the leaves and build up from there.
Build-Heap(A,n)
for i = n downto 1 ; could start at n/2
Heapify(A,i,n)
Start with the leaves (last ½ of A) and consider each leaf as a 1 element heap. Call Heapify on the
parents of the leaves, and continue recursively to call Heapify, moving up the tree to the root.
Example: Build-Heap(A,10). A=[1 5 9 4 7 10 2 6 3 14]

Heapify(A,10,10) exits since this is a leaf.
Heapify(A,9,10) exits since this is a leaf.
Heapify(A,8,10) exits since this is a leaf.
Heapify(A,7,10) exits since this is a leaf.
Heapify(A,6,10) exits since this is a leaf.
Heapify(A,5,10) puts the largest of A[5] and its children, A[10] into A[5]:
Heapify(A,4,10):

Heapify(A,3,10):
Heapify(A,2,10): First iteration:

this calls Heapify(A,5,10):
Heapify(A,1,10):

Calls Heapify(A,2,10):
Calls Heapify(A,5,10):

Finished heap: A=[14 7 10 6 5 9 2 4 3 1]
Running Time: We have a loop of n times, and each time we call Heapify, which runs in Θ(lg n). This implies a bound of O(n lg n).

HeapSort: Once we can build a heap and heapify a heap, sorting is easy. Idea is to:
HeapSort(A,n)
Build-Heap(A,n)
for i = n downto 2
Swap(A[1], A[i])
Heapify(A,1,i-1)
Runtime is O(n lg n), since we call Heapify n-1 times, each time on a heap of size at most n.
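Putting Heapify, Build-Heap and HeapSort together, a C sketch is given below. It uses 0-based array indexing, so the children of node i are 2i+1 and 2i+2 rather than the 1-based 2i and 2i+1 used in the pseudocode above; the helper names are mine, and this is an illustrative implementation rather than the notes' own code.

static void swap_int(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Float a[i] down until the subtree rooted at i satisfies the max-heap property.
 * 0-based indexing: children of i are 2i+1 and 2i+2; n is the heap size. */
static void heapify(int a[], int i, int n)
{
    int largest = i, l = 2 * i + 1, r = 2 * i + 2;
    if (l < n && a[l] > a[largest]) largest = l;
    if (r < n && a[r] > a[largest]) largest = r;
    if (largest != i) {
        swap_int(&a[i], &a[largest]);
        heapify(a, largest, n);
    }
}

void heap_sort(int a[], int n)
{
    for (int i = n / 2 - 1; i >= 0; i--)    /* Build-Heap: the leaves are already heaps */
        heapify(a, i, n);
    for (int i = n - 1; i > 0; i--) {       /* repeatedly move the maximum to the end */
        swap_int(&a[0], &a[i]);
        heapify(a, 0, i);                   /* restore the heap on the remaining i elements */
    }
}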
Variation on heaps:

Heap could have min on top instead of max,
Heap could be k-ary tree instead of binary

Priority Queues: A priority queue is a data structure for maintaining a set of S elements each
with an associated key value.
Operations:
Insert(S,x) puts element x into set S
Max(S,x) returns the largest element in set S
Extract-Max(S) removes the largest element in set S
Uses: Job scheduling, event driven simulation, etc.
We can model priority queues nicely with a heap. This is nice because heaps allow us to do fast
queue maintenance.
Max(S,x) : Just return root element. Takes O(1) time.
Heap-Insert(A,key)
n=n+1
i= n
while (i > 1) and A[⎣i /2⎦] < key
A[i] =A[⎣i / 2⎦]
i = ⎣i/2⎦
A[i] = key
Idea: same as heapify. Start from a new node, and propagate its value up to right level. Example:
Insert new element “11” starting at new node on bottom, i=8

Bubble up:
i=4, bubble up again

At this point, the parent of 2 is larger so the algorithm stops.
Runtime = O(lg n), since we only move up the tree levels once.
Heap-Extract-Max(A,n)
max = A[1]
A[1] = A[n]
n= n-1
Heapify(A,1,n)
return max
Idea: Make the nth element the root, then call Heapify to fix. This uses a constant amount of time plus the time to call Heapify, which is O(lg n). Total time is then O(lg n).
Example: Extract(A,7):

Remove 14, so max=14. Stick 7 into 1:

Heapify (A,1,6):
We have a new heap that is valid, with the max of 14 being returned. The 2 is sitting in the array
twice, but since n is updated to equal 6, it will be overwritten if a new element is added, and
otherwise ignored.

1.11. Representing Expressions

The expression tree is built by reading the (postfix) expression one symbol at a time. If the symbol is an operand, a one-node tree is created and a pointer to it is pushed onto a stack. If the symbol is an operator, pointers to two trees T1 and T2 are popped from the stack and a new tree is formed whose root is the operator and whose left and right children point to T2 and T1 respectively. A pointer to this new tree is then pushed onto the stack.
Example

The input is: a b + c d e + * *
Since the first two symbols are operands, one-node trees are created and pointers to them are pushed onto a stack. For convenience the stack will grow from left to right.
Stack growing from Left to Right

The next symbol is a '+'. The two pointers to the trees are popped, a new tree is formed, and a pointer to it is pushed onto the stack.

Formation of a New Tree

Next, c, d, and e are read. A one-node tree is created for each and a pointer to the corresponding
tree is pushed onto the stack.

Creating One-Node Tree

Continuing, a '+' is read, and it merges the last two trees.
Merging Two Trees

Now, a '*' is read. The last two tree pointers are popped and a new tree is formed with a '*' as the
root.

Forming a New Tree with a Root

Finally, the last symbol is read. The two trees are merged and a pointer to the final tree remains
on the stack.
Steps to Construct an Expression tree a b + c d e + * *
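A C sketch of the stack-based construction just described, for single-character operands and operators (the node type, the fixed-size pointer stack and the omission of error handling are simplifications of mine):

#include <stdlib.h>
#include <ctype.h>

struct enode { char sym; struct enode *left, *right; };

/* Build an expression tree from a postfix string such as "ab+cde+**".
 * Single-character operands/operators; a fixed-size stack of at most 64 subtrees. */
struct enode *build_expression_tree(const char *postfix)
{
    struct enode *stack[64];
    int top = 0;
    for (const char *p = postfix; *p; p++) {
        if (isspace((unsigned char)*p)) continue;
        struct enode *t = malloc(sizeof *t);
        t->sym = *p;
        if (isalnum((unsigned char)*p)) {      /* operand: one-node tree */
            t->left = t->right = NULL;
        } else {                               /* operator: pop two trees */
            t->right = stack[--top];           /* first pop becomes the right child */
            t->left  = stack[--top];           /* second pop becomes the left child */
        }
        stack[top++] = t;                      /* push a pointer to the new tree */
    }
    return stack[--top];                       /* the final tree remains on the stack */
}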

Algebraic expressions

Binary algebraic expression tree equivalent to ((5 + z) / -8) * (4 ^ 2)

Algebraic expression trees represent expressions that contain numbers, variables, and unary and
binary operators. Some of the common operators are × (multiplication), ÷ (division), +
(addition), − (subtraction), ^ (exponentiation), and - (negation). The operators are contained in
the internal nodes of the tree, with the numbers and variables in the leaf nodes. The nodes of
binary operators have two child nodes, and the unary operators have one child node.

Boolean expressions
Fig.: Binary boolean expression tree combining the constants true and false with boolean operators.

Boolean expressions are represented very similarly to algebraic expressions, the only difference being the specific values and operators used. Boolean expressions use true and false as constant values, and the operators include ∧ (AND), ∨ (OR) and ¬ (NOT).

Numerical analysis using Scilab: Error analysis and propagationNumerical analysis using Scilab: Error analysis and propagation
Numerical analysis using Scilab: Error analysis and propagationScilab
 
Application Fault Tolerance (AFT)
Application Fault Tolerance (AFT)Application Fault Tolerance (AFT)
Application Fault Tolerance (AFT)Daniel S. Katz
 
Concepts of predictive control
Concepts of predictive controlConcepts of predictive control
Concepts of predictive controlJARossiter
 
GUI Programming in JAVA (Using Netbeans) - A Review
GUI Programming in JAVA (Using Netbeans) -  A ReviewGUI Programming in JAVA (Using Netbeans) -  A Review
GUI Programming in JAVA (Using Netbeans) - A ReviewFernando Torres
 
Error Isolation and Management in Agile Multi-Tenant Cloud Based Applications
Error Isolation and Management in Agile Multi-Tenant Cloud Based Applications Error Isolation and Management in Agile Multi-Tenant Cloud Based Applications
Error Isolation and Management in Agile Multi-Tenant Cloud Based Applications neirew J
 
Error isolation and management in agile
Error isolation and management in agileError isolation and management in agile
Error isolation and management in agileijccsa
 

Semelhante a Adsa u1 ver 1.0 (20)

Lect 3-4 Zaheer Abbas
Lect 3-4 Zaheer AbbasLect 3-4 Zaheer Abbas
Lect 3-4 Zaheer Abbas
 
Parallel programming model
Parallel programming modelParallel programming model
Parallel programming model
 
A General Framework for Electronic Circuit Verification
A General Framework for Electronic Circuit VerificationA General Framework for Electronic Circuit Verification
A General Framework for Electronic Circuit Verification
 
Android Application Development - Level 3
Android Application Development - Level 3Android Application Development - Level 3
Android Application Development - Level 3
 
A study of the Behavior of Floating-Point Errors
A study of the Behavior of Floating-Point ErrorsA study of the Behavior of Floating-Point Errors
A study of the Behavior of Floating-Point Errors
 
Chap4
Chap4Chap4
Chap4
 
S D D Program Development Tools
S D D  Program  Development  ToolsS D D  Program  Development  Tools
S D D Program Development Tools
 
ES-CH5.ppt
ES-CH5.pptES-CH5.ppt
ES-CH5.ppt
 
Unit 3 principles of programming language
Unit 3 principles of programming languageUnit 3 principles of programming language
Unit 3 principles of programming language
 
Fundamental of Information Technology - UNIT 6
Fundamental of Information Technology - UNIT 6Fundamental of Information Technology - UNIT 6
Fundamental of Information Technology - UNIT 6
 
Model Driven Testing: requirements, models & test
Model Driven Testing: requirements, models & test Model Driven Testing: requirements, models & test
Model Driven Testing: requirements, models & test
 
Monitor(karthika)
Monitor(karthika)Monitor(karthika)
Monitor(karthika)
 
classVII_Coding_Teacher_Presentation.pptx
classVII_Coding_Teacher_Presentation.pptxclassVII_Coding_Teacher_Presentation.pptx
classVII_Coding_Teacher_Presentation.pptx
 
Numerical analysis using Scilab: Error analysis and propagation
Numerical analysis using Scilab: Error analysis and propagationNumerical analysis using Scilab: Error analysis and propagation
Numerical analysis using Scilab: Error analysis and propagation
 
Application Fault Tolerance (AFT)
Application Fault Tolerance (AFT)Application Fault Tolerance (AFT)
Application Fault Tolerance (AFT)
 
Concepts of predictive control
Concepts of predictive controlConcepts of predictive control
Concepts of predictive control
 
GE8151 notes pdf.pdf
GE8151 notes pdf.pdfGE8151 notes pdf.pdf
GE8151 notes pdf.pdf
 
GUI Programming in JAVA (Using Netbeans) - A Review
GUI Programming in JAVA (Using Netbeans) -  A ReviewGUI Programming in JAVA (Using Netbeans) -  A Review
GUI Programming in JAVA (Using Netbeans) - A Review
 
Error Isolation and Management in Agile Multi-Tenant Cloud Based Applications
Error Isolation and Management in Agile Multi-Tenant Cloud Based Applications Error Isolation and Management in Agile Multi-Tenant Cloud Based Applications
Error Isolation and Management in Agile Multi-Tenant Cloud Based Applications
 
Error isolation and management in agile
Error isolation and management in agileError isolation and management in agile
Error isolation and management in agile
 

Mais de Dr. C.V. Suresh Babu (20)

Data analytics with R
Data analytics with RData analytics with R
Data analytics with R
 
Association rules
Association rulesAssociation rules
Association rules
 
Clustering
ClusteringClustering
Clustering
 
Classification
ClassificationClassification
Classification
 
Blue property assumptions.
Blue property assumptions.Blue property assumptions.
Blue property assumptions.
 
Introduction to regression
Introduction to regressionIntroduction to regression
Introduction to regression
 
DART
DARTDART
DART
 
Mycin
MycinMycin
Mycin
 
Expert systems
Expert systemsExpert systems
Expert systems
 
Dempster shafer theory
Dempster shafer theoryDempster shafer theory
Dempster shafer theory
 
Bayes network
Bayes networkBayes network
Bayes network
 
Bayes' theorem
Bayes' theoremBayes' theorem
Bayes' theorem
 
Knowledge based agents
Knowledge based agentsKnowledge based agents
Knowledge based agents
 
Rule based system
Rule based systemRule based system
Rule based system
 
Formal Logic in AI
Formal Logic in AIFormal Logic in AI
Formal Logic in AI
 
Production based system
Production based systemProduction based system
Production based system
 
Game playing in AI
Game playing in AIGame playing in AI
Game playing in AI
 
Diagnosis test of diabetics and hypertension by AI
Diagnosis test of diabetics and hypertension by AIDiagnosis test of diabetics and hypertension by AI
Diagnosis test of diabetics and hypertension by AI
 
A study on “impact of artificial intelligence in covid19 diagnosis”
A study on “impact of artificial intelligence in covid19 diagnosis”A study on “impact of artificial intelligence in covid19 diagnosis”
A study on “impact of artificial intelligence in covid19 diagnosis”
 
A study on “impact of artificial intelligence in covid19 diagnosis”
A study on “impact of artificial intelligence in covid19 diagnosis”A study on “impact of artificial intelligence in covid19 diagnosis”
A study on “impact of artificial intelligence in covid19 diagnosis”
 

Último

Advanced Views - Calendar View in Odoo 17
Advanced Views - Calendar View in Odoo 17Advanced Views - Calendar View in Odoo 17
Advanced Views - Calendar View in Odoo 17Celine George
 
ICT Role in 21st Century Education & its Challenges.pptx
ICT Role in 21st Century Education & its Challenges.pptxICT Role in 21st Century Education & its Challenges.pptx
ICT Role in 21st Century Education & its Challenges.pptxAreebaZafar22
 
Measures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeMeasures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeThiyagu K
 
Micro-Scholarship, What it is, How can it help me.pdf
Micro-Scholarship, What it is, How can it help me.pdfMicro-Scholarship, What it is, How can it help me.pdf
Micro-Scholarship, What it is, How can it help me.pdfPoh-Sun Goh
 
On National Teacher Day, meet the 2024-25 Kenan Fellows
On National Teacher Day, meet the 2024-25 Kenan FellowsOn National Teacher Day, meet the 2024-25 Kenan Fellows
On National Teacher Day, meet the 2024-25 Kenan FellowsMebane Rash
 
Activity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfActivity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfciinovamais
 
microwave assisted reaction. General introduction
microwave assisted reaction. General introductionmicrowave assisted reaction. General introduction
microwave assisted reaction. General introductionMaksud Ahmed
 
Food Chain and Food Web (Ecosystem) EVS, B. Pharmacy 1st Year, Sem-II
Food Chain and Food Web (Ecosystem) EVS, B. Pharmacy 1st Year, Sem-IIFood Chain and Food Web (Ecosystem) EVS, B. Pharmacy 1st Year, Sem-II
Food Chain and Food Web (Ecosystem) EVS, B. Pharmacy 1st Year, Sem-IIShubhangi Sonawane
 
PROCESS RECORDING FORMAT.docx
PROCESS      RECORDING        FORMAT.docxPROCESS      RECORDING        FORMAT.docx
PROCESS RECORDING FORMAT.docxPoojaSen20
 
Key note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdfKey note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdfAdmir Softic
 
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...christianmathematics
 
Holdier Curriculum Vitae (April 2024).pdf
Holdier Curriculum Vitae (April 2024).pdfHoldier Curriculum Vitae (April 2024).pdf
Holdier Curriculum Vitae (April 2024).pdfagholdier
 
Seal of Good Local Governance (SGLG) 2024Final.pptx
Seal of Good Local Governance (SGLG) 2024Final.pptxSeal of Good Local Governance (SGLG) 2024Final.pptx
Seal of Good Local Governance (SGLG) 2024Final.pptxnegromaestrong
 
TỔNG ÔN TẬP THI VÀO LỚP 10 MÔN TIẾNG ANH NĂM HỌC 2023 - 2024 CÓ ĐÁP ÁN (NGỮ Â...
TỔNG ÔN TẬP THI VÀO LỚP 10 MÔN TIẾNG ANH NĂM HỌC 2023 - 2024 CÓ ĐÁP ÁN (NGỮ Â...TỔNG ÔN TẬP THI VÀO LỚP 10 MÔN TIẾNG ANH NĂM HỌC 2023 - 2024 CÓ ĐÁP ÁN (NGỮ Â...
TỔNG ÔN TẬP THI VÀO LỚP 10 MÔN TIẾNG ANH NĂM HỌC 2023 - 2024 CÓ ĐÁP ÁN (NGỮ Â...Nguyen Thanh Tu Collection
 
Application orientated numerical on hev.ppt
Application orientated numerical on hev.pptApplication orientated numerical on hev.ppt
Application orientated numerical on hev.pptRamjanShidvankar
 
ICT role in 21st century education and it's challenges.
ICT role in 21st century education and it's challenges.ICT role in 21st century education and it's challenges.
ICT role in 21st century education and it's challenges.MaryamAhmad92
 
Role Of Transgenic Animal In Target Validation-1.pptx
Role Of Transgenic Animal In Target Validation-1.pptxRole Of Transgenic Animal In Target Validation-1.pptx
Role Of Transgenic Animal In Target Validation-1.pptxNikitaBankoti2
 
Beyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactBeyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactPECB
 
Sociology 101 Demonstration of Learning Exhibit
Sociology 101 Demonstration of Learning ExhibitSociology 101 Demonstration of Learning Exhibit
Sociology 101 Demonstration of Learning Exhibitjbellavia9
 

Último (20)

Advanced Views - Calendar View in Odoo 17
Advanced Views - Calendar View in Odoo 17Advanced Views - Calendar View in Odoo 17
Advanced Views - Calendar View in Odoo 17
 
ICT Role in 21st Century Education & its Challenges.pptx
ICT Role in 21st Century Education & its Challenges.pptxICT Role in 21st Century Education & its Challenges.pptx
ICT Role in 21st Century Education & its Challenges.pptx
 
Measures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeMeasures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and Mode
 
Micro-Scholarship, What it is, How can it help me.pdf
Micro-Scholarship, What it is, How can it help me.pdfMicro-Scholarship, What it is, How can it help me.pdf
Micro-Scholarship, What it is, How can it help me.pdf
 
On National Teacher Day, meet the 2024-25 Kenan Fellows
On National Teacher Day, meet the 2024-25 Kenan FellowsOn National Teacher Day, meet the 2024-25 Kenan Fellows
On National Teacher Day, meet the 2024-25 Kenan Fellows
 
Activity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfActivity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdf
 
INDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptx
INDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptxINDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptx
INDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptx
 
microwave assisted reaction. General introduction
microwave assisted reaction. General introductionmicrowave assisted reaction. General introduction
microwave assisted reaction. General introduction
 
Food Chain and Food Web (Ecosystem) EVS, B. Pharmacy 1st Year, Sem-II
Food Chain and Food Web (Ecosystem) EVS, B. Pharmacy 1st Year, Sem-IIFood Chain and Food Web (Ecosystem) EVS, B. Pharmacy 1st Year, Sem-II
Food Chain and Food Web (Ecosystem) EVS, B. Pharmacy 1st Year, Sem-II
 
PROCESS RECORDING FORMAT.docx
PROCESS      RECORDING        FORMAT.docxPROCESS      RECORDING        FORMAT.docx
PROCESS RECORDING FORMAT.docx
 
Key note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdfKey note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdf
 
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
 
Holdier Curriculum Vitae (April 2024).pdf
Holdier Curriculum Vitae (April 2024).pdfHoldier Curriculum Vitae (April 2024).pdf
Holdier Curriculum Vitae (April 2024).pdf
 
Seal of Good Local Governance (SGLG) 2024Final.pptx
Seal of Good Local Governance (SGLG) 2024Final.pptxSeal of Good Local Governance (SGLG) 2024Final.pptx
Seal of Good Local Governance (SGLG) 2024Final.pptx
 
TỔNG ÔN TẬP THI VÀO LỚP 10 MÔN TIẾNG ANH NĂM HỌC 2023 - 2024 CÓ ĐÁP ÁN (NGỮ Â...
TỔNG ÔN TẬP THI VÀO LỚP 10 MÔN TIẾNG ANH NĂM HỌC 2023 - 2024 CÓ ĐÁP ÁN (NGỮ Â...TỔNG ÔN TẬP THI VÀO LỚP 10 MÔN TIẾNG ANH NĂM HỌC 2023 - 2024 CÓ ĐÁP ÁN (NGỮ Â...
TỔNG ÔN TẬP THI VÀO LỚP 10 MÔN TIẾNG ANH NĂM HỌC 2023 - 2024 CÓ ĐÁP ÁN (NGỮ Â...
 
Application orientated numerical on hev.ppt
Application orientated numerical on hev.pptApplication orientated numerical on hev.ppt
Application orientated numerical on hev.ppt
 
ICT role in 21st century education and it's challenges.
ICT role in 21st century education and it's challenges.ICT role in 21st century education and it's challenges.
ICT role in 21st century education and it's challenges.
 
Role Of Transgenic Animal In Target Validation-1.pptx
Role Of Transgenic Animal In Target Validation-1.pptxRole Of Transgenic Animal In Target Validation-1.pptx
Role Of Transgenic Animal In Target Validation-1.pptx
 
Beyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactBeyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global Impact
 
Sociology 101 Demonstration of Learning Exhibit
Sociology 101 Demonstration of Learning ExhibitSociology 101 Demonstration of Learning Exhibit
Sociology 101 Demonstration of Learning Exhibit
 

Adsa u1 ver 1.0

  • 1. Compiled ver 1.0 Dr. C.V. Suresh Babu Revision ver 1.1 Revision ver 1.2 Revision ver 1.3 Revision ver 1.4 Revision ver 1.5 Edited ver 2.0 Dr. C.V. Suresh Babu UNIT I ITERATIVE AND RECURSIVE ALGORITHMS 9 1.1. Iterative Algorithms: 1.1.1. Measures of Progress and Loop Invariants1.2. Paradigm Shift: 1.2.1. Sequence of Actions versus Sequence of Assertions1.2.2. Steps to Develop an Iterative Algorithm1.2.3. Different Types of Iterative Algorithms— 1.2.4. Typical Errors1.3. 1.3.1. 1.3.2. 1.3.3. 1.3.4. 1.4. 1.4.1. 1.4.2. 1.5. 1.6. 1.7. 1.8. 1.8.1. 1.9. 1.10. 1.11. RecursionForward versus BackwardTowers of HanoiChecklist for Recursive AlgorithmsThe Stack FrameProving Correctness with Strong InductionExamples of Recursive AlgorithmsSorting and Selecting AlgorithmsOperations on IntegersAckermann’s FunctionRecursion on TreesTree TraversalsExamplesGeneralizing the Problem – Heap Sort and Priority QueuesRepresenting Expressions. 5/12/2013
  • 2. UNIT 1 ITERATIVE AND RECURSIVE ALGORITHMS 1.1.ITERATIVE ALGORITHMS: An iterative algorithm executes steps in iterations. It aims to find successive approximation in sequence to reach a solution. They are most commonly used in linear programs where large numbers of variables are involved. 1.1.1. MEASURES OF PROGRESS AND LOOP INVARIANTS: An invariant of a loop is a property that holds before (and after) each repetition. It is a logical assertion, sometimes programmed as an assertion. Knowing its invariant(s) is essential for understanding the effect of a loop. A loop invariant is a boolean expression that is true each time the loop guard is evaluated. Typically the boolean expression is composed of variables used in the loop. The invariant is true every time the loop guard is evaluated. In particular, the initial establishment of the truth of the invariant helps determine the proper initial values of variables used in the loop guard and body. In the loop body, some statements make the invariant false, and other statements must then re-establish the invariant so that it is true before the loop guard is evaluated again. Invariants can serve as both aids in recalling the details of an implementation of a particular algorithm and in the construction of an algorithm to meet a specification. For example, the partition phase of Quicksort is a classic example where an invariant can help in developing the code and in reconstructing the code. An invariant is shown pictorially and as the postcondition of the function partition below. The initial value of pivot is left. This ensures that the invariant is true the first time the loop test is evaluated. When pivot is incremented in the loop, the invariant is false, but the swap re-establishes the invariant. int partition(vector<string> & a, int left, int right) // precondition: left <= right // postcondition: rearranges entries in vector a // returns pivot such that // forall k, left <= k <= pivot, a[k] <= a[pivot] and // forall k, pivot < k <= right, a[pivot] < a[k] // { int k, pivot = left; string center = a[left]; for(k=left+1, k <= right; k++) {
  • 3. if (a[k] <= center) { pivot++; swap(a[k],a[pivot]); } } swap(a[left],a[pivot]); return piv; } The invariant for this code can be shown pictorially.
keep track of how the state of the computer changes with each new action. In order to know what action to take next, one needs to have a global plan of where the computation is to go. To make it worse, the computation has many IFs and LOOPs, so one has to consider all the various paths that the computation may take.

1.2. PARADIGM SHIFT:
1.2.1. SEQUENCE OF ACTIONS VERSUS SEQUENCE OF ASSERTIONS:
Understanding iterative algorithms requires understanding the difference between a loop invariant, which is an assertion or picture of the computation at a particular point in time, and the actions that are required to maintain such a loop invariant. Hence, we will start by trying to understand this difference.

One of the first important paradigm shifts that programmers struggle to make is from viewing an algorithm as a sequence of actions to viewing it as a sequence of snapshots of the state of the computer. Programmers tend to fixate on the first view, because code is a sequence of instructions for action and a computation is a sequence of actions. Though this is an important view, there is another. Imagine stopping time at key points during the computation and taking still pictures of the state of the computer. Then a computation can equally be viewed as a sequence of such snapshots. Having two ways of viewing the same thing gives one both more tools to handle it and a deeper understanding of it.

1.2.2. STEPS TO DEVELOP AN ITERATIVE ALGORITHM:
Iterative Algorithms: A good way to structure many computer programs is to store the key information you currently know in some data structure and then have each iteration of the main loop take a step towards your destination by making a simple change to this data.
Loop Invariant: A loop invariant expresses important relationships among the variables that must be true at the start of every iteration and when the loop terminates. If it is true, then the computation is still on the road. If it is false, then the algorithm has failed.
The Code Structure: The basic structure of the code is as follows.
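The generic code structure referred to here is not reproduced in this text version. As a stand-in, the following is a minimal C++ sketch of the pattern on a concrete task (finding the maximum of a non-empty vector); it illustrates the loop-invariant structure and is not code from the original text.

#include <cassert>
#include <vector>

// Generic shape of a loop-invariant-driven iterative algorithm, shown on a
// concrete example: finding the maximum of a non-empty vector.
int maximum(const std::vector<int>& a)
{
    assert(!a.empty());          // pre-condition: the instance is legal
    std::size_t i = 1;
    int best = a[0];             // pre-loop code establishes the invariant:
                                 // best == maximum of a[0..i-1]
    while (i < a.size())         // loop guard
    {
        // loop invariant holds here: best == maximum of a[0..i-1]
        if (a[i] > best) best = a[i];
        ++i;                     // measure of progress: a.size() - i decreases
        // loop invariant re-established for the new value of i
    }
    return best;                 // post-condition: best == maximum of the whole vector
}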
1.2.3. DIFFERENT TYPES OF ITERATIVE ALGORITHMS:
There are two main classes of iterative methods:
i. stationary iterative methods, and
ii. Krylov subspace methods.

i. Stationary iterative methods
Stationary iterative methods solve a linear system with an operator approximating the original one and, based on a measurement of the error in the result (the residual), form a "correction equation" for which this process is repeated. While these methods are simple to derive, implement, and analyze, convergence is only guaranteed for a limited class of matrices. Examples of stationary iterative methods are:
i. the Jacobi method,
ii. the Gauss–Seidel method, and
iii. the successive over-relaxation (SOR) method.
Linear stationary iterative methods are also called relaxation methods (a small sketch of the Jacobi method is given below).

ii. Krylov subspace methods
Krylov subspace methods work by forming a basis of the sequence of successive matrix powers times the initial residual (the Krylov sequence). The approximations to the solution are then formed by minimizing the residual over the subspace formed. Prototypical methods in this class are:
i. the conjugate gradient method (CG),
ii. the generalized minimal residual method (GMRES), and
iii. the biconjugate gradient method (BiCG).
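The sketch below is an illustrative C++ implementation of the Jacobi method (an assumed typical formulation, not code from the original text). It assumes the coefficient matrix is square and diagonally dominant, so that the successive approximations converge.

#include <algorithm>
#include <cmath>
#include <vector>

// Jacobi iteration for A x = b: each sweep recomputes every unknown from the
// previous approximation, stopping when the change between sweeps is small.
std::vector<double> jacobi(const std::vector<std::vector<double>>& A,
                           const std::vector<double>& b,
                           int maxSweeps = 100, double tolerance = 1e-9)
{
    const std::size_t n = b.size();
    std::vector<double> x(n, 0.0), next(n, 0.0);
    for (int sweep = 0; sweep < maxSweeps; ++sweep)
    {
        for (std::size_t i = 0; i < n; ++i)
        {
            double sum = 0.0;
            for (std::size_t j = 0; j < n; ++j)
                if (j != i) sum += A[i][j] * x[j];
            next[i] = (b[i] - sum) / A[i][i];      // solve row i for x[i]
        }
        double change = 0.0;
        for (std::size_t i = 0; i < n; ++i)
            change = std::max(change, std::fabs(next[i] - x[i]));
        x = next;
        if (change < tolerance) break;             // successive approximations agree
    }
    return x;
}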
1.2.4. TYPICAL ERRORS:
Many errors made in analysing the problem, developing an algorithm, and/or coding the algorithm only become apparent when you try to compile, run, and test the resulting program. The earlier in the development process an error is made, or the later it is discovered, the more serious the consequences.

Sources of errors:
• Understanding the problem to solve. An error here may be obvious (e.g. your program does nothing useful at all), or it may be more subtle and only become apparent when some exceptional condition occurs (e.g. a leap year, an incompetent user).
• Algorithm design. Mistakes here result in logic errors. The program will run, but will not perform the intended task; e.g. a program to add numbers which returns 6 when given 3 + 2 has a logic error.
• Coding of the algorithm. Often the compiler will complain, but messages from the compiler can be cryptic. These errors are usually simple to correct, e.g. spelling errors or misplaced punctuation.
• Runtime. Errors may also appear at run time, e.g. dividing a number by zero. These errors may be coding errors or logic errors.

Programs rarely run correctly the first time. Errors are of four types:
• syntax errors
• run-time errors
• logic errors
• latent errors

i. SYNTAX ERRORS:
Any violation of the rules of the language results in a syntax error. The compiler can detect and isolate these types of errors; when syntax errors are present in the program, compilation fails. Examples:
• undeclared variable
• missing semicolon at the end of a statement
• comment not closed
Often one mistake leads to multiple error messages, which can be confusing.

ii. RUN-TIME ERRORS:
A program having this type of error will run but produce an erroneous result, and therefore the name run-time error is given to such errors. Generally it is a difficult task to isolate run-time errors. Example: int x = y/0; will stop program execution and display an error message.

iii. LOGIC ERRORS:
As the name implies, these errors are related to program execution. They cause wrong results and are primarily due to a poor understanding of the problem, inaccurate translation of the algorithm into the program, or a lack of attention to the hierarchy (precedence) of operations.
For example:
    if (x = y)
    {
        printf("they are equal");
    }
Here x = y is an assignment rather than a comparison, so the condition does not actually test whether x and y are equal; the code runs, but it contains a logic error.
Another example:
    float f;
    scanf("%d", &f);   /* wrong format specifier for a float */
Logic errors are difficult to detect in large and complex algorithms. The sign of such an error is incorrect program output; the cure is thorough testing and comparison with expected results.

iv. LATENT ERRORS:
These types of errors are also known as 'hidden' errors and show up only when a particular set of data is used. For example:
    ratio = (x + y) / (p - q);
This statement will generate an error if and only if p and q are equal. This type of error can be detected only by using all possible combinations of test data.

Sources of errors and how to reduce them:
• Errors made in understanding the problem may well require you to restart from scratch. You may be tempted to start coding, making up the solution and algorithm as you go along. This may work for trivial problems, but it is a certain way to waste time and effort for realistic problems.
• A program can be tested by giving it inputs and comparing the observed outputs to the expected ones.
• Testing is very important but (in general) cannot prove that a program works correctly.
• For small programs, formal (mathematical) methods can be used to prove that a program will produce the desired outputs for all possible inputs. These methods are rarely applicable to large programs.
• A logic error is referred to as a bug, so finding logic errors is called debugging.
• Debugging is a continuous process, leading to an edit-compile-debug cycle. General idea: insert extra printf() statements showing the values of variables affected by each major step. When you are satisfied the program is correct, comment out these printf()'s.

Debugging tips – common errors in C:
• In a for loop, the initialisation and condition end with semicolons.
  Wrong: for(initialisation, condition; update)
• Braces must be used in for and while loops to repeat more than one statement.
• In nested structures, the first closing brace is associated with the innermost structure. Example of "unexpected" behaviour (the closing brace pairs with the innermost while, so the if block is never properly closed):
    if (c == 'y') {
        scanf("%d", &d);
        while (d != 0) {
            sum += d;
            scanf("%d", &d);
    }
    else
        printf("end");   /* done even when c == 'y' */
• Inequality tests on floating-point numbers. Example of the wrong way:
    while (value != 0.0)
    {   /* could have value < 1e-9 */
        ...
        value /= 2.8;    /* value may never become exactly 0 */
    }
• Ensure that the loop repetition condition will eventually become false. Example where this does not happen:
    do {
        ...
        printf("one more time?");
        scanf("%d", &again);
    } while (again = 1);   /* assignment, not equality test */
• Loop count off by one: either too many or too few iterations, an infinite loop, or a loop body that is never executed. These can be hard to discover!

1.3. RECURSION:
A recursive algorithm is an algorithm which calls itself with "smaller (or simpler)" input values, and which obtains the result for the current input by applying simple operations to the returned value for the smaller (or simpler) input. More generally, if a problem can be solved utilizing solutions to smaller versions of the same problem, and the smaller versions reduce to easily solvable cases, then one can use a recursive algorithm to solve that problem. For example, the elements of a recursively defined set, or the value of a recursively defined function, can be obtained by a recursive algorithm.

If a set or a function is defined recursively, then a recursive algorithm to compute its members or values mirrors the definition. Initial steps of the recursive algorithm correspond to the basis clause of the recursive definition, and they identify the basis elements. They are then followed by steps corresponding
to the inductive clause, which reduce the computation for an element of one generation to that of elements of the immediately preceding generation.

In general, recursive computer programs require more memory and computation compared with iterative algorithms, but they are simpler and in many cases a natural way of thinking about the problem.

Example 1: Algorithm for finding the k-th even natural number
Note that this can be solved very easily by simply outputting 2*(k - 1) for a given k. The purpose here, however, is to illustrate the basic idea of recursion rather than to solve the problem efficiently.

Algorithm 1: Even(positive integer k)
Input: k, a positive integer
Output: the k-th even natural number (the first even being 0)
Algorithm:
    if k = 1, then return 0;
    else return Even(k-1) + 2.

Here the computation of Even(k) is reduced to that of Even for a smaller input value, that is, Even(k-1). Even(k) eventually becomes Even(1), which is 0 by the first line. For example, to compute Even(3), Algorithm Even(k) is called with k = 2. In the computation of Even(2), Algorithm Even(k) is called with k = 1. Since Even(1) = 0, 0 is returned for the computation of Even(2), and Even(2) = Even(1) + 2 = 2 is obtained. This value 2 for Even(2) is now returned to the computation of Even(3), and Even(3) = Even(2) + 2 = 4 is obtained.

As can be seen by comparing this algorithm with the recursive definition of the set of nonnegative even numbers, the first line of the algorithm corresponds to the basis clause of the definition, and the second line corresponds to the inductive clause.

By way of comparison, let us see how the same problem can be solved by an iterative algorithm.

Algorithm 1-a: Even(positive integer k)
Input: k, a positive integer
Output: the k-th even natural number (the first even being 0)
Algorithm:
    int i, even;
    i := 1;
    even := 0;
    while( i < k )
    {
        even := even + 2;
        i := i + 1;
    }
    return even.
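For concreteness, here is an illustrative C++ transcription of Algorithm 1 and Algorithm 1-a (a sketch, not code from the original text).

// Recursive version (Algorithm 1): returns the k-th even natural number, k >= 1.
int evenRecursive(int k)
{
    if (k == 1) return 0;                 // basis clause: the first even number is 0
    return evenRecursive(k - 1) + 2;      // inductive clause: reduce to a smaller input
}

// Iterative version (Algorithm 1-a) with the same specification.
int evenIterative(int k)
{
    int even = 0;
    for (int i = 1; i < k; ++i)           // invariant: even is the i-th even natural number
        even += 2;
    return even;
}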
Example 2: Algorithm for computing the k-th power of 2

Algorithm 2: Power_of_2(natural number k)
Input: k, a natural number
Output: the k-th power of 2
Algorithm:
    if k = 0, then return 1;
    else return 2*Power_of_2(k - 1).

By way of comparison, let us see how the same problem can be solved by an iterative algorithm.

Algorithm 2-a: Power_of_2(natural number k)
Input: k, a natural number
Output: the k-th power of 2
Algorithm:
    int i, power;
    i := 0;
    power := 1;
    while( i < k )
    {
        power := power * 2;
        i := i + 1;
    }
    return power.
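An illustrative C++ transcription of Algorithm 2 (a sketch, not from the original text); an unsigned 64-bit result is used because a 32-bit int would overflow for k >= 31.

// Recursive version of Power_of_2: returns 2^k for a natural number k.
unsigned long long powerOfTwo(unsigned int k)
{
    if (k == 0) return 1;                  // basis clause: 2^0 = 1
    return 2 * powerOfTwo(k - 1);          // inductive clause
}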
The next example does not have any corresponding recursive definition. It shows a recursive way of solving a problem.

Example 3: Recursive Algorithm for Sequential Search

Algorithm 3: SeqSearch(L, i, j, x)
Input: L is an array, i and j are positive integers with i ≤ j, and x is the key to be searched for in L.
Output: If x is in L between indexes i and j, then output its index, else output 0.
Algorithm:
    if i ≤ j, then
    {
        if L(i) = x, then return i;
        else return SeqSearch(L, i+1, j, x)
    }
    else return 0.
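The following is an illustrative C++ version of Algorithm 3, recursive sequential search (a sketch, not from the original text). It returns -1 rather than 0 when the key is absent, so that a valid index 0 is not confused with "not found".

#include <string>
#include <vector>

// Recursive sequential search for x in a[i..j]; returns its index, or -1.
int seqSearch(const std::vector<std::string>& a, int i, int j, const std::string& x)
{
    if (i > j) return -1;                  // empty range: x is not present
    if (a[i] == x) return i;               // found at the front of the range
    return seqSearch(a, i + 1, j, x);      // recurse on the smaller range a[i+1..j]
}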
Recursive algorithms can also be used to test objects for membership in a set.

Example 4: Algorithm for testing whether or not a number x is a natural number

Algorithm 4: Natural(a number x)
Input: a number x
Output: "Yes" if x is a natural number, else "No"
Algorithm:
    if x < 0, then return "No"
    else if x = 0, then return "Yes"
    else return Natural( x - 1 )

Example 5: Algorithm for testing whether or not an expression w is a proposition (propositional form)

Algorithm 5: Proposition(a string w)
Input: a string w
Output: "Yes" if w is a proposition, else "No"
Algorithm:
    if w is 1 (true), 0 (false), or a propositional variable, then return "Yes"
    else if w = ~w1, then return Proposition(w1)
    else if ( w = (w1 ∧ w2), (w1 ∨ w2), (w1 → w2), or (w1 ↔ w2) )
         and Proposition(w1) = Yes and Proposition(w2) = Yes, then return Yes
    else return No
    end

Recursive vs. Iterative Algorithms
Comparison between the iterative and recursive approaches from a performance standpoint.

Factorial

//recursive function calculates n!
static int FactorialRecursive(int n)
{
    if (n <= 1) return 1;
    return n * FactorialRecursive(n - 1);
}

//iterative function calculates n!
static int FactorialIterative(int n)
{
    int sum = 1;
    if (n <= 1) return sum;
    while (n > 1)
    {
        sum *= n;
        n--;
    }
    return sum;
}

N           10          100         1000         10000        100000
Recursive   334 ticks   846 ticks   3368 ticks   9990 ticks   stack overflow
Iterative   11 ticks    23 ticks    110 ticks    975 ticks    9767 ticks

As we can clearly see, the recursive version is considerably slower than the iterative one and is also more limiting (stack overflow). The reason for the poor performance is the heavy pushing and popping of registers and stack frames at each level of the recursive calls.

Fibonacci

//--------------- iterative version ---------------------
static int FibonacciIterative(int n)
{
    if (n == 0) return 0;
    if (n == 1) return 1;

    int prevPrev = 0;
    int prev = 1;
    int result = 0;
    for (int i = 2; i <= n; i++)
    {
        result = prev + prevPrev;
        prevPrev = prev;
        prev = result;
    }
    return result;
}
//--------------- naive recursive version ---------------------
static int FibonacciRecursive(int n)
{
    if (n == 0) return 0;
    if (n == 1) return 1;
    return FibonacciRecursive(n - 1) + FibonacciRecursive(n - 2);
}

//--------------- optimized recursive version ---------------------
static Dictionary<int, int> resultHistory = new Dictionary<int, int>();

static int FibonacciRecursiveOpt(int n)
{
    if (n == 0) return 0;
    if (n == 1) return 1;
    if (resultHistory.ContainsKey(n)) return resultHistory[n];

    int result = FibonacciRecursiveOpt(n - 1) + FibonacciRecursiveOpt(n - 2);
    resultHistory[n] = result;
    return result;
}

N               5         10        20          30            100                      1000                     10000                    100000
Recursive       5 ticks   36 ticks  2315 ticks  180254 ticks  too long/stack overflow  too long/stack overflow  too long/stack overflow  too long/stack overflow
Recursive opt.  22 ticks  49 ticks  61 ticks    65 ticks      158 ticks                1470 ticks               13873 ticks              too long/stack overflow
Iterative       9 ticks   10 ticks  10 ticks    10 ticks      11 ticks                 27 ticks                 190 ticks                3952 ticks

As before, the recursive approach performs worse than the iterative one. However, we can apply the memoization pattern (saving previous results in a dictionary for quick key-based access); although this still does not match the iterative approach, it is definitely an improvement over the simple recursion.

1.3.1. FORWARD VERSUS BACKWARD:
In a recursive routine, some of the work is done on the way "forward", as the recursion deepens, and some on the way "backward", as the recursive calls return. In general a recursive object looks something like:
Function xxx(parameters)
    list of instructions 1
    If condition Then Call xxx
    list of instructions 2
Return

The first list of instructions is obeyed on the way up the spiral, i.e. as the instances of the function are being built, and the second list is obeyed on the way back down the spiral as the instances are being destroyed. You can also see the sense in which the first list occurs in the order you would expect, but the second is obeyed in the reverse order. Sometimes a recursive function has only the first or the second list of instructions, and these are easier to analyse and implement.

For example, a recursive function that doesn't have a second list doesn't need the copies created on the way down the recursion to be kept, because they aren't made any use of on the way up! This is usually called tail-end recursion, because the recursive call to the function is the very last instruction in the function, i.e. in the tail end of the function. Many languages can implement tail-end recursion more efficiently than general recursion, because they can throw away the state of the function before the recursive call, so saving stack space.
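A minimal C++ sketch of tail-end recursion (an illustration, not code from the original text): all work is done on the way down, and the recursive call is the last instruction, so nothing remains to be done on the way back up. Compilers that perform tail-call optimization can turn this into a loop.

// Tail-recursive factorial: the running product is carried in an accumulator.
long long factorialTail(int n, long long accumulator = 1)
{
    if (n <= 1) return accumulator;                // base case: result fully accumulated
    return factorialTail(n - 1, accumulator * n);  // tail call: no work after it
}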
1.3.2. TOWERS OF HANOI:
The Tower of Hanoi is a mathematical game or puzzle. It consists of three rods and a number of disks of different sizes which can slide onto any rod. The puzzle starts with the disks in a neat stack in ascending order of size on one rod, the smallest at the top, thus making a conical shape. The objective of the puzzle is to move the entire stack to another rod, obeying the following rules:
• Only one disk may be moved at a time.
• Each move consists of taking the upper disk from one of the rods and sliding it onto another rod, on top of the other disks that may already be present on that rod.
• No disk may be placed on top of a smaller disk.

How to solve the Towers of Hanoi puzzle
The Classical Towers of Hanoi: the initial position of all the disks is on post 'A' (Fig. 1). The solution of the puzzle is to build the tower on post 'C' (Fig. 2).
The Arbitrary Towers of Hanoi: at the start, the disks can be in any position, provided that a bigger disk is never on top of a smaller one (see Fig. 3). At the end, the disks should be in another arbitrary position.

Solving the Tower of Hanoi ('solution' ≡ shortest path)
Recursive Solution:
1. Identify the biggest discrepancy (= disk N).
2. If it is movable to the goal peg, then move it; else
3. Subgoal: set up an (N−1)-disk tower on the non-goal peg.
4. Go to 1.

Solving the Tower of Hanoi: from 'regular' to 'perfect'
Let's start thinking about how to solve it. For the sake of clarity, assume that our goal is to set up a 4-disk-high tower on peg 'C', just as in the classical Towers of Hanoi (see Fig. 2), and assume we 'know' how to move a 'perfect' 3-disk-high tower. Then along the way to the solution there is one special setup: disk 4 is on peg 'A', the 3-disk-high tower is on peg 'B', and the target peg 'C' is empty (Fig. 4). From that position we have to move disk 4 from 'A' to 'C' and then move, by some magic, the 3-disk-high tower from 'B' to 'C'.

So think backwards. Forget the disks bigger than 3. Disk 3 is on peg 'C', and we need disk 3 on peg 'B'. To obtain that, we need disk 3 where it is now, peg 'B' free, and disks 2 and 1 stacked on peg 'A'. So our goal now is to put disk 2 on peg 'A' (Fig. 5).

Forget disk 3 for the moment (see Fig. 6). To be able to put disk 2 on peg 'A', we need peg 'A' empty (above the thin blue line) and the disks smaller than disk 2 stacked on peg 'B'. So our goal now is to put disk 1 on peg 'B'. As we can see, this is an easy task, because disk 1 has no disk above it and peg 'B' is free.
So let's move it (Fig. 7).

The steps above are the ones made by the algorithm implemented in Towers of Hanoi when one clicks the "Help me" button. This button-function analyses the current position and generates only one single move, which leads towards the solution. This is by design. When the 'Help me' button is clicked again, the algorithm repeats all the steps of the analysis, starting from the position of the biggest disk (in this example disk 4), and generates the next move: disk 2 from peg 'C' to peg 'A' (Fig. 8).

If one needs a recursive or iterative algorithm which generates the whole series of moves for solving an arbitrary Towers of Hanoi, then one should use a kind of backtracking, that is, remember the previous steps of the analysis and not repeat the analysis of the towers from the ground up.
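For the classical puzzle, the recursive solution can be written very compactly. The sketch below is an illustrative C++ version (not code from the original text); it solves the classical, not the arbitrary, Towers of Hanoi.

#include <iostream>

// Move n disks from peg 'from' to peg 'to', using peg 'via' as the spare.
// Prints the 2^n - 1 moves of the classical recursive solution.
void hanoi(int n, char from, char to, char via)
{
    if (n == 0) return;                    // base case: nothing to move
    hanoi(n - 1, from, via, to);           // clear the n-1 smaller disks out of the way
    std::cout << "move disk " << n << " from " << from << " to " << to << "\n";
    hanoi(n - 1, via, to, from);           // bring the n-1 smaller disks back on top
}

// Example call: hanoi(4, 'A', 'C', 'B') moves a 4-disk tower from peg A to peg C.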
1.3.3. Checklist for Recursive Algorithms
The following is a checklist that will help you identify the most common sources of error.

• Does your recursive implementation begin by checking for simple cases? Before you attempt to solve a problem by transforming it into a recursive subproblem, you must first check to see if the problem is so simple that such decomposition is unnecessary. In almost all cases, recursive functions begin with the keyword if. If your function doesn't, you should look carefully at your program and make sure that you know what you're doing.

• Have you solved the simple cases correctly? A surprising number of bugs in recursive programs arise from having incorrect solutions to the simple cases. If the simple cases are wrong, the recursive solutions to more complicated problems will inherit the same mistake. For example, if you had mistakenly defined Fact(0) as 0 instead of 1, calling Fact on any argument would end up returning 0.

• Does your recursive decomposition make the problem simpler? For recursion to work, the problems have to get simpler as you go along. More formally, there must be some metric, a standard of measurement that assigns a numeric difficulty rating to the problem, that gets smaller as the computation proceeds. For mathematical functions like Fact and Fib, the value of the integer argument serves as a metric: on each recursive call, the value of the argument gets smaller. For the IsPalindrome function, the appropriate metric is the length of the argument string, because the string gets shorter on each recursive call. If the problem instances do not get simpler, the decomposition process will just keep making more and more calls, giving rise to the recursive analogue of the infinite loop, which is called nonterminating recursion.

• Does the simplification process eventually reach the simple cases, or have you left out some of the possibilities? A common source of error is failing to include simple-case tests for all the cases that can arise as the result of the recursive decomposition. For example, in the IsPalindrome implementation presented in Figure 5-3, it is critically important for the function to check the zero-character case as well as the one-character case, even if the client never intends to call IsPalindrome on the empty string. As the recursive decomposition proceeds, the string arguments get shorter by two characters at each level of the recursive call. If the original argument string is even in length, the recursive decomposition will never get to the one-character case.

• Do the recursive calls in your function represent subproblems that are truly identical in form to the original? When you use recursion to break down a problem, it is essential that the subproblems be of the same form. If the recursive calls change the nature of the problem or violate one of the initial assumptions, the entire process can break down. As several of the examples in this chapter illustrate, it is often useful to define the publicly exported function as a simple wrapper that calls a more
general recursive function which is private to the implementation. Because the private function has a more general form, it is usually easier to decompose the original problem and still have it fit within the recursive structure.

• When you apply the recursive leap of faith, do the solutions to the recursive subproblems provide a complete solution to the original problem? Breaking a problem down into recursive subinstances is only part of the recursive process. Once you get the solutions, you must also be able to reassemble them to generate the complete solution. The way to check whether this process in fact generates the solution is to walk through the decomposition, religiously applying the recursive leap of faith. Work through all the steps in the current function call, but assume that every recursive call generates the correct answer. If following this process yields the right solution, your program should work.
Factorial coding:

int Factorial(int n)
{
    if (n == 0)      // base case
        return 1;
    else
        return n * Factorial(n - 1);
}

Now let us consider another example for the checklist of a recursive algorithm.

algorithm Alg(a, b, c)
    pre-cond: a is a tuple, b an integer, and c a binary tree.
    post-cond: outputs x, y, and z, which are useful objects.
begin
    if (⟨a,b,c⟩ is a sufficiently small instance) return ⟨0,0,0⟩
    ⟨asub1, bsub1, csub1⟩ = a part of ⟨a,b,c⟩
    ⟨xsub1, ysub1, zsub1⟩ = Alg(⟨asub1, bsub1, csub1⟩)
    ⟨asub2, bsub2, csub2⟩ = a different part of ⟨a,b,c⟩
    ⟨xsub2, ysub2, zsub2⟩ = Alg(⟨asub2, bsub2, csub2⟩)
    ⟨x,y,z⟩ = combine ⟨xsub1, ysub1, zsub1⟩ and ⟨xsub2, ysub2, zsub2⟩
    return ⟨x,y,z⟩
end algorithm

Some of the things to be checked are:
0. The Code Structure: the code need not be any more complex than this basic structure.
1. Specifications: define what the algorithm is about.
2. Variables: the variables used in the code should be well documented and have meaningful names. In the example, the variables a, b, and c are clearly explained.
2.1 Your input: the inputs should be stated clearly; in our example, Alg(a,b,c) specifies both the name Alg of the routine and the names of the inputs.
2.2 Your output: the post-condition must be explained clearly; here ⟨x,y,z⟩ is a solution.
2.3 Every path: whatever flow the code takes, including any IF or LOOP statements, every path through the code must end with a return statement.
2.4 Type of output: each return statement must return a correct solution of the right type. If the post-condition leaves open the possibility that no solution exists, then some paths through the code may end with the statement return("no solution exists").
2.5 Your friend's input: create subinstances such as ⟨asub, bsub, csub⟩, provided they meet the precondition of your problem.
2.6 Your friend's output: the friend returns the output, which is saved as ⟨xsub, ysub, zsub⟩ = Alg(⟨asub, bsub, csub⟩).
2.7 Rarely need new inputs or outputs: only add them if absolutely necessary; if you do, state them clearly in the pre- and post-conditions of the algorithm.
3. Task to complete: the goal is, given an arbitrary instance ⟨a,b,c⟩ meeting the precondition, to return a solution ⟨x,y,z⟩.
3.1 Accept your mission: know the correct range of instances you might be given; e.g. the algorithm must handle every legal binary tree c, not just convenient ones.
3.2 Construct subinstances: for example ⟨xsub, ysub, zsub⟩ = Alg(⟨a1, a2, ..., an-1⟩, b-5, leftSub(c)).
3.2.1 Valid subinstances: ⟨asub, bsub, csub⟩ must meet the precondition.
3.2.2 Smaller instances: the subinstances must be smaller than your own instance, according to your definition of size.
4. Base cases: if the input instance is sufficiently small according to your definition of size, then you must solve it yourself as a base case.

1.4. PROVING CORRECTNESS WITH STRONG INDUCTION

OBJECTIVE:
To prove that this suffices to produce an algorithm that successfully solves the problem for every input instance.

INTRODUCTION:
When proving this, it is tempting to talk about stack frames: this stack frame calls this one, which calls that one, until you hit the base case, and then the solutions bubble back up to the surface. Such proofs tend to make little sense. Instead, we use strong induction to prove correctness formally.

Strong Induction: Strong induction is similar to induction, except that instead of assuming only S(n − 1) to prove S(n), you must assume all of S(0), S(1), S(2), ..., S(n − 1).

A Statement for Each n: For each value of n ≥ 0, let S(n) represent a Boolean statement. For some values of n this statement may be true, and for others it may be false.

Goal: Our goal is to prove that it is true for every value of n, namely that ∀n ≥ 0, S(n).

EXAMPLE:

Fibonacci(n):
    if n = 0 then          // base case
        return 0
    elseif n = 1 then      // base case
        return 1
    else
        return Fibonacci(n - 1) + Fibonacci(n - 2)
    endif

Proof by Strong Induction
Base Case: for inputs 0 and 1, the algorithm returns 0 and 1 respectively, so these cases are correct.
Induction Hypothesis: Fibonacci(k) is correct for all values k ≤ n, where n, k ∈ N.
Inductive Step:
1. Let Fibonacci(k) be correct for all values of k up to n.
2. From the induction hypothesis, we know Fibonacci(n) correctly computes Fn and Fibonacci(n-1) correctly computes Fn-1.
3. So Fibonacci(n+1) = Fibonacci(n) + Fibonacci(n-1) (by definition of the algorithm) = Fn + Fn-1 = Fn+1 (by definition of the Fibonacci numbers).
4. Thus, by the rules of mathematical induction, Fibonacci(n) always returns the correct result for all values of n.

1.4.1. Examples of Recursive Algorithms
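The examples for this section appeared as figures in the original slides and are not reproduced here. As one illustrative stand-in (an assumption of a typical example, not taken from those figures), here is a recursive binary search in C++, which follows the usual pattern of a base case plus a strictly smaller subinstance.

#include <vector>

// Recursive binary search in a sorted vector.
// Returns an index of x in a[lo..hi], or -1 if x is not present.
int binarySearch(const std::vector<int>& a, int lo, int hi, int x)
{
    if (lo > hi) return -1;                          // base case: empty range
    int mid = lo + (hi - lo) / 2;                    // written this way to avoid overflow
    if (a[mid] == x) return mid;                     // base case: key found
    if (x < a[mid])
        return binarySearch(a, lo, mid - 1, x);      // smaller left subinstance
    return binarySearch(a, mid + 1, hi, x);          // smaller right subinstance
}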
1.4.2. Sorting and Selecting Algorithms
Sorting and searching are among the most common programming processes. We want to keep information in a sensible order:
• alphabetical order
• ascending/descending order
• order according to names, ids, years, departments, etc.
• The aim of sorting algorithms is to put unordered information into an ordered form.
• There are many sorting algorithms, such as: Selection Sort, Bubble Sort, Insertion Sort, Merge Sort, and Quick Sort.
• The first three are the foundations for faster and more efficient algorithms.

Selection Sort
The list is divided into two sublists, sorted and unsorted, which are separated by an imaginary wall. We find the smallest element in the unsorted sublist and swap it with the element at the beginning of the unsorted data. After each selection and swap, the imaginary wall between the two sublists moves one element ahead, increasing the number of sorted elements and decreasing the number of unsorted ones.
Each time we move one element from the unsorted sublist to the sorted sublist, we say that we have completed a sort pass. A list of n elements requires n-1 passes to completely rearrange the data.
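As an illustration of the description above, here is a minimal C++ sketch of selection sort (not code from the original slides).

#include <utility>
#include <vector>

// Selection sort: after pass 'wall', a[0..wall] holds the smallest elements
// in order; 'wall' marks the boundary between the sorted and unsorted sublists.
void selectionSort(std::vector<int>& a)
{
    for (std::size_t wall = 0; wall + 1 < a.size(); ++wall)    // n-1 passes
    {
        std::size_t smallest = wall;
        for (std::size_t j = wall + 1; j < a.size(); ++j)      // find the smallest
            if (a[j] < a[smallest]) smallest = j;              // in the unsorted part
        std::swap(a[wall], a[smallest]);                       // move it to the wall
    }
}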
Bubble Sort
The list is divided into two sublists: sorted and unsorted. The smallest element is bubbled up from the unsorted sublist and moved to the sorted sublist. After that, the wall moves one element ahead, increasing the number of sorted elements and decreasing the number of unsorted ones. Each time an element moves from the unsorted part to the sorted part, one sort pass is completed. Given a list of n elements, bubble sort requires up to n-1 passes to sort the data.

Bubble sort was originally written to "bubble up" the highest element in the list. From an efficiency point of view it makes no difference whether the high element or the low element is bubbled.
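A minimal C++ sketch of bubble sort as described above, bubbling the smallest remaining element toward the front on each pass (an illustration, not code from the original slides).

#include <utility>
#include <vector>

// Bubble sort: each pass walks from the back of the list toward the wall,
// swapping adjacent out-of-order pairs so the smallest element bubbles up
// to the front of the unsorted sublist.
void bubbleSort(std::vector<int>& a)
{
    for (std::size_t wall = 0; wall + 1 < a.size(); ++wall)    // up to n-1 passes
        for (std::size_t j = a.size() - 1; j > wall; --j)
            if (a[j] < a[j - 1])
                std::swap(a[j], a[j - 1]);                     // bubble the smaller one up
}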
Insertion Sort
Insertion sort is the most common sorting technique used by card players. Again, the list is divided into two parts: sorted and unsorted. In each pass, the first element of the unsorted part is picked up, transferred to the sorted sublist, and inserted at the appropriate place. A list of n elements will take at most n-1 passes to sort the data.
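A minimal C++ sketch of insertion sort (an illustration, not code from the original slides).

#include <vector>

// Insertion sort: a[0..i-1] is always sorted; each pass takes a[i] and
// inserts it at the appropriate place within that sorted prefix.
void insertionSort(std::vector<int>& a)
{
    for (std::size_t i = 1; i < a.size(); ++i)
    {
        int key = a[i];
        std::size_t j = i;
        while (j > 0 && a[j - 1] > key)      // shift larger elements one place right
        {
            a[j] = a[j - 1];
            --j;
        }
        a[j] = key;                          // insert the picked-up element
    }
}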
1.6. Ackermann's Function

Definition: In computability theory, the Ackermann function, named after Wilhelm Ackermann, is one of the simplest and earliest-discovered examples of a total computable function that is not primitive recursive. It is a function of two parameters whose value grows very fast.

Function (as computed by the algorithm below):
    A(0, n) = n + 2
    A(1, 0) = 0
    A(k, 0) = 1                        for k ≥ 2
    A(k, n) = A(k − 1, A(k, n − 1))    for k ≥ 1 and n ≥ 1

Algorithm:
algorithm A(k, n)
    if (k = 0) then
        return( n + 1 + 1 )
    else if (n = 0) then
        if (k = 1) then return( 0 )
        else return( 1 )
    else
        return( A(k − 1, A(k, n − 1)) )
    end if
end algorithm
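An illustrative C++ transcription of the algorithm above (a sketch, not from the original text). Even for very small arguments the result and the recursion depth explode, so only tiny values of k and n are practical to run.

// Ackermann-style function from the algorithm above.
long long A(long long k, long long n)
{
    if (k == 0) return n + 2;                  // T0(n) = 2 + n
    if (n == 0) return (k == 1) ? 0 : 1;       // T1(0) = 0, Tk(0) = 1 for k >= 2
    return A(k - 1, A(k, n - 1));              // doubly recursive case
}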
Recurrence Relation: Let Tk(n) denote the value returned by A(k, n). This gives T0(n) = 2 + n, T1(0) = 0, Tk(0) = 1 for k ≥ 2, and Tk(n) = Tk-1(Tk(n − 1)) for k > 0 and n > 0.

Solving the recurrence level by level:

1. k = 0: T0(n) = 2 + n, directly from the definition (the k = 0 case of the function).

2. k = 1: T1(n) = T0(T1(n-1)) = 2 + T1(n-1) = 2 + 2 + T1(n-2) = ... = 2n + T1(0) = 2n.
   Each unwinding of the recurrence adds 2, and after n unwindings we reach T1(0) = 0, so T1(n) = 2n.

3. k = 2: T2(n) = T1(T2(n-1)) = 2·T2(n-1) = 2·2·T2(n-2) = ... = 2^n·T2(0) = 2^n.
   Each unwinding doubles the value, and after n unwindings we reach T2(0) = 1, so T2(n) = 2^n.

4. k = 3: T3(n) = T2(T3(n-1)) = 2^T3(n-1) = 2^(2^T3(n-2)) = ... , a tower of exponents.
   Since T3(0) = 1, T3(n) is a tower of n twos: T3(1) = 2, T3(2) = 2^2 = 4, T3(3) = 2^4 = 16, T3(4) = 2^16 = 65,536.

5. k = 4: T4(n) = T3(T4(n-1)), with T4(0) = 1. Substituting successive values of n:
   T4(1) = T3(T4(0)) = T3(1) = 2
   T4(2) = T3(T4(1)) = T3(2) = 4
   T4(3) = T3(T4(2)) = T3(4) = 2^16 = 65,536
   T4(4) = T3(T4(3)) = T3(65,536) = a tower of 65,536 twos.
Ackermann's function is defined to be A(n) = Tn(n). We see that A(4) is bigger than any number in the natural world, and A(5) is unimaginable.

Running Time: The only way that the program builds up a big number is by continually incrementing it by one. Hence, the number of times one is added is at least as huge as the value Tk(n) returned.

Crashing: Programs can stop at run time because of (1) overflow in an integer value, (2) running out of memory, or (3) running out of time. Which is likely to happen first? If the machine's integers are 32 bits, then they hold a value that is about 10^10. Incrementing up to this value will take a long time. However, much worse than this, every two increments need another recursive call, creating a stack of about this many recursive stack frames. The machine is bound to run out of memory first.

Use as a benchmark: The Ackermann function, due to its definition in terms of extremely deep recursion, can be used as a benchmark of a compiler's ability to optimize recursion. The first use of Ackermann's function in this way was by Yngve Sundblad, "The Ackermann function. A theoretical, computational and formula manipulative study."

1.7. Recursion on Trees
A recursion tree is useful for visualizing what happens when a recurrence is iterated. It diagrams the tree of recursive calls and the amount of work done at each call. For instance, consider the recurrence

    T(n) = 2T(n/2) + n^2.

The recursion tree for this recurrence has the following form:
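The figure of this recursion tree is not reproduced here; the per-level work it depicts can be summarized as follows (a reconstruction from the recurrence, not the original figure). Level i of the tree contains 2^i subproblems, each of size n/2^i, so the work at level i is

    2^i · (n/2^i)^2 = n^2 / 2^i.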
  • 39. In this case, it is straightforward to sum across each row of the tree to obtain the total work done at a given level: the level sums form a geometric series, so the total is O(n^2). The depth of the tree does not really matter here; the amount of work at each level decreases so quickly that the total is only a constant factor more than the work at the root. Recursion trees are useful for gaining intuition about the closed form of a recurrence, but they are not a proof (in fact it is easy to get the wrong answer with a recursion tree, as with any method that relies on "..." kinds of reasoning). As noted earlier, a good way of establishing a closed form for a recurrence is to make an educated guess and then prove by induction that the guess is indeed a solution. Recursion trees are a good way of generating such guesses. Let's consider another example, T(n) = T(n/3) + T(2n/3) + n. Expanding out the first few levels, the recursion tree is:
  • 40. Note that the tree here is not balanced: the longest path is the rightmost one, and its length is log_{3/2} n. Each level contributes at most n work, so our guess for the closed form of this recurrence is O(n log n). 1.8. Tree Traversals Tree traversal (also known as tree search) refers to the process of visiting (examining and/or updating) each node in a tree data structure, exactly once, in a systematic way. Such traversals are classified by the order in which the nodes are visited. The following algorithms are described for a binary tree, but they may be generalized to other trees as well. There are three types of depth-first traversal: pre-order, in-order, and post-order. For a binary tree they are defined recursively, as operations performed at each node starting with the root, as follows: Pre-order 1. Visit the root. 2. Traverse the left subtree. 3. Traverse the right subtree. In-order (symmetric) 1. Traverse the left subtree. 2. Visit the root. 3. Traverse the right subtree. Post-order 1. Traverse the left subtree. 2. Traverse the right subtree. 3. Visit the root. The trace of a traversal is called a sequentialisation of the tree. No single sequentialisation (pre-, in-, or post-order alone) describes the underlying tree uniquely; given a tree with distinct elements, either pre-order or post-order paired with in-order is sufficient to describe the tree uniquely, whereas pre-order paired with post-order leaves some ambiguity in the tree structure. A recursive sketch of the three orders is given below.
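A minimal C++ sketch of the three depth-first orders on a binary tree; the Node struct, the visit action (printing the key), and the tiny example tree are illustrative choices, not part of the notes.

#include <iostream>

struct Node {
    int key;
    Node* left;
    Node* right;
};

// Pre-order: visit the root, then the left subtree, then the right subtree.
void preorder(const Node* n) {
    if (!n) return;
    std::cout << n->key << ' ';
    preorder(n->left);
    preorder(n->right);
}

// In-order: left subtree, then the root, then the right subtree.
void inorder(const Node* n) {
    if (!n) return;
    inorder(n->left);
    std::cout << n->key << ' ';
    inorder(n->right);
}

// Post-order: left subtree, then right subtree, then the root.
void postorder(const Node* n) {
    if (!n) return;
    postorder(n->left);
    postorder(n->right);
    std::cout << n->key << ' ';
}

int main() {
    // A tiny tree:   2
    //               / \
    //              1   3
    Node a{1, nullptr, nullptr}, c{3, nullptr, nullptr}, b{2, &a, &c};
    preorder(&b);  std::cout << '\n';   // 2 1 3
    inorder(&b);   std::cout << '\n';   // 1 2 3
    postorder(&b); std::cout << '\n';   // 3 1 2
    return 0;
}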
  • 41. Generic tree: To traverse any tree in depth-first order, perform the following operations recursively at each node: 1. Perform the pre-order operation. 2. For each i (with i = 1 to n − 1): visit the i-th child, if present, then perform the in-order operation. 3. Visit the n-th (last) child, if present. 4. Perform the post-order operation. Here n is the number of child nodes. Depending on the problem at hand, the pre-order, in-order, or post-order operations may be void, or you may only want to visit a specific child node, so these operations are optional. In practice more than one of the pre-order, in-order, and post-order operations may be required. For example, when inserting into a ternary tree, a pre-order operation is performed by comparing items, and a post-order operation may be needed afterwards to re-balance the tree. 1.8.1. Example Binary tree traversal: Preorder, Inorder, and Postorder. To illustrate the binary tree traversals, consider the binary tree below (a search tree holding the keys 0 through 10, with 7 at the root). Preorder traversal: to traverse a binary tree in preorder, the following operations are carried out: (i) visit the root, (ii) traverse the left subtree, and (iii) traverse the right subtree. Therefore, the preorder traversal of the above tree outputs: 7, 1, 0, 3, 2, 5, 4, 6, 9, 8, 10. Inorder traversal: to traverse a binary tree in inorder, the following operations are carried out: (i) traverse the left subtree, (ii) visit the root, and (iii) traverse the right subtree.
  • 42. Therefore, the inorder traversal of the above tree outputs: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10. Postorder traversal: to traverse a binary tree in postorder, the following operations are carried out: (i) traverse the left subtree, (ii) traverse the right subtree, and (iii) visit the root. Therefore, the postorder traversal of the above tree outputs: 0, 2, 4, 6, 5, 3, 1, 8, 10, 9, 7. 1.10. Heap Sort and Priority Queues Heaps: a data structure and associated algorithms (not to be confused with the heap used for dynamic memory allocation and garbage collection). A heap is an array of objects that can be viewed as a complete binary tree such that: 1. Each tree node corresponds to an element of the array. 2. The tree is complete except possibly the lowest level, which is filled from left to right. The heap property is defined by an ordering relation R between each node and its descendants; for example, R could be "smaller than" or "bigger than". In the examples that follow, we will use the "bigger than" relation (a max-heap). Example: Given the array [22 13 10 8 7 6 2 4 3 5]
  • 43. Note that the elements are not sorted; only the max element is at the root of the tree. The height of a node in the tree is the number of edges on the longest simple downward path from the node to a leaf; e.g. the height of node 6 is 0, the height of node 4 is 1, and the height of node 1 (the root) is 3. The height of the tree is the height of the root; as in any complete binary tree of size n, this is about lg n. There are 2^h nodes at level h, and a complete binary tree of height h has at most 2^(h+1) − 1 nodes in total. We can represent a heap as an array A that has two attributes: 1. Length(A) – the size of the array; 2. HeapSize(A) – the size of the heap. The property Length(A) ≥ HeapSize(A) must be maintained. The heap property is stated as A[parent(i)] ≥ A[i]. The root of the tree is A[1]. Formulas to compute parents and children in the array: Parent(i) = A[⌊i/2⌋], Left Child(i) = A[2i], Right Child(i) = A[2i+1]. A small sketch of this index arithmetic is given below.
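A small C++ sketch of the 1-based index arithmetic and of the max-heap property (the function names, and the convention of leaving A[0] unused so that indices match the formulas above, are choices made for this sketch):

#include <cassert>
#include <vector>

// 1-based heap index arithmetic, matching the formulas above (A[0] is unused).
inline int parent(int i)     { return i / 2; }      // floor(i/2)
inline int leftChild(int i)  { return 2 * i; }
inline int rightChild(int i) { return 2 * i + 1; }

// Check the max-heap property A[parent(i)] >= A[i] for every non-root node.
bool isMaxHeap(const std::vector<int>& A, int heapSize) {
    for (int i = 2; i <= heapSize; ++i)
        if (A[parent(i)] < A[i]) return false;
    return true;
}

int main() {
    // The example array above, with index 0 unused.
    std::vector<int> A = {0, 22, 13, 10, 8, 7, 6, 2, 4, 3, 5};
    assert(isMaxHeap(A, 10));
    return 0;
}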
  • 44. Where might we want to use heaps? Consider the priority queue problem: given a sequence of objects with varying degrees of priority, we want to deal with the highest-priority item first. Examples: managing air traffic control – do the most important tasks first; jobs are placed in a queue with a priority, and controllers take work off the top. Scheduling jobs on a processor – critical applications need high priority. An event-driven simulator with time of occurrence as the key – use a min-heap, which keeps the smallest element on top, to get the next occurring event. To support these operations we need to extract the maximum element from the heap:
HEAP-EXTRACT-MAX(A)
  remove A[1]
  A[1] = A[n]      ; n is HeapSize(A), the length of the heap, not of the array
  n = n - 1        ; decrease the size of the heap
  Heapify(A,1,n)   ; remake the heap to restore the heap property
Runtime: Θ(1) plus the time for Heapify. Note: successive removals will yield the items in reverse sorted order! We will look at: Heapify (maintain the heap property), Build-Heap (how to initially build a heap), and Heapsort (sorting using a heap). Heapify: maintain the heap property by "floating" a value down the heap, starting at node i, until it is in the right position.
Heapify(A,i,n)                          ; array A, heapify node i, heap size n
  ; the left and right subtrees of i are assumed to already be heaps;
  ; make i's subtree a heap as well
  If (2i ≤ n) and A[2i] > A[i]          ; find the largest of the current node and its children
    largest = 2i
  else
    largest = i
  If (2i+1 ≤ n) and A[2i+1] > A[largest]
    largest = 2i+1
  If largest ≠ i
    swap(A[i], A[largest])
    Heapify(A,largest,n)
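The same procedure as a C++ sketch (again 1-based with A[0] unused; the name heapify and the vector-based signature are assumptions of this sketch, not fixed by the notes):

#include <iostream>
#include <utility>
#include <vector>

// Float the value at node i down until the subtree rooted at i is a max-heap.
// Assumes the left and right subtrees of i already satisfy the heap property.
void heapify(std::vector<int>& A, int i, int heapSize) {
    int largest = i;
    int l = 2 * i, r = 2 * i + 1;
    if (l <= heapSize && A[l] > A[largest]) largest = l;
    if (r <= heapSize && A[r] > A[largest]) largest = r;
    if (largest != i) {
        std::swap(A[i], A[largest]);
        heapify(A, largest, heapSize);   // continue floating the value down
    }
}

int main() {
    // The example traced on the next slides: A = [1 13 10 8 7 6 2 4 3 5].
    std::vector<int> A = {0, 1, 13, 10, 8, 7, 6, 2, 4, 3, 5};
    heapify(A, 1, 10);
    for (int i = 1; i <= 10; ++i) std::cout << A[i] << ' ';   // 13 8 10 4 7 6 2 1 3 5
    std::cout << '\n';
    return 0;
}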
  • 45. Example: Heapify(A,1,10), A=[1 13 10 8 7 6 2 4 3 5]. Find the largest of the children and swap; all subtrees are already valid heaps, so we know the children are the maximums of their subtrees. Next is Heapify(A,2,10): A=[13 1 10 8 7 6 2 4 3 5]. Next is Heapify(A,4,10): A=[13 8 10 1 7 6 2 4 3 5].
  • 46. Next is Heapify(A,8,10): A=[13 8 10 4 7 6 2 1 3 5]. On this iteration we have reached a leaf and are finished. (Consider what happens if we start at node 3 with n = 7.) Runtime: intuitively this is Θ(lg n), since we make one trip down a single path of an almost complete binary tree. Building the Heap: given an array A, we want to turn this array into a heap. Note: the leaves are already heaps! Start from the leaves and build up from there:
  • 47. Build-Heap(A,n)
  for i = n downto 1      ; could start at ⌊n/2⌋, since the leaves are already heaps
    Heapify(A,i,n)
Start with the leaves (the last half of A) and consider each leaf as a one-element heap. Call Heapify on the parents of the leaves, and continue calling Heapify while moving up the tree to the root. Example: Build-Heap(A,10), A=[1 5 9 4 7 10 2 6 3 14]. Heapify(A,10,10) down to Heapify(A,6,10) exit immediately, since nodes 6 through 10 are leaves. Heapify(A,5,10) puts the largest of A[5] and its children (here A[10]) into A[5]:
  • 49. Heapify(A,4,10) and Heapify(A,3,10) fix nodes 4 and 3 in the same way. Heapify(A,2,10): the first iteration swaps A[2] with its larger child A[5] and then calls Heapify(A,5,10):
  • 51. Heapify(A,5,10) moves the value down once more, and Heapify(A,1,10) then floats the value at the root down in the same way. Finished heap: A=[14 7 10 6 5 9 2 4 3 1]. Running Time: we have a loop of n iterations, each calling Heapify, which runs in Θ(lg n); this implies a bound of O(n lg n). HeapSort: once we can build a heap and heapify a heap, sorting is easy. The idea is:
  • 52. HeapSort(A,n)
  Build-Heap(A,n)
  for i = n downto 2
    Swap(A[1], A[i])
    Heapify(A,1,i-1)
Runtime is O(n lg n): Build-Heap is O(n lg n), and we then call Heapify n − 1 times, each call costing O(lg n). Variations on heaps: the heap could keep the min on top instead of the max, and it could be a k-ary tree instead of binary.
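Putting the pieces together, a self-contained C++ sketch of Build-Heap and HeapSort over a 1-based array (the heapify routine is the same one sketched earlier; the names and the vector-based interface are choices made for this sketch, not fixed by the notes):

#include <iostream>
#include <utility>
#include <vector>

// Same sift-down routine as before (1-based indices, A[0] unused).
void heapify(std::vector<int>& A, int i, int heapSize) {
    int largest = i, l = 2 * i, r = 2 * i + 1;
    if (l <= heapSize && A[l] > A[largest]) largest = l;
    if (r <= heapSize && A[r] > A[largest]) largest = r;
    if (largest != i) {
        std::swap(A[i], A[largest]);
        heapify(A, largest, heapSize);
    }
}

// Build a max-heap bottom-up; starting at n/2 skips the leaves.
void buildHeap(std::vector<int>& A, int n) {
    for (int i = n / 2; i >= 1; --i)
        heapify(A, i, n);
}

// Repeatedly move the max to the end of the array and shrink the heap.
void heapSort(std::vector<int>& A, int n) {
    buildHeap(A, n);
    for (int i = n; i >= 2; --i) {
        std::swap(A[1], A[i]);
        heapify(A, 1, i - 1);
    }
}

int main() {
    std::vector<int> A = {0, 1, 5, 9, 4, 7, 10, 2, 6, 3, 14};  // index 0 unused
    heapSort(A, 10);
    for (int i = 1; i <= 10; ++i) std::cout << A[i] << ' ';    // 1 2 3 4 5 6 7 9 10 14
    std::cout << '\n';
    return 0;
}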
  • 53. Priority Queues: a priority queue is a data structure for maintaining a set S of elements, each with an associated key value. Operations: Insert(S,x) puts element x into set S; Max(S) returns the largest element in S; Extract-Max(S) removes and returns the largest element in S. Uses: job scheduling, event-driven simulation, etc. We can model priority queues nicely with a heap, because a heap allows fast queue maintenance. Max(S): just return the root element; this takes O(1) time.
Heap-Insert(A,key)
  n = n + 1
  i = n
  while (i > 1) and A[⌊i/2⌋] < key
    A[i] = A[⌊i/2⌋]
    i = ⌊i/2⌋
  A[i] = key
Idea: the same spirit as Heapify, but starting from a new node at the bottom and propagating its value up to the right level. Example: insert the new element 11, starting at a new node on the bottom, i = 8. Bubble up: i = 4, bubble up again. At this point the parent (at position 2) is larger, so the algorithm stops. Runtime = O(lg n), since we only move up the levels of the tree once.
Heap-Extract-Max(A,n)
  max = A[1]
  A[1] = A[n]
  n = n - 1
  Heapify(A,1,n)
  return max
Idea: make the n-th element the root, then call Heapify to fix the heap. This uses a constant amount of time plus one call to Heapify, which is O(lg n), so the total time is O(lg n).
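A minimal C++ sketch of these two priority-queue operations on the same 1-based array representation (the MaxPQ wrapper and its member names are illustrative choices for this sketch; the standard library's std::priority_queue offers the same operations ready-made):

#include <iostream>
#include <utility>
#include <vector>

// A max-priority-queue backed by a 1-based array heap (A[0] unused).
struct MaxPQ {
    std::vector<int> A{0};   // dummy element at index 0

    // Bubble a new key up from a fresh leaf until its parent is no smaller
    // (equivalent to the shifting loop in the Heap-Insert pseudocode above).
    void insert(int key) {
        A.push_back(key);
        int i = static_cast<int>(A.size()) - 1;
        while (i > 1 && A[i / 2] < A[i]) {
            std::swap(A[i], A[i / 2]);
            i /= 2;
        }
    }

    // Remove the root, move the last element to the root, and sift it down.
    int extractMax() {
        int n = static_cast<int>(A.size()) - 1;
        int max = A[1];
        A[1] = A[n];
        A.pop_back();
        --n;
        for (int i = 1;;) {                            // iterative Heapify(A,1,n)
            int largest = i, l = 2 * i, r = 2 * i + 1;
            if (l <= n && A[l] > A[largest]) largest = l;
            if (r <= n && A[r] > A[largest]) largest = r;
            if (largest == i) break;
            std::swap(A[i], A[largest]);
            i = largest;
        }
        return max;
    }
};

int main() {
    MaxPQ pq;
    for (int x : {13, 7, 10, 5, 11}) pq.insert(x);
    std::cout << pq.extractMax() << '\n';   // 13
    std::cout << pq.extractMax() << '\n';   // 11
    return 0;
}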
  • 54. Example: Extract-Max on a 7-element heap, Extract(A,7): remove the root 14, so max = 14. Move the last element, A[7], into A[1] and decrease the heap size, then Heapify(A,1,6):
  • 55. We have a new heap that is valid, with the max of 14 being returned. The 2 is sitting in the array twice, but since n has been updated to 6, it will be overwritten if a new element is added and is otherwise ignored. 1.11. Representing Expressions. An expression tree is built by reading the expression (in postfix form) one symbol at a time. If the symbol is an operand, a one-node tree is created and a pointer to it is pushed onto a stack. If the symbol is an operator, pointers to two trees T1 and T2 are popped from the stack and a new tree is formed whose root is the operator and whose left and right children point to T2 and T1 respectively. A pointer to this new tree is then pushed onto the stack. Example: the input is a b + c d e + * *. Since the first two symbols are operands, one-node trees are created and pointers to them are pushed onto a stack. For convenience the stack will grow from left to right.
  • 56. (Figure: the stack, growing from left to right.) The next symbol is a '+'. The two tree pointers are popped, a new tree is formed, and a pointer to it is pushed onto the stack. (Figure: formation of the new tree.) Next, c, d, and e are read. A one-node tree is created for each, and a pointer to the corresponding tree is pushed onto the stack. (Figure: the one-node trees on the stack.) Continuing, a '+' is read, and it merges the last two trees.
  • 57. (Figure: merging the two trees.) Now, a '*' is read. The last two tree pointers are popped and a new tree is formed with '*' as the root. (Figure: forming the new tree with '*' at the root.) Finally, the last symbol is read. The two trees are merged and a pointer to the final tree remains on the stack.
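The stack-based construction just traced can be written as a short C++ sketch (the ExprNode type, single-character symbols, and the printInfix helper are simplifications assumed here; memory management is omitted for brevity):

#include <iostream>
#include <stack>
#include <vector>

struct ExprNode {
    char symbol;
    ExprNode* left;
    ExprNode* right;
    ExprNode(char s, ExprNode* l = nullptr, ExprNode* r = nullptr)
        : symbol(s), left(l), right(r) {}
};

bool isOperator(char c) { return c == '+' || c == '-' || c == '*' || c == '/'; }

// Build an expression tree from a postfix expression, one symbol at a time:
// an operand becomes a one-node tree; an operator pops two trees and becomes their root.
ExprNode* buildExpressionTree(const std::vector<char>& postfix) {
    std::stack<ExprNode*> s;
    for (char c : postfix) {
        if (isOperator(c)) {
            ExprNode* t1 = s.top(); s.pop();   // popped first  -> right child
            ExprNode* t2 = s.top(); s.pop();   // popped second -> left child
            s.push(new ExprNode(c, t2, t1));
        } else {
            s.push(new ExprNode(c));           // operand: one-node tree
        }
    }
    return s.top();                            // pointer to the final tree
}

// In-order print (with parentheses) recovers an infix form of the expression.
void printInfix(const ExprNode* n) {
    if (!n) return;
    if (n->left) std::cout << '(';
    printInfix(n->left);
    std::cout << n->symbol;
    printInfix(n->right);
    if (n->right) std::cout << ')';
}

int main() {
    std::vector<char> postfix = {'a', 'b', '+', 'c', 'd', 'e', '+', '*', '*'};
    ExprNode* root = buildExpressionTree(postfix);
    printInfix(root);              // ((a+b)*(c*(d+e)))
    std::cout << '\n';
    return 0;
}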
  • 58. (Figure: the steps to construct the expression tree for a b + c d e + * *.) Algebraic expressions. (Figure: a binary algebraic expression tree equivalent to ((5 + z) / -8) * (4 ^ 2).) Algebraic expression trees represent expressions that contain numbers, variables, and unary and binary operators. Some of the common operators are × (multiplication), ÷ (division), + (addition), − (subtraction), ^ (exponentiation), and - (negation). The operators are contained in the internal nodes of the tree, with the numbers and variables in the leaf nodes. The nodes of binary operators have two child nodes, and the unary operators have one child node.
  • 59. Boolean expressions. (Figure: a binary boolean expression tree built from the constants true and false and boolean operators.) Boolean expressions are represented very similarly to algebraic expressions, the only difference being the specific values and operators used. Boolean expressions use true and false as constant values, and the operators include ∧ (AND), ∨ (OR), and ¬ (NOT).