UNIT-I Heuristic Search And Knowledge Representation
• Heuristic Search, Hill Climbing, Best-First Search, Problem Reduction, Means-Ends Analysis
• Representation and Mapping, Knowledge Representation, Issues in Knowledge Representation
• Representing simple facts in logic, representing instances and ISA relationships, computable functions and predicates
Prepared by Vijaya Ahire, Asst. Prof., Dept. of IT, JNEC
heuristic (adjective): enabling a person to discover or learn something for themselves.
heuristic (noun): a method of learning or solving problems that allows people to discover things themselves and learn from their own experiences.
Heuristic search is an AI search technique that employs a heuristic to guide its moves.
A heuristic is a rule of thumb that probably leads to a solution.
Heuristics play a major role in search strategies because of the exponential nature of most problems.
• A heuristic technique ("find" or "discover"), often called simply
a heuristic, is any approach to problem solving or self-discovery that
employs a practical method, not guaranteed to be optimal, perfect,
logical, or rational, but instead sufficient for reaching an immediate
goal.
• Where finding an optimal solution is impossible or impractical, heuristic methods can be used to
speed up the process of finding a satisfactory solution.
• Heuristics can be mental shortcuts that ease the cognitive load of making a decision.
• Examples that employ heuristics include using a rule of thumb, an educated guess,
an intuitive judgment, a guesstimate, profiling, or common sense.
Heuristics are problem-solving tools for challenging and non-routine problems.
Educated guess: a guess based on knowledge and experience and therefore likely to be correct.
Intuitive judgment: based on what one feels to be true even without conscious reasoning.
Guesstimate: an estimate based on a mixture of guesswork and calculation.
Profiling: the recording and analysis of a person's psychological and behavioural characteristics, so as to assess or predict their capabilities in a certain sphere or to assist in identifying categories of people ("we put everyone through psychometric profiling").
Common sense: good sense and sound judgement in practical matters.
• There is a framework for describing search methods
and there are several general-purpose search
techniques
• These methods are all varieties of heuristic search.
• They can be described independently of any particular
task or problem domain.
• But when applied to particular problems, their efficacy
is highly dependent on the way they exploit domain-
specific knowledge since in and of themselves they are
unable to overcome the combinatorial explosion to
which search processes are so vulnerable.
• For this reason, these techniques are often called
weak methods.
• Still, they continue to form the core of most AI systems.
• Direct techniques (blind search) are not
always possible (they require too much time
or memory).
• Weak techniques can be effective if applied
correctly on the right kinds of tasks.
– Typically require domain specific information.
These weak methods provide the framework into which domain-specific knowledge can be placed, either by hand or as a result of automatic learning.
Thus they continue to form the core of most AI systems.
Two very basic search strategies:
• Depth-first search
• Breadth-first search
and some others:
• Generate-and-test
• Hill climbing
• Best-first search
• Problem reduction
• Constraint satisfaction
• Means-ends analysis
Example: 8 Puzzle
[Figure: an example 8-puzzle start configuration and the goal configuration (top row 1 2 3, middle row 8 _ 4, bottom row 7 6 5).]
Which move is best?
[Figure: an 8-puzzle state, the three successor states produced by sliding a tile into the blank, and the goal state.]
8 Puzzle Heuristics
• Blind search techniques used an arbitrary
ordering (priority) of operations.
• Heuristic search techniques make use of
domain specific information - a heuristic.
• What heuristic(s) can we use to decide which 8-puzzle move is "best" (worth considering first)?
8 Puzzle Heuristics
• For now - we just want to establish some ordering of the possible moves (the values of our heuristic do not matter as long as it ranks the moves).
• Later - we will worry about the actual values
returned by the heuristic function.
A Simple 8-puzzle heuristic
• Number of tiles in the correct position.
– The higher the number the better.
– Easy to compute (fast and takes little memory).
– Probably the simplest possible heuristic.
Another approach
• Number of tiles in the incorrect position.
– This can also be considered a lower bound on the
number of moves from a solution!
– The “best” move is the one with the lowest
number returned by the heuristic.
– Is this heuristic more than a heuristic (is it always
correct?).
• Given any 2 states, does it always order them properly
with respect to the minimum number of moves away
from a solution?
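As a concrete illustration, here is a minimal Python sketch of the "tiles out of place" heuristic just described (the goal layout, the convention that 0 denotes the blank, and the function name are illustrative assumptions):

# Count of tiles out of place for the 8-puzzle; lower is better.
GOAL = ((1, 2, 3),
        (8, 0, 4),
        (7, 6, 5))

def misplaced_tiles(state, goal=GOAL):
    """Lower bound on the number of moves: tiles (not the blank) out of position."""
    return sum(1
               for r in range(3)
               for c in range(3)
               if state[r][c] != 0 and state[r][c] != goal[r][c])

# The "best" move is the successor state with the lowest heuristic value.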
[Figure: the same 8-puzzle state and its three successors, now labelled with the "tiles out of place" heuristic values h=2, h=4 and h=3; the successor with h=2 is the best move.]
Generate-and-test
• Very simple strategy - just keep guessing.
do while goal not accomplished
generate a possible solution
test solution to see if it is a goal
• Heuristics may be used to determine the specific
rules for solution generation.
Guess and Check :
• Making a reasonable guess
• Checking the guess
• Revising it when necessary
Algorithm: Generate-and-Test
1. Generate a possible solution. For some problems,
this means generating a particular point in the
problem space. For others, it means generating a
path from a start state.
2. Test to see if this is actually a solution by
comparing the chosen point or the endpoint of the
chosen path to the set of acceptable goal states.
3. If a solution has been found, quit. Otherwise,
return to step 1.
If the generation of possible solutions is done systematically,
then this procedure will find a solution eventually, if one exists.
Unfortunately, if the problem space is very large, "eventually" may
be a very long time.
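A minimal Python sketch of this loop (the function and parameter names are illustrative, not from the text):

import itertools

def generate_and_test(candidates, is_goal, limit=None):
    """Systematically generate candidate solutions and test each one."""
    for count, candidate in enumerate(candidates, start=1):
        if is_goal(candidate):
            return candidate              # step 3: a solution has been found, quit
        if limit is not None and count >= limit:
            break                         # "eventually" was taking too long
    return None                           # no solution found (within the limit)

# Example: find an ordering of 1..4 whose successive differences are all +1.
print(generate_and_test(itertools.permutations([1, 2, 3, 4]),
                        lambda p: all(b - a == 1 for a, b in zip(p, p[1:]))))
# (1, 2, 3, 4)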
Example - Traveling Salesman Problem
(TSP)
• Traveler needs to visit n cities.
• Know the distance between each pair of cities.
• Want to know the shortest route that visits all
the cities once.
• n=80 will take millions of years to solve
exhaustively!
TSP Example
[Figure: four cities A, B, C and D joined by edges whose weights (1-6) give the distances between each pair of cities.]
Generate-and-test Example
• TSP - generation of possible solutions is done in order of cities:
1. A - B - C - D
2. A - B - D - C
3. A - C - B - D
4. A - C - D - B
...
[Figure: the enumeration tree that generates these tours, branching from A over the remaining cities B, C and D.]
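The same idea as a short Python sketch for a four-city instance. Only the set of edge weights (1-6) is recoverable from the figure, so the particular assignment of distances below is an illustrative assumption:

import itertools

DIST = {('A', 'B'): 1, ('A', 'C'): 2, ('A', 'D'): 6,
        ('B', 'C'): 3, ('B', 'D'): 5, ('C', 'D'): 4}   # assumed assignment of distances

def dist(a, b):
    return DIST[(a, b)] if (a, b) in DIST else DIST[(b, a)]

def tour_length(tour):
    """Total length of a route that visits the cities in the given order."""
    return sum(dist(a, b) for a, b in zip(tour, tour[1:]))

# Generate-and-test: enumerate every tour starting from A and keep the shortest.
best = min((('A',) + rest for rest in itertools.permutations('BCD')),
           key=tour_length)
print(best, tour_length(best))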
• In numerical analysis, hill climbing is a
mathematical optimization technique which
belongs to the family of local search.
• It is an iterative algorithm that starts with an
arbitrary solution to a problem, then attempts
to find a better solution by making an
incremental change to the solution.
Hill Climbing
Hill Climbing
• Variation on generate-and-test: in which feedback from the test procedure
is used to help the generator decide which direction to move in the search
space.
• In a pure generate-and-test procedure, the test function responds with
only a yes or no.
• But if the test function is augmented with a heuristic function that
provides an estimate of how close a given state is to a goal state, the
generate procedure can exploit it
• Hill climbing is often used when a good heuristic function is available for
evaluating states but when no other useful knowledge is available.
– generation of next state depends on feedback from the test
procedure.
– Test now includes a heuristic function that provides a guess as to how
good each possible state is.
• There are a number of ways to use the information returned by the test
procedure.
Simple Hill Climbing
• Use heuristic to move only to states that are
better than the current state.
• Always move to better state when possible.
• The process ends when all operators have
been applied and none of the resulting states
are better than the current state.
Algorithm: Simple Hill Climbing
1. Evaluate the initial state. If it is also a goal state, then
return it and quit. Otherwise, continue with the initial
state as the current state.
2. Loop until a solution is found or until there are no new
operators left to be applied in the current state:
(a) Select an operator that has not yet been applied to
the current state and apply it to produce a new state.
(b) Evaluate the new state.
(i) If it is a goal state, then return it and quit.
(ii) If it is not a goal state but it is better than the
current state, then make it the current state.
(iii) If it is not better than the current state, then
continue in the loop.
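A compact Python sketch of this procedure; the names (successors, value, is_goal) are illustrative, and "better" is assumed to mean a higher heuristic value:

def simple_hill_climbing(initial, successors, value, is_goal):
    current = initial
    if is_goal(current):                              # step 1
        return current
    while True:                                       # step 2
        moved = False
        for new_state in successors(current):         # 2(a): apply operators in turn
            if is_goal(new_state):                    # 2(b)(i)
                return new_state
            if value(new_state) > value(current):     # 2(b)(ii): first better state wins
                current = new_state
                moved = True
                break
        if not moved:                                 # no operator gave a better state:
            return current                            # stop at the (possibly local) optimum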
Simple Hill Climbing
Function Optimization
[Figure: hill climbing viewed as maximising a one-dimensional function y = f(x).]
• The key difference between this algorithm and the one for generate-and-test is the use of an evaluation function as a way to inject task-specific knowledge into the control process.
• It is the use of such knowledge that makes this
and the other methods heuristic search
methods, and it is that same knowledge that
gives these methods their power to solve
some otherwise intractable problems.
“Is one state better than another”
For the algorithm to work, a precise definition of
better must be provided. In some cases, it
means a higher value of the heuristic function.
In others, it means a lower value.
It does not matter which, as long as a particular
hill-climbing program is consistent in its
interpretation.
https://www.geeksforgeeks.org/introduction-
hill-climbing-artificial-intelligence/
Potential Problems with
Simple Hill Climbing
• Will terminate when at local optimum.
• The order of application of operators can
make a big difference.
• Can’t see past a single move in the state
space.
Simple Hill Climbing Example
• TSP - define state space as the set of all
possible tours.
• Operators exchange the position of adjacent
cities within the current tour.
• Heuristic function is the length of a tour.
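A small Python sketch of this operator set - exchanging the cities at adjacent positions of the tour, wrapping around so that Swap 1,2 ... Swap 4,1 are all available - with the tour length (to be minimised) as the heuristic; names are illustrative:

def tour_neighbours(tour):
    """Successor tours: exchange the cities at each pair of adjacent positions."""
    n = len(tour)
    for i in range(n):
        j = (i + 1) % n                    # wrap around: the last swap is Swap 4,1
        successor = list(tour)
        successor[i], successor[j] = successor[j], successor[i]
        yield tuple(successor)

print(list(tour_neighbours(('A', 'B', 'C', 'D'))))
# [('B','A','C','D'), ('A','C','B','D'), ('A','B','D','C'), ('D','B','C','A')]

# Simple hill climbing would move to the first neighbour whose tour_length
# (see the earlier TSP sketch) is smaller than that of the current tour.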
TSP Hill Climb State Space
[Figure: TSP hill-climbing state space. The initial tour ABCD is expanded by the operators Swap 1,2, Swap 2,3, Swap 3,4 and Swap 4,1, producing the successor tours BACD, ACBD, ABDC and DBCA; applying the same operators again yields further tours such as CABD, ACDB and DCBA.]
Steepest-Ascent Hill Climbing (Sharpest rise)
• A variation on simple hill climbing.
• It considers all the moves from the current state and selects the best one as the next state. This method is called steepest-ascent hill climbing or gradient search.
• This contrasts with the basic method, in which the first state that is better than the current state is selected. The algorithm works as follows.
• The order of operators does not matter.
• Not just climbing to a better state, climbing up the
steepest slope.
Algorithm: Steepest-Ascent Hill Climbing
1. Evaluate the initial state. If it is also a goal state, then return
it and quit. Otherwise, continue with the initial state as the
current state.
2. Loop until a solution is found or until a complete iteration
produces no change to current state:
(a) Let SUCC be a state such that any possible successor of
the current state will be better than SUCC.
(b) For each operator that applies to the current state do:
(i) Apply the operator and generate a new state.
(ii) Evaluate the new state. If it is a goal state, then return
it and quit. If not, compare it to SUCC. If it is better, then set
SUCC to this state. If it is not better, leave SUCC alone.
(c) If the SUCC is better than current state, then set current
state to SUCC.
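A Python sketch of steepest-ascent hill climbing under the same illustrative assumptions as the earlier sketch (higher value means "better"):

def steepest_ascent_hill_climbing(initial, successors, value, is_goal):
    current = initial
    if is_goal(current):                               # step 1
        return current
    while True:                                        # step 2
        succ = None                                    # 2(a): SUCC, best successor so far
        for new_state in successors(current):          # 2(b): examine *all* operators
            if is_goal(new_state):
                return new_state
            if succ is None or value(new_state) > value(succ):
                succ = new_state
        if succ is not None and value(succ) > value(current):
            current = succ                             # 2(c): climb the steepest slope
        else:
            return current                             # a full iteration changed nothing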
• There is a trade-off between the time required
to select a move (usually longer for steepest-ascent hill climbing) and the number of moves
required to get to a solution (usually longer
for basic hill climbing) that must be
considered when deciding which method will
work better for a particular problem.
• trade-off =a balance achieved between two
desirable but incompatible features; a
compromise.
Hill Climbing Termination
• Both basic and steepest-ascent hill climbing may fail to find a
solution.
• Either algorithm may terminate not by finding a goal state but
by getting to a state from which no better states can be
generated.
• This will happen if the program has reached either a local
maximum, a plateau, or a ridge.
• Local Optimum(maximum) :-
– is a state that is better than all its neighbors but is not better
than some other states farther away.
– At a local maximum, all moves appear to make things
worse.
– Local maxima are frustrating because they often occur
almost within sight of a solution. In this case, they
are called foothills.
Hill Climbing Termination
• Plateau – is a flat area of the search space in which all
neighboring states are the same as the current state.
On a plateau, it is not possible to determine the best
direction in which to move by making local
comparisons.
• Ridge - a special kind of local maximum, caused by the inability to apply two operators at once. It is an area of the search space that is higher than the surrounding areas and that itself has a slope.
There are some ways of dealing with these problems,
although these methods are by no means guaranteed:
• Backtrack to some earlier node and try going in a different
direction. This is particularly reasonable if at that node
there was another direction that looked promising or
almost as promising as the one that was chosen earlier. To
implement this strategy, maintain a list of paths almost
taken and go back to one of them if the path that was taken
leads to a dead end. This is a fairly good way of dealing with
local maxima.
• Make a big jump in some direction to try to get to a new
section of the search space. This is a particularly good way
of dealing with plateaus. If the only rules available describe
single small steps, apply them several times in the same
direction.
• Apply two or more rules before doing the test. This
corresponds to moving in several directions at once. This is
a particularly good strategy for dealing with ridges.
• Hill climbing is not always very effective. It is particularly unsuited to problems where the value of the heuristic function drops off suddenly as you move away from a solution.
• This is often the case whenever any sort of
threshold effect is present.
• Hill climbing is a local method, by which we mean
that it decides what to do next by looking only at
the "immediate" consequences of its choice
rather than by exhaustively exploring all the
consequences
• Advantage: hill climbing is less combinatorially explosive than comparable global methods.
• Disadvantage: like other local methods, it lacks a guarantee that it will be effective.
• combinatorial :- relating to the selection of a
given number of elements from a larger
number without regard to their arrangement.
Simulated Annealing
• Simulated annealing is a variation of hill climbing in which, at the beginning of the
process, some downhill moves may be made.
• The idea is to do enough exploration of the whole space early on so that the final
solution is relatively insensitive to the starting state.
• This should lower the chances of getting caught at a local maximum, a plateau, or
a ridge.
• We use the term objective function in place of
the term heuristic function.
• we attempt to minimize rather than maximize
the value of the objective function.
• Thus we actually describe a process of valley
descending rather than hill climbing.
• Simulated annealing [Kirkpatrick et al., 1983] as a computational
process is patterned after the physical process of annealing, in
which physical substances such as metals are melted ( i.e., raised to
high energy levels) and then gradually cooled until some solid state
is reached.
• The goal of this process is to produce a minimal-energy final state.
• Thus this process is one of valley descending in which the objective
function is the energy level.
• Physical substances usually move from higher energy configurations
to lower ones, so the valley descending occurs naturally.
• But there is some probability that a transition to a higher energy
state will occur.
• in the physical valley descending that occurs during annealing,
the probability of a large uphill move is lower than the
probability of a small one.
• Also, the probability that an uphill move will be made decreases
as the temperature decreases.
• Thus such moves are more likely during the beginning of the
process when the temperature is high, and they become less
likely at the end as the temperature becomes lower.
• One way to characterize this process is that downhill moves are
allowed anytime.
• Large upward moves may occur early on, but as the process
progresses, only relatively small upward moves are allowed until
finally the process converges to a local minimum configuration.
• The rate at which the system is cooled is called
the annealing schedule.
• Physical annealing processes are very sensitive
to the annealing schedule.
• If cooling occurs too rapidly, stable regions of
high energy will form.
The algorithm for simulated annealing is only slightly different
from the simple hill-climbing procedure. The three differences
are:
• The annealing schedule must be maintained.
• Moves to worse states may be accepted.
• It is a good idea to maintain, in addition to the current state,
the best state found so far. Then, if the final state is worse than
that earlier state (because of bad luck in accepting moves to
worse states), the earlier state is still available
Algorithm: Simulated Annealing
1. Evaluate the initial state. If it is also a goal state, then return it and quit. Otherwise, continue
with the initial state as the current state.
2. Initialize BEST-SO-FAR to the current state.
3. Initialize T according to the annealing schedule.
4. Loop until a solution is found or until there are no new operators left to be applied in the
current state.
(a) Select an operator that has not yet been applied to the current state and apply it to
produce a new state.
(b) Evaluate the new state. Compute
∆E = (value of current) - (value of new state)
• If the new state is a goal state, then return it and quit.
• If it is not a goal state but is better than the current state, then make it the current state.
Also set BEST-SO-FAR to this new state.
• If it is not better than the current state, then make it the current state with probability p'
as defined above. This step is usually implemented by invoking a random number generator to
produce a number in the range [0,1 ]. If that number is less than p', then the move is
accepted. Otherwise, do nothing.
(c) Revise T as necessary according to the annealing schedule.
5. Return BEST-SO-FAR as the answer.
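A Python sketch of the procedure. The slide refers to a probability p' "as defined above" without giving it; the sketch assumes the usual Boltzmann-style choice p' = e^(-ΔE/T). The names, the iteration cap and the use of a higher value as "better" are illustrative assumptions:

import math
import random

def simulated_annealing(initial, successors, value, is_goal, schedule):
    current = initial
    best_so_far = current                                    # step 2
    for t in range(1, 100_000):                              # step 4 (with an arbitrary cap)
        T = schedule(t)                                      # steps 3 and 4(c)
        if T <= 1e-9:
            break                                            # annealing schedule says quit
        new_state = random.choice(list(successors(current))) # 4(a)
        if is_goal(new_state):
            return new_state
        dE = value(current) - value(new_state)               # 4(b): ΔE
        if dE < 0:                                           # the new state is better
            current = new_state
            if value(current) > value(best_so_far):
                best_so_far = current
        elif random.random() < math.exp(-dE / T):            # accept a worse move with p'
            current = new_state
    return best_so_far                                       # step 5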
To implement this algorithm, it is necessary to select an
annealing schedule, which has three components.
• The first is the initial value to be used for temperature.
• The second is the criteria that will be used to decide when
the temperature of the system should be reduced.
• The third is the amount by which the temperature will be
reduced each time it is changed. There may also be a
fourth component of the schedule, namely, when to quit.
• Simulated annealing is often used to solve problems in
which the number of moves from a given state is very large
(such as the number of permutations that can be made to a
proposed traveling salesman route).
• For such problems, it may not make sense to try all possible
moves. Instead, it may be useful to exploit some criterion
involving the number of moves that have been tried since
an improvement was found.
• BEST-FIRST SEARCH
• We have already seen two systematic control strategies: breadth-first search and depth-first search (of several varieties).
• A new method, best-first search, is a way of combining the advantages of both depth-first and breadth-first search into a single method.
Best-First Search (cont.)
While goal not reached:
1. Generate all potential successor states and
add to a list of states.
2. Pick the best state in the list and go to it.
• Similar to steepest-ascent, but don’t throw
away states that are not chosen.
• Figure shows the beginning of a best-first search procedure.
• Initially, there is only one node, so it will be expanded.
• Doing so generates three new nodes.
• The heuristic function, which, in this example, is an estimate of the
cost of getting to a solution from a given node, is applied to each of
these new nodes.
• Since node D is the most promising, it is expanded next, producing
two successor nodes, E and F.
• But then the heuristic function is applied to them.
• Now another path, that going through node B, looks more
promising, so it is pursued, generating nodes G and H.
• But again when these new nodes are evaluated they look less
promising than another path, so attention is returned to the path
through D to E. E is then expanded, yielding nodes I and J.
• At the next step, J will be expanded, since it is the most promising.
• This process can continue until a solution is found.
Notice that this procedure is very similar to the procedure
for steepest-ascent hill climbing, with two exceptions.
• In hill climbing, one move is selected and all the others
are rejected, never to be reconsidered. This produces the straight-line behavior that is characteristic of hill climbing. In best-first search, one move is selected, but
the others are kept around so that they can be
revisited later if the selected path becomes less
promising.
• Further, the best available state is selected in best-first
search, even if that state has a value that is lower than
the value of the state that was just explored. This
contrasts with hill climbing, which will stop if there are
no successor states with better values than the current
state.
• Although the example shown above illustrates a best-first
search of a tree, it is sometimes important to search a graph
instead so that duplicate paths will not be pursued.
• An algorithm to do this will operate by searching a directed
graph in which each node represents a point in the problem
space.
• Each node will contain, in addition to a description of the problem state it represents, an indication of how promising it
is, a parent link that points back to the best node from which
it came, and a list of the nodes that were generated from it.
• The parent link will make it possible to recover the path to the
goal once the goal is found.
• The list of successors will make it possible, if a better path is
found to an already existing node, to propagate the
improvement down to its successors.
• We will call a graph of this sort an OR graph, since each of its
branches represents an alternative problem-solving path.
• To implement such a graph-search procedure, we will
need to use two lists of nodes:
• OPEN - nodes that have been generated and have had
the heuristic function applied to them but which have not
yet been examined (i.e., had their successors generated).
OPEN is actually a priority queue in which the elements
with the highest priority are those with the most
promising value of the heuristic function. Standard
techniques for manipulating priority queues can be used
to manipulate the list.
• CLOSED - nodes that have already been examined.
We need to keep these nodes in memory if we want to
search a graph rather than a tree, since whenever a new
node is generated, we need to check whether it has been
generated before.
• We will also need a heuristic function that
estimates the merits of each node we
generate. This will enable the algorithm to
search more promising paths first.
Algorithm: Best-First Search
1. Start with OPEN containing just the initial state.
2. Until a goal is found or there are no nodes left on OPEN do:
(a) Pick the best node on OPEN.
(b) Generate its successors.
(c) For each successor do:
(i) If it has not been generated before, evaluate it, add it to OPEN, and
record its parent.
(ii) If it has been generated before, change the parent if this new path is better than the previous one. In that case, update the cost of getting to this node and to any successors that this node may already have.
The basic idea of this algorithm is simple. Unfortunately, it is rarely the case
that graph traversal algorithms are simple to write correctly
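A compact Python sketch of best-first search over a graph, with OPEN kept as a priority queue ordered by the heuristic. For brevity it records every generated state in a single set and omits the parent-relinking of step 2(c)(ii); the names are illustrative:

import heapq
import itertools

def best_first_search(initial, successors, h, is_goal):
    counter = itertools.count()                        # tie-breaker for equal h values
    open_list = [(h(initial), next(counter), initial, [initial])]   # OPEN
    seen = {initial}                                   # states already generated
    while open_list:
        _, _, state, path = heapq.heappop(open_list)   # pick the best node on OPEN
        if is_goal(state):
            return path                                # path from the initial state to the goal
        for succ in successors(state):                 # generate its successors
            if succ not in seen:                       # not generated before
                seen.add(succ)
                heapq.heappush(open_list, (h(succ), next(counter), succ, path + [succ]))
    return None                                        # OPEN exhausted: no solution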
The A* Algorithm
• The best-first search algorithm is a simplification of an algorithm
called A*, which was first presented by Hart et al. [1968; 1972].
• This algorithm uses the f', g, and h' functions, as well as the lists OPEN and CLOSED.
• It avoids expanding paths that are already expensive, expanding the more promising paths first.
A* Algorithm (a sure test topic)
• The A* algorithm uses a modified evaluation
function and a Best-First search.
• A* minimizes the total path cost.
• Under the right conditions A* provides the
cheapest cost solution in the optimal time!
Algorithm: A*
1. Start with OPEN containing only the initial
node. Set that node's g value to 0, its h' value to
whatever it is, and its f' value to h' + 0, or h'. Set
CLOSED to the empty list.
A* evaluation function
• The evaluation function f is an estimate of the
value of a node x given by:
f(x) = g(x) + h’(x)
• g(x) is the cost to get from the start state to
state x.
• h’(x) is the estimated cost to get from state x
to the goal state (the heuristic).
Modified State Evaluation
• Value of each state is a combination of:
– the cost of the path to the state
– estimated cost of reaching a goal from the state.
• The idea is to use the path to a state to
determine (partially) the rank of the state
when compared to other states.
• This doesn’t make sense for DFS or BFS, but is
useful for Best-First Search.
A* Algorithm
• The general idea is:
– Best First Search with the modified evaluation
function.
– h’(x) is an estimate of the number of steps from
state x to a goal state.
– loops are avoided - we don’t expand the same
state twice.
– Information about the path to the goal state is
retained.
Example - Maze
[Figure: a maze with marked START and GOAL cells, used to illustrate A* search.]
2. Until a goal node is found, repeat the following procedure: If there are no nodes on OPEN, report
failure. Otherwise, pick the node on OPEN with the lowest f’ value. Call it BESTNODE. Remove it from
OPEN. Place it on CLOSED. See if BESTNODE is a goal node. If so, exit and report a solution (either
BESTNODE if all we want is the node or the path that has been created between the initial state and
BESTNODE if we are interested in the path). Otherwise, generate the successors of BESTNODE but do
not set BESTNODE to point to them yet. (First we need to see if any of them have already been
generated.) For each such SUCCESSOR, do the following:
(a) Set SUCCESSOR to point back to BESTNODE. These backwards links will make it possible to
recover the path once a solution is found.
(b) Compute g(SUCCESSOR) = g(BESTNODE) + the cost of getting from BESTNODE to SUCCESSOR.
(c) See if SUCCESSOR is the same as any node on OPEN (i.e., It has already been generated but not
processed). If so, call that node OLD. Since this node already exists in the graph, we can throw
SUCCESSOR away and add OLD to the list of BESTNODE's successors. Now we must decide
whether OLD's parent link should be reset to point to BESTNODE. It should be if the path we have
just found to SUCCESSOR is cheaper than the current best path to OLD (since SUCCESSOR and OLD
are really the same node). So see whether it is cheaper to get to OLD via its current parent or to
SUCCESSOR via BESTNODE by comparing their g values. If OLD is cheaper (or just as cheap), then
we need do nothing. If SUCCESSOR is cheaper, then reset OLD's parent link to point to BESTNODE,
record the new cheaper path in g(OLD), and update f’(OLD).
(d) If SUCCESSOR was not on OPEN, see if it is on CLOSED. If so, call the node on CLOSED OLD and add
OLD to the list of BESTNODE's successors. Check to see if the new path or the old path is better
just as in step 2(c), and set the parent link-and g and f’ values appropriately. If we have just found
a better path to OLD, we must propagate the improvement to OLD's successors. This is a bit
tricky. OLD points to its successors. Each successor in turn points to its successors, and so forth,
until each branch terminates with a node that either is still on OPEN or has no successors. So to
propagate the new cost downward, do a depth-first traversal of the tree starting at OLD, changing
each node's g value (and thus also its f' value), terminating each branch when you reach either a
node with no successors or a node to which an equivalent or better path has already been found.
This condition is easy to check for. Each node's parent link points back to its best known parent. As
we propagate down to a node, see if its parent points to the node we are coming from. If so,
continue the propagation. If not, then its g value already reflects the better path of which it is
part. So the propagation may stop here. But it is possible that with the new value of g being
propagated downward, the path we are following may become better than the path through the
current parent. So compare the two. If the path through the current parent is still better, stop the
propagation. If the path we are propagating through is now better, reset the parent and continue
propagation.
(e) If SUCCESSOR was not already on either OPEN or CLOSED, then put it on OPEN, and add it to the
list of BESTNODE's successors. Compute f'(SUCCESSOR) = g(SUCCESSOR) + h'(SUCCESSOR).
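A compact Python sketch of A*. It simplifies the bookkeeping described above: rather than relinking OLD's parent and propagating improved g values down to its successors, it re-inserts a node on OPEN whenever a cheaper path to it is found and discards stale entries when they are popped. successors(state) is assumed to yield (next_state, step_cost) pairs, and all names are illustrative:

import heapq
import itertools

def a_star(initial, successors, h, is_goal):
    counter = itertools.count()                      # tie-breaker for equal f' values
    g = {initial: 0}                                 # best known cost from the start
    parent = {initial: None}                         # back links to recover the path
    open_list = [(h(initial), next(counter), initial)]   # entries are (f', _, state)
    closed = set()
    while open_list:
        f, _, best = heapq.heappop(open_list)        # BESTNODE: lowest f' on OPEN
        if best in closed:
            continue                                 # stale entry: already expanded
        if is_goal(best):
            path = []                                # follow the back links
            while best is not None:
                path.append(best)
                best = parent[best]
            return list(reversed(path))
        closed.add(best)                             # move BESTNODE to CLOSED
        for succ, cost in successors(best):
            new_g = g[best] + cost                   # g(SUCCESSOR)
            if new_g < g.get(succ, float('inf')):    # cheaper path found (or new node)
                g[succ] = new_g
                parent[succ] = best
                heapq.heappush(open_list, (new_g + h(succ), next(counter), succ))
    return None                                      # OPEN is empty: report failure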
Several interesting observations can be made about this algorithm. The first concerns
the role of the g function. It lets us choose which node to expand next on the basis not only of
how good the node itself looks (as measured by h'), but also on the basis of how good the path to
the node was. By incorporating g into f’, we will not always choose as our next node to expand
the node that appears to be closest to the goal. This is useful if we care about the path we find. If,
on the other hand, we only care about getting to a solution somehow, we can define g always to
be 0, thus always choosing the node that seems closest to a goal. If we want to find a path
involving the fewest number of steps, then we set the cost of going from a node to its successor
as a constant, usually 1. If, on the other hand, we want to find the cheapest path and some
operators cost more than others, then we set the cost of going from one node to another to
reflect those costs. Thus the A* algorithm can be used whether we are interested in finding a
minimal-cost overall path or simply any path as quickly as possible.
The second observation involves h’, the estimator of h, the distance of a node to the
goal. If h' is a perfect estimator of h, then A* will converge immediately to the goal with no
search. The better h’ is, the closer we will get to that direct approach. If, on the other hand, the
value of h' is always 0, the search will be controlled by g. If the value of g is also 0, the search
strategy will be random. If the value of g is always 1, the search will be breadth first. All nodes on
one level will have lower g values, and thus lower f’ values than will all nodes on the next level.
What if, on the other hand, h' is neither perfect nor 0? Can we say anything interesting about the
behavior of the search? The answer is yes if we can guarantee that h' never overestimates h. In
that case, the A* algorithm is guaranteed to find an optimal (as determined by g) path to a goal, if
one exists. This can easily be seen from a few examples.
Consider the situation shown in Fig. 3.4. Assume that the cost of all arcs is 1. Initially, all nodes except A
are on OPEN (although the Fig. shows the situation two steps later, after B and E have been expanded).
For each node, f' is indicated as the sum of h' and g. In this example, node B has the lowest f’, 4, so it is
expanded first. Suppose it has only one successor E, which also appears to be three moves away from a
goal. Now f'(E) is 5, the same as f'(C). Suppose we resolve this in favor of the path we are currently
following. Then we will expand E next. Suppose it too has a single successor F, also judged to be three
moves from a goal. We are clearly using up moves and making no progress. But f’(F) = 6, which is
greater than f'(C). So we will expand C next. Thus we see that by underestimating h'(B) we have wasted
some effort. But eventually we discover that B was farther away than we thought and we go back and
try another path.
Now consider the situation shown in Fig. 3.5. Again we expand B on the first step. On the
second step we again expand E. At the next step we expand F, and finally we generate G, for a solution
path of length 4. But suppose there is a direct path from D to a solution, giving a path of length 2. We
will never find it. By overestimating h'(D) we make D look so bad that we may find some other, worse
solution without ever expanding D. In general, if h' might overestimate h, we cannot be guaranteed of
finding the cheapest path solution unless we expand the entire graph until all paths are longer than the
best solution . An interesting question is, "Of what practical significance is the theorem that if h’ never
overestimates h then A* is admissible?" The answer is, "almost none," because, for most real problems,
the only way to guarantee that h’ never overestimates h is to set it to zero. But then we are back to
breadth-first search, which is admissible but not efficient. But there is a corollary to this theorem that is
very useful. We can state it loosely as follows:
Graceful Decay of Admissibility: If h' rarely overestimates h by more than δ, then the A*
algorithm will rarely find a solution whose cost is more than δ greater than the cost of the
optimal solution.
The formalization and proof of this corollary will be left as an exercise.
The third observation we can make about the A* algorithm has to do with the relationship
between trees and graphs. The algorithm was stated in its most general form as it applies to graphs. It
can, of course, be simplified to apply to trees by not bothering to check whether a new node is already on OPEN or CLOSED. This makes it faster to generate nodes but may result in the same search being
conducted many times if nodes are often duplicated.
Under certain conditions, the A* algorithm can be shown to be optimal in that it generates
the fewest nodes in the process of finding a solution to a problem. Under other conditions it is not
optimal. For formal discussions of these conditions, see Gelperin [1977] and Martelli [1977].
Problem Reduction Search
• Planning how best to solve a problem that can be recursively decomposed into subproblems in multiple ways:
– Matrix multiplication problem
– Tower of Hanoi
– Blocks World problems
– Theorem proving
• When a problem can be divided into a set of sub problems,
where each sub problem can be solved separately and a
combination of these will be a solution, AND-OR graphs or
AND - OR trees are used for representing the solution.
• The decomposition of the problem or problem reduction
generates AND arcs.
• One AND arc may point to any number of successor nodes, all of which must be solved in order for the arc to point to a solution.
• Several arcs may also emerge from a single node, indicating several possible ways of solving the problem.
• Hence the graph is known as an AND-OR graph instead of simply an AND graph.
An algorithm to find a solution in an AND-OR graph must handle AND arcs appropriately; the A* algorithm cannot search AND-OR graphs efficiently.
The AO* algorithm is designed for searching AND-OR graphs.
Knowledge representation and
reasoning (KR, KR², KR&R)
• is the field of artificial intelligence (AI) dedicated to representing
information about the world in a form that a computer system can
utilize to solve complex tasks such as diagnosing a medical
condition or having a dialog in a natural language. Knowledge
representation incorporates findings from psychology[1] about how
humans solve problems and represent knowledge in order to
design formalisms that will make complex systems easier to design
and build. Knowledge representation and reasoning also
incorporates findings from logic to automate various kinds
of reasoning, such as the application of rules or the relations
of sets and subsets.[2]
• Examples of knowledge representation formalisms include semantic
nets, systems architecture, frames, rules, and ontologies. Examples
of automated reasoning engines include inference
engines, theorem provers, and classifiers.
KR Introduction
• General problem in Computer Science
• Solutions = Data Structures
– words
– arrays
– records
– list
• More specific problem in AI
• Solutions = knowledge structures
– lists
– trees
– procedural representations
– logic and predicate calculus
– rules
– semantic nets and frames
– scripts
Kinds of Knowledge
• Objects
– Descriptions
– Classifications
• Events
– Time sequence
– Cause and effect
• Relationships
– Among objects
– Between objects and events
• Meta-knowledge
Things we need to talk about and reason about; what do we know?
Distinguish between knowledge and its representation
• Mappings are not one-to-one
• Never get it complete or exactly right
Types of Knowledge
• a priori knowledge
– comes before knowledge perceived through senses
– considered to be universally true
• a posteriori knowledge
– knowledge verifiable through the senses
– may not always be reliable
• procedural knowledge
– knowing how to do something
• declarative knowledge
– knowing that something is true or false
• tacit (unspoken, understood) knowledge
– knowledge not easily expressed by language
Characteristics of a good KR:
• It should
– Be able to represent the knowledge important to the
problem
– Reflect the structure of knowledge in the domain
• Otherwise our development is a constant process of distorting things
to make them fit.
– Capture knowledge at the appropriate level of granularity
– Support incremental, iterative development
• It should not
– Be too difficult to reason about
– Require that more knowledge be represented than is
needed to solve the problem
Representation and mapping
• In order to solve the complex problems encountered in AI, we need a large amount of knowledge and mechanisms to manipulate it to create solutions.
• Two things must be considered:
– Facts: truths in some relevant world. These are the things we want to represent.
– Representations of facts in some chosen formalism. These are the things we will actually be able to manipulate.
• One way to think of structuring these entities is as two levels:
– The knowledge level, at which facts (including each agent's behaviors and current goals) are described.
– The symbol level, at which representations of objects at the knowledge level are defined in terms of symbols that can be manipulated by programs.
Rather than thinking of one level on top of another, the focus is on facts, on representations, and on the two-way mapping that must exist between them.
These links are called representation mappings.
The forward representation mapping maps from facts to representations; the backward representation mapping goes the other way.
Facts are stated in natural language (English sentences).
A simple example, using mathematical logic as the representational formalism:
Spot is a dog ------------- dog(Spot)
All dogs have tails ------- ∀x: dog(x) → hastail(x)
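Completing the cycle (the standard textbook example; the predicate names are the conventional ones): applying the deductive rules of logic to the two representations yields a new representation, which the backward mapping turns back into English.

dog(Spot)
∀x: dog(x) → hastail(x)
⊢ hastail(Spot)

Backward mapping: "Spot has a tail."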
Approaches to Knowledge Representation
Inference: a conclusion reached on the basis of evidence and reasoning.
Approaches to Knowledge
Representation
1. Simple relational knowledge
2. Inheritable Knowledge.
3. Inferential knowledge
4. Procedural knowledge
Simple relational knowledge corresponds to a set of attributes and associated values that together describe the objects of the knowledge base.
Inheritable Knowledge.
• All data should be stored in a hierarchy of classes.
• Classes must be arranged in a generalization hierarchy.
• Every individual frame can represent a collection of attributes and their values.
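A minimal Python sketch of inheritable knowledge, reusing the Spot-the-dog example; the frames, slot names and values are illustrative:

# Each frame stores its own attribute/value pairs plus an instance/isa link;
# attributes that are missing are inherited by walking up the hierarchy.
frames = {
    'Animal': {'isa': None, 'can-move': True},
    'Dog':    {'isa': 'Animal', 'legs': 4, 'has-tail': True},
    'Spot':   {'instance': 'Dog', 'colour': 'brown'},
}

def get_value(frame, attribute):
    while frame is not None:
        slots = frames[frame]
        if attribute in slots:
            return slots[attribute]
        frame = slots.get('instance') or slots.get('isa')
    return None

print(get_value('Spot', 'legs'))       # 4    (inherited from Dog)
print(get_value('Spot', 'can-move'))   # True (inherited from Animal)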
A semantic network is a knowledge structure that depicts how concepts are
related to one another and illustrates how they interconnect. Semantic
networks use artificial intelligence (AI) programming to mine data, connect concepts
and call attention to relationships.
Inferential knowledge
• Represent knowledge as formal logic
The inheritance property is a very useful form of inference.
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...
 
UNIT-III FMM. DIMENSIONAL ANALYSIS
UNIT-III FMM.        DIMENSIONAL ANALYSISUNIT-III FMM.        DIMENSIONAL ANALYSIS
UNIT-III FMM. DIMENSIONAL ANALYSIS
 
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
 
Introduction and different types of Ethernet.pptx
Introduction and different types of Ethernet.pptxIntroduction and different types of Ethernet.pptx
Introduction and different types of Ethernet.pptx
 

UNIT-I Part2 Heuristic Search Techniques.pptx

  • 7. Two very basic search strategies:
    – Depth-first search
    – Breadth-first search
  Some other (heuristic) methods:
    – Generate-and-test
    – Hill climbing
    – Best-first search
    – Problem reduction
    – Constraint satisfaction
    – Means-ends analysis
  • 8. Example: the 8-puzzle
    Start state:      Goal state:
    1 2 3             1 2 3
    7   4             8   4
    6 8 5             7 6 5
  • 9. Which move is best?
    (Diagram: a current 8-puzzle board with its successor boards, one per legal move of the blank, e.g. "right", shown alongside the GOAL board.)
  • 10. 8-Puzzle Heuristics
    – Blind search techniques use an arbitrary ordering (priority) of operations.
    – Heuristic search techniques make use of domain-specific information: a heuristic.
    – What heuristic(s) can we use to decide which 8-puzzle move is “best” (worth considering first)?
  • 11. 8-Puzzle Heuristics
    – For now, we just want to establish some ordering of the possible moves (the values of our heuristic do not matter as long as it ranks the moves).
    – Later, we will worry about the actual values returned by the heuristic function.
  • 12. A simple 8-puzzle heuristic
    – Number of tiles in the correct position.
      – The higher the number, the better.
      – Easy to compute (fast and takes little memory).
      – Probably the simplest possible heuristic.
  • 13. Another approach
    – Number of tiles in the incorrect position.
      – This can also be considered a lower bound on the number of moves from a solution!
      – The “best” move is the one with the lowest number returned by the heuristic.
      – Is this heuristic more than a heuristic (is it always correct)?
        – Given any 2 states, does it always order them properly with respect to the minimum number of moves away from a solution?
  • 14. (Diagram: the board from slide 9 again, with the misplaced-tiles heuristic evaluated for each successor: h = 2, h = 4 and h = 3. The move leading to h = 2 is preferred.)
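As a concrete illustration of the heuristics above, here is a minimal Python sketch of the misplaced-tiles count. It assumes a board is written as a tuple of nine integers read row by row, with 0 for the blank; that representation is a choice of this sketch, not something fixed by the slides.

# Misplaced-tiles heuristic for the 8-puzzle.
# Assumption: a board is a tuple of 9 integers, row by row, 0 = blank.

GOAL = (1, 2, 3,
        8, 0, 4,
        7, 6, 5)          # goal layout used on slide 8

def misplaced_tiles(state, goal=GOAL):
    """Number of tiles (blank excluded) that are not in their goal position."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

# The lower the value, the closer the state looks to the goal, so hill climbing
# or best-first search would prefer the successor with the smallest value.
print(misplaced_tiles((1, 2, 3, 7, 0, 4, 6, 8, 5)))   # start state of slide 8 -> 3
print(misplaced_tiles(GOAL))                          # goal state -> 0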
  • 15. Generate-and-test
    – Very simple strategy: just keep guessing.
      do while goal not accomplished
          generate a possible solution
          test solution to see if it is a goal
    – Heuristics may be used to determine the specific rules for solution generation.
    Guess and check:
    – Making a reasonable guess
    – Checking the guess
    – Revising it when necessary
  • 16. Algorithm: Generate-and-Test
    1. Generate a possible solution. For some problems, this means generating a particular point in the problem space. For others, it means generating a path from a start state.
    2. Test to see if this is actually a solution by comparing the chosen point or the endpoint of the chosen path to the set of acceptable goal states.
    3. If a solution has been found, quit. Otherwise, return to step 1.
    If the generation of possible solutions is done systematically, then this procedure will find a solution eventually, if one exists. Unfortunately, if the problem space is very large, "eventually" may be a very long time.
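A minimal Python sketch of this loop. The candidate generator and the goal test are assumptions supplied by the caller; their names here are illustrative, not from the slides.

def generate_and_test(candidates, is_goal):
    """Return the first candidate that passes the goal test, or None."""
    for candidate in candidates:       # step 1: generate a possible solution
        if is_goal(candidate):         # step 2: test it against the goal states
            return candidate           # step 3: a solution has been found, quit
    return None                        # candidates exhausted without success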
  • 17. Example: the Traveling Salesman Problem (TSP)
    – A traveler needs to visit n cities.
    – We know the distance between each pair of cities.
    – We want to know the shortest route that visits all the cities once.
    – n = 80 would take millions of years to solve exhaustively!
  • 18. TSP example
    (Diagram: four cities A, B, C and D joined by edges whose lengths are 4, 6, 3, 5, 1 and 2.)
  • 19. Generate-and-test example
    – For the TSP, generation of possible solutions is done in order of cities:
      1. A – B – C – D
      2. A – B – D – C
      3. A – C – B – D
      4. A – C – D – B
      ...
    (Diagram: the search tree rooted at A, branching over the remaining cities B, C and D at each level.)
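The same idea for the four-city example, sketched in Python. The distance table is made-up sample data: the diagram on slide 18 labels the six edges 4, 6, 3, 5, 1 and 2, but the transcript does not say which edge carries which length, so the assignment below is only illustrative.

from itertools import permutations

# Generate-and-test for the 4-city TSP: enumerate tours city by city
# (A-B-C-D, A-B-D-C, ...) and keep the shortest closed tour.
DIST = {frozenset("AB"): 4, frozenset("AC"): 6, frozenset("AD"): 3,
        frozenset("BC"): 5, frozenset("BD"): 1, frozenset("CD"): 2}   # illustrative values

def tour_length(tour):
    """Length of a closed tour that returns to the starting city."""
    legs = zip(tour, tour[1:] + tour[:1])
    return sum(DIST[frozenset(pair)] for pair in legs)

def tsp_generate_and_test(cities="ABCD"):
    best = min(permutations(cities), key=tour_length)   # exhaustive generation
    return "".join(best), tour_length(best)

print(tsp_generate_and_test())   # prints the shortest closed tour and its length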
  • 20. Hill Climbing
    – In numerical analysis, hill climbing is a mathematical optimization technique which belongs to the family of local search.
    – It is an iterative algorithm that starts with an arbitrary solution to a problem, then attempts to find a better solution by making an incremental change to the solution.
  • 21. Hill Climbing
    – A variation on generate-and-test in which feedback from the test procedure is used to help the generator decide which direction to move in the search space.
    – In a pure generate-and-test procedure, the test function responds with only a yes or no.
    – But if the test function is augmented with a heuristic function that provides an estimate of how close a given state is to a goal state, the generate procedure can exploit it.
    – Hill climbing is often used when a good heuristic function is available for evaluating states but no other useful knowledge is available.
      – Generation of the next state depends on feedback from the test procedure.
      – The test now includes a heuristic function that provides a guess as to how good each possible state is.
    – There are a number of ways to use the information returned by the test procedure.
  • 22. Simple Hill Climbing
    – Use the heuristic to move only to states that are better than the current state.
    – Always move to a better state when possible.
    – The process ends when all operators have been applied and none of the resulting states are better than the current state.
  • 23. Algorithm: Simple Hill Climbing
    1. Evaluate the initial state. If it is also a goal state, then return it and quit. Otherwise, continue with the initial state as the current state.
    2. Loop until a solution is found or until there are no new operators left to be applied in the current state:
      (a) Select an operator that has not yet been applied to the current state and apply it to produce a new state.
      (b) Evaluate the new state.
        (i) If it is a goal state, then return it and quit.
        (ii) If it is not a goal state but it is better than the current state, then make it the current state.
        (iii) If it is not better than the current state, then continue in the loop.
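A minimal Python sketch of this algorithm. The functions successors (applies the operators one at a time), value (the heuristic, with higher assumed better here) and is_goal are assumptions supplied by the caller; none of these names come from the slides.

def simple_hill_climbing(initial, successors, value, is_goal):
    current = initial
    if is_goal(current):
        return current
    while True:
        improved = False
        for new_state in successors(current):      # apply operators one by one
            if is_goal(new_state):
                return new_state
            if value(new_state) > value(current):  # first better state wins
                current = new_state
                improved = True
                break
        if not improved:                           # no operator helped: stop
            return current                         # (possibly only a local maximum)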
  • 24. Simple hill climbing as function optimization
    (Plot: an objective function y = f(x), with the x and y axes labelled.)
  • 25. – The key difference between this algorithm and the one for generate-and-test is the use of an evaluation function as a way to inject task-specific knowledge into the control process.
    – It is the use of such knowledge that makes this and the other methods heuristic search methods, and it is that same knowledge that gives these methods their power to solve some otherwise intractable problems.
  • 26. “Is one state better than another?”
    – For the algorithm to work, a precise definition of better must be provided.
    – In some cases, it means a higher value of the heuristic function. In others, it means a lower value.
    – It does not matter which, as long as a particular hill-climbing program is consistent in its interpretation.
    https://www.geeksforgeeks.org/introduction-hill-climbing-artificial-intelligence/
  • 27. Potential problems with simple hill climbing
    – Will terminate when at a local optimum.
    – The order of application of operators can make a big difference.
    – Can’t see past a single move in the state space.
  • 28. Simple hill climbing example
    – TSP: define the state space as the set of all possible tours.
    – Operators exchange the position of adjacent cities within the current tour.
    – The heuristic function is the length of a tour.
  • 29. TSP hill-climbing state space
    (Diagram: the initial tour ABCD with its successors obtained by swapping adjacent cities — Swap 1,2 → BACD, Swap 2,3 → ACBD, Swap 3,4 → ABDC, Swap 4,1 → DBCA — and further tours such as CABD, ACDB and DCBA one level deeper.)
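Putting slides 28 and 29 together, the sketch below hill-climbs over tours using the adjacent-swap operators. length_of is assumed to be any function mapping a tour to its length, for example the tour_length defined in the generate-and-test sketch above; minimising the length plays the role of the heuristic here.

# Simple hill climbing on TSP tours: swap adjacent cities, keep the first
# improvement, stop at a local optimum.

def adjacent_swaps(tour):
    """All tours obtained by exchanging two adjacent cities (with wrap-around)."""
    n = len(tour)
    for i in range(n):
        new = list(tour)
        new[i], new[(i + 1) % n] = new[(i + 1) % n], new[i]
        yield tuple(new)

def hill_climb_tour(start, length_of):
    current = tuple(start)
    improved = True
    while improved:
        improved = False
        for candidate in adjacent_swaps(current):
            if length_of(candidate) < length_of(current):  # first better tour wins
                current, improved = candidate, True
                break
    return current, length_of(current)    # a locally optimal tour and its length

# e.g. hill_climb_tour("ABCD", tour_length)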
  • 30. Steepest-Ascent Hill Climbing (sharpest rise)
    – A variation on simple hill climbing.
    – It considers all the moves from the current state and selects the best one as the next state. This method is called steepest-ascent hill climbing or gradient search.
    – This contrasts with the basic method, in which the first state that is better than the current state is selected.
    – The order of operators does not matter.
    – We are not just climbing to a better state; we are climbing up the steepest slope.
    The algorithm works as follows.
  • 31. Algorithm: Steepest-Ascent Hill Climbing
    1. Evaluate the initial state. If it is also a goal state, then return it and quit. Otherwise, continue with the initial state as the current state.
    2. Loop until a solution is found or until a complete iteration produces no change to the current state:
      (a) Let SUCC be a state such that any possible successor of the current state will be better than SUCC.
      (b) For each operator that applies to the current state do:
        (i) Apply the operator and generate a new state.
        (ii) Evaluate the new state. If it is a goal state, then return it and quit. If not, compare it to SUCC. If it is better, then set SUCC to this state. If it is not better, leave SUCC alone.
      (c) If SUCC is better than the current state, then set the current state to SUCC.
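A minimal Python sketch of steepest-ascent hill climbing under the same assumptions as the earlier sketch (caller-supplied successors, value and is_goal, with higher values taken to be better):

def steepest_ascent(initial, successors, value, is_goal):
    current = initial
    if is_goal(current):
        return current
    while True:
        succ = None                                    # best successor found this round
        for new_state in successors(current):          # try every operator
            if is_goal(new_state):
                return new_state
            if succ is None or value(new_state) > value(succ):
                succ = new_state
        if succ is None or value(succ) <= value(current):
            return current                             # no move improves: stop
        current = succ                                 # climb the steepest slope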
  • 32. – There is a trade-off between the time required to select a move (usually longer for steepest-ascent hill climbing) and the number of moves required to get to a solution (usually larger for basic hill climbing) that must be considered when deciding which method will work better for a particular problem.
    – Trade-off: a balance achieved between two desirable but incompatible features; a compromise.
  • 33. Hill Climbing Termination
    – Both basic and steepest-ascent hill climbing may fail to find a solution.
    – Either algorithm may terminate not by finding a goal state but by getting to a state from which no better states can be generated.
    – This will happen if the program has reached a local maximum, a plateau, or a ridge.
    – Local optimum (maximum):
      – A state that is better than all its neighbors but is not better than some other states farther away.
      – At a local maximum, all moves appear to make things worse.
      – Local maxima are frustrating because they often occur almost within sight of a solution. In this case, they are called foothills.
  • 35. Hill Climbing Termination
    – Plateau: a flat area of the search space in which all neighboring states have the same value as the current state. On a plateau, it is not possible to determine the best direction in which to move by making local comparisons.
    – Ridge: a special kind of local maximum caused by the inability to apply two operators at once. It is an area of the search space that is higher than the surrounding areas and that itself has a slope.
  • 37. There are some ways of dealing with these problems, although these methods are by no means guaranteed:
    – Backtrack to some earlier node and try going in a different direction. This is particularly reasonable if at that node there was another direction that looked as promising, or almost as promising, as the one that was chosen earlier. To implement this strategy, maintain a list of paths almost taken and go back to one of them if the path that was taken leads to a dead end. This is a fairly good way of dealing with local maxima.
    – Make a big jump in some direction to try to get to a new section of the search space. This is a particularly good way of dealing with plateaus. If the only rules available describe single small steps, apply them several times in the same direction.
    – Apply two or more rules before doing the test. This corresponds to moving in several directions at once. This is a particularly good strategy for dealing with ridges.
  • 38. – Hill climbing is not always very effective. It is particularly unsuited to problems where the value of the heuristic function drops off suddenly as you move away from a solution.
    – This is often the case whenever any sort of threshold effect is present.
    – Hill climbing is a local method, by which we mean that it decides what to do next by looking only at the "immediate" consequences of its choice rather than by exhaustively exploring all the consequences.
  • 39. – Advantage: hill climbing is less combinatorially explosive than comparable global methods.
    – Disadvantage: there is no guarantee that it will be more effective than other local methods.
    – Combinatorial: relating to the selection of a given number of elements from a larger number without regard to their arrangement.
  • 40. Simulated Annealing
    – Simulated annealing is a variation of hill climbing in which, at the beginning of the process, some downhill moves may be made.
    – The idea is to do enough exploration of the whole space early on so that the final solution is relatively insensitive to the starting state.
    – This should lower the chances of getting caught at a local maximum, a plateau, or a ridge.
  • 41. – We use the term objective function in place of the term heuristic function.
    – We attempt to minimize rather than maximize the value of the objective function.
    – Thus we actually describe a process of valley descending rather than hill climbing.
  • 42. – Simulated annealing [Kirkpatrick et al., 1983] as a computational process is patterned after the physical process of annealing, in which physical substances such as metals are melted (i.e., raised to high energy levels) and then gradually cooled until some solid state is reached.
    – The goal of this process is to produce a minimal-energy final state.
    – Thus this process is one of valley descending in which the objective function is the energy level.
    – Physical substances usually move from higher energy configurations to lower ones, so valley descending occurs naturally.
    – But there is some probability that a transition to a higher energy state will occur.
  • 43. – In the physical valley descending that occurs during annealing, the probability of a large uphill move is lower than the probability of a small one.
    – Also, the probability that an uphill move will be made decreases as the temperature decreases.
    – Thus such moves are more likely at the beginning of the process, when the temperature is high, and they become less likely at the end, as the temperature becomes lower.
    – One way to characterize this process is that downhill moves are allowed at any time. Large upward moves may occur early on, but as the process progresses only relatively small upward moves are allowed, until finally the process converges to a local minimum configuration.
  • 44. – The rate at which the system is cooled is called the annealing schedule.
    – Physical annealing processes are very sensitive to the annealing schedule.
    – If cooling occurs too rapidly, stable regions of high energy will form.
  • 45. The algorithm for simulated annealing is only slightly different from the simple hill-climbing procedure. The three differences are:
    – The annealing schedule must be maintained.
    – Moves to worse states may be accepted.
    – It is a good idea to maintain, in addition to the current state, the best state found so far. Then, if the final state is worse than that earlier state (because of bad luck in accepting moves to worse states), the earlier state is still available.
  • 46. Algorithm: Simulated Annealing
    1. Evaluate the initial state. If it is also a goal state, then return it and quit. Otherwise, continue with the initial state as the current state.
    2. Initialize BEST-SO-FAR to the current state.
    3. Initialize T according to the annealing schedule.
    4. Loop until a solution is found or until there are no new operators left to be applied in the current state:
      (a) Select an operator that has not yet been applied to the current state and apply it to produce a new state.
      (b) Evaluate the new state. Compute ∆E = (value of current state) − (value of new state).
        – If the new state is a goal state, then return it and quit.
        – If it is not a goal state but is better than the current state, then make it the current state. Also set BEST-SO-FAR to this new state.
        – If it is not better than the current state, then make it the current state with probability p′ as defined above. This step is usually implemented by invoking a random number generator to produce a number in the range [0, 1]. If that number is less than p′, then the move is accepted. Otherwise, do nothing.
      (c) Revise T as necessary according to the annealing schedule.
    5. Return BEST-SO-FAR as the answer.
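A minimal Python sketch of this algorithm for the minimisation setting described earlier. The acceptance probability p′ = e^(∆E/T) (with the slide's ∆E, so p′ ≤ 1 for worse states) and the geometric cooling schedule are standard choices assumed by this sketch; the slides do not fix either of them.

import math
import random

def simulated_annealing(initial, random_successor, value,
                        t0=100.0, cooling=0.95, steps=10_000):
    current = best = initial                     # also keep the best state found so far
    t = t0                                       # initial temperature (annealing schedule)
    for _ in range(steps):
        new = random_successor(current)          # apply a randomly chosen operator
        delta_e = value(current) - value(new)    # the slide's definition of dE
        if delta_e > 0:                          # new state is better (lower objective)
            current = new
            if value(current) < value(best):
                best = current
        elif random.random() < math.exp(delta_e / t):
            current = new                        # sometimes accept a worse state
        t *= cooling                             # revise T (geometric schedule assumed)
    return best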
  • 47. To implement this algorithm, it is necessary to select an annealing schedule, which has three components:
    – The first is the initial value to be used for the temperature.
    – The second is the criterion that will be used to decide when the temperature of the system should be reduced.
    – The third is the amount by which the temperature will be reduced each time it is changed.
    There may also be a fourth component of the schedule, namely, when to quit.
    – Simulated annealing is often used to solve problems in which the number of moves from a given state is very large (such as the number of permutations that can be made to a proposed traveling salesman route).
    – For such problems, it may not make sense to try all possible moves. Instead, it may be useful to exploit some criterion involving the number of moves that have been tried since an improvement was found.
  • 48. Best-First Search
    – We have seen two systematic control strategies: breadth-first search and depth-first search (of several varieties).
    – The new method, best-first search, is a way of combining the advantages of both depth-first and breadth-first search into a single method.
  • 49. Best-First Search (cont.)
    While the goal is not reached:
    1. Generate all potential successor states and add them to a list of states.
    2. Pick the best state in the list and go to it.
    – Similar to steepest-ascent, but don’t throw away states that are not chosen.
  • 50. (Figure: the first few steps of a best-first search — a tree whose nodes are labelled with heuristic estimates of the cost of reaching a solution; it is walked through on the next slide.)
  • 51. – The figure shows the beginning of a best-first search procedure.
    – Initially, there is only one node, so it will be expanded. Doing so generates three new nodes.
    – The heuristic function, which in this example is an estimate of the cost of getting to a solution from a given node, is applied to each of these new nodes.
    – Since node D is the most promising, it is expanded next, producing two successor nodes, E and F. The heuristic function is then applied to them.
    – Now another path, the one going through node B, looks more promising, so it is pursued, generating nodes G and H.
    – But when these new nodes are evaluated they look less promising than another path, so attention is returned to the path through D to E. E is then expanded, yielding nodes I and J.
    – At the next step, J will be expanded, since it is the most promising.
    – This process can continue until a solution is found.
  • 52. Notice that this procedure is very similar to the procedure for steepest-ascent hill climbing, with two exceptions:
    – In hill climbing, one move is selected and all the others are rejected, never to be reconsidered. This produces the straight-line behavior that is characteristic of hill climbing. In best-first search, one move is selected, but the others are kept around so that they can be revisited later if the selected path becomes less promising.
    – Further, the best available state is selected in best-first search, even if that state has a value that is lower than the value of the state that was just explored. This contrasts with hill climbing, which will stop if there are no successor states with better values than the current state.
  • 53. – Although the example shown above illustrates a best-first search of a tree, it is sometimes important to search a graph instead so that duplicate paths will not be pursued.
    – An algorithm to do this will operate by searching a directed graph in which each node represents a point in the problem space.
    – Each node will contain, in addition to a description of the problem state it represents, an indication of how promising it is, a parent link that points back to the best node from which it came, and a list of the nodes that were generated from it.
    – The parent link will make it possible to recover the path to the goal once the goal is found.
    – The list of successors will make it possible, if a better path is found to an already existing node, to propagate the improvement down to its successors.
    – We will call a graph of this sort an OR graph, since each of its branches represents an alternative problem-solving path.
  • 54. To implement such a graph-search procedure, we will need to use two lists of nodes:
    – OPEN: nodes that have been generated and have had the heuristic function applied to them but which have not yet been examined (i.e., had their successors generated). OPEN is actually a priority queue in which the elements with the highest priority are those with the most promising value of the heuristic function. Standard techniques for manipulating priority queues can be used to manipulate the list.
    – CLOSED: nodes that have already been examined. We need to keep these nodes in memory if we want to search a graph rather than a tree, since whenever a new node is generated, we need to check whether it has been generated before.
  • 55. – We will also need a heuristic function that estimates the merits of each node we generate. This will enable the algorithm to search more promising paths first.
  • 56. Algorithm: Best-First Search
    1. Start with OPEN containing just the initial state.
    2. Until a goal is found or there are no nodes left on OPEN do:
      (a) Pick the best node on OPEN.
      (b) Generate its successors.
      (c) For each successor do:
        (i) If it has not been generated before, evaluate it, add it to OPEN, and record its parent.
        (ii) If it has been generated before, change the parent if this new path is better than the previous one. In that case, update the cost of getting to this node and to any successors that this node may already have.
    The basic idea of this algorithm is simple. Unfortunately, it is rarely the case that graph traversal algorithms are simple to write correctly.
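A minimal Python sketch of this algorithm, using a heap for OPEN and a set for CLOSED. States are assumed to be hashable, successors and h are supplied by the caller, and the re-parenting in step 2(c)(ii) is omitted for brevity since plain best-first search orders OPEN by the heuristic alone.

import heapq
from itertools import count

def best_first_search(start, successors, h, is_goal):
    tie = count()                                    # tie-breaker so the heap never compares states
    open_list = [(h(start), next(tie), start)]       # OPEN: priority queue ordered by h
    parent = {start: None}                           # parent links, used to recover the path
    closed = set()                                   # CLOSED: nodes already examined
    while open_list:                                 # until a goal is found or OPEN is empty
        _, _, node = heapq.heappop(open_list)        # pick the best node on OPEN
        if node in closed:
            continue
        if is_goal(node):
            path = []
            while node is not None:                  # follow the parent links back to the start
                path.append(node)
                node = parent[node]
            return list(reversed(path))
        closed.add(node)
        for succ in successors(node):                # generate its successors
            if succ not in parent:                   # not generated before: evaluate and add to OPEN
                parent[succ] = node
                heapq.heappush(open_list, (h(succ), next(tie), succ))
    return None                                      # no nodes left on OPEN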
  • 57. The A* Algorithm
    – The best-first search algorithm is a simplification of an algorithm called A*, which was first presented by Hart et al. [1968; 1972].
    – This algorithm uses the f′, g, and h′ functions, as well as the lists OPEN and CLOSED.
    – It avoids expanding paths that are already expensive and expands the most promising paths first.
  • 58. A* Algorithm (a sure test topic)
    – The A* algorithm uses a modified evaluation function and a best-first search.
    – A* minimizes the total path cost.
    – Under the right conditions A* provides the cheapest-cost solution in the optimal time!
  • 59. Algorithm: A*
    1. Start with OPEN containing only the initial node. Set that node's g value to 0, its h′ value to whatever it is, and its f′ value to h′ + 0, or h′. Set CLOSED to the empty list.
  • 60. A* evaluation function
    – The evaluation function f is an estimate of the value of a node x given by:
      f(x) = g(x) + h′(x)
    – g(x) is the cost to get from the start state to state x.
    – h′(x) is the estimated cost to get from state x to the goal state (the heuristic).
  • 61. Modified State Evaluation
    – The value of each state is a combination of:
      – the cost of the path to the state, and
      – the estimated cost of reaching a goal from the state.
    – The idea is to use the path to a state to determine (partially) the rank of the state when compared to other states.
    – This doesn’t make sense for DFS or BFS, but is useful for best-first search.
  • 62. A* Algorithm
    – The general idea is:
      – Best-first search with the modified evaluation function.
      – h′(x) is an estimate of the number of steps from state x to a goal state.
      – Loops are avoided: we don’t expand the same state twice.
      – Information about the path to the goal state is retained.
  • 63–64. Example: maze
    (Figures: a grid maze with START and GOAL cells, shown on two consecutive slides.)
  • 65. 2. Until a goal node is found, repeat the following procedure: If there are no nodes on OPEN, report failure. Otherwise, pick the node on OPEN with the lowest f′ value. Call it BESTNODE. Remove it from OPEN. Place it on CLOSED. See if BESTNODE is a goal node. If so, exit and report a solution (either BESTNODE if all we want is the node, or the path that has been created between the initial state and BESTNODE if we are interested in the path). Otherwise, generate the successors of BESTNODE, but do not set BESTNODE to point to them yet. (First we need to see if any of them have already been generated.) For each such SUCCESSOR, do the following:
    (a) Set SUCCESSOR to point back to BESTNODE. These backwards links will make it possible to recover the path once a solution is found.
    (b) Compute g(SUCCESSOR) = g(BESTNODE) + the cost of getting from BESTNODE to SUCCESSOR.
    (c) See if SUCCESSOR is the same as any node on OPEN (i.e., it has already been generated but not processed). If so, call that node OLD. Since this node already exists in the graph, we can throw SUCCESSOR away and add OLD to the list of BESTNODE's successors. Now we must decide whether OLD's parent link should be reset to point to BESTNODE. It should be if the path we have just found to SUCCESSOR is cheaper than the current best path to OLD (since SUCCESSOR and OLD are really the same node). So see whether it is cheaper to get to OLD via its current parent or to SUCCESSOR via BESTNODE by comparing their g values. If OLD is cheaper (or just as cheap), then we need do nothing. If SUCCESSOR is cheaper, then reset OLD's parent link to point to BESTNODE, record the new cheaper path in g(OLD), and update f′(OLD).
  • 66. (d) If SUCCESSOR was not on OPEN, see if it is on CLOSED. If so, call the node on CLOSED OLD and add OLD to the list of BESTNODE's successors. Check to see if the new path or the old path is better, just as in step 2(c), and set the parent link and the g and f′ values appropriately. If we have just found a better path to OLD, we must propagate the improvement to OLD's successors. This is a bit tricky. OLD points to its successors. Each successor in turn points to its successors, and so forth, until each branch terminates with a node that either is still on OPEN or has no successors. So to propagate the new cost downward, do a depth-first traversal of the tree starting at OLD, changing each node's g value (and thus also its f′ value), terminating each branch when you reach either a node with no successors or a node to which an equivalent or better path has already been found. This condition is easy to check for. Each node's parent link points back to its best known parent. As we propagate down to a node, see if its parent points to the node we are coming from. If so, continue the propagation. If not, then its g value already reflects the better path of which it is part, so the propagation may stop here. But it is possible that with the new value of g being propagated downward, the path we are following may become better than the path through the current parent. So compare the two. If the path through the current parent is still better, stop the propagation. If the path we are propagating through is now better, reset the parent and continue the propagation.
    (e) If SUCCESSOR was not already on either OPEN or CLOSED, then put it on OPEN and add it to the list of BESTNODE's successors. Compute f′(SUCCESSOR) = g(SUCCESSOR) + h′(SUCCESSOR).
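A minimal Python sketch of A* in its common modern form. Instead of explicitly propagating improvements through CLOSED as in step 2(d), it records the best known g value and parent for each state and simply re-inserts a state into OPEN whenever a cheaper path to it is found; with a consistent h′ this yields the same optimal result. successors is assumed to yield (next_state, step_cost) pairs and h plays the role of h′; both names are assumptions of this sketch.

import heapq
from itertools import count

def a_star(start, successors, h, is_goal):
    tie = count()                                  # tie-breaker for equal f' values
    open_heap = [(h(start), next(tie), start)]     # OPEN ordered by f' = g + h'
    g = {start: 0}                                 # best known cost from the start node
    parent = {start: None}                         # backward links to recover the path
    closed = set()                                 # nodes already expanded
    while open_heap:
        _, _, node = heapq.heappop(open_heap)
        if node in closed:
            continue                               # stale queue entry, skip it
        if is_goal(node):
            cost = g[node]
            path = []
            while node is not None:                # walk the parent links back
                path.append(node)
                node = parent[node]
            return list(reversed(path)), cost
        closed.add(node)
        for succ, step_cost in successors(node):
            tentative = g[node] + step_cost        # g(SUCCESSOR) = g(BESTNODE) + arc cost
            if succ not in g or tentative < g[succ]:
                g[succ] = tentative                # record the cheaper path
                parent[succ] = node                # reset the parent link
                heapq.heappush(open_heap, (tentative + h(succ), next(tie), succ))
    return None                                    # OPEN exhausted: report failure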
  • 68. Several interesting observations can be made about this algorithm.
    The first concerns the role of the g function. It lets us choose which node to expand next on the basis not only of how good the node itself looks (as measured by h′), but also on the basis of how good the path to the node was. By incorporating g into f′, we will not always choose as our next node to expand the node that appears to be closest to the goal. This is useful if we care about the path we find. If, on the other hand, we only care about getting to a solution somehow, we can define g always to be 0, thus always choosing the node that seems closest to a goal. If we want to find a path involving the fewest number of steps, then we set the cost of going from a node to its successor as a constant, usually 1. If, on the other hand, we want to find the cheapest path and some operators cost more than others, then we set the cost of going from one node to another to reflect those costs. Thus the A* algorithm can be used whether we are interested in finding a minimal-cost overall path or simply any path as quickly as possible.
    The second observation involves h′, the estimator of h, the distance of a node to the goal. If h′ is a perfect estimator of h, then A* will converge immediately to the goal with no search. The better h′ is, the closer we will get to that direct approach. If, on the other hand, the value of h′ is always 0, the search will be controlled by g. If the value of g is also 0, the search strategy will be random. If the value of g is always 1, the search will be breadth-first. All nodes on one level will have lower g values, and thus lower f′ values, than will all nodes on the next level. What if, on the other hand, h′ is neither perfect nor 0? Can we say anything interesting about the behavior of the search? The answer is yes if we can guarantee that h′ never overestimates h. In that case, the A* algorithm is guaranteed to find an optimal (as determined by g) path to a goal, if one exists. This can easily be seen from a few examples.
  • 69. Consider the situation shown in Fig. 3.4. Assume that the cost of all arcs is 1. Initially, all nodes except A are on OPEN (although the figure shows the situation two steps later, after B and E have been expanded). For each node, f′ is indicated as the sum of h′ and g. In this example, node B has the lowest f′, 4, so it is expanded first. Suppose it has only one successor E, which also appears to be three moves away from a goal. Now f′(E) is 5, the same as f′(C). Suppose we resolve this in favor of the path we are currently following. Then we will expand E next. Suppose it too has a single successor F, also judged to be three moves from a goal. We are clearly using up moves and making no progress. But f′(F) = 6, which is greater than f′(C). So we will expand C next. Thus we see that by underestimating h′(B) we have wasted some effort. But eventually we discover that B was farther away than we thought and we go back and try another path.
    Now consider the situation shown in Fig. 3.5. Again we expand B on the first step. On the second step we again expand E. At the next step we expand F, and finally we generate G, for a solution path of length 4. But suppose there is a direct path from D to a solution, giving a path of length 2. We will never find it. By overestimating h′(D) we make D look so bad that we may find some other, worse solution without ever expanding D. In general, if h′ might overestimate h, we cannot be guaranteed of finding the cheapest-path solution unless we expand the entire graph until all paths are longer than the best solution.
    An interesting question is, "Of what practical significance is the theorem that if h′ never overestimates h then A* is admissible?" The answer is, "almost none," because, for most real problems, the only way to guarantee that h′ never overestimates h is to set it to zero. But then we are back to breadth-first search, which is admissible but not efficient. But there is a corollary to this theorem that is very useful. We can state it loosely as follows:
    Graceful Decay of Admissibility: If h′ rarely overestimates h by more than δ, then the A* algorithm will rarely find a solution whose cost is more than δ greater than the cost of the optimal solution.
    The formalization and proof of this corollary will be left as an exercise.
  • 70. The third observation we can make about the A* algorithm has to do with the relationship between trees and graphs. The algorithm was stated in its most general form, as it applies to graphs. It can, of course, be simplified to apply to trees by not bothering to check whether a new node is already on OPEN or CLOSED. This makes it faster to generate nodes but may result in the same search being conducted many times if nodes are often duplicated. Under certain conditions, the A* algorithm can be shown to be optimal in that it generates the fewest nodes in the process of finding a solution to a problem. Under other conditions it is not optimal. For formal discussions of these conditions, see Gelperin [1977] and Martelli [1977].
  • 71. Problem Reduction Search
    – Planning how best to solve a problem that can be recursively decomposed into subproblems in multiple ways:
      – Matrix multiplication problem
      – Tower of Hanoi
      – Blocks World problems
      – Theorem proving
  • 72. – When a problem can be divided into a set of subproblems, where each subproblem can be solved separately and a combination of these solutions solves the original problem, AND-OR graphs or AND-OR trees are used for representing the solution.
    – The decomposition of the problem, or problem reduction, generates AND arcs.
    – One AND arc may point to any number of successor nodes, all of which must be solved for the arc to lead to a solution.
    – Several arcs may also emerge from a single node, indicating several possible ways of solving the problem.
    – Hence the graph is known as an AND-OR graph instead of simply an AND graph.
  • 73. – An algorithm to find a solution in an AND-OR graph must handle AND arcs appropriately.
    – The A* algorithm cannot search AND-OR graphs efficiently.
    – The AO* algorithm is best suited to searching AND-OR graphs.
  • 75. Knowledge representation and reasoning (KR, KR², KR&R)
    – The field of artificial intelligence (AI) dedicated to representing information about the world in a form that a computer system can utilize to solve complex tasks, such as diagnosing a medical condition or having a dialog in a natural language.
    – Knowledge representation incorporates findings from psychology [1] about how humans solve problems and represent knowledge, in order to design formalisms that will make complex systems easier to design and build.
    – Knowledge representation and reasoning also incorporates findings from logic to automate various kinds of reasoning, such as the application of rules or the relations of sets and subsets. [2]
    – Examples of knowledge representation formalisms include semantic nets, systems architecture, frames, rules, and ontologies. Examples of automated reasoning engines include inference engines, theorem provers, and classifiers.
  • 76. KR Introduction
    – General problem in computer science. Solutions = data structures:
      – words
      – arrays
      – records
      – lists
    – More specific problem in AI. Solutions = knowledge structures:
      – lists
      – trees
      – procedural representations
      – logic and predicate calculus
      – rules
      – semantic nets and frames
      – scripts
  • 77. Kinds of Knowledge
    – Objects: descriptions, classifications
    – Events: time sequence, cause and effect
    – Relationships: among objects, between objects and events
    – Meta-knowledge
    Things we need to talk about and reason about; what do we know?
    Distinguish between knowledge and its representation:
    – Mappings are not one-to-one.
    – We never get it complete or exactly right.
  • 78. Types of Knowledge
    – A priori knowledge: comes before knowledge perceived through the senses; considered to be universally true.
    – A posteriori knowledge: knowledge verifiable through the senses; may not always be reliable.
    – Procedural knowledge: knowing how to do something.
    – Declarative knowledge: knowing that something is true or false.
    – Tacit (unspoken, understood) knowledge: knowledge not easily expressed by language.
  • 79. Characteristics of a good KR
    It should:
    – Be able to represent the knowledge important to the problem.
    – Reflect the structure of knowledge in the domain (otherwise our development is a constant process of distorting things to make them fit).
    – Capture knowledge at the appropriate level of granularity.
    – Support incremental, iterative development.
    It should not:
    – Be too difficult to reason about.
    – Require that more knowledge be represented than is needed to solve the problem.
  • 80. Representation and Mapping
    – In order to solve the complex problems encountered in AI, we need a large amount of knowledge and mechanisms to manipulate it to create solutions.
    – Two things must be considered:
      – Facts: truths in some relevant world. These are the things we want to represent.
      – Representations of facts in some chosen formalism. These are the things we will actually be able to manipulate.
    – One way to think of structuring these entities is as two levels:
      – The knowledge level, at which facts (including each agent's behaviors and current goals) are described.
      – The symbol level, at which representations of objects at the knowledge level are defined in terms of symbols that can be manipulated by programs.
  • 81. – Rather than thinking of one level on top of another, the focus is on facts, on representations, and on the two-way mapping that must exist between them.
    – These links are called representation mappings. The forward representation mapping maps from facts to representations; the backward representation mapping goes the other way.
    – Facts are often stated in natural language (English sentences). A simple example, using mathematical logic as the representational formalism:
      – "Spot is a dog" → dog(Spot)
      – "All dogs have tails" → ∀x: dog(x) → hastail(x)
      – From these two representations we can deduce hastail(Spot), i.e., Spot has a tail.
  • 83. Approaches to Knowledge Representation
    – Inference: a conclusion reached on the basis of evidence and reasoning.
  • 84. Approaches to Knowledge Representation
    1. Simple relational knowledge
    2. Inheritable knowledge
    3. Inferential knowledge
    4. Procedural knowledge
  • 85. Simple relational knowledge
    – Corresponds to a set of attributes and associated values that together describe the objects of the knowledge base.
  • 86. Inheritable Knowledge
    – All data is stored in a hierarchy of classes.
    – Classes must be arranged in a generalization hierarchy.
    – Every individual frame can represent a collection of attributes and their values.
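A small Python sketch of inheritable knowledge as frames with a slot-and-filler structure. Each frame is a dictionary of attribute/value pairs plus an isa link to its parent class; attribute lookup climbs the isa chain, which is exactly the inheritance described above. The baseball-player hierarchy is the classic textbook illustration and is used here only as sample data (the isa link is also used for the instance relation, to keep the sketch short).

FRAMES = {
    "Person":          {"isa": None,              "handed": "Right"},
    "Adult-Male":      {"isa": "Person",          "height": "5-10"},
    "Baseball-Player": {"isa": "Adult-Male",      "batting-average": 0.252},
    "Pee-Wee-Reese":   {"isa": "Baseball-Player", "team": "Brooklyn-Dodgers"},
}

def get_value(frame, attribute):
    """Look the attribute up in the frame, then climb the isa links (inheritance)."""
    while frame is not None:
        slots = FRAMES[frame]
        if attribute in slots:
            return slots[attribute]
        frame = slots["isa"]          # inherit from the more general class
    return None                       # attribute not found anywhere in the hierarchy

print(get_value("Pee-Wee-Reese", "handed"))   # -> 'Right', inherited from Person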
  • 89. – A semantic network is a knowledge structure that depicts how concepts are related to one another and illustrates how they interconnect.
    – Semantic networks use artificial intelligence (AI) programming to mine data, connect concepts and call attention to relationships.
  • 90. Inferential knowledge
    – Represents knowledge as formal logic.
    – The inheritance property is a very useful form of inference.