chapterThree.pptx

1. Problem solving by searching (Chapter Three)
2. Introduction
• In Chapter 2 we introduced the idea of an agent-centred approach to AI. This approach involves first specifying the environment in which a rational agent must operate, thereby clearly defining the type of "intelligent" behavior that is required of the agent. We have seen that environments come in many different types, based on the behavior of the environment and the agent's perception of and interaction with the environment.
• In this chapter we look at how we can take an environment and formulate a problem for the rational agent to solve. We will see that the different environment types mentioned in Chapter 2 lead to different types of problem. To begin with, we will concentrate on formulating and solving the simplest type of problem, known as a single-state problem.
3. Introduction
• The basic algorithm for problem-solving agents consists of four phases: formulate the goal, formulate the problem, search for a solution, and execute the solution.
• In solving problems, it is important to understand the concept of a state space. The state space of a problem is the set of all possible states that the environment/agent can be in. A limited set (possibly one) of these will correspond to the goal of the agent.
• The aim of the problem-solving agent is therefore to perform a sequence of actions that change the environment so that it ends up in one of the goal states. The search phase of the problem-solving agent consists of searching the state space for this sequence of actions.
4. Introduction
• In this part we show how an agent can act by establishing goals and considering sequences of actions that might achieve those goals.
• A goal and a set of means for achieving the goal is called a problem, and the process of exploring what the means can do is called search.
5. A problem is a gap between what actually is and what is desired.
• A problem exists when an individual becomes aware of an obstacle that makes it difficult to achieve a desired goal or objective.
• A goal and a set of means for achieving the goal is called a problem, and the process of exploring what the means can do is called search.
Two kinds of problems are addressed in AI:
• Toy problems: problems that are useful to test and demonstrate methodologies, and that can be used by researchers to compare the performance of different algorithms, e.g. 8-puzzle, n-queens, vacuum cleaner world, …
• Real-life problems: problems that have much greater commercial/economic impact if solved. Such problems are more difficult and complex to solve, and there is no single agreed-upon description, e.g. route finding, travelling salesperson, etc.
6. Solving a problem
Formalize the problem: identify the collection of information that the agent will use to decide what to do.
Define states:
• States describe distinguishable stages during the problem-solving process.
• Example: what are the various states in the route-finding problem? The various places, including the location of the agent.
Define the available operators/rules for getting from one state to the next:
• Operators cause an action that brings a transition from one state to another when applied to the current state.
Suggest a suitable representation for the problem space/state space:
• Graph, table, list, set, … or a combination of them.
7. • The state space defines the set of all relevant states reachable from the initial state by any sequence of actions, through iterative application of all permutations and combinations of operators.
• The state space (also called the search space or problem space) of the problem includes the following elements:
 Initial state: what state is the environment/agent in to begin with?
 Actions: the successor function specifies what actions are possible in each state, and what their result would be. It consists of a set of action-state pairs.
 Goal test: either an implicit or explicit statement of when the agent's goal has been achieved.
 Path cost: a step cost c(x, a, y) for each action a that takes the agent from state x to state y; the sum of all step costs for a sequence of actions is the path cost.
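To make these four components concrete, here is a minimal Python sketch of a problem definition. The class and method names (`Problem`, `actions`, `result`, `goal_test`, `step_cost`) are illustrative choices for this sketch, not a standard API.

```python
# A minimal sketch of the problem components described above.
# All names here are illustrative choices, not a standard API.

class Problem:
    def __init__(self, initial_state, goal_state):
        self.initial_state = initial_state   # where the agent starts
        self.goal_state = goal_state         # what the agent wants to reach

    def actions(self, state):
        """Return the actions applicable in `state` (successor function)."""
        raise NotImplementedError

    def result(self, state, action):
        """Return the state reached by applying `action` in `state`."""
        raise NotImplementedError

    def goal_test(self, state):
        """Has the agent's goal been achieved?"""
        return state == self.goal_state

    def step_cost(self, x, action, y):
        """Cost c(x, a, y) of taking `action` from state x to state y."""
        return 1  # default: every step costs 1

def path_cost(problem, states, actions):
    """Path cost = sum of step costs along a sequence of actions."""
    return sum(problem.step_cost(x, a, y)
               for x, a, y in zip(states, actions, states[1:]))
```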
8. Example
Find the state space for a route-finding problem where the agent wants to go from sidist_kilo to stadium.
 Think of the states reachable from the initial state until we reach the goal state.
9. Example: Vacuum world problem
To simplify the problem (rather than the full version), let:
• The world has only two locations; each location may or may not contain dirt, and the agent may be in one location or the other.
• Eight possible world states.
• Three possible actions (Left, Right, Suck).
• Goal: to clean up all the dirt.
• Path cost: each step costs 1, so the path cost is the number of steps in the path.
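As an illustration, the two-location vacuum world above can be encoded in a few lines of Python. The state encoding `(agent_loc, dirt_at_A, dirt_at_B)` is an assumption made for this sketch.

```python
# A sketch of the two-location vacuum world described above.
# States are encoded as (agent_loc, dirt_at_A, dirt_at_B); with two
# locations and two dirt flags this gives the eight states on the slide.

from itertools import product

STATES = list(product(["A", "B"], [True, False], [True, False]))  # 8 states
ACTIONS = ["Left", "Right", "Suck"]

def result(state, action):
    loc, dirt_a, dirt_b = state
    if action == "Left":
        return ("A", dirt_a, dirt_b)
    if action == "Right":
        return ("B", dirt_a, dirt_b)
    if action == "Suck":                     # clean the current square
        if loc == "A":
            return (loc, False, dirt_b)
        return (loc, dirt_a, False)
    raise ValueError(action)

def goal_test(state):
    _, dirt_a, dirt_b = state
    return not dirt_a and not dirt_b         # goal: no dirt anywhere

# Each action costs 1, so the path cost is just the number of steps.
print(result(("A", True, True), "Suck"))     # -> ('A', False, True)
```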
10. Example: consider the map of Romania
Consider the map of Romania in the figure. Let's say that an agent is in the town of Arad, and has the goal of getting to Bucharest. What sequence of actions will lead to the agent achieving its goal?
11. Cont.…
If we assume that the environment is fully observable and deterministic, then we can formulate this problem as a single-state problem.
• The environment is fully observable if the agent knows the map of Romania and its current location.
• It is deterministic if the agent is guaranteed to arrive at the city at the other end of each road it takes.
These are both reasonable assumptions in this case. The single-state problem formulation is therefore:
12. Cont.…
13. The 8-puzzle problem
Arrange the tiles so that all the tiles are in the correct positions. You do this by sliding tiles into the blank space. You can move a tile up, down, left, or right, so long as the following conditions are met:
A) there is no other tile blocking you in the direction of the movement; and
B) you are not trying to move outside of the boundaries/edges.
(Start and goal configurations are shown as two 3×3 tile grids.)
14. The 8-puzzle problem
States: a state description specifies the location of each of the eight tiles and the blank in one of the nine squares.
Initial state: any state in the state space.
Successor function: the blank moves Left, Right, Up or Down.
Goal test: the current state matches the goal configuration.
Path cost: each step costs 1, so the path cost is just the length of the path.
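A possible Python sketch of the 8-puzzle successor function follows. Representing states as 3×3 tuples and the blank as 0 are assumptions of this sketch, not part of the slide.

```python
# A sketch of the 8-puzzle successor function: states are 3x3 tuples,
# and the blank (0 here) moves Left, Right, Up or Down when possible.

MOVES = {"Up": (-1, 0), "Down": (1, 0), "Left": (0, -1), "Right": (0, 1)}

def find_blank(state):
    for r, row in enumerate(state):
        for c, tile in enumerate(row):
            if tile == 0:
                return r, c

def successors(state):
    """Yield (action, next_state) pairs for every legal blank move."""
    r, c = find_blank(state)
    for action, (dr, dc) in MOVES.items():
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:          # stay inside the board
            grid = [list(row) for row in state]
            grid[r][c], grid[nr][nc] = grid[nr][nc], grid[r][c]
            yield action, tuple(tuple(row) for row in grid)

start = ((1, 2, 3), (8, 0, 4), (7, 6, 5))
for action, nxt in successors(start):
    print(action, nxt)
```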
15. Missionaries-and-cannibals problem
Three missionaries and three cannibals are on one side of a river that they wish to cross. There is a boat that can hold one or two people. Find an action sequence that brings everyone safely to the opposite bank (i.e. crosses the river), but you must never leave a group of missionaries outnumbered by cannibals on the same bank (in any place).
1. Identify the set of states and operators.
2. Show, using a suitable representation, the state space of the problem.
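One possible (hypothetical) formulation in Python: represent a state as the triple (missionaries, cannibals, boat) on the starting bank, and generate only moves that keep the missionaries safe on both banks.

```python
# A sketch of one possible state representation for the
# missionaries-and-cannibals problem: (m, c, b) counts the
# missionaries and cannibals on the starting bank, and b records
# whether the boat is there (1) or on the far bank (0).

def is_valid(state):
    m, c, b = state
    if not (0 <= m <= 3 and 0 <= c <= 3 and b in (0, 1)):
        return False
    # Missionaries must never be outnumbered on either bank
    # (unless there are no missionaries on that bank).
    if m and m < c:
        return False
    if (3 - m) and (3 - m) < (3 - c):
        return False
    return True

def successors(state):
    """Move 1 or 2 people across in the boat, keeping states valid."""
    m, c, b = state
    direction = -1 if b == 1 else 1          # boat leaves its current bank
    for dm, dc in [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]:
        nxt = (m + direction * dm, c + direction * dc, b + direction)
        if is_valid(nxt):
            yield nxt

print(list(successors((3, 3, 1))))  # states reachable from the start
```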
16. Based on the environment types discussed in Chapter 2, we can identify a number of common environment types. These are summarized in Table 1.
17. Example: Vacuum world problem
To simplify the problem (rather than the full version), let:
• The world has only two locations; each location may or may not contain dirt, and the agent may be in one location or the other.
• Eight possible world states.
• Three possible actions (Left, Right, Suck): Suck cleans the dirt in the current location; Left and Right move the agent from location to location.
• Goal: to clean up all the dirt.
18. Clean House Task
19. Vacuum Cleaner State Space
20. Single-state problems
Fully observable: the world is accessible to the agent.
• It can determine its exact state through its sensors; the agent's sensors know which state it is in.
Deterministic: the agent knows exactly the effect of its actions.
• It can then calculate exactly which state it will be in after any sequence of actions; the action sequence is completely planned.
Example: vacuum cleaner world.
• What will happen if the agent is initially at state 5 and formulates the action sequence [Right, Suck]?
• The agent calculates and knows that it will get to a goal state:
Right → {6}
Suck → {8}
21. Multiple-state problems
Partially observable: the agent has limited access to the world state.
• It might not have sensors to get full access to the environment states or, as an extreme, it may have no sensors at all (lack of percepts).
Deterministic: the agent knows exactly what each of its actions does.
• It can then calculate which state it will be in after any sequence of actions.
• If the agent has full knowledge of how its actions change the world, but does not know the state of the world, it can still solve the task.
Example: vacuum cleaner world.
• The agent's initial state is one of the 8 states: {1,2,3,4,5,6,7,8}.
• Action sequence: [Right, Suck, Left, Suck].
• Because the agent knows what its actions do, it can discover and reach the goal state:
Right → {2,4,6,8}
Suck → {4,8}
Left → {3,7}
Suck → {7}
22. Contingency problems
Partially observable: the agent has limited access to the world state.
Non-deterministic: the agent is ignorant of the effects of its actions.
• Sometimes ignorance prevents the agent from finding a guaranteed solution sequence.
• Suppose the agent is in Murphy's law world.
The agent has to sense during the execution phase, since things might have changed while it was carrying out an action. This implies that the agent has to compute a tree of actions, rather than a linear sequence of actions.
Example: vacuum cleaner world.
• The action 'Suck' deposits dirt on the carpet, but only if there is no dirt there already. Depositing dirt rather than sucking results from ignorance about the effects of actions.
23. Exploration problems
The agent has no knowledge of the environment.
• World partially observable: no knowledge of states (environment); unknown state space (no map, no sensors).
• Non-deterministic: no knowledge of the effects of its actions.
This kind of problem is faced by (intelligent) agents such as new-born babies. It is a kind of problem in the real world, rather than in a model, and may involve significant danger for an ignorant agent. If the agent survives, it learns about the environment.
The agent must experiment, learn and build a model of the environment through the results of its actions, gradually discovering:
• what sorts of states exist and what its actions do;
• it can then use these to solve subsequent (future) problems.
Example: in solving the vacuum cleaner world problem, the agent learns the state space and the effects of its action sequences, say [Suck, Right].
24. Well-defined problems and solutions
To define a problem, we need the following elements: states, operators, a goal test function and a cost function.
25.
 Goal formulation: a step that specifies exactly what the agent is trying to achieve; it narrows down the scope that the agent has to look at.
 Problem formulation: a step that puts down the actions and states that the agent has to consider given a goal (avoiding any redundant states), such as the initial state, the allowable actions, etc.
 Search: the process of looking for the various sequences of actions that lead to a goal state, evaluating them, and choosing the optimal sequence.
 Execute: the final step, in which the agent executes the chosen sequence of actions to reach the solution/goal.
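These four phases can be sketched as a small Python driver. The helper names (`formulate_goal`, `formulate_problem`, `search`) are placeholders invented for this sketch; any concrete agent would supply its own versions.

```python
# A minimal sketch of the four phases above. The three functions
# passed in are hypothetical placeholders for a concrete agent design.

def problem_solving_agent(percept, formulate_goal, formulate_problem, search):
    goal = formulate_goal(percept)                 # 1. goal formulation
    problem = formulate_problem(percept, goal)     # 2. problem formulation
    solution = search(problem)                     # 3. search
    for action in solution or []:                  # 4. execute the solution
        print("executing:", action)

# Hypothetical usage: trivial stand-ins for the three phases.
problem_solving_agent(
    percept="at A, dirt at A",
    formulate_goal=lambda p: "all clean",
    formulate_problem=lambda p, g: ("vacuum-world", g),
    search=lambda prob: ["Suck", "Right", "Suck"],
)
```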
26. AI Group Assignment (30%)
Write about an introduction to robots (robotics). Contents: introduction, types of robots, robot hardware, ….
Due date: 21/01/2015 EC (group assignment)
27. Tree Searching
We can solve this state-space search problem by using a tree search algorithm. For example, the tree shown in Figure 3 illustrates the start of the tree search process: at each iteration we select a node. If the node represents a goal state we stop searching. Otherwise we "expand" the selected node (i.e. generate its possible successors using the successor function) and add the successors as child nodes of the selected node. This process continues until either we find a goal state, or there are no nodes left to expand, in which case the search has failed.
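A minimal Python sketch of this generic tree-search loop is shown below. Selecting from the front of the fringe list (FIFO) is just one possible strategy, an assumption of this sketch; the toy state space is invented for illustration.

```python
# A sketch of the generic tree-search loop described above: select a
# node from the fringe, stop if it is a goal, otherwise expand it.

def tree_search(initial_state, successors, goal_test):
    fringe = [(initial_state, [])]               # (state, actions taken so far)
    while fringe:
        state, path = fringe.pop(0)              # select a node for expansion
        if goal_test(state):                     # goal state found: stop
            return path
        for action, child in successors(state):  # expand: generate successors
            fringe.append((child, path + [action]))
    return None                                  # fringe empty: search failed

# Hypothetical toy state space: counting upwards from 0 to reach 3.
print(tree_search(0, lambda s: [("inc", s + 1)] if s < 5 else [],
                  lambda s: s == 3))             # ['inc', 'inc', 'inc']
```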
28. Tree Searching
29. Search Terminology
Problem space: the environment in which the search takes place (a set of states and a set of operators to change those states).
Problem instance: initial state + goal state.
Problem space graph: represents the problem space; states are shown by nodes and operators are shown by edges.
Depth of a problem: the length of the shortest path, or shortest sequence of operators, from the initial state to a goal state.
Admissibility: the property of an algorithm of always finding an optimal solution.
Branching factor: the average number of child nodes in the problem space graph.
30. Tree Searching
• In the following sections we will examine a number of tree search strategies. For each strategy, we will assess it against a number of criteria:
• Completeness: does the algorithm always find a solution if there is one?
• Optimality: does the algorithm always find the least-cost solution?
• Space complexity: the maximum number of nodes that are stored in memory.
• Time complexity: the maximum number of nodes that are created.
31. Uninformed Searching
• The simplest type of tree search algorithm is called uninformed, or blind, tree search. These algorithms do not use any additional information about states apart from that provided in the problem definition.
• They are generally inefficient.
32. Breadth-First Search
• One of the simplest uninformed tree search strategies is breadth-first search.
• Breadth-first search can be implemented by using a FIFO (First-In First-Out) queue for the list of unexpanded nodes.
• In breadth-first search we always select the minimum-depth node for expansion. This has the effect that we "explore" the tree by moving across the breadth of the tree, completely exploring every level before moving down to the next level. Figure 4 illustrates the order of node expansion using breadth-first search on a sample tree with branching factor 3.
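A short Python sketch of breadth-first search, assuming the state space is given as a simple adjacency dictionary; the example graph is invented for illustration.

```python
# A sketch of breadth-first search on a graph given as an adjacency
# dict. The FIFO queue guarantees minimum-depth nodes expand first.

from collections import deque

def breadth_first_search(graph, start, goal):
    frontier = deque([[start]])          # queue of paths, FIFO order
    while frontier:
        path = frontier.popleft()        # shallowest unexpanded node
        node = path[-1]
        if node == goal:
            return path
        for child in graph.get(node, []):
            frontier.append(path + [child])
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": [], "E": []}
print(breadth_first_search(graph, "A", "E"))   # ['A', 'C', 'E']
```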
33. Example of Breadth-First Search
34. Summary of breadth-first search analysis
 Complete: yes (assuming b is finite)
 Time complexity: O(b^d)
 Space complexity: O(b^d)
 Optimal: yes, if all step costs are the same (e.g. step cost = 1)
where b is the maximum branching factor of the tree and d is the depth of the shallowest goal node.
35. Uniform Cost Search
Uniform cost search is similar to breadth-first search, except that it tries to overcome the limitation of not being optimal when step costs are not identical. Instead of always expanding the minimum-depth node, uniform cost search always expands the node with the least path cost. In other words, uniform cost search always expands the node which is "closest" to the initial state, and which therefore has the greatest potential for leading to a least-cost solution. If all step costs are identical, uniform cost search is equivalent to breadth-first search.
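A possible Python sketch of uniform cost search using a priority queue (`heapq`); the weighted example graph is invented for illustration.

```python
# A sketch of uniform cost search: the frontier is a priority queue
# ordered by path cost g(n), so the cheapest node is always expanded.

import heapq

def uniform_cost_search(graph, start, goal):
    frontier = [(0, start, [start])]          # (path cost, state, path)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for child, step_cost in graph.get(node, []):
            heapq.heappush(frontier, (cost + step_cost, child, path + [child]))
    return None

# Edges carry different step costs, so BFS would not be optimal here.
graph = {"A": [("B", 1), ("C", 5)], "B": [("C", 1)], "C": []}
print(uniform_cost_search(graph, "A", "C"))   # (2, ['A', 'B', 'C'])
```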
36. Cont.…
37. Summary of uniform cost search
 Complete: yes (if b is finite and every step cost is at least some ε > 0)
 Time complexity: O(b^(C*/ε))
 Space complexity: O(b^(C*/ε))
 Optimal: yes (because it always expands the lowest-cost node)
where b is the maximum branching factor of the tree, ε is the minimum cost of a step, and C* is the cost of the optimal solution.
38. Depth-First Search
• Depth-first search can be implemented by using a LIFO (Last-In First-Out) stack for the list of unexpanded nodes.
• Depth-first search is an alternative tree search algorithm that has linear space complexity.
• With depth-first search we always choose the deepest node for expansion. For example, Figure 5 illustrates the order of node expansion for the same simple tree we saw in Figure 4.
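A short Python sketch of depth-first search with an explicit LIFO stack, as the slide describes; the adjacency-dictionary graph is again invented for illustration.

```python
# A sketch of depth-first search using an explicit LIFO stack.
# The deepest unexpanded node is always expanded next.

def depth_first_search(graph, start, goal):
    stack = [[start]]                    # stack of paths, LIFO order
    while stack:
        path = stack.pop()               # deepest unexpanded node
        node = path[-1]
        if node == goal:
            return path
        for child in reversed(graph.get(node, [])):
            if child not in path:        # avoid looping on this path
                stack.append(path + [child])
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": []}
print(depth_first_search(graph, "A", "E"))
# ['A', 'C', 'E'] (explores the A-B-D branch first, then backtracks)
```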
39. Example of Depth-First Search
 Complete: no
 Time complexity: O(b^m)
 Space complexity: O(bm)
 Optimal: no
where b is the maximum branching factor of the tree and m is the maximum depth of any node in the tree.
40. Depth-limited search
 Depth-first search has much better space complexity than breadth-first or uniform cost search, but it is not complete for infinite-depth trees and it is not optimal. Depth-limited search attempts to overcome the first of these weaknesses. The idea behind depth-limited search is to run a depth-first search but place a limit (a "cut-off") on the maximum depth to search to. For example, if our cut-off depth is l, then we will never expand any nodes at level l.
 Depth-limited search does indeed handle infinite-depth trees better, but it introduces some new weaknesses. If the goal state is below the cut-off level l it will not be found. Also, if the goal is above level l, depth-limited search cannot be guaranteed to find the least-cost solution, so it is not optimal.
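A minimal recursive Python sketch of depth-limited search; the tiny example graph is invented to show both the cut-off failure and the success case.

```python
# A sketch of depth-limited search: an ordinary depth-first search
# that refuses to expand nodes at the cut-off depth `limit`.

def depth_limited_search(graph, node, goal, limit, path=None):
    path = (path or []) + [node]
    if node == goal:
        return path
    if limit == 0:                       # cut-off reached: do not expand
        return None
    for child in graph.get(node, []):
        result = depth_limited_search(graph, child, goal, limit - 1, path)
        if result is not None:
            return result
    return None

graph = {"A": ["B"], "B": ["C"], "C": []}
print(depth_limited_search(graph, "A", "C", limit=1))  # None: goal below cut-off
print(depth_limited_search(graph, "A", "C", limit=2))  # ['A', 'B', 'C']
```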
41. Summary of depth-limited search
 Complete: yes (if the solution lies within the depth limit, i.e. l ≥ d)
 Time complexity: O(b^l)
 Space complexity: O(bl)
 Optimal: no (as noted above, it is not guaranteed to find the least-cost solution)
where b is the maximum branching factor of the tree and l is the depth limit.
42. Informed Search
• Informed search algorithms attempt to use extra domain knowledge to inform the search, in an attempt to reduce search time.
• A particular class of informed search algorithms is known as best-first search. Note that best-first search is not an algorithm itself, but a general approach. In best-first search, we use a heuristic function to estimate which of the nodes in the fringe is the "best" node for expansion. This heuristic function, h(n), estimates the cost of the cheapest path from node n to the goal state. In other words, it tells us which of the nodes in the fringe it thinks is "closest" to the goal.
• We will now examine two similar, but not identical, best-first search algorithms: greedy best-first search and A* search.
43. Greedy Best-First Search
 The simplest best-first search algorithm is greedy best-first search.
 This algorithm simply expands the node that is estimated to be closest to the goal, i.e. the one with the lowest value of the heuristic function h(n).
 For example, let us return to the Romania example introduced earlier (the state space is reproduced in Figure 1 for ease of reference). What information can we use to estimate the actual road distance from a city to Bucharest? In other words, what domain knowledge can we use to estimate which of the unexpanded nodes is closest to Bucharest? One possible answer is to use the straight-line distance from each city to Bucharest. Table 1 shows a list of all these distances.
44. Greedy Best-First Search
45. Greedy Best-First Search
46. Using this information, the greedy best-first search algorithm will select for expansion the node from the unexpanded fringe list with the lowest value of h_SLD(n).
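A possible Python sketch of greedy best-first search follows. The `h` dict plays the role of the straight-line-distance heuristic h_SLD(n), but the graph and values below are invented for illustration, not the real Romania data.

```python
# A sketch of greedy best-first search: always expand the fringe node
# with the lowest heuristic value h(n), ignoring the path cost so far.

import heapq

def greedy_best_first(graph, h, start, goal):
    fringe = [(h[start], start, [start])]     # ordered by h(n) only
    while fringe:
        _, node, path = heapq.heappop(fringe)
        if node == goal:
            return path
        for child in graph.get(node, []):
            if child not in path:
                heapq.heappush(fringe, (h[child], child, path + [child]))
    return None

graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"], "G": []}
h = {"S": 10, "A": 3, "B": 6, "G": 0}         # hypothetical estimates
print(greedy_best_first(graph, h, "S", "G"))  # ['S', 'A', 'G']
```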
47. Summary of Greedy Best-First Search
 Completeness: no, it can get stuck in loops.
 Optimality: no, it can go for non-optimal solutions that look good in the short term.
 Time complexity: O(b^m), but a good heuristic can give dramatic improvement.
 Space complexity: same as time complexity.
where b is the maximum branching factor of the tree and m is the maximum depth of the search tree.
48. A* Search algorithm
A* search is similar to greedy best-first search, except that it also takes into account the actual path cost taken so far to reach each node. The node with the lowest estimated total path cost, f(n), is expanded, where
f(n) = g(n) + h(n)
g(n) = total actual path cost to get to node n
h(n) = estimated path cost to get from node n to the goal.
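A possible Python sketch of A* search, ordering the fringe by f(n) = g(n) + h(n). The example graph and heuristic values are invented, chosen so that expanding by h(n) alone (as greedy best-first does) would reach the goal via the costlier branch, while A* finds the cheapest path.

```python
# A sketch of A* search: nodes are expanded in order of
# f(n) = g(n) + h(n), the actual cost so far plus the heuristic
# estimate to the goal. Graph and h values are invented examples.

import heapq

def a_star(graph, h, start, goal):
    fringe = [(h[start], 0, start, [start])]       # (f, g, state, path)
    while fringe:
        f, g, node, path = heapq.heappop(fringe)
        if node == goal:
            return g, path
        for child, step_cost in graph.get(node, []):
            if child not in path:
                g2 = g + step_cost                 # g(n): actual path cost
                heapq.heappush(fringe,
                               (g2 + h[child], g2, child, path + [child]))
    return None

graph = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)], "G": []}
h = {"S": 2, "A": 0, "B": 1, "G": 0}               # hypothetical heuristic
# Greedy expansion by h alone would go S -> A -> G at cost 6;
# A* instead returns the optimal path of cost 5.
print(a_star(graph, h, "S", "G"))                  # (5, ['S', 'B', 'G'])
```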
49. Cont.…
Red-colored numbers indicate the heuristic value for each node and blue-colored numbers indicate the path cost from one node to the next. Our initial state is node A and the goal state is node G.
50. Cont.…
The execution of A* search is given below for the map of Romania.
Step 1:
 Fringe = [Arad]
 Lowest value of evaluation function: f(Arad) = 0 + 366 = 366
 Action: expand Arad
Step 2:
 Fringe = [Sibiu, Timisoara, Zerind]
 Lowest value of evaluation function: f(Sibiu) = 140 + 253 = 393
 Action: expand Sibiu
Step 3:
 Fringe = [Timisoara, Zerind, Arad, Fagaras, Oradea, Rimnicu Vilcea]
 Lowest value of evaluation function: f(Rimnicu Vilcea) = 220 + 193 = 413
 Action: expand Rimnicu Vilcea
51. Cont.…
Step 4:
 Fringe = [Timisoara, Zerind, Arad, Fagaras, Oradea, Craiova, Pitesti, Sibiu]
 Lowest value of evaluation function: f(Fagaras) = 239 + 176 = 415
 Action: expand Fagaras
Step 5:
 Fringe = [Timisoara, Zerind, Arad, Oradea, Craiova, Pitesti, Sibiu, Sibiu, Bucharest]
 Lowest value of evaluation function: f(Pitesti) = 317 + 100 = 417
 Action: expand Pitesti
52. Cont.…
Step 6:
 Fringe = [Timisoara, Zerind, Arad, Oradea, Craiova, Sibiu, Sibiu, Bucharest, Bucharest, Craiova, Rimnicu Vilcea]
 Lowest value of evaluation function: f(Bucharest) = 418 + 0 = 418
 Action: goal found at Bucharest.
Notice that A* search finds a different (and optimal) solution from greedy best-first search, getting to Bucharest via Sibiu, Rimnicu Vilcea and Pitesti, rather than via Sibiu and Fagaras.
53. Summary of A* Search
 Completeness: yes
 Optimality: yes
 Time complexity: O(b^m), but a good heuristic can give dramatic improvement
 Space complexity: same as time complexity
where b is the maximum branching factor of the tree and m is the depth of the least-cost solution.
54. Quiz (10%)
1. Which searching algorithm do you think is better, and why?
2. What is the difference between blind and heuristic searching algorithms?
3. List and explain the criteria that can be used to assess or examine the performance of searching algorithms.
