In computer science, hill climbing is a mathematical optimization technique which belongs to the family of local search. It is an iterative algorithm that starts with an arbitrary solution to a problem, then attempts to find a better solution by incrementally changing a single element of the solution. If the change produces a better solution, an incremental change is made to the new solution, repeating until no further improvements can be found.
For example, hill climbing can be applied to the travelling salesman problem. It is easy to find an initial solution that visits all the cities but will be very poor compared to the optimal solution. The algorithm starts with such a solution and makes small improvements to it, such as switching the order in which two cities are visited. Eventually, a much shorter route is likely to be obtained.
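The city-swapping idea above can be sketched as a short program. This is a minimal illustration, not a tuned TSP solver: `tour_length` and `hill_climb_tsp` are hypothetical helper names, and the neighbourhood is the simple "swap two cities" move described in the text.

```python
import random

def tour_length(tour, dist):
    """Total length of a closed tour, given a distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def hill_climb_tsp(dist):
    """Swap pairs of cities; keep a swap whenever it shortens the tour."""
    n = len(dist)
    tour = list(range(n))
    random.shuffle(tour)          # arbitrary initial solution
    improved = True
    while improved:
        improved = False
        for i in range(n):
            for j in range(i + 1, n):
                candidate = tour[:]
                candidate[i], candidate[j] = candidate[j], candidate[i]
                if tour_length(candidate, dist) < tour_length(tour, dist):
                    tour = candidate      # incremental improvement
                    improved = True
    return tour                   # no swap improves: a local optimum

# four cities at the corners of a square (sides 1, diagonals 2);
# the optimal tour walks the perimeter, with length 4
dist = [[0, 1, 2, 1],
        [1, 0, 1, 2],
        [2, 1, 0, 1],
        [1, 2, 1, 0]]
best = hill_climb_tsp(dist)
```

On this tiny instance every non-optimal tour has an improving swap, so the climb always reaches the perimeter tour; on larger instances it may stop at a local optimum instead.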
Hill climbing is good for finding a local optimum (a solution that cannot be improved by considering a neighbouring configuration) but it is not necessarily guaranteed to find the best possible solution (the global optimum) out of all possible solutions (the search space). In convex problems, hill climbing is optimal. Examples of algorithms that solve convex problems by hill climbing include the simplex algorithm for linear programming and binary search.[1]:253
The limitation that only a local optimum is guaranteed can be mitigated by using restarts (i.e. repeated local search), or by more complex schemes based on iterations (such as iterated local search), on memory (such as reactive search optimization and tabu search), or on memoryless stochastic modifications (such as simulated annealing).
The relative simplicity of the algorithm makes it a popular first choice amongst optimizing algorithms. It is used widely in artificial intelligence, for reaching a goal state from a starting node. Choice of next node and starting node can be varied to give a list of related algorithms. Although more advanced algorithms such as simulated annealing or tabu search may give better results, in some situations hill climbing works just as well. Hill climbing can often produce a better result than other algorithms when the amount of time available to perform a search is limited, such as with real-time systems. It is an anytime algorithm: it can return a valid solution even if it's interrupted at any time before it ends.
2. Hill Climbing
• This is a variant of depth-first (generate-and-test) search.
• Feedback is used here to decide on the direction of motion in the
search space.
• In depth-first search, the test function merely accepts or
rejects a solution.
• But in hill climbing the test function is augmented with a heuristic
function that estimates how close a given state is to the
goal state.
4. The hill climbing test procedure is as follows:
1. Generate a first proposed solution, as in the depth-first procedure. See if it
is a solution. If so, quit; else continue.
2. From this solution, generate a new set of solutions using some applicable rules.
3. For each element of this set:
(i) Apply the test function. If it is a solution, quit.
(ii) Else, see whether it is closer to the goal state than any solution already generated. If yes,
remember it; else discard it.
4. Take the best element generated so far and use it as the next proposed
solution.
This step corresponds to a move through the problem space in the direction
of the goal state.
5. Go back to step 2.
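The steps above can be sketched as a small generic routine. This is a minimal sketch: `start`, `is_goal`, `neighbors`, and `heuristic` are hypothetical placeholders that problem-specific code would supply (with `heuristic` returning larger values for states closer to the goal).

```python
def hill_climb(start, is_goal, neighbors, heuristic):
    """Steepest-ascent hill climbing, following the test procedure above."""
    current = start
    while True:
        if is_goal(current):                    # step 1: test the proposal
            return current
        best = None
        for candidate in neighbors(current):    # step 2: generate new solutions
            if is_goal(candidate):              # step 3(i): a solution, quit
                return candidate
            if best is None or heuristic(candidate) > heuristic(best):
                best = candidate                # step 3(ii): remember the closest
        if best is None or heuristic(best) <= heuristic(current):
            return current                      # no move improves: local maximum
        current = best                          # step 4: move toward the goal

# toy usage: climb the integers toward x = 3, scored by f(x) = -(x - 3)^2
result = hill_climb(0, lambda x: x == 3,
                    lambda x: [x - 1, x + 1],
                    lambda x: -(x - 3) ** 2)
```

Note that the routine returns `current` unchanged when no neighbour scores better, which is exactly the "local maximum" situation discussed next.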
5. Issues
A position that is not a solution, but from which no move
improves things:
(a) A "local maximum" ("foothill"): arises due to high non-linearity.
(b) A "plateau": a flat area of the search space, in which
neighbouring states have the same value.
Hill climbing also becomes inefficient in large problem spaces, and when
combinatorial explosion occurs.
7. What are Genetic Algorithms?
• Genetic Algorithms (GAs) are adaptive heuristic search algorithms
• based on the evolutionary ideas of natural selection and genetics.
• The basic techniques of GAs are designed to simulate processes in
natural systems necessary for evolution,
• in the spirit of Charles Darwin's "survival of the fittest".
8. Why Genetic Algorithms?
• In searching a large state-space, a multi-modal state-space, or an
n-dimensional surface,
• genetic algorithms offer significant benefits over more typical search and
optimization techniques.
9. Genetic Algorithms Overview
• Individuals in a population compete for resources and mates.
• Those individuals most successful in each 'competition' will produce
more offspring than those individuals that perform poorly.
• Genes from `good' individuals propagate throughout the population
so that two good parents will sometimes produce offspring that are
better than either parent.
• Thus each successive generation becomes more suited to its
environment.
11. Implementation Details
• After an initial population is randomly generated, the algorithm
evolves it through three operators:
• selection, which equates to survival of the fittest;
• crossover, which represents mating between individuals;
• mutation, which introduces random modifications.
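The crossover and mutation operators can be sketched for bit-string genomes. This is a minimal illustration under common assumptions (single-point crossover, per-bit mutation); `crossover` and `mutate` are hypothetical names, not a specific library's API.

```python
import random

def crossover(parent_a, parent_b):
    """Single-point crossover: splice the two parents at a random cut."""
    point = random.randint(1, len(parent_a) - 1)
    return parent_a[:point] + parent_b[point:]

def mutate(genome, rate=0.05):
    """Flip each bit independently with a small probability."""
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

child = mutate(crossover([1, 1, 1, 1], [0, 0, 0, 0]))
```

The child inherits a prefix from one parent and a suffix from the other, with occasional random bit flips layered on top.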
12. Selection Operator
• Key idea: give preference to better individuals, allowing them to pass
on their genes to the next generation.
• The goodness of each individual depends on its fitness.
• Fitness may be determined by an objective function or by a subjective
judgement.
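One common way to realise this preference is fitness-proportionate ("roulette wheel") selection, sketched below. This is one choice among several (tournament and rank selection are alternatives); `roulette_select` is a hypothetical helper name, and `fitness` is whatever objective or subjective scoring function the problem provides.

```python
import random

def roulette_select(population, fitness):
    """Pick an individual with probability proportional to its fitness."""
    total = sum(fitness(ind) for ind in population)
    pick = random.uniform(0, total)
    running = 0.0
    for ind in population:
        running += fitness(ind)
        if running >= pick:
            return ind
    return population[-1]   # guard against floating-point round-off

# with fitness = number of 1 bits, [0, 0] has zero fitness
# and can never be chosen
winner = roulette_select([[1, 1], [0, 0]], fitness=sum)
```

A zero-fitness individual occupies no slice of the wheel, so selection pressure automatically favours the fitter genomes.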
15. The Algorithm
• randomly initialize population(t)
• determine fitness of population(t)
• repeat
  • select parents from population(t)
  • perform crossover on parents, creating population(t+1)
  • perform mutation on population(t+1)
  • determine fitness of population(t+1)
• until best individual is good enough
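The loop above can be put together into a runnable sketch. This is a toy instance under stated assumptions: the problem is OneMax (maximize the number of 1 bits), selection is a size-2 tournament, crossover is single-point, and all names and parameter values (`genetic_algorithm`, `pop_size`, `mutation_rate`, and so on) are hypothetical choices for illustration.

```python
import random

def genetic_algorithm(genome_len=10, pop_size=20,
                      mutation_rate=0.02, max_generations=200):
    """GA for the OneMax toy problem, following the loop above."""
    fitness = lambda g: sum(g)          # count of 1 bits

    def select(pop):
        # tournament selection: the fitter of two random individuals wins
        a, b = random.sample(pop, 2)
        return max(a, b, key=fitness)

    # randomly initialize population(t), then determine its fitness
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(max_generations):
        if max(fitness(g) for g in pop) == genome_len:
            break                       # best individual is good enough
        next_pop = []
        for _ in range(pop_size):
            p1, p2 = select(pop), select(pop)         # select parents
            cut = random.randint(1, genome_len - 1)   # crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ 1 if random.random() < mutation_rate else bit
                     for bit in child]                # mutation
            next_pop.append(child)
        pop = next_pop                  # population(t+1)
    return max(pop, key=fitness)

best = genetic_algorithm()
```

On this easy landscape the population usually converges to the all-ones genome within a few dozen generations; harder, multi-modal problems are where the population-based search pays off over plain hill climbing.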