Hill-climbing is a heuristic search algorithm used in artificial intelligence and optimization
problems. It is a variant of greedy local search that continuously moves in the direction of
increasing value (uphill) until it reaches a peak, which is not guaranteed to be the globally optimal solution.
Hill-Climbing Algorithm – Theory
Goal: Maximize (or minimize) a given objective function by iteratively improving the current
state.
Key Characteristics:
Local Search: It only considers the immediate neighbors of the current state.
Greedy Approach: Chooses the neighbor with the best improvement.
No backtracking: Once a move is made, previous states are not revisited.
Variants:
1. Simple Hill Climbing: Considers one neighbor at a time and moves if it's better.
2. Steepest-Ascent Hill Climbing: Evaluates all neighbors and moves to the best.
3. Stochastic Hill Climbing: Chooses at random from among the improving neighbors; it usually converges more slowly than steepest ascent, but in some landscapes it finds better solutions (a sketch of all three selection strategies follows this list).
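The three variants differ only in how the next move is selected. The Python sketch below shows one plausible selection step for each variant; the helpers neighbors(state) and value(state) are assumed placeholders for a problem-specific neighborhood and objective function.

import random

def simple_selection(state, neighbors, value):
    # Simple hill climbing: return the first neighbor that improves on the current state.
    for n in neighbors(state):
        if value(n) > value(state):
            return n
    return None   # no improving neighbor: we are at a (local) peak

def steepest_selection(state, neighbors, value):
    # Steepest-ascent hill climbing: examine all neighbors and return the best one,
    # but only if it actually improves on the current state.
    best = max(neighbors(state), key=value, default=None)
    if best is not None and value(best) > value(state):
        return best
    return None

def stochastic_selection(state, neighbors, value):
    # Stochastic hill climbing: pick uniformly at random among the improving neighbors.
    better = [n for n in neighbors(state) if value(n) > value(state)]
    return random.choice(better) if better else None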
Hill-Climbing Algorithm – Pseudocode
function HillClimb(problem):
    current = initial_state(problem)
    loop:
        neighbor = best_improving_neighbor(current)
        if neighbor.value <= current.value:
            return current   // reached peak
        current = neighbor
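As a concrete (if simplified) translation of the pseudocode, the Python sketch below runs steepest-ascent hill climbing on a toy objective. The objective f(x) = -(x - 7)^2, the integer states, and the step-of-1 neighborhood are assumptions chosen only to make the example self-contained.

def hill_climb(initial, neighbors, value):
    # Steepest-ascent hill climbing: repeatedly move to the best neighbor
    # until no neighbor improves on the current state.
    current = initial
    while True:
        best = max(neighbors(current), key=value, default=None)
        if best is None or value(best) <= value(current):
            return current          # reached a peak
        current = best

# Toy problem (assumed for illustration): maximize f(x) = -(x - 7)^2 over integers.
value = lambda x: -(x - 7) ** 2
neighbors = lambda x: [x - 1, x + 1]    # step left or right by 1

print(hill_climb(0, neighbors, value))  # climbs 0 -> 1 -> ... -> 7, prints 7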
Hill-Climbing Algorithm – Example
            A (h=10)
           /        \
      B (h=8)     C (h=9)
      /      \          \
 D (h=7)   E (h=6)    F (h=5)
Goal: Find the node with the lowest heuristic (best state).
Hill-Climbing Process (Best neighbor selection):
1. Start at A (h=10)
o Neighbors: B(h=8), C(h=9)
o Best neighbor: B (h=8) → move to B
2. At B (h=8)
o Neighbors: D(h=7), E(h=6)
o Best neighbor: E (h=6) → move to E
3. At E (h=6)
o No better neighbors → stop
Final state: E (h=6)
But observe: F (h=5) is better, yet we never reached it.
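The following sketch reproduces this walkthrough in Python: it encodes the example tree and repeatedly moves to the child with the lowest h. The dictionary layout and the function name hill_climb_min are assumptions made for illustration.

# Example tree from above: heuristic values and children of each node.
h        = {"A": 10, "B": 8, "C": 9, "D": 7, "E": 6, "F": 5}
children = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"],
            "D": [], "E": [], "F": []}

def hill_climb_min(start):
    # Move to the child with the lowest h, as long as it improves on the current node.
    current = start
    while True:
        best = min(children[current], key=h.get, default=None)
        if best is None or h[best] >= h[current]:
            return current
        current = best

print(hill_climb_min("A"))   # prints E: A -> B -> E, never reaching F (h=5)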
Problem: Local Minimum
The algorithm stops at E, a local minimum of h, and never reaches F, the global minimum. The greedy choice at A committed the search to the B branch, and because hill-climbing does not backtrack, it stays stuck there.
How to Improve:
To handle this, we can use:
Random restarts – run hill-climbing from several random initial states and keep the best result (a short sketch follows this list)
Simulated annealing – occasionally accept a worse move, with a probability that decreases over time, to escape local optima
Backtracking – remember untried alternatives (such as C in the example above) and return to them when the search gets stuck
Stochastic hill-climbing – choose randomly among the improving moves rather than always taking the greedy one
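As an illustration of the first fix, the sketch below wraps a steepest-ascent climber in a random-restart loop. The toy multi-peaked objective, the state range 0–7, and the number of restarts are assumptions chosen only to make the example self-contained and runnable.

import random

def hill_climb(start, neighbors, value):
    # Steepest-ascent hill climbing (maximizing), as in the earlier sketch.
    current = start
    while True:
        best = max(neighbors(current), key=value, default=None)
        if best is None or value(best) <= value(current):
            return current
        current = best

def random_restart_hill_climb(restarts, random_start, neighbors, value):
    # Run hill-climbing from several random initial states and keep the best peak found.
    best = None
    for _ in range(restarts):
        peak = hill_climb(random_start(), neighbors, value)
        if best is None or value(peak) > value(best):
            best = peak
    return best

# Toy multi-modal objective (assumed for illustration): two peaks, the higher one at x = 5.
value        = lambda x: [0, 3, 1, 0, 2, 5, 2, 0][x]
neighbors    = lambda x: [n for n in (x - 1, x + 1) if 0 <= n <= 7]
random_start = lambda: random.randint(0, 7)

print(random_restart_hill_climb(10, random_start, neighbors, value))  # usually prints 5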