CS 307 Lab 08

Lab 8

Hill Climbing Search

Overview
- The hill climbing algorithm is a local search algorithm that continuously moves in the direction of increasing elevation/value to find the peak of the mountain, i.e. the best solution to the problem. It terminates when it reaches a peak where no neighbor has a higher value.
- Hill climbing is a technique used for optimizing mathematical problems. One of the most widely discussed examples is the Traveling Salesman Problem, in which we need to minimize the distance traveled by the salesman.
- It is also called greedy local search because it only looks at its immediate neighbor states and not beyond them.
- A node in the hill climbing algorithm has two components: a state and a value.
- Hill climbing is mostly used when a good heuristic is available.
- The algorithm does not need to maintain or handle a search tree or graph, as it only keeps a single current state.

Features of Hill Climbing:

The following are the main features of the hill climbing algorithm:

- Generate and Test variant: Hill climbing is a variant of the Generate and Test method. The Generate and Test method produces feedback that helps decide which direction to move in the search space.
- Greedy approach: The search moves in the direction that optimizes the cost.
- No backtracking: It does not backtrack through the search space, as it does not remember previous states.
Algorithm for Simple Hill Climbing:
- Step 1: Evaluate the initial state. If it is the goal state, return success and stop.
- Step 2: Loop until a solution is found or there is no new operator left to apply.
- Step 3: Select and apply an operator to the current state.
- Step 4: Check the new state:
  1. If it is the goal state, return success and quit.
  2. Else, if it is better than the current state, make the new state the current state.
  3. Else, if it is not better than the current state, return to Step 2.
- Step 5: Exit.
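A rough Python sketch of this loop is given below; the evaluate(), is_goal(), and successors() helpers are hypothetical placeholders for whatever a particular problem provides, not library functions.

# a rough sketch of simple hill climbing; evaluate(), is_goal() and
# successors() are hypothetical problem-specific helpers, not library calls
def simple_hill_climbing(initial_state, evaluate, is_goal, successors):
    current = initial_state
    if is_goal(current):
        return current
    while True:
        improved = False
        for candidate in successors(current):   # apply each operator in turn
            if is_goal(candidate):
                return candidate
            if evaluate(candidate) > evaluate(current):
                current = candidate             # keep the better neighbor
                improved = True
                break                           # restart from the new state
        if not improved:
            return current                      # no operator improves: stop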

Hill Climbing Algorithm Implementation

At the time of writing, the SciPy library does not provide an implementation of stochastic hill climbing. Nevertheless, we can implement it ourselves. First, we must define our objective function and the bounds on each of its input variables. The objective function is just a Python function, which we will name objective(). The bounds will be a 2D array with one row per input variable, defining the minimum and maximum for that variable.

For example, a one-dimensional objective function and bounds would be defined as follows:
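A minimal sketch, assuming NumPy and the simple x^2 function used later in this lab:

from numpy import asarray

# objective function: a simple one-dimensional quadratic (smaller is better)
def objective(x):
    return x[0] ** 2.0

# bounds: one row per input variable, giving its [minimum, maximum]
bounds = asarray([[-5.0, 5.0]])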

Next, we can generate our initial solution as a random point within the bounds of the problem,
then evaluate it using the objective function.
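For example, one way to sketch this with NumPy's random number generator:

from numpy.random import rand

# random starting point within the bounds, and its evaluation
solution = bounds[:, 0] + rand(len(bounds)) * (bounds[:, 1] - bounds[:, 0])
solution_eval = objective(solution)
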
Now we can loop over a predefined number of iterations of the algorithm, defined as "n_iterations", such as 100 or 1,000.
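As a skeleton (the body of the loop is filled in piece by piece below):

n_iterations = 1000

for i in range(n_iterations):
    # take a step, evaluate the new point, and keep it if it is better
    ...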

The first step of the algorithm iteration is to take a step.

This requires a predefined "step_size" parameter, which is relative to the bounds of the search space. We will take a random step with a Gaussian distribution where the mean is our current point and the standard deviation is defined by the "step_size". That means that about 99.7 percent of the steps taken will be within (3 * step_size) of the current point.
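A sketch of one such step, assuming NumPy's randn() and a step_size of 0.1:

from numpy.random import randn

step_size = 0.1
# Gaussian step: mean at the current point, standard deviation of step_size
candidate = solution + randn(len(bounds)) * step_size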

We don’t have to take steps in this way. You may wish to use a uniform distribution between 0
and the step size. For example:
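One way that variant might be sketched (note that rand() only produces offsets in the positive direction):

from numpy.random import rand

# uniform step: offset drawn uniformly between 0 and step_size
candidate = solution + rand(len(bounds)) * step_size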

Next we need to evaluate the new candidate solution with the objective function.

We then need to check if the evaluation of this new point is as good as or better than the current
best point, and if it is, replace our current best point with this new point.
And that’s it.
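Putting those last two pieces together, a sketch of the loop body might look like this:

# evaluate the candidate point
candidate_eval = objective(candidate)

# keep the candidate if it is as good as or better than the best so far
if candidate_eval <= solution_eval:
    solution, solution_eval = candidate, candidate_eval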

We can implement this hill climbing algorithm as a reusable function that takes the objective function, the bounds of each input variable, the total number of iterations, and the step size as arguments, and returns the best solution found and its evaluation.
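A complete sketch of such a function, combining the pieces above, is given below; the name hillclimbing() and the progress line printed whenever an improvement is found are illustrative choices, not fixed requirements.

from numpy import asarray
from numpy.random import rand, randn

def hillclimbing(objective, bounds, n_iterations, step_size):
    # generate and evaluate an initial random point within the bounds
    solution = bounds[:, 0] + rand(len(bounds)) * (bounds[:, 1] - bounds[:, 0])
    solution_eval = objective(solution)
    # run the hill climb
    for i in range(n_iterations):
        # take a Gaussian step from the current point and evaluate it
        candidate = solution + randn(len(bounds)) * step_size
        candidate_eval = objective(candidate)
        # accept the candidate if it is as good as or better
        if candidate_eval <= solution_eval:
            solution, solution_eval = candidate, candidate_eval
            print('>%d f(%s) = %.5f' % (i, solution, solution_eval))
    return [solution, solution_eval]

# example usage with the objective and bounds defined earlier
best, score = hillclimbing(objective, asarray([[-5.0, 5.0]]), 1000, 0.1)
print('Done! f(%s) = %f' % (best, score))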

Task
In this section, each student will apply the hill climbing optimization algorithm to an objective
function.

First, let’s define the objective function.

Use the simple one-dimensional x^2 objective function with the bounds [-5, 5]. For the given task, create a line plot of the function's response surface over a grid of input values and mark the optimum at f(0.0) = 0.0 with a red line.
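A starting point for the plot, assuming Matplotlib (the red vertical line marks the optimum at x = 0.0):

from numpy import arange
from matplotlib import pyplot

# objective function used in the task
def objective(x):
    return x[0] ** 2.0

# bounds of the single input variable
r_min, r_max = -5.0, 5.0
# sample the input range uniformly at 0.1 increments
inputs = arange(r_min, r_max, 0.1)
# evaluate the objective at each sampled point
results = [objective([x]) for x in inputs]
# line plot of the response surface
pyplot.plot(inputs, results)
# mark the optimum f(0.0) = 0.0 with a red line
pyplot.axvline(x=0.0, ls='--', color='red')
pyplot.show()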
