Chapter 3

Prepared By: Er. Manoj Kr Basnet
Problem Solving:
Problem solving, particularly in artificial intelligence, may be characterized as a
systematic search through a range of possible actions in order to reach some
predefined goal or solution.

Problem-solving methods divide into special-purpose and general-purpose methods.

A special-purpose method is tailor-made for a particular problem and often exploits very specific features of the situation in which the problem is embedded.

In contrast, a general-purpose method is applicable to a wide variety of problems.

One general-purpose technique used in AI is means-end analysis: a step-by-step, or incremental, reduction of the difference between the current state and the final goal.

Four general steps in problem solving:

•Goal formulation: What are the successful world states?

•Problem formulation: What actions and states to consider, given the goal?

•Search: Determine the possible sequences of actions that lead to states of known values, then choose the best sequence.

•Execute: Given the solution, perform the actions.

Problem formulation:
A problem is defined by:

An initial state: The state from which the agent starts.

Successor function: A description of the possible actions available to the agent.

Goal test: Determines whether a given state is a goal state or not.

Path cost: The sum of the costs of each step from the initial state to the given state.

A solution is a sequence of actions from the initial state to a goal state.

An optimal solution has the lowest path cost.
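The components above can be captured in a small sketch. This is a minimal illustration; the names (Problem, step_cost, path_cost) are assumptions for this example, not from any standard library:

```python
from dataclasses import dataclass
from typing import Callable, Hashable

# A minimal sketch of the problem components named above; all names here
# are illustrative assumptions, not a standard API.
@dataclass
class Problem:
    initial_state: Hashable
    successors: Callable   # state -> iterable of (action, next_state) pairs
    goal_test: Callable    # state -> bool
    step_cost: Callable = lambda s, a, s2: 1  # default: each step costs 1

def path_cost(problem, states, actions):
    """Sum of step costs along a path (the 'path cost' component above)."""
    total = 0
    for s, a, s2 in zip(states, actions, states[1:]):
        total += problem.step_cost(s, a, s2)
    return total
```

A solution found by any of the search methods below is then just a list of states and actions whose path cost we can evaluate.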

Example Problems
The problem-solving approach has been applied to a vast array of task
environments.

We list some of the best known here, distinguishing between toy and real world
problems.

A toy problem is intended to illustrate or exercise various problem-solving methods.

It can be given a concise, exact description and hence is used by different researchers to compare the performance of algorithms.

A real-world problem is one whose solutions people actually care about.

Such problems tend not to have a single agreed-upon description, but we can give the general flavor of their formulations.

Toy problems
The first example we examine is the vacuum cleaner world, which is shown in the figure.

Fig: Vacuum cleaner world with just two locations

This can be formulated as a problem as follows:

•States: The state is determined by both the agent's location and the dirt locations.

The agent is in one of two locations, each of which might or might not contain dirt.

Thus there are 2 × 2² = 8 possible world states.

A larger environment with n locations has n · 2ⁿ states.

Initial state:
Any state can be designated as the initial state.

Actions:
In this simple environment, each state has just three actions: Left, Right and Suck.

Larger environments might include Up and Down.

Transitional model:
The actions have their expected effects, except that moving Left in the leftmost
square, moving Right in the rightmost square and Sucking in the clean square
have no effect.



Goal test:
This checks whether all squares are clean.

Path cost:
Each step costs 1, so the path cost is the number of steps in the path.

8-puzzle
It consists of a 3x3 board with eight numbered tiles and a blank space.

A tile adjacent to the blank space can slide into the space.

The object is to reach a specified goal state, such as the one shown on the right of the figure.

The standard formulation is as follows:
States:
A state description specifies the location of each of the eight tiles and the blank in
one of the nine squares.

Initial state:
Any state can be designated as the initial state.

Actions:
The simplest formulation defines the actions as movements of the blank space Left,
Right, Up or Down.

Transitional model:
Given a state and an action, this returns the resulting state; for example, if we apply Left to the start state in the figure, the resulting state has the 5 and the blank switched.

Goal test:
This checks whether the state matches the goal configuration shown in the figure.



Path cost:
Each step costs 1, so the path cost is the number of steps in the path.
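The 8-puzzle formulation above (states, actions, transition model) can be sketched as follows; the tuple encoding and function names are illustrative assumptions:

```python
# A sketch of the 8-puzzle formulation above: a state is a 9-tuple read
# row by row, with 0 marking the blank; each action slides the blank.
MOVES = {"Left": -1, "Right": 1, "Up": -3, "Down": 3}

def actions(state):
    """Legal blank moves for a 3x3 board stored row by row."""
    i = state.index(0)              # position of the blank
    row, col = divmod(i, 3)
    legal = []
    if col > 0: legal.append("Left")
    if col < 2: legal.append("Right")
    if row > 0: legal.append("Up")
    if row < 2: legal.append("Down")
    return legal

def result(state, action):
    """Transition model: swap the blank with the adjacent tile."""
    i = state.index(0)
    j = i + MOVES[action]
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)
```

The goal test is then a simple equality check against the goal tuple, and the path cost is the number of moves.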

Real world problems

Route-finding algorithms are used in a variety of applications.

Routing video streams in computer networks, military operations planning and airline travel-planning systems involve much more complex specifications.

Consider the airline travel problem that must be solved by a travel-planning website:

States:
Each state obviously includes a location (e.g. an airport) and the current time.

Initial state:
This is specified by the user’s query.



Actions:
Take any flight from the current location, in any seat class, leaving after the current time, leaving enough time for within-airport transfer if needed.

Transitional model:
The state resulting from taking a flight will have the flight’s destination as the
current location and the flight’s arrival time as the current time.

Goal test:
Are we at the final destination specified by the user?

Path cost:
This depends on monetary cost, waiting time, flight time, customs and
immigration procedures, seat quality, time of day, type of airplane and so on.



-VLSI layout problem requires positioning millions of components and
connections on a chip to minimize area, minimize circuit delays and
maximize manufacturing yield.

-Robot navigation

-Automatic assembly sequencing

-Traveling salesperson problem

-Touring problems

State Space representation

The state space is commonly defined as a directed graph in which each node is a state and each arc represents the application of an operator transforming a state to a successor state.

A solution is a path from the initial state to a goal state.
State Space representation of Vacuum World Problem:

States?? Two locations, each with or without dirt: 2 × 2² = 8 states.

Initial state?? Any state can be the initial state.

Actions?? {Left, Right, Suck}

Goal test?? Check whether all squares are clean.

Path cost?? Number of actions to reach the goal.
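The state count above can be checked with a short sketch; the encoding of a state as an (agent location, dirt flags) pair is an assumption for illustration:

```python
from itertools import product

# Enumerating the vacuum world's state space: the agent is at one of two
# locations and each location is independently dirty or clean, giving
# 2 x 2^2 = 8 states. The (agent, dirt) encoding is illustrative.
locations = ["Left", "Right"]
states = [(agent, dirt) for agent in locations
          for dirt in product([True, False], repeat=len(locations))]
assert len(states) == 8  # matches 2 x 2^2

def n_location_states(n):
    """A larger environment with n locations has n * 2^n states."""
    return n * 2 ** n
```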



Water Leakage Problem:



If hall_wet and kitchen_dry then leak_in_bathroom
If hall_wet and bathroom_dry then problem_in_kitchen
If window_closed or no_rain then no_water_from_outside
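The three rules above can be sketched as condition/conclusion pairs over a dictionary of boolean facts. This representation is a minimal illustration, not a standard rule engine:

```python
# The three leakage rules above, as (condition, conclusion) pairs over a
# dictionary of boolean facts; the fact names mirror the rules.
rules = [
    (lambda f: f["hall_wet"] and f["kitchen_dry"], "leak_in_bathroom"),
    (lambda f: f["hall_wet"] and f["bathroom_dry"], "problem_in_kitchen"),
    (lambda f: f["window_closed"] or f["no_rain"], "no_water_from_outside"),
]

def conclusions(facts):
    """Return every conclusion whose condition holds for the given facts."""
    return [c for cond, c in rules if cond(facts)]
```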
Well defined problem
A well defined problem can be described by:
•Successor function

•State space

•Path

•Path cost

•Goal test

Importance of Search in AI
• Many of the tasks underlying AI can be phrased in terms of search for the solution
to the problem.

• Many goal-based agents are essentially problem-solving agents which must decide what to do by searching for a sequence of actions that leads to their solutions.

• For production systems, we need to search for a sequence of rule applications that leads to the required fact or action.

• For neural network systems, we need to search for the set of connection weights that will result in the required input-to-output mapping.

Searching
A search problem
The figure below contains a representation of a map.
The nodes represent cities and the links represent direct road connections between
cities.

The number associated to a link represents the length of the corresponding road.

The search problem is to find a path from the city S to the city G.


Figure: A graph representation of a map. The nodes are S, A, B, C, D, E, F and the goal G; the number on each link gives the length of the corresponding road.

This problem will be used to illustrate some search methods.



There are two broad classes of search methods:
- uninformed (or blind) search methods;
- heuristically informed search methods.

In the case of the uninformed search methods, the order in which potential solution paths are considered is arbitrary, using no domain-specific information to judge where the solution is likely to lie.

In the case of the heuristically informed search methods, one uses domain-dependent (heuristic) information in order to search the space more efficiently.



Measuring problem Solving Performance
We will evaluate the performance of a search algorithm in four ways:

•Completeness: An algorithm is said to be complete if it definitely finds a solution to the problem, if one exists.

•Time Complexity: How long (worst or average case) does it take to find a
solution? Usually measured in terms of the number of nodes expanded

•Space Complexity: How much space is used by the algorithm? Usually measured
in terms of the maximum number of nodes in memory at a time

•Optimality/Admissibility: If a solution is found, is it guaranteed to be an optimal one? For example, is it the one with minimum cost?

Time and space complexity are measured in terms of:
b -- maximum branching factor (number of successors of any node) of the search tree
d -- depth of the least-cost (shallowest) solution
m -- maximum length of any path in the state space
Breadth First Search
All nodes are expanded at a given depth in the search tree before any nodes at the next level are expanded, until the goal is reached.

Expand the shallowest unexpanded node.
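The procedure above can be sketched in Python. This is a minimal illustration, assuming the graph is given as a mapping from each node to its list of successors:

```python
from collections import deque

# A minimal breadth-first search sketch: expand the shallowest unexpanded
# node first by keeping the frontier in a FIFO queue of paths.
def bfs(graph, start, goal):
    frontier = deque([[start]])          # queue of paths, shallowest first
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for succ in graph.get(node, []):
            if succ not in visited:
                visited.add(succ)
                frontier.append(path + [succ])
    return None  # no solution
```

Because the frontier is processed level by level, the first path returned is always a shallowest one.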



BFS Evaluation:
Completeness:
Does it always find a solution if one exists?
YES, if the shallowest goal node is at some finite depth d and the branching factor b is finite.

Time complexity:
Assume a state space where every state has b successors.

The root has b successors, each node at the next level has again b successors (total b²), ...

Assume the solution is at depth d.
Worst case: expand all except the last node at depth d.
Total no. of nodes generated:
b + b² + b³ + ... + bᵈ + (bᵈ⁺¹ − b) = O(bᵈ⁺¹)

Space complexity:
Each node that is generated must remain in memory.
Total no. of nodes in memory:
1 + b + b² + b³ + ... + bᵈ + (bᵈ⁺¹ − b) = O(bᵈ⁺¹)

Optimal (i.e., admissible):
Yes, if all paths have the same cost.

Otherwise, it is not optimal, but it finds the solution with the shortest path length (the shallowest solution).

If paths do not all have the same cost, the shallowest solution may not be optimal.



Two lessons:
Memory requirements are a bigger problem than execution time.

Exponential-complexity search problems cannot be solved by uninformed search methods for any but the smallest instances.

DEPTH   NODES     TIME           MEMORY
2       1100      0.11 seconds   1 megabyte
4       111,100   11 seconds     106 megabytes
6       10⁷       19 minutes     10 gigabytes
8       10⁹       31 hours       1 terabyte
10      10¹¹      129 days       101 terabytes
12      10¹³      35 years       10 petabytes
14      10¹⁵      3,523 years    1 exabyte

Depth First Search

Looks for the goal node among all the children of the current node before using the sibling of this node, i.e. expand the deepest unexpanded node.


(Figure: a depth-first search trace from S down to G.)
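A minimal DFS sketch, again assuming the graph is a mapping from each node to its list of successors: a stack replaces BFS's FIFO queue, so the deepest unexpanded node is taken first.

```python
# A minimal depth-first search sketch: expand the deepest unexpanded node
# first by keeping the frontier in a stack of paths.
def dfs(graph, start, goal):
    frontier = [[start]]                 # stack of paths, deepest on top
    visited = set()
    while frontier:
        path = frontier.pop()
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        # reversed() so the leftmost successor is explored first
        for succ in reversed(graph.get(node, [])):
            if succ not in path:         # avoid loops on the current path
                frontier.append(path + [succ])
    return None
```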
DFS Evaluation:
Completeness:
Does it always find a solution if one exists?
NO
If the search space is infinite or contains loops, then DFS may not find a solution.

Time complexity:
Let m be the maximum depth of the search tree.

In the worst case the solution may exist at depth m.

The root has b successors, each node at the next level has again b successors (total b²), ...

Worst case: expand all except the last node at depth m.

Total no. of nodes generated:
b + b² + b³ + ... + bᵐ = O(bᵐ)

Space complexity:
It needs to store only a single path from the root node to a leaf node, along with the remaining unexpanded sibling nodes for each node on the path.

Total no. of nodes in memory:
1 + b + b + b + ... + b (m times) = O(b·m)

Optimal (i.e., admissible):



DFS expands the deepest node first; it may expand the entire left sub-tree even if the right sub-tree contains goal nodes at level 2 or 3.

Thus DFS may not always give an optimal solution.

Uniform Cost Search:

Uniform-cost search (UCS) is a modified version of BFS that makes it optimal.

It is basically a tree search algorithm used for traversing or searching a weighted tree, tree structure, or graph.

The search begins at the root node.

The search continues by visiting the next node which has the least total cost from the root.

Nodes are visited in this manner until a goal state is reached.

Typically, the search algorithm involves expanding nodes by adding all unexpanded neighboring nodes that are connected by directed paths to a priority queue.

In the queue, each node is associated with its total path cost from the root, where the least-cost paths are given highest priority.

The node at the head of the queue is subsequently expanded, adding the next set of connected nodes with the total path cost from the root to the respective node.

The uniform-cost search is complete and optimal if the cost of each step exceeds
some positive bound ε.

UCS does not care about the number of steps, only about the total cost.

•Complete? Yes, if step cost ≥ ε (a small positive number).

•Time? Of the same order as BFS in the worst case.

•Space? Of the same order as BFS.

•Optimal? Yes.
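The priority-queue formulation above can be sketched as follows, assuming a weighted graph given as a mapping node -> {neighbor: step cost} (an illustrative encoding):

```python
import heapq

# A uniform-cost search sketch: the frontier is a priority queue ordered
# by total path cost g from the root; the first goal popped is cheapest.
def ucs(graph, start, goal):
    frontier = [(0, start, [start])]     # (cost so far, node, path)
    best = {start: 0}                    # cheapest known cost per node
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        for succ, step in graph.get(node, {}).items():
            new_cost = cost + step
            if new_cost < best.get(succ, float("inf")):
                best[succ] = new_cost
                heapq.heappush(frontier, (new_cost, succ, path + [succ]))
    return None
```

Note that a goal is only accepted when it is popped from the queue, not when it is first generated; this is what makes UCS optimal.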

(Figure: a worked uniform-cost search trace on a tree rooted at A, with goals G1 and G2. Note: heuristic estimates are not used in this search. Starting with the root node, paths from the root are generated; since B has the least cost, it is expanded first. Later, node H has the least cost so far, so it is expanded. When the goal G2 is found at cost 21, the other branches must still be expanded to see if there is another goal at a smaller distance; nodes F and N both have cost 15, and we choose to expand the leftmost first. We continue expanding until all remaining paths are greater than 21, the cost of G2.)

Depth Limited Search:

The problem of unbounded trees can be solved by supplying depth-first search with a predetermined depth limit l (nodes at depth l are treated as if they have no successors): depth-limited search.

Depth-limited search is an algorithm to explore the vertices of a graph.

It is a modification of depth-first search and is used, for example, in the iterative deepening depth-first search algorithm.



Like the normal depth-first search, depth-limited search is an uninformed search.

It works exactly like depth-first search, but avoids its drawbacks regarding
completeness by imposing a maximum limit on the depth of the search.

Even if the search could still expand a vertex beyond that depth, it will not do so and
thereby it will not follow infinitely deep paths or get stuck in cycles.

Therefore depth-limited search will find a solution if it is within the depth limit,
which guarantees at least completeness on all graphs.

It solves the infinite-path problem of DFS.

Yet it introduces another source of problems if we are unable to find a good guess of the limit l.

Let d be the depth of the shallowest solution.

If l < d, then incompleteness results.

If l > d, then it is not optimal.

Time complexity: O(bˡ)

Space complexity: O(b·l)

Iterative Deepening Depth First Search:

In this strategy, depth-limited search is run repeatedly, increasing the depth limit with each iteration until it reaches d, the depth of the shallowest goal state.

On each iteration, IDDFS visits the nodes in the search tree in the same order as depth-first search, but the cumulative order in which nodes are first visited, assuming no pruning, is effectively breadth-first.

IDDFS combines depth-first search's space efficiency and breadth-first search's completeness (when the branching factor is finite).



It is optimal when the path cost is a non-decreasing function of the depth of the
node.

The technique of iterative deepening is based on this idea.

Iterative deepening is depth-first search to a fixed depth in the tree being searched.

If no solution is found up to this depth, then the depth to be searched is increased and the whole 'bounded' depth-first search begins again.

It works by setting a depth of search, say depth 1, and doing depth-first search to that depth.

If a solution is found then the process stops; otherwise, increase the depth by, say, 1 and repeat until a solution is found.

Note that every time we start up a new bounded depth-first search we start from scratch, i.e. we throw away any results from the previous search.

Now iterative deepening is a popular method of search.

We explain why this is so.

Depth-first search can be implemented to be much cheaper than breadth-first search in terms of memory usage, but it is not guaranteed to find a solution even where one is guaranteed to exist.

On the other hand, breadth-first search can be guaranteed to terminate if there is a winning state to be found and will always find the 'quickest' solution (in terms of how many steps need to be taken from the root node).

It is, however, a very expensive method in terms of memory usage.



Iterative deepening is liked because it is an effective compromise between the two other methods of search.

It is a form of depth-first search with an upper bound on how deep the search can go.

Iterative deepening terminates if there is a solution.

It can produce the same solution that breadth-first search would produce but does not require the same memory usage.

Note that depth-first search achieves its efficiency by generating the next node to explore only when this is needed.

The breadth-first search algorithm has to grow all the search paths available until a solution is found, and this takes up memory.

Iterative deepening achieves its memory saving in the same way that depth-first search does, at the expense of redoing some computations again and again (a time cost rather than a memory one).

•Complete (like BFS)

•Has linear memory requirements (like DFS)

•Classical time-space tradeoff.

•This is the preferred method for large state spaces, where the solution path length
is unknown.

The overall idea: the depth limit is increased gradually until the goal node is found.
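The depth-limited search and the iterative deepening loop described above can be sketched as follows; the graph encoding (node -> list of successors) and the function names are illustrative assumptions:

```python
# A sketch of depth-limited search and the iterative deepening loop that
# wraps it, increasing the limit until a solution appears.
def depth_limited(graph, node, goal, limit, path=None):
    path = (path or []) + [node]
    if node == goal:
        return path
    if limit == 0:
        return None                      # treat nodes at the limit as leaves
    for succ in graph.get(node, []):
        if succ not in path:             # avoid cycles on the current path
            found = depth_limited(graph, succ, goal, limit - 1, path)
            if found:
                return found
    return None

def iddfs(graph, start, goal, max_depth=50):
    # Each iteration starts from scratch, discarding previous results.
    for limit in range(max_depth + 1):
        found = depth_limited(graph, start, goal, limit)
        if found:
            return found
    return None
```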

Iterative Deepening search evaluation:
Completeness:
YES (no infinite paths).

Time complexity:
The algorithm seems costly due to repeated generation of certain states.
Node generation:
level d: once
level d-1: twice
level d-2: 3 times
...
level 2: d-1 times
level 1: d times
Total no. of nodes generated:
d·b + (d-1)·b² + (d-2)·b³ + ... + 1·bᵈ = O(bᵈ)

Space complexity:
It needs to store only a single path from the root node to a leaf node, along with the remaining unexpanded sibling nodes for each node on the path.
Total no. of nodes in memory:
1 + b + b + b + ... + b (d times) = O(b·d)

Optimality:
YES, if the path cost is a non-decreasing function of the depth of the node.
Bidirectional Search:
This is a search algorithm which replaces a single search graph, which is likely to grow exponentially, with two smaller graphs: one starting from the initial state and one starting from the goal state.

It then expands nodes from the start and goal states simultaneously.

Check at each stage whether the nodes of one search have been generated by the other, i.e., whether the two frontiers meet in the middle.

If so, the concatenation of the two paths is the solution.



Performance parameters:
Completeness: yes

Optimality: yes (if done with the correct strategy, e.g. breadth-first)

Time complexity: O(b^(d/2))

Space complexity: O(b^(d/2))
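A sketch of bidirectional breadth-first search, assuming an undirected graph given as node -> list of neighbors (an illustrative encoding):

```python
from collections import deque

# Bidirectional BFS sketch: two frontiers, one from the start and one from
# the goal, expanded alternately until the two searches meet.
def bidirectional(graph, start, goal):
    if start == goal:
        return [start]
    parents_s, parents_g = {start: None}, {goal: None}
    frontier_s, frontier_g = deque([start]), deque([goal])

    def build_path(meet):
        left, node = [], meet
        while node is not None:          # walk back to the start
            left.append(node)
            node = parents_s[node]
        left.reverse()
        node = parents_g[meet]
        while node is not None:          # walk forward to the goal
            left.append(node)
            node = parents_g[node]
        return left

    while frontier_s and frontier_g:
        for frontier, parents, other in ((frontier_s, parents_s, parents_g),
                                         (frontier_g, parents_g, parents_s)):
            node = frontier.popleft()
            for succ in graph.get(node, []):
                if succ not in parents:
                    parents[succ] = node
                    if succ in other:    # the two searches meet here
                        return build_path(succ)
                    frontier.append(succ)
    return None
```

Each search only needs to go about half the solution depth, which is where the O(b^(d/2)) bounds above come from.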



Informed or Heuristic Search:
Heuristic search uses domain-dependent (heuristic) information in order to search the space more efficiently.

Ways of using heuristic information:

•Deciding which node to expand next, instead of doing the expansion in a strictly breadth-first or depth-first order;

•In the course of expanding a node, deciding which successor or successors to generate, instead of blindly generating all possible successors at one time;

•Deciding that certain nodes should be discarded, or pruned, from the search space.

Heuristic Searches - Why Use?

•It may be too resource-intensive (in both time and space) to use a blind search.

•Even if a blind search will work, we may want a more efficient search method.



Informed Search uses domain specific information to improve the search pattern
-Define a heuristic function, h(n), that estimates the "goodness" of a node n.
-Specifically, h(n) = estimated cost (or distance) of minimal cost path from n
to a goal state.
-The heuristic function is an estimate, based on domain-specific information
that is computable from the current state description, of how close we are to
a goal.

Best-First Search
Idea: use an evaluation function f(n) that gives, for each node, an indication of which node to expand next.
-It usually gives an estimate of the distance to the goal.
-The node with the lowest value is expanded first.

A key component of f(n) is a heuristic function, h(n), which encodes additional knowledge of the problem.



There is a whole family of best-first search strategies, each with a different
evaluation function.

Typically, strategies use estimates of the cost of reaching the goal and try to
minimize it.

Special cases, based on the evaluation function:
-Greedy best-first search
-A* search
Greedy Best First Search
The best-first search part of the name means that it uses an evaluation function to
select which node is to be expanded next.

The node with the lowest evaluation is selected for expansion because that is the
best node, since it supposedly has the closest path to the goal (if the heuristic is
good).

Unlike A*, which uses both the link costs and a heuristic estimate of the cost to the goal, greedy best-first search uses only the heuristic, and not any link costs.

A disadvantage of this approach is that if the heuristic is not accurate, it can go down paths with high link cost, since there might be a low heuristic value for the connecting node.

Evaluation function: f(n) = h(n) (heuristic) = estimate of cost from n to the goal.

e.g., h_SLD(n) = straight-line distance from n to the goal.

Greedy best-first search expands the node that appears to be closest to the goal.

The greedy best-first search algorithm is O(bᵐ) in terms of both space and time complexity, where b is the average branching factor (the average number of successors of a state) and m is the maximum depth of the search tree.

For example, consider the following graph:



Straight-line distances to node G (the goal node) from the other nodes are given below:
S → G = 12
B → G = 4
E → G = 7
D → G = 5
C → G = 3
Let h(n) = straight-line distance.
Now the greedy search operation proceeds as below:

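A minimal sketch of the greedy expansion, using the straight-line distances listed above. Since the original graph figure did not survive extraction, the edge list below is a hypothetical stand-in:

```python
import heapq

# Greedy best-first search: rank frontier nodes by h(n) alone, ignoring
# link costs. The h values are the straight-line distances listed above;
# the edge list is a hypothetical stand-in for the lost figure.
h = {"S": 12, "B": 4, "E": 7, "D": 5, "C": 3, "G": 0}
graph = {"S": ["B", "D"], "B": ["C", "E"], "D": ["E"],
         "C": ["G"], "E": ["G"]}

def greedy_best_first(graph, h, start, goal):
    frontier = [(h[start], start, [start])]
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)   # lowest h(n) first
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for succ in graph.get(node, []):
            heapq.heappush(frontier, (h[succ], succ, path + [succ]))
    return None
```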
Evaluating Greedy Search:
Completeness:
– While minimizing the value of the heuristic, greedy search may oscillate between two nodes. Thus it is not complete.

Optimality:
– Greedy search is not optimal; same as DFS.

Time complexity:
– In the worst case greedy search is the same as DFS, therefore its time complexity is O(bᵐ).

Space Complexity:
– The space complexity is O(bᵐ), since no nodes can be deleted from memory.

A* Search
Like all informed search algorithms, it first searches the routes that appear to be most likely to lead towards the goal.

What sets A* apart from greedy best-first search is that it also takes the distance already traveled into account (the g(n) part of the evaluation function is the cost from the start, not simply the local cost from the previously expanded node).

The algorithm traverses various paths from start to goal.

The evaluation function used by the A* search algorithm is:

f(n) = g(n) + h(n)

Where:
•g(n): the actual shortest distance traveled from the initial node to the current node
•h(n): the estimated (or "heuristic") distance from the current node to the goal
•f(n): the sum of g(n) and h(n)

For example, consider the following graph:



Straight-line distances to node G (the goal node) from the other nodes are given below:
S → G = 12
B → G = 4
E → G = 7
D → G = 6
C → G = 3
Let h(n) = straight-line distance.
Now the A* search operation proceeds as below:

G is the goal node, hence we are done.
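A sketch of A* using f(n) = g(n) + h(n) and the straight-line distances listed above. Since the figure did not survive extraction, the weighted edges below are a hypothetical stand-in, chosen so that h stays admissible:

```python
import heapq

# A* sketch: the frontier is ordered by f(n) = g(n) + h(n). The h values
# are the straight-line distances listed above; the weighted edges are a
# hypothetical stand-in for the lost figure.
h = {"S": 12, "B": 4, "E": 7, "D": 6, "C": 3, "G": 0}
graph = {"S": {"B": 9, "D": 7}, "B": {"C": 2, "E": 4},
         "D": {"E": 2}, "C": {"G": 4}, "E": {"G": 7}}

def a_star(graph, h, start, goal):
    frontier = [(h[start], 0, start, [start])]     # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path               # first goal popped is optimal
        for succ, step in graph.get(node, {}).items():
            g2 = g + step
            if g2 < best_g.get(succ, float("inf")):
                best_g[succ] = g2
                heapq.heappush(frontier,
                               (g2 + h[succ], succ != succ or g2, succ,
                                path + [succ])[:1] +
                               (g2, succ, path + [succ]))
    return None
```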
Evaluating A* Search:
Completeness:
– Yes, A* search always gives us a solution, if one exists.

Optimality:
– A* search gives an optimal solution when the heuristic function is admissible.

Time complexity:
– Exponential in the path length, i.e. O(bᵈ), where d is the length of the path from the start node to the goal node.

Space complexity:
– It keeps all generated nodes in memory. Hence space, not time, is the major problem.



Hill Climbing Search:
Hill climbing can be used to solve problems that have many solutions, some of which
are better than others.

It starts with a random (potentially poor) solution, and iteratively makes small
changes to the solution, each time improving it a little.

When the algorithm cannot see any improvement anymore, it terminates.

Ideally, at that point the current solution is close to optimal, but it is not guaranteed
that hill climbing will ever come close to the optimal solution.

Hill climbing is depth-first search with a heuristic measurement that orders choices
as nodes are expanded.

It always selects the most promising successor of the node last expanded.



For instance, consider that the most promising successor of a node is the one that has
the shortest straight-line distance to the goal node G.

In the figure below, the straight-line distance between each city and the goal G is indicated in square brackets, i.e. the heuristic.

Figure: the map graph, with the straight-line distance between each node and G shown in square brackets: S [10], A [8.5], B [6], C [3], D [8], E [6], F [3].


The hill climbing search from S to G proceeds as follows: from S, the successors A [8.5] and D [8] are compared and D is selected; from D, A [8.5] and E [6] are compared and E is selected; from E, B [6] and F [3] are compared and F is selected, which leads to G.
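The hill climbing trace above can be sketched as follows; the adjacency list is partly reconstructed from the trace and is an assumption:

```python
# Hill climbing sketch: from the node last expanded, always move to its
# most promising successor (smallest straight-line distance to G).
# The h values come from the figure; the adjacency is an assumption
# reconstructed from the trace S -> D -> E -> F -> G.
h = {"S": 10, "A": 8.5, "B": 6, "C": 3, "D": 8, "E": 6, "F": 3, "G": 0}
graph = {"S": ["A", "D"], "D": ["A", "E"], "E": ["B", "F"], "F": ["G"],
         "A": ["B"], "B": ["C"], "C": ["G"]}

def hill_climb(graph, h, start, goal):
    path, node = [start], start
    while node != goal:
        succs = graph.get(node, [])
        if not succs:
            return None                  # dead end: no successors at all
        best = min(succs, key=lambda s: h[s])
        if h[best] >= h[node]:
            return None                  # stuck: no better neighbor
        path.append(best)
        node = best
    return path
```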


The difference between the hill climbing search method and the best first search
method is the following one:
•the best first search method selects for expansion the most promising leaf node of the
current search tree;
•the hill climbing search method selects for expansion the most promising successor
of the node last expanded.

Problems with Hill Climbing

-Local maxima: when we reach a position where there are no better neighbors, there is no guarantee that we have found the best solution. A ridge is a sequence of local maxima.

-Plateau: another type of problem we may find with hill climbing searches. This is an area where the search space is flat, so that all neighbors return the same evaluation.

Simulated Annealing:



It is motivated by the physical annealing process in which material is heated and slowly
cooled into a uniform structure.

Compared to hill climbing, the main difference is that SA allows downward steps.

Simulated annealing also differs from hill climbing in that a move is selected at random, and the algorithm then decides whether to accept it.

If the move is better than the current position then simulated annealing will always take it.

If the move is worse (i.e. of lesser quality) then it will be accepted based on some probability.

The probability of accepting a worse state is given by the condition

P = exp(-c / t) > r

Where:
c = the change in the evaluation function
t = the current temperature
r = a random number between 0 and 1
The probability of accepting a worse state is a function of both the current temperature and the change in the cost function.

The most common way of implementing an SA algorithm is to implement hill climbing with an accept function and modify it for SA.

By analogy with this physical process, each step of the SA algorithm replaces the
current solution by a random "nearby" solution, chosen with a probability that depends
on the difference between the corresponding function values and on a global parameter
T (called the temperature), that is gradually decreased during the process.

The dependency is such that the current solution changes almost randomly when T is
large, but increasingly "downhill" as T goes to zero.

Prepared By : Er. Manoj Kr Basnet 62


The allowance for "uphill" moves saves the method from becoming stuck at local optima, which are the bane of greedier methods.
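A minimal simulated annealing sketch for minimizing a function, following the accept rule P = exp(-c/t) > r above. The parameter names, cooling schedule and the example objective are illustrative assumptions:

```python
import math
import random

# Simulated annealing sketch for minimizing f: a random nearby move is
# always accepted if better, and accepted with probability exp(-c/t)
# otherwise, while the temperature t is gradually decreased.
def simulated_annealing(f, x0, neighbor, t0=1.0, cooling=0.99,
                        steps=10000, rng=random.Random(0)):
    x, t = x0, t0
    for _ in range(steps):
        x2 = neighbor(x, rng)
        c = f(x2) - f(x)                 # change in the evaluation function
        if c <= 0 or math.exp(-c / t) > rng.random():
            x = x2                       # accept better moves always, worse
        t *= cooling                     #   ones with probability exp(-c/t)
        if t < 1e-9:
            break                        # effectively frozen
    return x

# Illustrative use: minimize (x - 3)^2 with small random steps.
best = simulated_annealing(lambda v: (v - 3) ** 2, 0.0,
                           lambda v, r: v + r.uniform(-0.5, 0.5))
```

With a seeded random generator the run is reproducible; as t shrinks, the loop behaves more and more like plain hill climbing.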
