AI Book


Unit 1

Introduction to Artificial Intelligence

What is Artificial Intelligence?

According to the father of Artificial Intelligence, John McCarthy, it is “The science and
engineering of making intelligent machines, especially intelligent computer programs”.

Artificial Intelligence is a way of making a computer, a computer-controlled robot, or a piece of software think intelligently, in a manner similar to the way intelligent humans think.

AI is accomplished by studying how the human brain thinks, and how humans learn, decide, and work while trying to solve a problem, and then using the outcomes of this study as a basis for developing intelligent software and systems.

Agents and Environments

An AI system is composed of an agent and its environment. The agents act in their
environment. The environment may contain other agents.

What are Agent and Environment?

An agent is anything that can perceive its environment through sensors and act upon that environment through effectors.

 A human agent has sensory organs such as eyes, ears, nose, tongue, and skin analogous to sensors, and other organs such as hands, legs, and mouth as effectors.
 A robotic agent has cameras and infrared range finders for sensors, and various motors and actuators for effectors.
 A software agent has encoded bit strings as its programs and actions.
Agent Terminology

 Performance Measure of Agent − The criterion that determines how successful an agent is.
 Behavior of Agent − The action that an agent performs after any given sequence of percepts.
 Percept − The agent's perceptual input at a given instant.
 Percept Sequence − The history of everything the agent has perceived to date.
 Agent Function − A map from the percept sequence to an action.

Rationality

Rationality is being reasonable, sensible, and having good sense of judgment.

Rationality is concerned with expected actions and results depending upon what the agent has
perceived. Performing actions with the aim of obtaining useful information is an important
part of rationality.

What is Ideal Rational Agent?

An ideal rational agent is one that is capable of taking the expected actions to maximize its performance measure, on the basis of

 Its percept sequence
 Its built-in knowledge base

Rationality of an agent depends on the following

 The performance measure, which determines the degree of success.
 The agent's percept sequence to date.
 The agent's prior knowledge about the environment.
 The actions that the agent can carry out.

A rational agent always performs the right action, where the right action is the one that causes the agent to be most successful for the given percept sequence. The problem an agent solves is characterized by its Performance Measure, Environment, Actuators, and Sensors (PEAS).

The Structure of Intelligent Agents

Agent’s structure can be viewed as −

 Agent = Architecture + Agent Program
 Architecture = the machinery that the agent executes on.
 Agent Program = an implementation of an agent function.

Simple Reflex Agents


 They choose actions based only on the current percept.
 They are rational only if a correct decision can be made on the basis of the current percept alone.
 Their environment is completely observable.

Condition-Action Rule − It is a rule that maps a state (condition) to an action.
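The condition-action rule can be sketched in a few lines of Python. The two-location vacuum-cleaner world below is a hypothetical illustration; the locations 'A' and 'B' and the percept format are assumptions, not part of the text above.

```python
def simple_reflex_agent(percept):
    """A simple reflex agent for a hypothetical two-location vacuum world.
    The action depends only on the current percept, not on percept history."""
    location, status = percept
    # condition-action rules: each maps a perceived condition to an action
    if status == 'Dirty':
        return 'Suck'
    elif location == 'A':
        return 'Right'
    elif location == 'B':
        return 'Left'

print(simple_reflex_agent(('A', 'Dirty')))   # Suck
print(simple_reflex_agent(('A', 'Clean')))   # Right
```

Because the rules look only at the current percept, this agent is rational only when the environment is fully observable, as the bullets above note.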

Model Based Reflex Agents

They use a model of the world to choose their actions. They maintain an internal state.

Model − knowledge about "how things happen in the world".

Internal State − It is a representation of the unobserved aspects of the current state, depending on the percept history.

Updating the state requires information about −

 How the world evolves.
 How the agent’s actions affect the world.
Goal Based Agents

They choose their actions in order to achieve goals. The goal-based approach is more flexible than the reflex approach, since the knowledge supporting a decision is explicitly modeled, thereby allowing for modifications.

Goal − It is the description of desirable situations.

Utility Based Agents

They choose actions based on a preference (utility) for each state.

Goals are inadequate when −

 There are conflicting goals, only a few of which can be achieved.
 Goals have some uncertainty of being achieved, and you need to weigh the likelihood of success against the importance of a goal.

The Nature of Environments

Some programs operate in the entirely artificial environment confined to keyboard input,
database, computer file systems and character output on a screen.
In contrast, some software agents (software robots or softbots) exist in rich, unlimited softbot domains. The simulator has a very detailed and complex environment, and the software agent needs to choose from a long array of actions in real time. A softbot designed to scan a customer's online preferences and show the customer interesting items works in a real as well as an artificial environment.

The most famous artificial environment is the Turing Test environment, in which a real agent and an artificial agent are tested on equal ground. This is a very challenging environment, as it is highly difficult for a software agent to perform as well as a human.

Turing Test

The success of the intelligent behavior of a system can be measured with the Turing Test.

Two persons and a machine to be evaluated participate in the test. Of the two persons, one plays the role of the tester. Each of them sits in a different room. The tester is unaware of who is the machine and who is the human. The tester poses questions by typing and sending them to both intelligences, and receives typed responses.

This test aims at fooling the tester. If the tester fails to distinguish the machine's response from the human's response, then the machine is said to be intelligent.

Properties of Environment

The environment has multifold properties −

Discrete / Continuous − If there are a limited number of distinct, clearly defined, states of
the environment, the environment is discrete (For example, chess); otherwise it is continuous
(For example, driving).

Observable / Partially Observable − If it is possible to determine the complete state of the environment at each time point from the percepts, it is observable; otherwise it is only partially observable.

Static / Dynamic − If the environment does not change while an agent is acting, then it is
static; otherwise it is dynamic.

Single agent / Multiple agents − The environment may contain other agents which may be
of the same or different kind as that of the agent.

Accessible / Inaccessible − If the agent’s sensory apparatus can have access to the complete
state of the environment, then the environment is accessible to that agent.

Deterministic / Non-deterministic − If the next state of the environment is completely determined by the current state and the actions of the agent, then the environment is deterministic; otherwise it is non-deterministic.

Episodic / Non-episodic − In an episodic environment, each episode consists of the agent perceiving and then acting. The quality of its action depends just on the episode itself. Subsequent episodes do not depend on the actions in previous episodes. Episodic environments are much simpler because the agent does not need to think ahead.

Search Algorithms

Problem-solving agents:

In Artificial Intelligence, search techniques are universal problem-solving methods. Rational agents or problem-solving agents in AI mostly use these search strategies or algorithms to solve a specific problem and provide the best result. Problem-solving agents are goal-based agents that use atomic representations. In this topic, we will learn about various problem-solving search algorithms.

Search Algorithm Terminologies

 Search: Searching is a step-by-step procedure to solve a search problem in a given search space. A search problem can have three main factors:
(i) Search Space: The set of possible solutions a system may have.
(ii) Start State: The state from which the agent begins the search.
(iii) Goal Test: A function which observes the current state and returns whether the goal state has been achieved or not.
 Search tree: A tree representation of the search problem. The root of the search tree is the root node, which corresponds to the initial state.
 Actions: A description of all the actions available to the agent.
 Transition model: A description of what each action does.
 Path Cost: A function which assigns a numeric cost to each path.
 Solution: An action sequence which leads from the start node to the goal node.
 Optimal Solution: A solution with the lowest cost among all solutions.

Properties of Search Algorithms:

Following are the four essential properties of search algorithms to compare the efficiency of
these algorithms:

Completeness: A search algorithm is said to be complete if it guarantees to return a solution whenever at least one solution exists for any random input.

Optimality: If the solution found by an algorithm is guaranteed to be the best solution (lowest path cost) among all solutions, then it is said to be an optimal solution.

Time Complexity: Time complexity is a measure of the time an algorithm takes to complete its task.

Space Complexity: It is the maximum storage space required at any point during the search, as a function of the complexity of the problem.

Types of Search Algorithm

Based on the search problems we can classify the search algorithms into uninformed (Blind
search) search and informed search (Heuristic search) algorithms.

Uninformed/Blind Search:

Uninformed search does not use any domain knowledge, such as the closeness or location of the goal. It operates in a brute-force way, as it only includes information about how to traverse the tree and how to identify leaf and goal nodes. Uninformed search traverses the search tree without any information about the search space, such as initial-state operators or a test for the goal, so it is also called blind search. It examines each node of the tree until it reaches the goal node.

It can be divided into five main types:

 Breadth-first search
 Uniform cost search
 Depth-first search
 Iterative deepening depth-first search
 Bidirectional Search

Informed Search

Informed search algorithms use domain knowledge. In an informed search, problem information is available which can guide the search. Informed search strategies can find a solution more efficiently than uninformed search strategies. Informed search is also called heuristic search.

A heuristic is a technique which is not always guaranteed to find the best solution, but is guaranteed to find a good solution in reasonable time.

Informed search can solve much more complex problems which could not be solved in another way.

An example of a problem solved with informed search algorithms is the traveling salesman problem.

1. Greedy Search
2. A* Search

Uninformed Search Algorithms

Uninformed search is a class of general-purpose search algorithms which operate in a brute-force way. Uninformed search algorithms have no additional information about the state or search space other than how to traverse the tree, so they are also called blind search.

Following are the various types of uninformed search algorithms:

1. Breadth-first Search
2. Depth-first Search
3. Depth-limited Search
4. Iterative deepening depth-first search
5. Uniform cost search
6. Bidirectional Search

1. Breadth-first Search

 Breadth-first search is the most common search strategy for traversing a tree or graph. This algorithm searches breadthwise in a tree or graph, so it is called breadth-first search.
 The BFS algorithm starts searching from the root node of the tree and expands all successor nodes at the current level before moving to nodes of the next level.
 The breadth-first search algorithm is an example of a general-graph search algorithm.
 Breadth-first search is implemented using a FIFO queue data structure.

Advantages:

 BFS will provide a solution if any solution exists.
 If there is more than one solution for a given problem, then BFS will provide the minimal solution, i.e., the one requiring the least number of steps.

Disadvantages:

 It requires lots of memory since each level of the tree must be saved into memory
to expand the next level.
 BFS needs lots of time if the solution is far away from the root node.

Example:

In the below tree structure, we have shown the traversal of the tree using the BFS algorithm from the root node S to the goal node K. The BFS algorithm traverses in layers, so it will follow the path shown by the dotted arrow, and the traversed path will be:

S---> A--->B---->C--->D---->G--->H--->E---->F---->I---->K
Time Complexity: The time complexity of the BFS algorithm can be obtained by counting the number of nodes traversed in BFS until the shallowest node, where d is the depth of the shallowest solution and b is the branching factor:

T(b) = 1 + b + b^2 + ... + b^d = O(b^d)

Space Complexity: The space complexity of the BFS algorithm is given by the memory size of the frontier, which is O(b^d).

Completeness: BFS is complete, which means that if the shallowest goal node is at some finite depth, then BFS will find a solution.

Optimality: BFS is optimal if the path cost is a non-decreasing function of the depth of the node.
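As a sketch of the FIFO-queue behavior described above, here is a minimal BFS in Python. The example graph is hypothetical, not the figure referenced in the text.

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: expand nodes level by level using a FIFO queue.
    Returns the path from start to goal, or None if no path exists."""
    frontier = deque([[start]])          # queue of paths, FIFO
    explored = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in explored:
                explored.add(neighbor)
                frontier.append(path + [neighbor])
    return None

# hypothetical example graph (adjacency lists), not the figure from the text
graph = {
    'S': ['A', 'B'],
    'A': ['C', 'D'],
    'B': ['E'],
    'E': ['K'],
}
print(bfs(graph, 'S', 'K'))   # ['S', 'B', 'E', 'K']
```

Because the queue is FIFO, all nodes at depth d are expanded before any node at depth d+1, which is why the first solution found is also the shallowest.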

2. Depth-first Search

 Depth-first search is a recursive algorithm for traversing a tree or graph data structure.
 It is called depth-first search because it starts from the root node and follows each path to its greatest-depth node before moving to the next path.
 DFS uses a stack data structure for its implementation.
 The process of the DFS algorithm is similar to that of the BFS algorithm.

Advantage:

 DFS requires very little memory, as it only needs to store the stack of nodes on the path from the root node to the current node.
 It takes less time to reach the goal node than the BFS algorithm (if it traverses along the right path).

Disadvantage:

 There is the possibility that many states keep re-occurring, and there is no guarantee of finding the solution.
 The DFS algorithm searches deep down and sometimes it may enter an infinite loop.

Example:

In the below search tree, we have shown the flow of depth-first search, and it will follow the
order as:

Root node--->Left node ----> right node.


It will start searching from root node S and traverse A, then B, then D and E. After traversing E, it will backtrack the tree, as E has no other successor and the goal node has not yet been found. After backtracking, it will traverse node C and then G, where it will terminate, as it has found the goal node.

Completeness: DFS search algorithm is complete within finite state space as it will expand
every node within a limited search tree.

Time Complexity: The time complexity of DFS is equivalent to the number of nodes traversed by the algorithm. It is given by:

T(b) = 1 + b + b^2 + ... + b^m = O(b^m)

where m is the maximum depth of any node, which can be much larger than d (the depth of the shallowest solution).

Space Complexity: The DFS algorithm needs to store only a single path from the root node; hence the space complexity of DFS is equivalent to the size of the fringe set, which is O(bm).

Optimal: The DFS algorithm is non-optimal, as it may take a large number of steps or incur a high cost to reach the goal node.
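The stack-based (LIFO) behavior described above can be sketched as follows; the graph is a small hypothetical example, not the figure from the text.

```python
def dfs(graph, start, goal):
    """Depth-first search with an explicit stack (LIFO): follow one path
    to its greatest depth before backtracking to try the next path."""
    stack = [[start]]                    # stack of paths, LIFO
    visited = set()
    while stack:
        path = stack.pop()
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                stack.append(path + [neighbor])
    return None

# hypothetical graph (adjacency lists), not the figure from the text
graph = {'S': ['A', 'C'], 'A': ['B'], 'B': ['D', 'E'], 'C': ['G']}
print(dfs(graph, 'S', 'G'))   # ['S', 'C', 'G']
```

Note that the path returned depends on the order in which neighbors are pushed; unlike BFS, there is no guarantee it is the shortest one.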

3. Depth-Limited Search Algorithm:

A depth-limited search algorithm is similar to depth-first search with a predetermined depth limit. Depth-limited search solves the infinite-path drawback of depth-first search. In this algorithm, a node at the depth limit is treated as if it has no further successor nodes.

Depth-limited search can be terminated with two Conditions of failure:

o Standard failure value: It indicates that the problem does not have any solution.
o Cutoff failure value: It indicates that there is no solution for the problem within the given depth limit.
Advantages:

Depth-limited search is Memory efficient.

Disadvantages:

o Depth-limited search also has the disadvantage of incompleteness.
o It may not be optimal if the problem has more than one solution.

Completeness: The DLS algorithm is complete if the solution is within the depth limit.

Time Complexity: The time complexity of the DLS algorithm is O(b^ℓ).

Space Complexity: The space complexity of the DLS algorithm is O(b×ℓ).

Optimal: Depth-limited search can be viewed as a special case of DFS, and it is also not optimal, even if ℓ > d.
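The two failure outcomes described above (standard failure vs. cutoff) can be sketched as a recursive DFS that stops descending at the depth limit; the graph below is hypothetical.

```python
def dls(graph, node, goal, limit, path=None):
    """Depth-limited search: DFS that treats nodes at the depth limit as
    having no successors. Returns a path, 'cutoff' (no solution within the
    limit), or None (standard failure: no solution at all)."""
    if path is None:
        path = [node]
    if node == goal:
        return path
    if limit == 0:
        return 'cutoff'                  # depth limit reached
    cutoff_occurred = False
    for neighbor in graph.get(node, []):
        result = dls(graph, neighbor, goal, limit - 1, path + [neighbor])
        if result == 'cutoff':
            cutoff_occurred = True
        elif result is not None:
            return result
    return 'cutoff' if cutoff_occurred else None

# hypothetical chain graph for illustration
graph = {'S': ['A'], 'A': ['B'], 'B': ['G']}
print(dls(graph, 'S', 'G', 3))   # ['S', 'A', 'B', 'G']
print(dls(graph, 'S', 'G', 2))   # cutoff
```

With limit 2 the goal at depth 3 is unreachable, so the search reports the cutoff failure value rather than standard failure.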

4. Uniform-cost Search Algorithm:

Uniform-cost search is a searching algorithm used for traversing a weighted tree or graph.
This algorithm comes into play when a different cost is available for each edge. The primary
goal of the uniform-cost search is to find a path to the goal node which has the lowest
cumulative cost.

Uniform-cost search expands nodes according to their path costs from the root node. It can be used to solve any graph/tree where the optimal cost is in demand. The uniform-cost search algorithm is implemented using a priority queue, which gives maximum priority to the lowest cumulative cost. Uniform-cost search is equivalent to the BFS algorithm when the path cost of all edges is the same.

Advantages:

 Uniform cost search is optimal because at every state the path with the least cost is
chosen.

Disadvantages:

 It does not care about the number of steps involved in the search and is only concerned with path cost. As a result, this algorithm may get stuck in an infinite loop.

Completeness:

Uniform-cost search is complete: if there is a solution, UCS will find it.

Time Complexity:

Let C* be the cost of the optimal solution, and ε the cost of each step toward the goal node. Then the number of steps is C*/ε + 1. Here we have added 1 because we start from state 0 and end at C*/ε.

Hence, the worst-case time complexity of uniform-cost search is O(b^(1 + [C*/ε])).

Space Complexity:
The same logic applies to space complexity, so the worst-case space complexity of uniform-cost search is O(b^(1 + [C*/ε])).

Optimal:

Uniform-cost search is always optimal, as it only selects the path with the lowest path cost.
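A minimal sketch of uniform-cost search with a priority queue (Python's heapq, a min-heap); the weighted graph below is made up for illustration.

```python
import heapq

def ucs(graph, start, goal):
    """Uniform-cost search: always expand the frontier node with the lowest
    cumulative path cost, using a priority queue (min-heap).
    graph maps node -> list of (neighbor, edge_cost) pairs."""
    frontier = [(0, start, [start])]     # (path_cost, node, path)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for neighbor, step_cost in graph.get(node, []):
            if neighbor not in explored:
                heapq.heappush(frontier,
                               (cost + step_cost, neighbor, path + [neighbor]))
    return None

# hypothetical weighted graph: the cheap route S-A-B-G (cost 5) beats S-A-G
graph = {
    'S': [('A', 1), ('B', 5)],
    'A': [('B', 2), ('G', 9)],
    'B': [('G', 2)],
}
print(ucs(graph, 'S', 'G'))   # (5, ['S', 'A', 'B', 'G'])
```

The goal test is applied when a node is popped, not when it is generated; this is what guarantees the returned path cost is optimal.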

5. Iterative deepening depth-first Search

The iterative deepening algorithm is a combination of the DFS and BFS algorithms. This search algorithm finds the best depth limit by gradually increasing the limit until a goal is found.

This algorithm performs depth-first search up to a certain "depth limit", and it keeps
increasing the depth limit after each iteration until the goal node is found.

This Search algorithm combines the benefits of Breadth-first search's fast search and depth-
first search's memory efficiency.

The iterative deepening search algorithm is a useful uninformed search when the search space is large and the depth of the goal node is unknown.

Advantages:

 It combines the benefits of BFS and DFS search algorithm in terms of fast search
and memory efficiency.

Disadvantages:

 The main drawback of IDDFS is that it repeats all the work of the previous phase.

Example

The following tree structure shows the iterative deepening depth-first search. The IDDFS algorithm performs successive iterations until it finds the goal node. The iterations performed by the algorithm are given as:
1'st Iteration-----> A
2'nd Iteration----> A, B, C
3'rd Iteration------>A, B, D, E, C, F, G
4'th Iteration------>A, B, D, H, I, E, C, F, K, G
In the fourth iteration, the algorithm will find the goal node.

Completeness:

This algorithm is complete if the branching factor is finite.

Time Complexity:

Let's suppose b is the branching factor and d is the depth; then the worst-case time complexity is O(b^d).

Space Complexity:

The space complexity of IDDFS will be O(bd).

Optimal:

The IDDFS algorithm is optimal if the path cost is a non-decreasing function of the depth of the node.
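The repeated depth-limited passes can be sketched as below. The graph is chosen to be consistent with the iteration listing above (goal K found in the fourth iteration, at depth 3), but its exact shape is an assumption, since the figure is not shown.

```python
def iddfs(graph, start, goal, max_depth=20):
    """Iterative deepening: run depth-limited DFS with limit 0, 1, 2, ...
    until the goal is found, combining DFS memory use with BFS-like
    completeness."""
    def dls(node, goal, limit, path):
        if node == goal:
            return path
        if limit == 0:
            return None
        for neighbor in graph.get(node, []):
            result = dls(neighbor, goal, limit - 1, path + [neighbor])
            if result is not None:
                return result
        return None

    for depth in range(max_depth + 1):
        result = dls(start, goal, depth, [start])
        if result is not None:
            return result
    return None

# hypothetical tree consistent with the iterations listed above
graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G'],
         'D': ['H', 'I'], 'F': ['K']}
print(iddfs(graph, 'A', 'K'))   # ['A', 'C', 'F', 'K']
```

The shallow levels are re-expanded on every pass, which is the repeated-work drawback mentioned above; since the tree grows exponentially, that overhead is a small constant factor.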

6. Bidirectional Search Algorithm:

The bidirectional search algorithm runs two simultaneous searches, one from the initial state, called forward search, and the other from the goal node, called backward search, to find the goal node. Bidirectional search replaces one single search graph with two small subgraphs, in which one starts the search from the initial vertex and the other starts from the goal vertex. The search stops when these two graphs intersect each other.

Bidirectional search can use search techniques such as BFS, DFS, DLS, etc.

Advantages:

 Bidirectional search is fast.
 Bidirectional search requires less memory.

Disadvantages:

 Implementing the bidirectional search tree is difficult.
 In bidirectional search, the goal state must be known in advance.

Example:

In the below search tree, bidirectional search algorithm is applied. This algorithm divides one
graph/tree into two sub-graphs. It starts traversing from node 1 in the forward direction and
starts from goal node 16 in the backward direction.

The algorithm terminates at node 9 where two searches meet.

Completeness: Bidirectional search is complete if we use BFS in both searches.

Time Complexity: The time complexity of bidirectional search using BFS is O(b^(d/2)).

Space Complexity: The space complexity of bidirectional search is O(b^(d/2)).

Optimal: Bidirectional search is Optimal.
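A sketch of two alternating BFS frontiers meeting in the middle, assuming an undirected graph (edges usable in both directions). The small chain graph below is hypothetical; the text's 16-node figure is not shown.

```python
def bidirectional_search(graph, start, goal):
    """Expand one BFS frontier from the start and one from the goal,
    alternating levels; stop when the frontiers intersect.
    Returns the meeting node, or None if the graphs never meet."""
    if start == goal:
        return start
    front, back = {start}, {goal}
    visited_f, visited_b = {start}, {goal}
    while front and back:
        # expand the current forward frontier one level
        next_front = set()
        for node in front:
            for neighbor in graph.get(node, []):
                if neighbor in visited_b:
                    return neighbor      # the two searches intersect here
                if neighbor not in visited_f:
                    visited_f.add(neighbor)
                    next_front.add(neighbor)
        front = next_front
        # swap roles so the backward frontier expands next
        front, back = back, front
        visited_f, visited_b = visited_b, visited_f
    return None

# hypothetical undirected chain 1-2-3-4-5
graph = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
print(bidirectional_search(graph, 1, 5))   # 3
```

Each side only searches to roughly depth d/2, which is where the O(b^(d/2)) bounds above come from; reconstructing the full path would additionally require keeping parent pointers on both sides.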

Unit II

Heuristic search strategies

Heuristic functions

A heuristic is a function which is used in informed search to find the most promising path. It takes the current state of the agent as its input and produces an estimate of how close the agent is to the goal. The heuristic method might not always give the best solution, but it is guaranteed to find a good solution in reasonable time. A heuristic function estimates how close a state is to the goal. It is represented by h(n), and it estimates the cost of an optimal path between a pair of states. The value of the heuristic function is always positive.

Admissibility of the heuristic function is given as:

h(n) <= h*(n)

Here h(n) is the estimated heuristic cost, and h*(n) is the actual cost of the optimal path. Hence the heuristic cost should be less than or equal to the actual cost.

Pure Heuristic Search:


Pure heuristic search is the simplest form of heuristic search algorithm. It expands nodes based on their heuristic value h(n). It maintains two lists, the OPEN and CLOSED lists. In the CLOSED list, it places those nodes which have already been expanded, and in the OPEN list, it places nodes which have not yet been expanded.

On each iteration, the node n with the lowest heuristic value is expanded, all its successors are generated, and n is placed in the CLOSED list. The algorithm continues until a goal state is found.

In the informed search we will discuss two main algorithms which are given below:

o Best-First Search Algorithm (Greedy Search)
o A* Search Algorithm

1.) Best-first Search Algorithm (Greedy Search):
The greedy best-first search algorithm always selects the path which appears best at that moment. It is a combination of the depth-first search and breadth-first search algorithms. It uses the heuristic function to guide the search. Best-first search allows us to take advantage of both algorithms: at each step, we can choose the most promising node. In the best-first search algorithm, we expand the node which is closest to the goal node, where the closeness is estimated by the heuristic function, i.e.

f(n) = h(n)

where h(n) = estimated cost from node n to the goal.

The greedy best-first algorithm is implemented using a priority queue.

Best first search algorithm:

o Step 1: Place the starting node into the OPEN list.
o Step 2: If the OPEN list is empty, stop and return failure.
o Step 3: Remove the node n from the OPEN list which has the lowest value of h(n), and place it in the CLOSED list.
o Step 4: Expand node n, and generate its successors.
o Step 5: Check each successor of node n, and find whether any of them is a goal node. If any successor is a goal node, then return success and terminate the search; else, proceed to Step 6.
o Step 6: For each successor node, the algorithm checks the evaluation function f(n), and then checks whether the node is already in the OPEN or CLOSED list. If the node is in neither list, add it to the OPEN list.
o Step 7: Return to Step 2.

Advantages:

o Best-first search can switch between BFS and DFS, gaining the advantages of both algorithms.
o This algorithm is more efficient than the BFS and DFS algorithms.

Disadvantages:

o It can behave as an unguided depth-first search in the worst-case scenario.
o It can get stuck in a loop, like DFS.
o This algorithm is not optimal.

Example:
Consider the below search problem, which we will traverse using greedy best-first search. At each iteration, each node is expanded using the evaluation function f(n) = h(n), which is given in the below table.

In this search example, we use two lists, the OPEN and CLOSED lists. Following are the iterations for traversing the above example.
Expand the nodes of S and put them in the CLOSED list.

Initialization: Open [A, B], Closed [S]

Iteration 1: Open [A], Closed [S, B]

Iteration 2: Open [E, F, A], Closed [S, B]
           : Open [E, A], Closed [S, B, F]

Iteration 3: Open [I, G, E, A], Closed [S, B, F]
           : Open [I, E, A], Closed [S, B, F, G]

Hence the final solution path will be: S----> B----->F----> G

Time Complexity: The worst-case time complexity of greedy best-first search is O(b^m).

Space Complexity: The worst-case space complexity of greedy best-first search is O(b^m), where m is the maximum depth of the search space.

Complete: Greedy best-first search is incomplete, even if the given state space is finite.

Optimal: The greedy best-first search algorithm is not optimal.
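A sketch of greedy best-first search with OPEN as a priority queue ordered by h(n). The graph edges and h-values below are assumptions chosen to be consistent with the OPEN/CLOSED trace above (final path S ---> B ---> F ---> G); they are not taken from the missing figure.

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Greedy best-first search: always expand the OPEN node with the
    lowest heuristic value h(n), ignoring the path cost so far.
    graph maps node -> neighbors; h maps node -> heuristic estimate."""
    open_list = [(h[start], start, [start])]   # priority queue keyed on h(n)
    closed = set()
    while open_list:
        _, node, path = heapq.heappop(open_list)
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        for neighbor in graph.get(node, []):
            if neighbor not in closed:
                heapq.heappush(open_list,
                               (h[neighbor], neighbor, path + [neighbor]))
    return None

# hypothetical graph and heuristic values matching the trace above
graph = {'S': ['A', 'B'], 'B': ['E', 'F'], 'F': ['I', 'G']}
h = {'S': 13, 'A': 12, 'B': 4, 'E': 8, 'F': 2, 'I': 9, 'G': 0}
print(greedy_best_first(graph, h, 'S', 'G'))   # ['S', 'B', 'F', 'G']
```

Because only h(n) is consulted, the cost already paid along the path plays no role, which is why the result may be suboptimal.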

A* Search Algorithm:
A* search is the most commonly known form of best-first search. It uses the heuristic function h(n) together with the cost to reach node n from the start state, g(n). It combines features of UCS and greedy best-first search, by which it solves problems efficiently. The A* search algorithm finds the shortest path through the search space using the heuristic function. This search algorithm expands a smaller search tree and provides optimal results faster. The A* algorithm is similar to UCS except that it uses g(n) + h(n) instead of g(n).

In the A* search algorithm, we use the search heuristic as well as the cost to reach the node. Hence we can combine both costs as follows; this sum is called the fitness number:

f(n) = g(n) + h(n)

Algorithm of A* search:
Step 1: Place the starting node in the OPEN list.

Step 2: Check whether the OPEN list is empty or not; if the list is empty, then return failure and stop.

Step 3: Select the node from the OPEN list which has the smallest value of the evaluation function (g + h). If node n is the goal node, then return success and stop; otherwise, go to Step 4.

Step 4: Expand node n, generate all of its successors, and put n into the CLOSED list. For each successor n', check whether n' is already in the OPEN or CLOSED list; if not, then compute the evaluation function for n' and place it into the OPEN list.

Step 5: Else, if node n' is already in OPEN or CLOSED, then it should be attached to the back pointer which reflects the lowest g(n') value.

Step 6: Return to Step 2.

Advantages:

o The A* search algorithm performs better than other search algorithms.
o The A* search algorithm is optimal and complete.
o This algorithm can solve very complex problems.

Disadvantages:

o It does not always produce the shortest path, as it is mostly based on heuristics and approximation.
o The A* search algorithm has some complexity issues.
o The main drawback of A* is its memory requirement, as it keeps all generated nodes in memory, so it is not practical for various large-scale problems.

Example:
In this example, we will traverse the given graph using the A* algorithm. The heuristic value of all states is given in the below table, so we will calculate f(n) of each state using the formula f(n) = g(n) + h(n), where g(n) is the cost to reach a node from the start state.
Here we will use the OPEN and CLOSED lists.

Solution:
Initialization: {(S, 5)}

Iteration1: {(S--> A, 4), (S-->G, 10)}

Iteration2: {(S--> A-->C, 4), (S--> A-->B, 7), (S-->G, 10)}

Iteration3: {(S--> A-->C--->G, 6), (S--> A-->C--->D, 11), (S--> A-->B, 7), (S-->G,
10)}

Iteration 4 will give the final result, as S--->A--->C--->G it provides the optimal
path with cost 6.

Points to remember:

o The A* algorithm returns the path which occurs first, and it does not search for all remaining paths.
o The efficiency of the A* algorithm depends on the quality of the heuristic.
o The A* algorithm expands all nodes which satisfy the condition f(n) < C*, where C* is the cost of the optimal solution.

Complete: The A* algorithm is complete as long as:

o The branching factor is finite.
o The cost of every action is fixed.
Optimal: The A* search algorithm is optimal if it satisfies the following two conditions:

o Admissible: The first condition required for optimality is that h(n) should be an admissible heuristic for A* tree search. An admissible heuristic is optimistic in nature.
o Consistency: The second condition, consistency, is required only for A* graph search.

If the heuristic function is admissible, then A* tree search will always find the least
cost path.

Time Complexity: The time complexity of the A* search algorithm depends on the heuristic function, and the number of nodes expanded is exponential in the depth of the solution d. So the time complexity is O(b^d), where b is the branching factor.

Space Complexity: The space complexity of the A* search algorithm is O(b^d).
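A sketch of A* ordering the OPEN list by f(n) = g(n) + h(n). The edge costs and heuristic values below are assumptions reconstructed to be consistent with the f-values in the iterations above (optimal path S ---> A ---> C ---> G with cost 6); the actual figure is not shown in the text.

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search: expand the node with the lowest f(n) = g(n) + h(n), where
    g(n) is the cost from the start and h(n) the heuristic estimate.
    graph maps node -> list of (neighbor, edge_cost); h maps node -> estimate."""
    open_list = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return g, path
        for neighbor, cost in graph.get(node, []):
            g2 = g + cost
            # keep only the cheapest known route to each node (back pointer)
            if g2 < best_g.get(neighbor, float('inf')):
                best_g[neighbor] = g2
                heapq.heappush(open_list,
                               (g2 + h[neighbor], g2, neighbor, path + [neighbor]))
    return None

# hypothetical costs/heuristics consistent with the iterations above
graph = {'S': [('A', 1), ('G', 10)],
         'A': [('C', 1), ('B', 2)],
         'C': [('G', 4), ('D', 6)]}
h = {'S': 5, 'A': 3, 'B': 4, 'C': 2, 'D': 3, 'G': 0}
print(a_star(graph, h, 'S', 'G'))   # (6, ['S', 'A', 'C', 'G'])
```

With h = 0 everywhere this degenerates to uniform-cost search, matching the remark above that A* is UCS with g(n) + h(n) in place of g(n).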

Local Search and Optimization problems

Local search algorithms in artificial intelligence are a family of optimization algorithms used to find the best possible solution to a problem by iteratively making small changes to an initial solution.

These algorithms are used to solve optimization problems in which an exact solution is either
impossible to find or computationally expensive.

Hill climbing, simulated annealing, tabu search, and genetic algorithms are a few
examples of different kinds of local search algorithms. Each of these algorithms operates a
little bit differently, but they all follow the same fundamental procedure of iteratively creating
new solutions and comparing them to the existing solution to determine whether they are
superior.

The local search algorithm in artificial intelligence is a crucial tool in the field of artificial
intelligence and is frequently employed to address a range of optimization issues.

Applications for local search algorithms include scheduling, routing, and resource
allocation. They are particularly helpful for issues where the search space is very large and
can be used to solve both discrete and continuous optimization problems.

Local Search
The nodes are expanded systematically by informed and uninformed searches in different
ways:

 storing various routes in memory, and
 choosing the most appropriate route.

Hence, a solution state is needed to reach the goal node. Beyond these "classical search algorithms," however, there are some "local search algorithms" that only consider the solution state required to reach the target node and disregard path cost.

In contrast to multiple paths, a local search algorithm completes its task by traversing a single
current node and generally following that node's neighbors.

To solve challenging optimization issues, local search algorithms are frequently combined
with other optimization methods like constraint satisfaction or linear programming. Since
they can quickly converge on a solution that is close to the optimal solution, even if they do
not find the exact optimal solution, they are especially helpful for problems where the search
space is very large.

Local search algorithms are a type of optimization algorithm that iteratively improves the
solution to a problem by making small, local changes to it. Here are the general steps of a
local search algorithm:

 Initialization:
The algorithm starts with an initial solution to the problem. This solution can be
generated randomly or using a heuristic.
 Evaluation:
The quality of the initial solution is evaluated using an objective function. The
objective function measures how good the solution is, based on the problem
constraints and requirements.
 Neighborhood search:
The algorithm generates neighboring solutions by making small modifications to the
current solution. These modifications can be random or guided by heuristics.
 Selection:
The neighboring solutions are evaluated using the objective function, and the best
solution is selected as the new current solution.
 Termination:
The algorithm terminates when a stopping criterion is met. This criterion can be a
maximum number of iterations, a threshold value for the objective function, or a time
limit.
 Solution:
The final solution is the best solution found during the search process.
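The six steps above can be sketched as a single generic loop. The following is a minimal
Python illustration (the function and parameter names here are our own, not from any
library), using a steepest-improvement rule on a toy one-dimensional objective:

```python
def local_search(initial, objective, neighbors, max_iters=1000):
    """Generic local search sketch: repeatedly move to the best
    neighboring solution until no neighbor improves the objective."""
    current = initial                       # Initialization
    current_score = objective(current)      # Evaluation
    for _ in range(max_iters):              # Termination: iteration limit
        candidates = neighbors(current)     # Neighborhood search
        best = max(candidates, key=objective, default=None)
        if best is None or objective(best) <= current_score:
            break                           # no improving neighbor: stop
        current, current_score = best, objective(best)  # Selection
    return current                          # Solution

# Toy usage: maximize f(x) = -(x - 3)^2 over the integers,
# moving one step left or right at a time.
f = lambda x: -(x - 3) ** 2
result = local_search(0, f, lambda x: [x - 1, x + 1])
print(result)  # converges to 3, the maximizer of f
```

Note that the termination test here combines two stopping criteria from the steps above:
an iteration limit and the absence of any improving neighbor (a local optimum).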

Hill Climbing

Hill climbing is a local search algorithm applied to optimization problems in artificial
intelligence. It is a heuristic search algorithm that starts with an initial solution and
iteratively enhances it by making small adjustments, one at a time, choosing the
adjustment that improves the solution the most.

The term "hill climbing" refers to the analogous activity of ascending a hill to reach its
summit (the optimal solution). The algorithm operates by making incremental progress (or
"climbs") towards increasing values of the objective function up until it hits a local maximum
beyond which no additional gains are possible.

The basic steps of the hill climbing algorithm are as follows:


1. Start with an initial solution.
2. Evaluate the objective function to determine the quality of the solution.
3. Generate a set of candidate solutions by making small modifications to the current
solution.
4. Evaluate the objective function for each candidate solution.
5. Select the candidate solution that improves the objective function the most.
6. Repeat steps 3-5 until no further improvement can be made.
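As a concrete illustration, the steps above can be applied to the classic n-queens puzzle,
where the objective is to minimize the number of pairs of queens attacking each other.
This is only a sketch; the helper names (`conflicts`, `hill_climb`) are our own:

```python
import random

def conflicts(state):
    """Count pairs of queens attacking each other; state[i] is the row
    of the queen in column i."""
    n = len(state)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if state[i] == state[j] or abs(state[i] - state[j]) == j - i)

def hill_climb(n=8):
    """Steepest-ascent hill climbing for n-queens: repeatedly take the
    single-queen move that removes the most conflicts."""
    state = [random.randrange(n) for _ in range(n)]
    while True:
        best, best_cost = state, conflicts(state)
        for col in range(n):                 # candidate neighborhood:
            for row in range(n):             # every single-queen move
                if row == state[col]:
                    continue
                neighbor = state[:col] + [row] + state[col + 1:]
                cost = conflicts(neighbor)
                if cost < best_cost:
                    best, best_cost = neighbor, cost
        if best == state:                    # no improving move: stop
            return state, best_cost
        state = best

random.seed(0)
solution, cost = hill_climb()
print(cost)  # 0 means a full solution; a positive value means a local optimum
```

Note the final cost is not guaranteed to be zero: as discussed next, hill climbing can
halt at a local optimum where every single move makes things worse.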

The simplicity and efficiency of hill climbing, which does not require creating and
maintaining a search tree, are among its main benefits. Its main drawback, however, is that
it can get stuck in local optima, where no improvement is possible without first making a
non-improving move. Numerous modifications of the basic hill climbing algorithm, including
simulated annealing and tabu search, have been proposed to address this issue.

Local Beam Search

Local beam search is a heuristic search algorithm applied to optimization problems in
artificial intelligence. It is a modification of the standard hill climbing algorithm in
which the current state is a set (or "beam") of k solutions rather than a single
solution.

The algorithm creates a set of candidate solutions from the current states for each iteration,
evaluates those solutions, and chooses the k best ones to become the new current states.

The basic steps of the local beam search algorithm are as follows:

1. Start with k randomly generated solutions.
2. Evaluate the objective function for each solution.
3. Generate a set of candidate solutions by making small modifications to the current
states.
4. Evaluate the objective function for each candidate solution.
5. Select the k best candidate solutions to become the new current states.
6. Repeat steps 3-5 until a stopping criterion is met.
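The steps above can be sketched as follows in Python. This is a minimal illustration with
our own function names (not a library API); it keeps the k best states drawn from the
neighbors of all current states each round:

```python
import random

def beam_search(k, initial_fn, objective, neighbors, iters=100):
    """Local beam search sketch: each round, pool the neighbors of all k
    current states and keep the k best as the new current states."""
    states = [initial_fn() for _ in range(k)]
    for _ in range(iters):
        # Candidate pool: all neighbors plus the current states themselves.
        pool = {s for cur in states for s in neighbors(cur)} | set(states)
        new_states = sorted(pool, key=objective, reverse=True)[:k]
        if set(new_states) == set(states):   # beam stabilized: stop early
            break
        states = new_states
    return max(states, key=objective)

# Toy usage: maximize f(x) = -(x - 7)^2 over the integers with a beam of 3.
random.seed(1)
f = lambda x: -(x - 7) ** 2
best = beam_search(3, lambda: random.randrange(20), f, lambda x: [x - 1, x + 1])
print(best)  # the beam converges on 7, the maximizer of f
```

Keeping the current states in the candidate pool makes the search monotone: the best
state in the beam never gets worse from one round to the next.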

The ability to explore multiple search paths simultaneously gives local beam search an edge
over traditional hill climbing, which can increase the likelihood of discovering the global
optimum. Even so, it can still find itself in local optima, especially if the search space is big
or complicated.

Numerous modifications of the basic local beam search algorithm, including stochastic beam
search and beam search with restarts, have been put forth to address this issue. These
variants introduce random or periodic restarts into the search process, which can help the
algorithm escape local optima and explore new regions of the search space.

Simulated Annealing

Simulated annealing is a heuristic search algorithm applied to optimization problems in
artificial intelligence. By occasionally accepting moves that do not improve the solution,
this variation of the hill climbing algorithm can avoid getting stuck in local optima.

Starting with an initial solution, the algorithm iteratively generates new solutions by making
minor adjustments to the original solution. The algorithm assesses the objective function of
the new solution after each iteration and determines whether to accept or reject it using a
probability function that depends on the difference between the values of the objective
functions of the old and new solutions as well as a temperature parameter.

The basic steps of the simulated annealing algorithm are as follows:

1. Start with an initial solution.
2. Set the initial temperature to a high value.
3. Repeat the following steps until the stopping criterion is met:
o Generate a new solution by making a small modification to the current
solution.
o Evaluate the objective function of the new solution.
o If the new solution improves the objective function, accept it as the new
current solution.
o If the new solution does not improve the objective function, accept it with a
probability that depends on the difference between the objective function
values of the current and new solutions and the current temperature.
o Decrease the temperature according to a cooling schedule.
4. Return the current solution as the final solution.
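The steps above can be sketched in a few lines of Python. This is a hedged illustration
with our own function names; it uses the standard Metropolis acceptance rule
exp(delta / temperature) and a geometric cooling schedule:

```python
import math
import random

def simulated_annealing(initial, objective, neighbor,
                        temp=10.0, cooling=0.995, min_temp=1e-4):
    """Simulated annealing sketch (maximization). A worse candidate is
    accepted with probability exp(delta / temp), which shrinks as we cool."""
    current, current_val = initial, objective(initial)
    while temp > min_temp:
        candidate = neighbor(current)
        delta = objective(candidate) - current_val
        # Always accept improvements; accept worse moves probabilistically.
        if delta > 0 or random.random() < math.exp(delta / temp):
            current, current_val = candidate, current_val + delta
        temp *= cooling                      # geometric cooling schedule
    return current

# Toy usage: maximize f(x) = -(x - 5)^2, proposing random unit steps.
random.seed(42)
f = lambda x: -(x - 5) ** 2
best = simulated_annealing(0, f, lambda x: x + random.choice([-1, 1]))
```

At high temperature the acceptance probability is close to 1 even for bad moves, so the
search wanders freely; as the temperature falls toward `min_temp`, the same rule behaves
like strict hill climbing.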

The main principle of the simulated annealing algorithm is to control the level of randomness
in the search process by altering the temperature parameter. High temperatures enable the
algorithm to explore new regions of the search space by increasing its propensity to accept
non-improving moves. The algorithm becomes more selective and concentrates on improving
the solution as the temperature drops.

Simulated annealing has been successfully applied to a wide range of optimization problems,
such as the traveling salesman problem, the vehicle routing problem, and the job shop
scheduling problem. However, it requires careful tuning of the temperature and cooling
schedule parameters to achieve good performance.

Travelling Salesman Problem

The Traveling Salesman Problem (TSP) is a well-known combinatorial optimization problem in
which the goal is to find the shortest possible tour that visits each city in a given set
exactly once and returns to the starting city. It is an NP-hard problem, so no known
algorithm solves it exactly in polynomial time.

Because of their capacity to efficiently search sizable solution spaces, local search
algorithms are frequently used to solve the TSP. They work by starting with an initial
tour and incrementally improving it through small changes, one at a time, until no further
improvement is possible.
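One widely used local-search move for the TSP is the 2-opt move, which reverses a segment
of the tour. Below is a minimal Python sketch on a toy four-city instance; the helper
names (`tour_length`, `two_opt`) and the coordinates are our own illustrations:

```python
import math

def tour_length(tour, coords):
    """Total length of a closed tour over 2-D city coordinates."""
    return sum(math.dist(coords[tour[i]], coords[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, coords):
    """2-opt local search: reverse the segment between two cities whenever
    doing so shortens the tour, until no such move helps."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour) + 1):
                candidate = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(candidate, coords) < tour_length(tour, coords):
                    tour, improved = candidate, True
    return tour

# Toy instance: four cities on a unit square; the optimal tour has length 4.
coords = [(0, 0), (1, 0), (0, 1), (1, 1)]
tour = two_opt([0, 1, 2, 3], coords)
print(tour_length(tour, coords))  # 4.0 for the square
```

The initial tour [0, 1, 2, 3] crosses the square diagonally (length about 4.83); a single
2-opt reversal untangles the crossing and reaches the optimal perimeter tour of length 4.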

Local Search in Continuous Space

Search with Non-Deterministic Actions
