
UNIT 1 INTRODUCTION TO

EVOLUTIONARY
COMPUTING
Ami Munshi
Syllabus
Course Outcomes
References
Some typical problems we face
 Typical ones with which artificial intelligence is associated:
 more akin to puzzles
 e.g., the famous zebra puzzle
 numerical problems
 e.g., what is the shortest route from a northern city to a southern city
 pattern discovery
 e.g., what will a new customer buy in our online book store, given their
gender, age, address, etc

Ref: A.E. Eiben, J.E. Smith, Introduction to Evolutionary Computing, 2nd edition, Natural Computing Series, Springer, 2015
Optimisation, Modelling, and Simulation
Problems
 Designing aircraft wings
 Inputs
a description of a proposed wing shape
 Model
 equations of complex fluid dynamics to estimate the drag and lift
coefficients of any wing shape
 Output
 estimates form the output of the system
Optimisation, Modelling, and Simulation
Problems
 Voice control system for smart homes
 Input
 electrical signal produced when a user speaks into a microphone
 Outputs
 commands to be sent to the heating system, the TV set, or the
lights.
 Model
 consists of a mapping from certain patterns in electrical
waveforms coming from an audio input onto the outputs that would
normally be created by key-presses on a keyboard
Optimisation
 An optimisation problem
 Model is known, together with the desired output (or a
description of the desired output)
 Task is to find the input(s) leading to this output

Ref: A.E. Eiben, J.E. Smith, Introduction to Evolutionary Computing, 2nd edition, Natural Computing Series, Springer, 2015
Example for optimisation
 Travelling Salesman Problem (TSP)
 Given :
A formula (the model) that for each given sequence of cities
(the inputs) will compute the length of the tour (the output)
 To find
 an input with a desired output, that is, a sequence of cities
with optimal (minimal) length
Example for optimisation
 8 Queens Problem
 Given -DIY
 To find-DIY
Modelling
 Modelling or system identification problem
 Given
 corresponding sets of inputs and outputs
 To obtain
 a model of the system that delivers the correct output for each known input
 In terms of human learning this corresponds to finding a model of the world that
matches our previous experience

Ref: A.E. Eiben, J.E. Smith, Introduction to Evolutionary Computing, 2nd edition, Natural Computing Series, Springer, 2015
Example of modelling
 Stock exchange
 Input
 some economic and societal indices (e.g., the unemployment rate, gold price, euro–dollar exchange
rate, etc.)
 Output
 the Dow Jones index
 Task
 Find a formula that links the known inputs to the known outputs, thereby representing a model of this
economic system
 If one can find a correct model for the known data (from the past), and if we have good
reasons to believe that the relationships captured in this model remain true, then we have
a prediction tool for the value of the Dow Jones index given new data
Simulation
 Simulation problem
 we know
 the system model and some inputs
 and need to compute
 the outputs corresponding to these inputs

Ref: A.E. Eiben, J.E. Smith, Introduction to Evolutionary Computing, 2nd edition, Natural Computing Series, Springer, 2015
Example for simulation
 An electronic circuit, say, a filter cutting out low
frequencies in a signal
 Our model is a complex system of formulas (equations
and inequalities) describing the working of the circuit
 For any given input signal this model can compute the
output signal
Objective Function Optimization Versus
Constraint Satisfaction
 Distinguishing on the basis of whether
 the objective function is to be optimized, or
 constraints are to be satisfied
Objective Function Optimization Versus
Constraint Satisfaction
 Some objective functions in terms of optimality/constraint/both:
1. the number of unchecked queens on a chess board (to be maximised)
2. the length of a tour visiting each city in a given set exactly once (to be minimised)
3. the number of images in a collection that are labelled correctly by a given model m (to be maximised)
4. Find a configuration of eight queens on a chess board such that no two queens check each other
5. Find a tour with minimal length for a travelling salesman such that city X is visited after city Y
 Example 2 refers to a problem whose solution is defined purely in terms of optimisation.
 Example 4 illustrates the case where a solution is defined solely in terms of a constraint.
 Example 5 is a mixture of these two basic types since it has an objective function (tour length) and a constraint (visit X after Y).
Objective Function Optimization Versus
Constraint Satisfaction
 Travelling salesman problem
(item 2 above) is a free
optimization problem (FOP)
 Eight-queens problem (item 4
above) is a constraint
satisfaction problem (CSP)
 Item 5 is a constrained
optimisation problem (COP)

Ref: A.E. Eiben, J.E. Smith, Introduction to Evolutionary Computing, 2nd edition, Natural Computing Series, Springer, 2015
Evolutionary Computing: The Origin
 Research area in computer science
 Draws inspiration from the process of natural evolution
 the power of evolution in nature is evident in the diverse
species that make up our world, each tailored to survive
well in its own niche
 The fundamental metaphor of evolutionary computing
relates this powerful natural evolution to a particular
style of problem solving – that of trial-and-error
Evolutionary computing metaphor

Ref: A.E. Eiben, J.E. Smith, Introduction to Evolutionary Computing, 2nd edition, Natural Computing Series, Springer, 2015
Inspiration from Biology
 Darwin’s theory of evolution
 Offers an explanation of the origins of biological diversity and
its underlying mechanisms
 In macroscopic view of evolution, natural selection plays a central
role
 Natural selection favours those individuals that compete for the
given resources most effectively
 In other words, those that are best adapted or fitted to the environmental
conditions
 This phenomenon is also known as survival of the fittest
Inspiration from Biology
 Phenotypic traits are those behavioural and physical features of an
individual
 that directly affect its response to the environment (including other individuals),
thus determining its fitness
 Each individual represents a unique combination of phenotypic traits that is
evaluated by the environment
 If this combination evaluates favourably, then the individual has a higher
chance of creating offspring
 Otherwise the individual is discarded by dying without offspring
 Importantly, if they are heritable (and not all traits are), favourable
phenotypic traits may be propagated via the individual’s offspring
Inspiration from Biology
 Darwin’s insight was that
 Small, random variations – mutations – in phenotypic traits occur during reproduction from
generation to generation
 Through these variations, new combinations of traits occur and get evaluated
 The best ones survive and reproduce, and so evolution progresses
 To summarize this basic model
 A population consists of a number of individuals
 These individuals are the units of selection, that is to say that their reproductive success depends on
how well they are adapted to their environment relative to the rest of the population
 As the more successful individuals reproduce, occasional mutations give rise to new individuals to be
tested
 Thus, as time passes, there is a change in the constitution of the population, i.e., the population is the
unit of evolution
Evolutionary computing-why?
Motivation
 Developing automated problem solvers (that is, algorithms) is one of the central themes of mathematics and
computer science
 When looking for the most powerful natural problem solver, there are two rather obvious candidates
 the human brain
 the evolutionary process (that created the human brain)
 Trying to design problem solvers based on
 The first candidate leads to the field of neuro-computing
 The second option forms a basis for evolutionary computing
 Another motivation can be identified from a technical perspective
 Computerisation in the second half of the 20th century created a growing demand for problem-solving automation
 The growth rate of the research and development capacity has not kept pace with these needs
 Hence, the time available for thorough problem analysis and tailored algorithm design has been decreasing
 A parallel trend has been the increase in the complexity of problems to be solved
 These two trends imply an urgent need for robust algorithms with satisfactory performance
 A third motivation is one that can be found behind every science: human curiosity
Evolutionary computing-why?
Motivation
 There is a need for algorithms that are
 applicable to a wide range of problems
 do not need much tailoring for specific problems
 deliver good (not necessarily optimal) solutions within
acceptable time
 Evolutionary algorithms do all this
 They provide an answer to the challenge of deploying
automated solution methods for more and more problems,
which are ever more complex, in less and less time
Applications of evolutionary computation
 Timetable scheduling
 Financial markets
 Vehicle Routing Problem (VRP)
 Travelling Salesman Problem (TSP)
 Neural Networks
Applications of evolutionary computation

Ref: https://www.turing.com/kb/genetic-algorithm-applications-in-ml
Evolutionary algorithms
 Common underlying idea behind all these
techniques is the same:
 Given a population of individuals within some
environment that has limited resources, competition for
those resources causes natural selection (survival of the fittest)
Evolutionary algorithms
 Survival of the fittest causes
 Rise in the fitness of the population
 Given a quality function to be maximised, we can randomly create a set of candidate solutions, i.e., elements of
the function’s domain
 We then apply the quality function to these as an abstract fitness measure – the higher the better
 On the basis of these fitness values some of the better candidates are chosen to seed the next generation
 This is done by applying recombination and/or mutation to them
 Recombination is an operator that is applied to two or more selected candidates (the so-called parents),
producing one or more new candidates (the children)
 Mutation is applied to one candidate and results in one new candidate
 Therefore executing the operations of recombination and mutation on the parents leads to the creation of a set of
new candidates (the offspring)
 These have their fitness evaluated and then compete – based on their fitness (and possibly age) – with the old
ones for a place in the next generation
 This process can be iterated until a candidate with sufficient quality (a solution) is found or a previously set
computational limit is reached
Evolutionary algorithms
 There are two main forces that form the basis of evolutionary systems:
 Variation operators (recombination and mutation) create the necessary diversity within the population, and thereby
facilitate novelty.
 Selection acts as a force increasing the mean quality of solutions in the population
 Combined application of variation and selection generally leads to improving fitness values in consecutive
populations
 It is easy to view this process as if evolution is optimising (or at least ‘approximising’) the fitness function, by
approaching the optimal values closer and closer over time
 An alternative view is that evolution may be seen as a process of adaptation
 From this perspective, the fitness is not seen as an objective function to be optimised, but as an expression of
environmental requirements
 Matching these requirements more closely implies an increased viability, which is reflected in a higher
number of offspring
 The evolutionary process results in a population which is increasingly better adapted to the environment
Points to note
 Evolutionary processes are stochastic
 Example
 Selection of best individuals is not done deterministically
 So even weak individuals have chance of becoming a parent or of
surviving
 During the recombination process, the choice of which pieces from
the parents will be recombined is made at random
 Similarly for mutation, the choice of which pieces will be changed
within a candidate solution, and of the new pieces to replace them,
is made randomly
Evolutionary Algorithm

Ref: A.E. Eiben, J.E. Smith, Introduction to Evolutionary Computing, 2nd edition, Natural Computing Series, Springer, 2015
Evolutionary algorithm Flow chart

Ref: A.E. Eiben, J.E. Smith, Introduction to Evolutionary Computing, 2nd edition, Natural Computing Series, Springer, 2015
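The general scheme behind the flow chart can also be written as a short program. Below is a minimal Python sketch of that scheme; the helper functions (random_genotype, fitness, select_parents, recombine, mutate, select_survivors) are hypothetical placeholders for the problem-specific components discussed in the following slides.

```python
import random

def evolutionary_algorithm(random_genotype, fitness, select_parents,
                           recombine, mutate, select_survivors,
                           pop_size=100, max_generations=200):
    """Generic EA loop: initialise, then repeat parent selection, variation,
    evaluation and survivor selection until the budget is exhausted."""
    # INITIALISE population with random candidate solutions and EVALUATE them
    population = [random_genotype() for _ in range(pop_size)]
    fitnesses = [fitness(ind) for ind in population]

    for generation in range(max_generations):
        # SELECT parents
        parents = select_parents(population, fitnesses, pop_size)
        # RECOMBINE pairs of parents and MUTATE the resulting offspring
        offspring = []
        for p1, p2 in zip(parents[::2], parents[1::2]):
            for child in recombine(p1, p2):
                offspring.append(mutate(child))
        # EVALUATE the new candidates
        offspring_fitnesses = [fitness(ind) for ind in offspring]
        # SELECT individuals for the next generation (returns both lists)
        population, fitnesses = select_survivors(
            population, fitnesses, offspring, offspring_fitnesses)

    best = max(range(len(population)), key=lambda i: fitnesses[i])
    return population[best], fitnesses[best]
```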
Components of Evolutionary Algorithm
 Representation
 Evaluation function (Fitness Function)
 Population
 Parent Selection Mechanism
 Variation Operators, Recombination, Mutation
 Survivor Selection Method
 Termination Condition*
* Only for some cases
Representation
 Mapping real world problem to (Evolutionary
Algorithm) EA world
 Simplifying or abstracting some aspects of the real
world
 to create a well-defined and tangible problem context
 within which possible solutions can exist and
 can be evaluated, and this work is often undertaken by
domain experts
Representation
 The first step from the point of view of automated problem-
solving is
 to decide how possible solutions should be specified and stored in
a way that can be manipulated by a computer
 Objects forming possible solutions within the original
problem context are referred to as phenotypes, while their
encoding, that is, the individuals within the EA, are called
genotypes
 This first design step is commonly called representation, as it
amounts to specifying a mapping from the phenotypes onto a
set of genotypes that are said to represent them
Representation
(Diagram: encoding maps a phenotype to a genotype; decoding maps a genotype back to a phenotype.)
Some terms
 Candidate solution or individual or phenotype
 Terms used to denote possible solutions
 Phenotype Space
 Space for all possible candidate solution
 Genotype or chromosome
 Mapped value of phenotype in EA world
 Genotype Space
 Space for all possible genotypes
 Variable or locus (plural: loci) or a position or a gene
 A placeholder
 The object in a placeholder is called an allele
Basic Terminology
 Population: subset of all the possible solutions to the
given problem
 Chromosome: one solution to the given problem
 Gene: one element position of a chromosome.
 Allele: value a gene takes for a particular chromosome
 Genotype: population in computation space
 Phenotype: Population in the actual real world solution
 Decoding: transform a solution from genotype to
phenotype space
 Encoding: transform from phenotype to genotype space
 Fitness function: Function which has solution as input
and suitability of the solution as its output
 Genetic operator: alters the genetic composition of
the offspring
Optimization Techniques
 Act of achieving best possible result under given circumstances
 Reduce computational cost, memory, time, error
 Different techniques are
 Ant Colony Optimization
 Based on the pheromone-trail communication that ants use while foraging
 Runner Root Algorithm
 Inspired by the function of runners and roots of some plants in nature
 Artificial Bee Colony Algorithm
 Based on honey bee foraging behaviour
 Particle Swarm Optimization
 Based on the flocking behaviour of animals such as birds and fish
 Cuckoo search algorithm
 Inspired by brooding parasitism of the cuckoo species
Components of Evolutionary Algorithm
 Representation
 Evaluation function (Fitness Function)
 Population
 Parent Selection Mechanism
 Variation Operators, Recombination, Mutation
 Survivor Selection Method
 Termination Condition*
* Only for some cases
Evaluation Function or Fitness Function
 Role of the evaluation function is
 to represent the requirements the population should adapt to meet
 It forms the basis for selection, and so it facilitates improvements
 It defines what improvement means
 From the problem-solving perspective, it represents the task to be
solved in the evolutionary context
 Technically, it is a function or procedure that assigns a quality
measure to genotypes
Components of Evolutionary Algorithm
 Representation
 Evaluation function (Fitness Function)
 Population
 Parent Selection Mechanism
 Variation Operators, Recombination, Mutation
 Survivor Selection Method
 Termination Condition*
* Only for some cases
Population
 The role of the population is to hold (the representation of) possible solutions
 A population is a multiset of genotypes
 The population forms the unit of evolution
 Individuals are static objects that do not change or adapt; it is the population that
does
 Given a representation, defining a population may be as simple as specifying how
many individuals are in it, that is, setting the population size
 Best individual of a given population is chosen to seed the next generation
 Worst individual of a given population is chosen to be replaced by a new one
 Diversity of a population is a measure of the number of different solutions present
 No single measure for diversity exists
 Diversity measure might be different fitness values, number of phenotypes, number of
genotypes, entropy
Components of Evolutionary Algorithm
 Representation
 Evaluation function (Fitness Function)
 Population
 Parent Selection Mechanism
 Variation Operators, Recombination, Mutation
 Survivor Selection Method
 Termination Condition*
* Only for some cases
Parent Selection Mechanism
 The role of parent selection or mate selection is to distinguish among individuals based on their
quality
 To allow the better individuals to become parents of the next generation
 An individual is a parent if it has been selected to undergo variation in order to create
offspring
 Together with the survivor selection mechanism, parent selection is responsible for pushing
quality improvements
 In EC, parent selection is typically probabilistic
 Means high-quality individuals have more chance of becoming parents than those with low quality
 Nevertheless, low-quality individuals are often given a small, but positive chance
 otherwise the whole search could become too greedy and the population could get stuck in a local
optimum
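A minimal sketch of such probabilistic parent selection is fitness-proportional ("roulette wheel") sampling, assuming non-negative fitness values where higher is better; note that even low-fitness individuals keep a small, positive chance of being picked.

```python
import random

def roulette_wheel_selection(population, fitnesses, num_parents):
    """Pick parents with probability proportional to fitness.
    Weak individuals keep a small, non-zero chance of selection."""
    total = sum(fitnesses)
    if total == 0:                      # degenerate case: select uniformly
        return random.choices(population, k=num_parents)
    probabilities = [f / total for f in fitnesses]
    return random.choices(population, weights=probabilities, k=num_parents)
```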
Components of Evolutionary Algorithm
 Representation
 Evaluation function (Fitness Function)
 Population
 Parent Selection Mechanism
 Variation Operators, Recombination, Mutation
 Survivor Selection Method
 Termination Condition*
* Only for some cases
Variation Operators, Recombination,
Mutation
 Variation operators is to create new individuals from
old ones
 In the corresponding phenotype space this amounts to
generating new candidate solutions
 Variation operators in EC are divided into two types
based
 Mutation
 recombination
Variation Operators, Recombination,
Mutation
 Mutation
 Unary variation operator is commonly called mutation
 It is applied to one genotype and delivers a (slightly) modified mutant, the child or
offspring
 A mutation operator is always stochastic
 its output – the child – depends on the outcomes of a series of random choices
 Recombination
 A binary variation operator is called recombination or crossover
 As the names indicate, such an operator merges information from two parent genotypes
into one or two offspring genotypes
 Like mutation, recombination is a stochastic operator
 the choices of what parts of each parent are combined, and how this is done, depend on random
drawings
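As a concrete illustration for binary genotypes (other representations use different operators), a sketch of a unary operator (bit-flip mutation) and a binary operator (one-point crossover) is given below; both rely on random choices, as noted above.

```python
import random

def bit_flip_mutation(genotype, p_m):
    """Unary operator: flip each bit independently with probability p_m."""
    return [1 - bit if random.random() < p_m else bit for bit in genotype]

def one_point_crossover(parent1, parent2):
    """Binary operator: cut both parents at a random point and swap tails,
    producing two offspring."""
    point = random.randint(1, len(parent1) - 1)
    child1 = parent1[:point] + parent2[point:]
    child2 = parent2[:point] + parent1[point:]
    return child1, child2
```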
Components of Evolutionary Algorithm
 Representation
 Evaluation function (Fitness Function)
 Population
 Parent Selection Mechanism
 Variation Operators, Recombination, Mutation
 Survivor Selection Method
 Termination Condition*
* Only for some cases
Survivor Selection Method/Replacement
strategy
 Similar to parent selection, the role of survivor selection or environmental selection is to
distinguish among individuals based on their quality
 However, it is used in a different stage of the evolutionary cycle – the survivor selection
mechanism is called after the creation of the offspring from the selected parents
 In EC the population size is almost always constant
 This requires a choice to be made about which individuals will be allowed in to the next
generation
 This decision is often based on their fitness values, favouring those with higher quality, although
the concept of age is also frequently used
 In contrast to parent selection, which is typically stochastic, survivor selection is often
deterministic
 Two common methods are
 the fitness-based method of ranking the unified multiset of parents and offspring and selecting the top
segment, or
 the age-biased approach of selecting only from the offspring
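The two common methods mentioned above can be sketched as follows; fitness is assumed to be a function where higher values are better, and mu is the (constant) population size.

```python
def fitness_based_survivors(parents, offspring, fitness, mu):
    """Rank the unified multiset of parents and offspring by fitness and
    keep the top mu individuals (deterministic, (mu + lambda)-style)."""
    pool = parents + offspring
    pool.sort(key=fitness, reverse=True)
    return pool[:mu]

def age_based_survivors(parents, offspring, fitness, mu):
    """Generational replacement: keep only the (younger) offspring."""
    return offspring[:mu]
```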
Components of Evolutionary Algorithm
 Representation
 Evaluation function (Fitness Function)
 Population
 Parent Selection Mechanism
 Variation Operators, Recombination, Mutation
 Survivor Selection Method
 Termination Condition*
* Only for some cases
Termination Condition
 If the problem has a known optimal fitness level, probably coming from a known optimum of
the given objective function, then in an ideal world our stopping condition would be the
discovery of a solution with this fitness
 If we know that our model of the real-world problem contains necessary simplifications, or
may contain noise, we may accept a solution that reaches the optimal fitness to within a given
precision ε > 0
 However, EAs are stochastic and mostly there are no guarantees of reaching such an optimum,
so this condition might never get satisfied, and the algorithm may never stop
 Therefore we must extend this condition with one that certainly stops the algorithm
 The following options are commonly used for this purpose:
 The maximally allowed CPU time elapses.
 The total number of fitness evaluations reaches a given limit
 The fitness improvement remains under a threshold value for a given period of time (i.e., for a number
of generations or fitness evaluations)
 The population diversity drops under a given threshold
Evolutionary cycle by hand- Example
The fitness-proportional selection probability of individual i is
p_i = f(i) / Σ_{j ∈ P} f(j),  where P is the population

(Table: genotypes and the corresponding phenotypes of the worked example.)

Ref: A.E. Eiben, J.E. Smith, Introduction to Evolutionary Computing, 2nd edition, Natural Computing Series, Springer, 2015
Evolutionary cycle by hand- Example

Ref: A.E. Eiben, J.E. Smith, Introduction to Evolutionary Computing, 2nd edition, Natural Computing Series, Springer, 2015
Evolutionary cycle by hand- Example

Ref: A.E. Eiben, J.E. Smith, Introduction to Evolutionary Computing, 2nd edition, Natural Computing Series, Springer, 2015
Example Applications
 8 Queens Problem
 Knapsack Problem
8 Queens’ Problem
 Problem statement-
 placing eight queens on a regular 8 × 8 chessboard so that no two of them can check each
other
 How do you solve 8 queens’ problem generally?
 Classical approach
 Work in a constructive, or incremental fashion
 start by placing one queen
 after having placed n queens, attempt to place the (n + 1)th in a feasible position where
the new queen does not check any others
 typically some sort of backtracking mechanism is applied
 if there is no feasible position for the (n+1)th queen, the nth is moved to another position
8 Queens’ Problem- Evolutionary Approach
 An evolutionary approach to this problem is drastically different in that it is not incremental
 Our candidate solutions are complete (rather than partial) board configurations, which specify
the positions of all eight queens
 The phenotype space P is the set of all such configurations
 Clearly, most elements of this space are infeasible, violating the condition of nonchecking
queens
 The quality q(p) of any phenotype p ∈ P can be simply quantified by the number of checking
queen pairs
 The lower this measure, the better a phenotype (board configuration),
 a zero value, q(p) = 0, indicates a good solution
 From this observation we can formulate a suitable objective function to be minimised, with a
known optimal value
8 Queens’ Problem- Evolutionary Approach
 Defining genotypes
 Fitness (to be maximised) of a genotype g that represents phenotype p is some
inverse of q(p)
 Possible ways of specifying what kind of inverse we wish to use
 For instance, 1/q(p) is an easy option
 but has the disadvantage that attempting division by zero is a problem for many
computing systems
 We could circumvent this by watching for q(p) = 0 and saying that when this
occurs we have a solution, or by adding a small value ε, i.e., 1/(q(p) + ε)
 Other options are to use −q(p) or M − q(p)
 where M is a sufficiently large number to make all fitness values positive,
 e.g., M ≥ max{q(p) | p ∈ P}
 This fitness function inherits the property of q that it has a known optimum M
8 Queens’ Problem- Evolutionary
Approach
 To design an EA to search the space P
 we need to define a representation of phenotypes from P
 The most straightforward idea is to use a matrix representation of elements of P directly
as genotypes
 In this example, however, we define a more clever representation as follows
 A genotype, or chromosome, is a permutation of the numbers 1, . . . , 8
 A genotype g = <i1, . . . , i8> denotes the (unique) board configuration in which the n-th column contains exactly one queen, placed on the i_n-th row
 For instance, the permutation g = <1, 2, . . . , 8> represents a board where the queens are placed along the main diagonal
 The genotype space G is now the set of all permutations of 1, . . . , 8, and we have also defined a mapping F : G → P
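A small sketch of this permutation representation and of the quality measure q(p), the number of checking queen pairs to be minimised; because the genotype is a permutation, only diagonal checks can occur.

```python
def decode(genotype):
    """Map a permutation genotype <i1,...,i8> to a board (phenotype):
    column n holds a queen in row genotype[n-1]."""
    return [(col + 1, row) for col, row in enumerate(genotype)]

def checking_pairs(genotype):
    """q(p): number of pairs of queens that check each other.
    With a permutation only diagonal conflicts are possible."""
    n = len(genotype)
    return sum(1 for a in range(n) for b in range(a + 1, n)
               if abs(genotype[a] - genotype[b]) == b - a)

# Queens along the main diagonal: every pair checks, so q(p) = 28
print(checking_pairs([1, 2, 3, 4, 5, 6, 7, 8]))   # 28
```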
8 Queens’s Example Evolutionary
Approach
 The figure shows an example of an 8-digit genotype string
 [2 4 7 4 8 5 5 2]
 The c-th digit represents the row number of the queen in column c
 The maximum number of non-attacking queen pairs is 8C2 = 28

Ref: Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, 4th edition, Pearson Education, 2021
8 Queens’s Example Evolutionary
Approach
 Values of the four states in (b) are 24,
23, 20, and 11
 Fitness scores are then normalized to
probabilities, and the resulting values
are shown next to the fitness values in
(b)
 Notice that one individual is selected
twice and one not at all
 For each selected pair, a crossover
point (dotted line) is chosen randomly
 In (d), we cross over the parent strings
at the crossover points, yielding new
offspring
 For example
 the first child of the first pair gets the
first three digits (327) from the first
parent and the remaining digits
(48552) from the second parent.
Ref: Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, 4th edition, Pearson Education, 2021
8 Queens’s Example Evolutionary
Approach
 Summary
 Initial population in
(a)
 Is ranked by a fitness
function in (b)
 Resulting in pairs for
mating in (c)
 They produce
offspring in (d)
 Which are subject to
mutation in (e)
Ref: Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, 4th edition, Pearson Education, 2021
8 Queens’s Example Evolutionary
Approach

Ref: Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, 4th edition, Pearson Education, 2021
Example: 8 Queens problem
 Consider the chess board shown below
 8 queens are available
 Task is to place a queen in each column of the board such that no queen
can attack any other queen
 Use GA algorithm
Example: 8 Queens problem

(Board diagram: one of the solutions, with one queen placed in each column.)
 Moves of a queen:
1) Forward
2) Backward
3) Diagonals
Problem formulation
3 2 4 6 8 1 7 5

• Model/encode the solution in the form of a string
• i.e., encode the solution in string format
• Array indices indicate column numbers and the value in the array
indicates the row numbers
• In the first column, Queen is placed in row 3
• In the second column, Queen is placed in row 2 and so on..
• Initial population is randomly generated
• Randomly generate k states, i.e. k random strings of digits
• Each state is one solution
Further steps
Invoking the evaluation function:
 For each solution (the k randomly generated states), invoke
the evaluation function to see how good that solution is
 The evaluation function should assign a higher score to a
better solution
 One possible evaluation function could be to count
the pairs of queens that are not attacking each other
 Total number of pairs in which queens could possibly attack
each other = 8C2 = 8!/(2! 6!) = 28
 Normalize the evaluation function by dividing by 28 (see the sketch below)
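A sketch of this evaluation function for a digit-string state (state[c] is the row of the queen in column c; rows may repeat here, so both row and diagonal attacks are counted), normalised by the maximum of 28 pairs.

```python
def non_attacking_pairs(state):
    """Count queen pairs that do NOT attack each other; state[c] is the
    row of the queen in column c (columns are distinct by construction)."""
    n = len(state)
    safe = 0
    for a in range(n):
        for b in range(a + 1, n):
            same_row = state[a] == state[b]
            same_diagonal = abs(state[a] - state[b]) == b - a
            if not (same_row or same_diagonal):
                safe += 1
    return safe

def normalised_fitness(state):
    """Divide by 8C2 = 28, the total number of queen pairs."""
    return non_attacking_pairs(state) / 28

print(normalised_fitness([3, 2, 4, 6, 8, 1, 7, 5]))
```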
Problem Solution
3 2 4 6 8 1 7 5
(Board diagram for this string.)
• Queens in the first two columns can attack each other
Problem Solutions
First solution
3 2 4 6 1 8 5 7
(Board diagram for this string.)
• Out of 4 pairs, 1 pair can attack each other (Q1 – Q2)
• Fitness value = 3/28
Problem Solutions
First solution
3 2 4 6 1 8 5 7

Second solution
2 8 1 7 5 4 6 3

Third solution
1 2 3 4 5 6 7 8
Selection
 After getting the results of the evaluation function,
 Select a few that may be considered in the solution
based on the fitness value
 Perform crossover operation between them
 Select a random crossover point
Crossover
3 2 4 6 1 8 5 7

2 8 1 7 5 4 6 3

• Consider the above two parents in which crossover has


to be performed
• Choose a crossover point randomly
• Ex: 3 as crossover point
• To ensure that elements are not repeated, swap values
that are already present in that half
Crossover
3 2 4 6 1 8 5 7

2 8 1 7 5 4 6 3

Offspring 1
3 2 4 7 5 4 6 3

Offspring 2
2 8 1 6 1 8 5 7
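A sketch of the single-point crossover used above: the tails after the chosen point are simply swapped. With crossover point 3 this reproduces the two offspring shown; note the repeated digits in the children, which a later mutation or repair step has to deal with.

```python
def single_point_crossover(parent1, parent2, point):
    """Swap the tails of the two parents after the given point."""
    child1 = parent1[:point] + parent2[point:]
    child2 = parent2[:point] + parent1[point:]
    return child1, child2

p1 = [3, 2, 4, 6, 1, 8, 5, 7]
p2 = [2, 8, 1, 7, 5, 4, 6, 3]
print(single_point_crossover(p1, p2, 3))
# ([3, 2, 4, 7, 5, 4, 6, 3], [2, 8, 1, 6, 1, 8, 5, 7])
```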
Mutation
 Out of all the children obtained from crossover
 Based on the fitness value choose a few to
perform mutation
 Randomly choose a point in the array and change
its value to some other number
 In this example, we should not place two queens in the same
row, and a new number cannot exceed 8
 Therefore choose two numbers from the array and
then swap
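A sketch of the swap mutation described here: two positions are chosen at random and their values exchanged, so every value stays in the 1–8 range and no new row duplicates are introduced.

```python
import random

def swap_mutation(state):
    """Pick two positions at random and swap their values."""
    mutant = list(state)
    i, j = random.sample(range(len(mutant)), 2)
    mutant[i], mutant[j] = mutant[j], mutant[i]
    return mutant
```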
Termination
 The set of k states, some of them mutated, chosen according to their
fitness values, becomes the population for the next
iteration.
 Generally the algorithm should run for a significant number
of iterations to give best results
 Iterations can be stopped when there is no significant
change in solutions
 Or fitness value is very close to optimal
 Or the maximum number of iterations has been reached
Knapsack Problem

https://www.hackerearth.com/practice/notes/the-knapsack-problem/
Basic Idea of Knapsack problem
 It derives its name from the problem faced by someone
who is constrained by a fixed-size knapsack and must
fill it with the most valuable items
 Given a set of items, each with a weight and a value
 Determine items to include in a collection so that
 total weight is less than or equal to a given limit

 and the total value is as large as possible


Knapsack Problem
 Knapsack problem is generalization of many industrial problems
 0–1 knapsack problem is described as follows
 We are given a set of n items
 Each of which has attached to it some value 𝑣𝑖 , and some cost 𝑐𝑖
 Task is to select a subset of those items that
 Maximises the sum of the values
 While keeping the summed cost within some capacity 𝐶𝑚𝑎𝑥
 For example
 When packing a backpack for a round-the-world trip
 We must balance likely utility of the items against the fact that we have a limited
volume (the items chosen must fit in one bag), and weight (airlines impose fees for
luggage over a given weight)
Knapsack- Evolutionary way
 It is a natural idea to represent candidate solutions for this problem
as binary strings of length n
 Where a 1 in a given position indicates that an item is included and
a 0 that it is omitted
 The corresponding genotype space G is the set of all such strings; its
size is 2^n, which increases exponentially with the number of items
considered
 Using this G, we fix the representation in the sense of data structure,
and next we need to define the mapping from genotypes to
phenotypes
Knapsack- Evolutionary way
 The first representation (in the sense of a mapping) that we consider takes
the phenotype space P and the genotype space to be identical
 The quality of a given solution p, represented by a binary genotype g, is thus
determined by summing the values of the included items, i.e.,
q(p) = Σ_{i=1..n} v_i · g_i
 However, this simple representation leads us to some immediate problems
 By using a one-to-one mapping between the genotype space G and the
phenotype space P,
 individual genotypes may correspond to invalid solutions whose
associated cost exceeds the capacity, i.e., Σ_{i=1..n} c_i · g_i > C_max
 A number of mechanisms have been proposed for dealing with this
Knapsack- Evolutionary way
 Second representation solves this problem by employing a decoder function, that breaks the
one-to-one correspondence between the genotype space G and the solution space P
 In essence, our genotype representation remains the same, but when creating a solution we
read from left to right along the binary string, and keep a running tally of the cost of included
items
 When we encounter a value 1, we first check to see whether including the item would break
our capacity constraint
 In other words,
 rather than interpreting a value 1 as meaning include this item,
 we interpret it as meaning include this item IF it does not take us over the cost constraint
 Effect of this scheme is to make the mapping from genotype to phenotype space many-to-one,
since once the capacity has been reached, the values of all bits to the right of the current
position are irrelevant, as no more items will be added to the solution
 Furthermore, this mapping ensures that all binary strings represent valid solutions with a unique
fitness (to be maximised).
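A sketch of this decoder-based fitness, assuming the item data values, costs and the capacity c_max are given: the bit string is read from left to right and an item is included only if it does not break the capacity constraint.

```python
def decoded_fitness(genotype, values, costs, c_max):
    """Decode a binary string into a feasible knapsack solution and
    return its total value (to be maximised)."""
    total_value, total_cost = 0, 0
    for bit, value, cost in zip(genotype, values, costs):
        # A 1 means: include this item IF it does not break the capacity
        if bit == 1 and total_cost + cost <= c_max:
            total_value += value
            total_cost += cost
    return total_value
```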
Knapsack- Evolutionary way
 A suitable (but not necessarily optimal) recombination operator is one-point
crossover,
 where we align two parents and pick a random point along their length
 The two offspring are created by exchanging the tails of the parents at
that point
 We will apply this with 70% probability,
 i.e., for each pair of parents there is a 70% chance that we will create two
offspring by crossover and
 30% that the children will be just copies of the parents
 A suitable mutation operator is called bit-flipping
 in each position we invert the value with a small probability p_m ∈ [0, 1)
Knapsack- Evolutionary way
 In this case we will create the same number of offspring as we have
members in our initial population
 We create two offspring from each two parents, so we will select
that many parents and pair them randomly
 We will use a tournament for selecting the parents, where each time
we pick two members of the population at random (with
replacement), and the one with the highest value q(p) wins the
tournament and becomes a parent
 We will institute a generational scheme for survivor selection, i.e., all
of the population in each iteration are discarded and replaced by
their offspring
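A sketch of the binary tournament described above: two members are drawn at random with replacement, and the one with the higher quality becomes a parent.

```python
import random

def binary_tournament(population, fitness):
    """Pick two individuals at random (with replacement);
    the one with the higher fitness wins and becomes a parent."""
    a = random.choice(population)
    b = random.choice(population)
    return a if fitness(a) >= fitness(b) else b
```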
Knapsack- Evolutionary way
 Finally, we should consider initialisation (which
we will do by random choice of 0 and 1 in
each position of our initial population), and
termination
 In this case, we do not know the maximum
value that we can achieve, so we will run our
algorithm until no improvement in the fitness of
the best member of the population has been
observed for 25 generations
 Our crossover probability as 0.7
 We will work with a population size of 500
and a mutation rate of 𝑝𝑚 = 1/𝑛, i.e., that
will on average change one value in every
offspring
 Our evolutionary algorithm to tackle this
problem can be specified as below
Evolutionary Algorithm
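Putting the pieces together, the sketch below is one possible reading of the specification above (decoder-based fitness, binary tournament parent selection, one-point crossover with probability 0.7, bit-flip mutation with p_m = 1/n, generational survivor selection, population size 500, termination after 25 generations without improvement); the item data values, costs and c_max are assumed inputs.

```python
import random

def knapsack_ea(values, costs, c_max, pop_size=500,
                crossover_rate=0.7, stagnation_limit=25):
    n = len(values)
    p_m = 1.0 / n                                  # mutation rate 1/n

    def fitness(g):                                # decoder-based fitness
        value, cost = 0, 0
        for bit, v, c in zip(g, values, costs):
            if bit and cost + c <= c_max:
                value, cost = value + v, cost + c
        return value

    def tournament():                              # binary tournament
        a, b = random.choice(population), random.choice(population)
        return a if fitness(a) >= fitness(b) else b

    def mutate(g):                                 # bit-flip mutation
        return [1 - bit if random.random() < p_m else bit for bit in g]

    # random initialisation of the population
    population = [[random.randint(0, 1) for _ in range(n)]
                  for _ in range(pop_size)]
    best, stagnant = max(fitness(g) for g in population), 0

    while stagnant < stagnation_limit:
        offspring = []
        while len(offspring) < pop_size:
            p1, p2 = tournament(), tournament()
            if random.random() < crossover_rate:   # one-point crossover
                cut = random.randint(1, n - 1)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            else:                                  # copies of the parents
                c1, c2 = p1[:], p2[:]
            offspring += [mutate(c1), mutate(c2)]
        population = offspring[:pop_size]          # generational replacement
        current_best = max(fitness(g) for g in population)
        stagnant = 0 if current_best > best else stagnant + 1
        best = max(best, current_best)
    return best                                    # best value found
```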
Knapsack example
 Going to spend a month in the wilderness
 Carrying a backpack which can hold a maximum weight of 30 kg
 You have different survival items, each having its own “Survival Points”
 Objective is to maximize the survival points
Initialization
 Population contains individuals, each having their own set of chromosomes
 Chromosomes are binary strings
 ‘1’ means item is taken and ‘0’ means that it is dropped
 Following set of chromosome is considered as initial population
Genetic Algorithm
Fitness Function
 A1 chromosome = [100110]

Population Fitness Value for a1


Fitness Function
• A2 chromosome [001110]

Population Fitness Value for a2

• A chromosome is considered fitter when it contains more survival points


• Chromosome 1 is more fit than chromosome 2
Fitness Function
 Takes a candidate solution to the problem as input and
produces as output a quantitative measure of how "fit"
or how "good" the solution is
 Calculation of fitness value is done repeatedly in a GA
 Therefore it should be sufficiently fast
 A slow computation of the fitness value can adversely
affect a GA and make it exceptionally slow
 Objective is to either maximize or minimize the given
fitness function
Example: Fitness function
 Scanning the elements from left to right till the knapsack is full
Genetic Algorithm
Selection
 Select fit chromosomes from our population for better off-springs
 Use Roulette Wheel Selection method or Tournament Selection Method

Note : It will be done in detail in unit 3


Genetic Algorithm
Crossover
 Generating offspring from two selected
chromosomes (parents)
 One point (Single point crossover)
 Two point (Multipoint crossover)

 Uniform cross over


One Point Crossover
 Find the crossover of chromosomes 1 and 4, which were selected in the
previous step
 Select a random crossover point; the trailing bits of the chromosomes are
swapped to produce new offspring

Chromosome 1

Chromosome 4
One Point Crossover
 A random crossover point is selected
and the tails of the two parents are
swapped to get new offspring
Multi Point Crossover
• For two point crossover,

1 0 0 1 1 1

1 1 1 0 1 0
Multi Point Crossover
 A generalization of the one-point crossover wherein
alternating segments are swapped to get new
offspring
Uniform Crossover
• Flip a coin to choose an element from the given two
chromosomes

3 0 1 5 4 2
3 0 2 4 4 0
5 1 2 4 3 0
Uniform Crossover
 Don’t divide the chromosome into segments
 Treat each gene separately
 Flip a coin for each gene to decide whether or not it will be included
in the offspring
 Can also bias the coin to one parent, to have more genetic material in the
child from that parent
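A sketch of uniform crossover: for every gene a (possibly biased) coin decides from which parent the offspring copies its value.

```python
import random

def uniform_crossover(parent1, parent2, bias=0.5):
    """For every gene, take the value from parent1 with probability
    'bias', otherwise from parent2."""
    return [g1 if random.random() < bias else g2
            for g1, g2 in zip(parent1, parent2)]
```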
Genetic Algorithm
Mutation
 Defined as a small random tweak in the chromosome, to get
a new solution
 Used to maintain and introduce diversity in the genetic
population
 Usually applied with a low probability
 If the probability is very high, the GA gets reduced to a
random search
 Mutation is essential to the convergence of the GA while
crossover is not
Mutation Operators
 Bit Flip Mutation
 Select one or more random bits and flip them. This is
used for binary encoded GAs

• Random Resetting (random selection from permissible values)
• A random value from the set of permissible values is assigned to a randomly chosen gene
3 0 2 4 4 0 → 3 0 2 4 5 0
Mutation Operators
 Swap Mutation
 Select two positions on the chromosome at random,
and interchange the values
 common in permutation based encodings

• Scramble Mutation
• Popular with permutation representations
• from the entire chromosome, a subset of genes is
chosen and their values are scrambled randomly
Mutation Operators
 Inversion Mutation
 Select a subset of genes as in scramble
mutation, but instead of shuffling the subset,
 invert (reverse) the entire string in the subset
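Sketches of scramble and inversion mutation; a contiguous slice of the chromosome is used as the randomly chosen subset, as is common with permutation encodings.

```python
import random

def scramble_mutation(chromosome):
    """Shuffle the genes inside a randomly chosen slice."""
    mutant = list(chromosome)
    i, j = sorted(random.sample(range(len(mutant)), 2))
    segment = mutant[i:j + 1]
    random.shuffle(segment)
    mutant[i:j + 1] = segment
    return mutant

def inversion_mutation(chromosome):
    """Reverse the order of the genes inside a randomly chosen slice."""
    mutant = list(chromosome)
    i, j = sorted(random.sample(range(len(mutant)), 2))
    mutant[i:j + 1] = mutant[i:j + 1][::-1]
    return mutant
```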
Genetic Algorithm
Survivor Selection
 Which individuals are to be removed and which are to be kept in
the next generation
 Should ensure that the fitter individuals are not removed out of the
population
 At the same time diversity should be maintained in the population

 Use

 Age Based Selection

 Fitness Based Selection

 Note- It will be done in detail in unit 3


Age Based Selection
 Children replace individuals in
the population
 Age is the number of
generations for which the
individual has been in the
population
 Oldest members of the
population are removed
 And the ages of the rest of the
members are incremented by
one
Termination Condition
 Determines when a GA run will end
 Initially, the GA progresses fast with better solutions coming in
first few iterations
 This tends to saturate in the later stages where the improvement
is not substantial
 Stop the run when one of the following termination conditions holds:
 There has been no improvement in fitness value for the


population
 Reach an absolute number of iterations

 Fitness has reached the required threshold


Termination Condition (no of iterations)
 Keep a counter which keeps track of the generations (iterations) for
which there has been no improvement in the population
 Initially, set this counter to zero
 Each time the algorithm does not generate offspring that are better than the
individuals in the population,
 increment the counter
 If the fitness of any of the offspring is better, then reset the counter to
zero
 The algorithm terminates when the counter reaches a predetermined
value
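A sketch of this stagnation counter wrapped around a generic GA loop; best_fitness_of and produce_next_generation are hypothetical helpers standing in for the evaluation and reproduction steps.

```python
def run_until_stagnant(population, best_fitness_of,
                       produce_next_generation, limit=50):
    """Stop when 'limit' consecutive generations bring no improvement."""
    best = best_fitness_of(population)
    counter = 0                       # generations without improvement
    while counter < limit:
        population = produce_next_generation(population)
        current = best_fitness_of(population)
        if current > best:            # improvement: reset the counter
            best, counter = current, 0
        else:                         # no improvement: increment it
            counter += 1
    return population, best
```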
Entire Process
Entire Process
Entire Process
Comparison of Natural and Artificial Evolution

Ref: A.E. Eiben, J.E. Smith, Introduction to Evolutionary Computing, 2nd edition, Natural Computing Series, Springer, 2015
