Project Report New
1.1 Introduction
1.1.1 Chemical Reactors
In chemical engineering, chemical reactors are vessels designed to contain chemical
reactions; one example is a pressure reactor. The design of a chemical reactor deals with
multiple aspects of chemical engineering. Chemical engineers design reactors to maximize
the net present value of the given reaction. Designers ensure that the reaction proceeds with the
highest efficiency towards the desired output product, producing the highest yield of product
while requiring the least amount of money to purchase and operate. Normal operating
expenses include energy input, energy removal, raw material costs, labour, etc. Energy
changes can occur in the form of heating or cooling, pumping to increase pressure, frictional
pressure loss (such as the pressure drop across a 90° elbow or an orifice plate) or agitation.
A tubular reactor is a vessel through which flow is continuous, usually at steady state, and
configured so that conversion of the chemicals and other dependent variables are functions of
position within the reactor rather than of time. In the ideal tubular reactor, the fluids flow as if
they were solid plugs or pistons, and the reaction time is the same for all flowing material at any
given tube cross-section. Tubular reactors resemble batch reactors in providing initially high
driving forces, which diminish as the reactions progress down the tubes. Figure 1.1 shows the
production of phthalic anhydride using a tubular reactor in the presence of a V2O5 catalyst.
Flow in tubular reactors can be laminar, as with viscous fluids in small-diameter tubes, and
greatly deviate from ideal plug-flow behaviour, or turbulent, as with gases. Turbulent flow
generally is preferred to laminar flow, because mixing and heat transfer are improved. For
slow reactions, and especially in small laboratory and pilot-plant reactors, establishing
turbulent flow can result in inconveniently long reactors or may require unacceptably high
feed rates.
Figure 1.1 Production of Phthalic Anhydride Using a Tubular Reactor in the Presence of a
V2O5 Catalyst.
1.1.2.1(a) Description of Tubular Reactor
Figure 1.2 shows a tubular reactor. This tubular reactor is used as an accessory to the
CE 310 Chemical Reactors Trainer so that its behaviour with respect to the reaction
kinetics can be studied. The two liquid chemicals are pumped continuously through a plastic hose wound
into a coil to form the reaction tube, where the chemicals react. The coiled tube is placed
inside a cylinder made of borosilicate glass that can be connected to the heating circuit on the
basic unit. All hose connections are self-sealing.
Although tubular packed-bed reactors are used extensively in industry, some disadvantages
of this type of reactor have drawn increasing attention to spherical packed-bed reactors.
Potential disadvantages of tubular reactors are the pressure drop along the
reactor, high manufacturing costs and low production capacity. In order to avoid a serious
pressure drop in the tubular reactor, the effective diameters of the catalyst particles are usually
kept above 3 mm, which leads to a certain internal mass-transfer resistance. In this
study, the spherical reactor is proposed for the naphtha reforming process.
In the spherical reactor, the catalyst is situated between two perforated screens. As depicted in
the figure below, the naphtha feed enters the top of the reactor and flows steadily to the bottom of
the reactor. Attempts should be made to maintain a continuous flow without any channelling in
the reactor. The goal is to achieve a uniform flow distribution through the catalytic bed,
because the flow occurs mainly in the axial direction. The two screens in the upper and
lower parts of the reactor hold the catalyst and act as a mechanical support. Since the cross-
sectional area is smaller near the inlet and the outlet of the reactor, the presence of catalyst in
these parts would cause a substantial pressure drop and consequently reduce the efficiency of
the spherical reactor. The other role of these screens is to balance the free zones (catalyst-free
zones) so as to obtain a desirable pressure drop during the process. The radial flow is
assumed to be negligible in comparison with the axial flow. As a result, only the equations in the
axial coordinate are taken into account.
CHAPTER 2
2.1 Literature Review
2.1.1 Mathematical Modelling and Optimization of DME Synthesis in a Two-Stage Spherical
Reactor, an article published in the Journal of Natural Gas and Engineering by Fereshteh Samimi,
Mahdi Bayat, M.R. Rahimpour and Peyman Keshavarz.
They developed a mathematical model for the DME production rate. In order to maximize the
DME mole fraction at the outlet of the reactors, the catalyst distributions and the inlet
temperatures of each reactor are optimized using the differential evolution (DE) method. DME
stands for dimethyl ether and is synthesized by dehydration of methanol. The reaction is
exothermic.
The goal of their study was the reduction of the pressure drop and recompression costs as well as
the enhancement of the production rate of the dimethyl ether (DME) synthesis reactor. For this
reason, a novel configuration of two-stage spherical reactors is proposed. In this configuration,
the catalyst of the conventional reactor is divided into two parts to load the
spherical reactors. The unreacted methanol from the first reactor passes through a heat
exchanger to reach the desired temperature and then enters the top of the second reactor as the
inlet feed. In order to maximize the DME production rate, the catalyst volume and the gas inlet
temperature of each reactor were optimized using the differential evolution (DE) method. In
addition to a significantly lower pressure drop, the DME production rate increases by 16.3% in the
proposed configuration compared with the conventional reactor (CR).
2.1.2 Dynamic Optimization of a Multi-Stage Spherical, Radial Flow Reactor for the Naphtha
Reforming Process in the Presence of Catalyst Deactivation Using the Differential Evolution (DE)
Method, an article published in the Journal of Hydrogen Energy by M.R. Rahimpour and Davood
Iranshashi.
They used the differential evolution (DE) method to optimize the operating conditions of a radial
flow spherical reactor containing the naphtha reforming reactions. In this reactor
configuration, the space between the two concentric spheres is filled with catalyst. The dynamic
behaviour of the reactor has been taken into account in the optimization process. The
mass and energy balance equations of the model are solved by the orthogonal
collocation method. The goal of this optimization is to maximize the hydrogen and aromatic
production, which leads to the maximum consumption of the paraffins and naphthenes. To this
end, the inlet temperature of the gas at the entrance of each reactor, the total
pressure of the process, as well as the catalyst distribution in each reactor have been
optimized using the differential evolution (DE) method. The results of the optimization of the
spherical reactor have been compared with the non-optimized spherical reactor, and the
comparison shows an acceptable enhancement in the performance of the reactor.
Decreasing the pressure drop in an industrial process is an important issue. Considering this
fact, utilizing a radial flow spherical reactor, which has a low pressure drop through the
catalytic bed, is a potentially interesting idea for industrial naphtha reforming. A one-
dimensional model was used for optimization of the spherical reactor for the catalytic naphtha
reforming process. The orthogonal collocation method was applied to solve the mass and energy
balance equations, and the differential evolution (DE) method was used as the optimization
technique. The goal was to maximize the hydrogen and aromatic production rate. The
variability of the total molar flow rate was considered in this research, which improved the
calculation results. The catalyst distribution in all three stages was optimized so as to
maximize the production of the desired products. In this study, the maximum permissible inlet
temperature of the gas at the entrance of the reactors was defined as 777 K. However, by
utilizing furnaces of higher thermal capacity and raising the inlet temperature of the gas entering
the reactors up to 840 K, more desirable results can be achieved. The effects of temperature
and time on the catalyst activity have been investigated in the results, and an acceptable
enhancement in the performance of the reactor can be noticed. The results suggest that this
configuration can be a compelling way to boost hydrogen and aromatic production. However,
an investigation of the environmental aspects, commercial viability and economic
feasibility of the proposed configuration is necessary before commercialization of the process
can be considered.
The methanol production reactions are strongly exothermic and the catalyst is deactivated
over time. Therefore, the development of an auto-thermal two-stage methanol reactor could
pave the way to increasing methanol production in the methanol synthesis process. One
potentially interesting idea for industrial methanol synthesis is using an optimal auto-thermal
dual-type reactor. In this investigation, an auto-thermal dual-type methanol synthesis reactor
was modelled and optimized dynamically to maximize the methanol production rate. The
optimization method used is based on genetic algorithms (GAs). The overall production
throughout 4 years of catalyst life was considered as the optimization criterion to be maximized,
and three variables, namely the length ratio of the reactors and the feed and cooling water
temperatures, are tuned. The optimization includes two procedures. In the first approach, the
ratio of reactor lengths and the temperature profile along the reactor were optimized, yielding
optimal values for the temperatures and the reactor length ratio. In the second approach,
building on these results, the optimal trajectories of the feed and cooling water temperatures
were determined. The optimal operating policies yield 4.7% and 5.8% additional methanol
production during the operating time for the first and second optimization approaches,
respectively. A comparison of the calculated temperature profiles of the catalyst along the lengths
of the reactors shows an extremely favourable temperature profile for the optimal auto-
thermal dual-type reactor system. The favourable catalyst temperature profile along the
optimized reactor results in an increased production rate in the system.
The parameters affecting the production rate in an industrial methanol reactor include the
temperature and the catalyst deactivation. In the case of reversible exothermic reactions
such as methanol synthesis, selecting a relatively low temperature permits a higher
conversion, but this must be balanced against a slower rate of reaction, which would require a large
amount of catalyst. To the left of the point of maximum production rate, increasing the
temperature improves the rate of reaction, which leads to more methanol production.
The present study therefore aims to optimize (minimize) the heat transfer from the surface of a spherical
reactor using both a genetic algorithm (GA) and symbiotic organisms search (SOS).
CHAPTER 3
3.1 Description of Genetic Algorithm (GA) and Symbiotic
Organisms Search (SOS)
The application of optimization algorithms to real-world problems has gained momentum in
the last decade. Dating back to the early 1940s, diverse traditional mathematical methods
such as linear programming (LP), nonlinear programming (NLP) or dynamic programming
(DP) were first employed for solving complex optimization problems by resorting to different
relaxation methods of the underlying formulation. These techniques are capable of cost-
efficiently obtaining a globally optimal solution for problem models subject to certain
particularities, but unfortunately their application range does not cover the whole class of NP-
complete problems, where an exact solution cannot be found in polynomial time. In fact, the
solution space of such problems increases exponentially with the number of inputs, which
makes these methods infeasible for practical applications.
Traditional techniques do not fare well over a broad spectrum of problem domains.
Traditional techniques are not suitable for solving multi-modal problems, as they
tend to get stuck in a local optimum.
Traditional techniques are not ideal for solving multi-objective optimization
problems.
Traditional techniques are not suitable for solving problems involving a large number
of constraints.
Considering these drawbacks of traditional optimization techniques, attempts are
being made to optimize systems using evolutionary optimization techniques.
In the past years, evolutionary multi-objective optimization (EMO) has become a popular and
useful field of research and application. Evolutionary optimization (EO) algorithms use a
population-based approach in which more than one solution participates in an iteration and
evolves a new population of solutions in each iteration. The reasons for their popularity are
many, the population-based approach itself being chief among them.
3.2.1.1 Representation
The first step in defining an EA is to link the real world to the EA world, that is, to set up a
bridge between the original problem context and the problem-solving space where the evolution
will take place. Objects forming possible solutions within the original problem context are
referred to as phenotypes, while their encodings, the individuals within the EA, are called genotypes.
The first design step is commonly called representation, as it amounts to specifying a
mapping from the phenotypes onto a set of genotypes that are said to represent these
phenotypes. Given an optimization problem on integers, the given set of integers would form
the set of phenotypes. If they are represented in binary code, then 18 would be seen as a
phenotype and 10010 as a genotype representing it. It is important to understand that the
phenotype space can be very different from the genotype space, and that the whole
evolutionary search takes place in the genotype space. A solution, that is, a good phenotype, is
obtained by decoding the best genotype after termination. To this end, it should hold that the
optimal solution to the problem at hand, a phenotype, is represented in the given genotype
space.
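To make the mapping concrete, the following is a minimal sketch in Python of the encoding/decoding idea described above, assuming the simple integer-to-binary representation used in the 18 and 10010 example; the function names encode and decode are illustrative only.

```python
# Minimal sketch of a representation: integer phenotypes encoded as
# fixed-width binary genotypes, as in the 18 <-> 10010 example above.

def encode(phenotype: int, n_bits: int = 5) -> str:
    """Map an integer phenotype to its binary genotype."""
    return format(phenotype, f"0{n_bits}b")

def decode(genotype: str) -> int:
    """Map a binary genotype back to its integer phenotype."""
    return int(genotype, 2)

print(encode(18))       # '10010' - genotype representing the phenotype 18
print(decode("10010"))  # 18      - phenotype recovered from the genotype
```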
The common EC terminology uses many synonyms for naming the elements of the two spaces.
On the side of the original problem context, the terms candidate solution, phenotype and individual are
used to denote points of the space of possible solutions; this space itself is commonly called
the phenotype space. On the side of the EA, the terms genotype, chromosome and, again, individual can
be used for points in the space where the evolutionary search actually takes place; this
space is often termed the genotype space. There are also many synonymous terms for the elements
of individuals. A placeholder is commonly called a variable, a locus, a position or, in
biology-oriented terminology, a gene. An object in such a place can be called a value or an
allele.
The word representation is used in two slightly different ways. Sometimes it stands for the
mapping from the phenotype space to the genotype space; in this sense it is synonymous with
encoding, e.g., one could speak of the binary representation or binary encoding of candidate
solutions. The inverse mapping from genotypes to phenotypes is usually called decoding, and it
is required that the representation be invertible: to each genotype there has to be at most one
corresponding phenotype. The word representation can also be used in a slightly different
sense, where the emphasis is not on the mapping itself but on the data structure of the
genotype space; this interpretation is behind speaking of, for example, mutation operators for binary
representation.
The role of the evaluation function is to represent the requirements to adapt to. It forms the
basis for selection, and thereby it facilitates improvement. More accurately, it defines
what improvement means from the problem-solving perspective; it represents the task to be solved
in the evolutionary context. Technically, it is a function or procedure that assigns a quality measure
to genotypes. Typically this function is composed of a quality measure in the phenotype
space and the inverse representation.
The evaluation function is commonly called the fitness function in EAs. This might cause
counterintuitive terminology if the original problem requires minimization, because fitness is
usually associated with maximization; mathematically, however, it is trivial to change minimization into
maximization and vice versa. Quite often the original problem to be solved by an EA is an
optimization problem. In this case the name objective function is often used in the original
problem context, and the evolutionary fitness function can be identical to, or a simple
transformation of, the given objective function.
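As a hedged illustration of this relationship, the sketch below assumes a hypothetical minimization objective and uses simple negation to turn it into a quantity to be maximized; both the objective and the decoding scale are assumptions made only for the example.

```python
# Hypothetical minimization objective and a fitness wrapper for an EA that
# maximizes fitness; negation is one simple way to convert minimization
# into maximization.

def objective(x: float) -> float:
    return (x - 3.0) ** 2 + 1.0          # assumed objective, to be minimized

def fitness(genotype: str) -> float:
    x = int(genotype, 2) / 10.0          # decode genotype to phenotype (assumed scaling)
    return -objective(x)                 # higher fitness <=> lower objective value
```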
3.2.1.3 Parent Selection
The role of parent selection or mating selection is to distinguish among individuals based on
their quality and, in particular, to allow the better individuals to become parents of the next
generation. An individual is a parent if it has been selected to undergo variation in order to
create offspring. Together with the survivor selection mechanism, parent selection is
responsible for pushing quality improvements. In EAs, parent selection is typically
probabilistic, so high-quality individuals get a higher chance to become parents than those
with low quality. Nevertheless, low-quality individuals are often given a small but positive
chance; otherwise the whole search could become too greedy and get stuck in a local
optimum.
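The sketch below shows one common way to realize such probabilistic parent selection, namely fitness-proportionate (roulette-wheel) selection with replacement; it assumes non-negative fitness values and is only one of several selection schemes consistent with the description above.

```python
import random

# Roulette-wheel (fitness-proportionate) parent selection with replacement:
# fitter individuals are more likely to be chosen, but low-fitness
# individuals still get a small positive chance.

def select_parent(population: list[str], fitnesses: list[float]) -> str:
    total = sum(fitnesses)
    pick = random.uniform(0.0, total)    # spin the wheel
    running = 0.0
    for individual, fit in zip(population, fitnesses):
        running += fit
        if running >= pick:
            return individual
    return population[-1]                # guard against floating-point round-off
```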
The role of variation operators is to create new individuals from old ones. In the
corresponding phenotype space this amounts to generating new candidate solutions. From the
generate-and-test search perspective, variation operators perform the generate step. Variation
operators in EAs are divided into two types based on their arity.
3.2.1.6 Mutation
A unary variation operator is commonly called mutation. It is applied to one genotype and delivers a
modified mutant, the child or offspring of it. A mutation operator is always stochastic: its
output depends on the outcomes of a series of random choices. It should be noted that an
arbitrary unary operator is not necessarily seen as mutation. A problem-specific heuristic
operator acting on one individual could be termed mutation for being unary. However, in
general, mutation is supposed to cause a random, unbiased change. For this reason it might be
more appropriate not to call heuristic unary operators mutation. The role of mutation differs
between the various EC dialects: for instance, in genetic programming it is often not used at
all; in genetic algorithms it has traditionally been seen as a background operator to fill the
gene pool with fresh blood; while in evolutionary programming it is the one and only
variation operator doing the whole search work. Generating a child amounts to stepping to a
new point in the search space. From this perspective, mutation also has a theoretical role: it can
guarantee that the space is connected. This is important since theorems stating that an EA will
discover the global optimum of a given problem often rely on the property that each
genotype representing a possible solution can be reached by the variation operators. The
simplest way to satisfy this condition is to allow the mutation operator to jump everywhere.
However, it should also be noted that many researchers feel such proofs have limited practical
applications, and many implementations of EAs do not in fact possess this property.
Survivor selection is also often called replacement or the replacement strategy. In many
cases the two terms can be used interchangeably, and the choice between them is often
arbitrary. A good reason to use the name survivor selection is to keep the terminology consistent. A
preference for using replacement can be motivated by the skewed proportion between the
number of individuals in the population and the number of newly created children, in
particular if the number of children is very small with respect to the population size.
3.2.1.8 Initialization
Initialization is kept simple in most EA applications: the first population is seeded with randomly
generated individuals. In principle, problem-specific heuristics can be used in this step,
aiming at an initial population with higher fitness. Whether this is worth the extra
computational effort very much depends on the application at hand. There are,
however, some general observations concerning this issue based on the so-called anytime
behaviour of EAs.
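For completeness, a one-function sketch of this random initialization step is given below, assuming the bit-string representation used throughout this chapter.

```python
import random

# Seed the first population with randomly generated bit-string individuals.

def init_population(pop_size: int, n_bits: int) -> list[str]:
    return ["".join(random.choice("01") for _ in range(n_bits))
            for _ in range(pop_size)]

# Example: init_population(4, 6) might return ['000011', '010100', '011110', '110010'].
```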
As for a suitable termination condition, we can distinguish two cases. If the problem has a
known optimal fitness level, probably coming from a known optimum of the given objective
function, then reaching this level should be used as the stopping condition. However, EAs are
stochastic and in most cases there is no guarantee of reaching an optimum, so this condition alone may
never stop the algorithm. It must therefore be extended with a condition that certainly stops the
algorithm. Commonly used options for this purpose include the following:
3. For a given period of time, the fitness improvement remains under a given threshold
value.
The actual termination criterion in such cases is a disjunction: the optimum value is hit or the other condition
is satisfied. If the problem does not have a known optimum, then no disjunction is needed;
a condition from the above list, or a similar one that is guaranteed to stop the algorithm, is simply used.
On the basis of the above discussion of evolutionary optimization techniques, two such
techniques for solving a simple non-linear single-objective optimization function are
discussed: the genetic algorithm (GA) and symbiotic organisms search (SOS).
The chromosomes in a GA population typically take the form of bit strings. Each locus in the
chromosome has two possible alleles: 0 and 1. Each chromosome can be thought of as a point
in the search space of candidate solutions. The GA processes populations of chromosomes,
successively replacing one such population with another. The GA most often requires a
fitness function that assigns a score (fitness) to each chromosome in the current population.
The fitness of a chromosome depends on how well that chromosome solves the problem at
hand.
3.3.1 GA Operators
The simplest form of genetic algorithm involves three types of operators: selection,
crossover (single point), and mutation.
3.3.1.2 Selection
This operator selects chromosomes in the population for reproduction. The fitter the
chromosome, the more times it is likely to be selected to reproduce.
3.3.1.3 Crossover
This operator randomly chooses a locus and exchanges the subsequences before and after
that locus between two chromosomes to create two offspring. For example, the strings
10000100 and 11111111 could be crossed over after the third locus in each to produce the two
offspring 10011111 and 11100100. The crossover operator roughly mimics biological
recombination between two single-chromosome organisms.
3.3.1.4 Mutation
This operator randomly flips some of the bits in a chromosome. For example, the string
00000100 might be mutated in its second position to yield 01000100. Mutation can occur at
each bit position in a string with some probability, usually very small.
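The two operators can be sketched directly from these descriptions; the code below reproduces the crossover example given above (10000100 and 11111111 crossed after the third locus) and applies an independent bit-flip with probability pm for mutation. The function names are illustrative.

```python
import random

# Single-point crossover and bit-flip mutation on bit-string chromosomes.

def single_point_crossover(p1: str, p2: str, locus: int) -> tuple[str, str]:
    """Exchange the subsequences after the given locus between two parents."""
    return p1[:locus] + p2[locus:], p2[:locus] + p1[locus:]

def mutate(chromosome: str, pm: float = 0.01) -> str:
    """Flip each bit independently with (small) probability pm."""
    return "".join(("1" if b == "0" else "0") if random.random() < pm else b
                   for b in chromosome)

print(single_point_crossover("10000100", "11111111", 3))
# -> ('10011111', '11100100'), matching the example in the text
```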
1. Start with a randomly generated population of n l-bit chromosomes. Table 3.1 shows the
randomly generated initial population.
a. Select a pair of parent chromosomes from the current population, the probability of
selection being an increasing function of fitness. Selection is done "with replacement,"
meaning that the same chromosome can be selected more than once to become a parent.
b. With probability pc (the "crossover probability" or "crossover rate"), cross over the pair
at a randomly chosen point (chosen with uniform probability) to form two offspring. If no
crossover takes place, form two offspring that are exact copies of their respective parents.
The crossover rate is defined to be the probability that two parents will cross over at a single
point. There are also "multipoint crossover" versions of the GA, in which the crossover rate
for a pair of parents is the number of points at which a crossover takes place.
c. Mutate the two offspring at each locus with probability pm (the mutation probability or
mutation rate), and place the resulting chromosomes in the new population. Figure 3.3 shows
the reproduction process of the initial population.
Figure 3.1 Flowchart of Genetic Algorithm
Each iteration of this process is called a generation. A GA is typically iterated for anywhere
from 50 to 500 or more generations. The entire set of generations is called a run. At the end of
a run there are often one or more highly fit chromosomes in the population. Since
randomness plays a large role in each run, two runs with different random-number seeds will
generally produce different detailed behaviours.
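Putting the pieces together, the sketch below shows what one generation of the simple GA outlined above could look like; it reuses the select_parent, single_point_crossover and mutate sketches from earlier in this chapter, with pc and pm as the crossover and mutation rates. It is a schematic illustration, not the report's actual implementation.

```python
import random

# One generation of a simple GA: selection, crossover with probability pc,
# mutation with probability pm, then placement into the new population.
# Assumes select_parent, single_point_crossover and mutate as sketched above.

def next_generation(population, fitness_fn, pc=0.7, pm=0.01):
    fits = [fitness_fn(ind) for ind in population]
    new_pop = []
    while len(new_pop) < len(population):
        p1 = select_parent(population, fits)
        p2 = select_parent(population, fits)
        if random.random() < pc:                     # crossover with probability pc
            locus = random.randrange(1, len(p1))
            p1, p2 = single_point_crossover(p1, p2, locus)
        new_pop.extend([mutate(p1, pm), mutate(p2, pm)])
    return new_pop[:len(population)]
```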
3.5.1 Crossover
The main distinguishing feature of a GA is the use of crossover. Single-point crossover is the
simplest form: a single crossover position is chosen at random and the parts of the two parents
after the crossover position are exchanged to form two offspring. The idea here is, of course,
to recombine building blocks on different strings. Single-point crossover has some
shortcomings, though. For one thing, it cannot combine all possible schemas. For example, it
cannot in general combine instances of 11*****1 and ****11** to form an instance of
11**11*1. Likewise, schemas with long defining lengths are likely to be destroyed under
single-point crossover. The schemas that can be created or destroyed by a crossover depend
strongly on the location of the bits in the chromosome. Single-point crossover assumes that
short, low-order schemas are the functional building blocks of strings, but one generally does
not know in advance what ordering of bits will group functionally related bits together; this
was the purpose of the inversion operator and other adaptive operators described above.
Many people have also noted that single-point crossover treats some loci preferentially: the
segments exchanged between the two parents always contain the endpoints of the strings. To
reduce positional bias and this "endpoint" effect, two-point crossover can be used, in which
two positions are chosen at random and the segments between them are exchanged.
Two-point crossover is less likely to disrupt schemas with large defining lengths and can
combine more schemas than single-point crossover. In addition, the segments that are
exchanged do not necessarily contain the endpoints of the strings. Again, there are schemas
that two-point crossover cannot combine. GA practitioners have experimented with different
numbers of crossover points; in one of the methods, the number of crossover points for each
pair of parents is chosen from a Poisson distribution whose mean is a function of the length
of the chromosome. Parameterized uniform crossover has no positional bias: any schemas
contained at different positions in the parents can potentially be recombined in the offspring.
However, this lack of positional bias can prevent co-adapted alleles from ever forming in the
population, since parameterized uniform crossover can be highly disruptive of any schema.
There is no simple answer to the question of which type of crossover to choose; the success or failure of a
particular crossover operator depends in complicated ways on the particular fitness function,
encoding, and other details of the GA. Fully understanding these interactions is still an important
open problem, and it is hard to draw general conclusions about which type of crossover to use
in a given situation. It is common in recent GA applications to
use either two-point crossover or parameterized uniform crossover. For the most part, the
comments and references above deal with crossover in the context of bit-string encodings,
though some of them apply to other types of encodings as well. Some types of encodings
require specially defined crossover and mutation operators. Most of the comments above also
assume that crossover's ability to recombine highly fit schemas is the reason it should be
useful.
3.5.2 Mutation
A common view in the GA community, dating back to Holland's book Adaptation in Natural
and Artificial Systems, is that crossover is the major instrument of variation and innovation in
GAs, with mutation insuring the population against permanent fixation at any particular locus
and thus playing more of a background role. This differs from the traditional positions of
other evolutionary computation methods, such as evolutionary programming and early
versions of evolution strategies, in which random mutation is the only source of variation.
However, the appreciation of the role of mutation is growing as the GA community attempts
to understand how GAs solve complex problems. For solving a complex problem, it is not a
choice between crossover and mutation but rather the balance among crossover, mutation,
and selection that is all-important. The correct balance also depends on details of the fitness
function and the encoding. Furthermore, crossover and mutation vary in relative usefulness
over the course of a run. Precisely how all this happens still needs to be elucidated. The most
promising prospect for producing the right balance over the course of a run is to find ways
for the GA to adapt its own mutation and crossover rates during a search.
In the fitness-sharing scheme of Goldberg and Richardson, an individual's fitness was decreased by the
presence of other population members, where the amount of decrease due to each other population
member was an explicit increasing function of the similarity between the two individuals. Thus,
individuals that were similar to many other individuals were punished, and individuals that were
different were rewarded. Goldberg and Richardson showed that in some cases this could induce
appropriate "speciation," allowing the population members to converge on several peaks in the
fitness landscape rather than all converging to the same peak.
Holland introduced the formalism of genetic algorithms (GAs) by analogy with how
biological evolution occurs in Nature. At the lowest level, a computer program is nothing but a
string of 1s and 0s, something like 110101010110000101001001. This is similar to how
genes are laid out along the length of a DNA molecule. Each binary digit can be thought of as a
gene, and a string of such genes as a digital chromosome.
The essence of evolution is that, in a population, the fittest have a larger likelihood of
survival and propagation. Figure 3.2 shows the evolution environment of a GA. In
computational terms, this amounts to maximizing some mathematical function representing
'fitness'.
Figure 3.2 Evolution Environment of GA
It is important to remember that whereas Darwinian evolution is an open-ended and blind
process, GAs have a goal: they are meant to solve particular preconceived problems.
For solving a maximization problem, the steps involved are typically as follows:
1. The first step is to let the computer produce a population of, say, 1000 individuals, each
represented by a randomly generated digital chromosome.
2. The next step is to test the relative fitness of each individual (represented entirely by the
corresponding chromosome) regarding its effectiveness in maximizing the function under
consideration, i.e. the fitness function. A score is given for the fitness, say on a scale of 1 to
10. In biological terms, the fitness is a probabilistic measure of the reproductive success of
the individual. The higher the fitness, the greater is the chance that the individual will be
selected (by us) for the next cycle of reproduction.
3. Mutations are introduced occasionally in a digital chromosome by arbitrarily flipping a 1
to 0, or a 0 to 1.
4. The next step in the GA is to take (in a probabilistic manner) those individual digital
chromosomes that have high levels of fitness, and to produce a new generation of individuals by
a process of reproduction or crossover, for which the GA chooses pairs of individuals.
5. The new generation of digital individuals produced is again subjected to the entire cycle of
gene expression, fitness testing, selection, mutation, and crossover.
6. These cycles are repeated a large number of times, till the desired optimization or
maximization problem has been solved.
The sexual crossover in reproductive biology, as also in the artificial GA, serves two
purposes. First, it provides a chance for the appearance of new individuals in the population which
may be fitter than any earlier individual. Secondly, it provides a mechanism for the existence
of clusters of genes that are particularly well-suited to occurring together because they
result in higher-than-average fitness for any individual possessing them.
As the population can shuffle its genetic material in every generation through sexual
reproduction, new building blocks, as well as new combinations of existing building blocks,
can arise. Thus the GA quickly creates individuals with an ever-increasing number of good
building blocks (the 'bad' building blocks get gradually eliminated by natural selection). If
there is a survival advantage to the population, the individuals that have the good building
blocks spread rapidly, and the GA converges to the solution rapidly (a case of positive
feedback).
In the presence of reproduction, crossover and mutation, almost any compact cluster of genes
that provides above-average fitness will grow in the population exponentially. Schema was
the term used by Holland for any specific pattern of genes.
Figure 3.5 Symbiotic Organisms Living Together in an Ecosystem
Figure 3.5 illustrates a group of symbiotic organisms living together in an ecosystem in which
the mutualism, commensalism and parasitism relationships are used for sustenance.
Similar to other population-based algorithms, the proposed SOS iteratively uses a population
of candidate solutions to explore promising areas in the search space in the process of seeking the
global optimal solution. SOS begins with an initial population called the ecosystem. In the
initial ecosystem, a group of organisms is generated randomly within the search space. Each
organism represents one candidate solution to the corresponding problem. Each organism in
the ecosystem is associated with a certain fitness value, which reflects its degree of adaptation to
the desired objective. Almost all meta-heuristic algorithms apply a succession of operations
to solutions in each iteration in order to generate new solutions for the next iteration. A
standard GA has two operators, namely crossover and mutation. Harmony Search proposes
three rules to improvise a new harmony: memory considering, pitch adjusting, and random
choosing. Three phases were introduced in the ABC algorithm to find the best food source:
the employed bee, onlooker bee, and scout bee phases. In SOS, new solution
generation is governed by imitating the biological interaction between two organisms in the
ecosystem. Three phases that resemble the real-world biological interaction model are
introduced:
Mutualism phase
Commensalism phase
Parasitism phase
The character of the interaction defines the main principle of each phase. Interactions benefit
both sides in the mutualism phase; benefit one side and do not impact the other in the
commensalism phase; and benefit one side and actively harm the other in the parasitism phase.
Each organism interacts with another organism randomly through all phases. The process is
repeated until the termination criteria are met. The following algorithm outline reflects the above
explanation:
1 Initialization
2 Repeat
3 Mutualism phase
4 Commensalism phase
5 Parasitism phase
3.8.2.1 Mutualism Phase
Mutual Vector = (Xi + Xj)/2
The equation above defines a vector called the Mutual Vector, which represents the relationship
characteristic between organisms Xi and Xj. The term (Xbest - Mutual Vector × BF) reflects the
mutualistic effort to achieve the goal of increasing their survival advantage.
According to Darwin's theory of evolution, only the fittest organisms will prevail, and all
creatures are forced to increase their degree of adaptation to their ecosystem. Some of them
use symbiotic relationships with others to increase their survival adaptation. Xbest is
needed here because it represents the highest degree of adaptation; therefore,
Xbest, the current global best solution, is used to model the highest degree of adaptation as the target point for
the fitness increment of both organisms. Finally, the organisms are updated only if their new
fitness is better than their pre-interaction fitness. Figure 3.6 shows the flowchart of the mutualism
phase in SOS.
Figure 3.6 Flowchart of Mutualism Phase in SOS
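A minimal one-dimensional sketch of this mutualism update is given below, assuming the commonly used form Xnew = X + rand(0,1) × (Xbest - MutualVector × BF) with benefit factors of 1 or 2, greedy acceptance, and a minimization objective f; the bound handling is an assumption made for illustration.

```python
import random

# Mutualism phase (1-D sketch): organisms Xi and Xj both try to improve
# towards Xbest via their mutual vector; updates are kept only if fitness
# (here, a lower objective value) improves.

def mutualism(eco: list[float], f, i: int, lo: float, hi: float) -> None:
    j = random.choice([k for k in range(len(eco)) if k != i])
    best = min(eco, key=f)                       # organism with the highest adaptation
    mutual = (eco[i] + eco[j]) / 2.0             # mutual vector
    bf1, bf2 = random.choice((1, 2)), random.choice((1, 2))
    xi_new = eco[i] + random.random() * (best - mutual * bf1)
    xj_new = eco[j] + random.random() * (best - mutual * bf2)
    xi_new = min(max(xi_new, lo), hi)            # keep candidates inside the bounds
    xj_new = min(max(xj_new, lo), hi)
    if f(xi_new) < f(eco[i]):                    # accept only on improvement
        eco[i] = xi_new
    if f(xj_new) < f(eco[j]):
        eco[j] = xj_new
```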
3.8.2.2 Commensalism Phase
An example of commensalism is the relationship between remora fish and sharks. The remora
attaches itself to the shark and eats food leftovers, thus receiving a benefit. The shark is
unaffected by the remora's activities and receives minimal, if any, benefit from the
relationship.
Similar to the mutualism phase, an organism Xj is selected randomly from the ecosystem to
interact with Xi. In this circumstance, organism Xi attempts to benefit from the interaction;
however, organism Xj itself neither benefits nor suffers from the relationship. The new
candidate solution for Xi is calculated according to the commensal symbiosis between
organisms Xi and Xj, which is modelled by the corresponding update equation. Following the rules, organism Xi is
updated only if its new fitness is better than its pre-interaction fitness.
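A corresponding one-dimensional sketch of the commensalism update follows, assuming the usual form Xi,new = Xi + rand(-1,1) × (Xbest - Xj) with greedy acceptance for a minimization objective f.

```python
import random

# Commensalism phase (1-D sketch): only Xi is updated, using the current
# best organism and a random factor in [-1, 1]; Xj is left unchanged.

def commensalism(eco: list[float], f, i: int, lo: float, hi: float) -> None:
    j = random.choice([k for k in range(len(eco)) if k != i])
    best = min(eco, key=f)
    xi_new = eco[i] + random.uniform(-1.0, 1.0) * (best - eco[j])
    xi_new = min(max(xi_new, lo), hi)            # keep the candidate inside the bounds
    if f(xi_new) < f(eco[i]):                    # Xi is updated only if it improves
        eco[i] = xi_new
```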
3.8.2.3 Parasitism Phase
An example of parasitism is the plasmodium parasite, which uses its relationship with the
anopheles mosquito to pass between human hosts. While the parasite thrives and reproduces
inside the human body, its human host suffers from malaria and may die as a result. In SOS,
organism Xi is given a role similar to the anopheles mosquito through the creation of an
artificial parasite called the Parasite Vector. The Parasite Vector is created in the search space by
duplicating organism Xi and then modifying randomly selected dimensions using a random
number. Organism Xj is selected randomly from the ecosystem and serves as a host to the
Parasite Vector. The Parasite Vector tries to replace Xj in the ecosystem. Both organisms are then
evaluated to measure their fitness. If the Parasite Vector has a better fitness value, it will kill
organism Xj and assume its position in the ecosystem. If the fitness value of Xj is better, Xj
will have immunity from the parasite and the Parasite Vector will no longer be able to live in
that ecosystem. Figure 3.8 shows the flowchart of the parasitism phase.
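The parasitism phase can be sketched in one dimension as below; since the problem here has a single decision variable, "modifying randomly selected dimensions" reduces to drawing a random value in the search range, which is an assumption made for this illustration.

```python
import random

# Parasitism phase (1-D sketch): a parasite vector derived from Xi competes
# with a randomly chosen host Xj and replaces it only if it is fitter
# (here, has a lower objective value).

def parasitism(eco: list[float], f, i: int, lo: float, hi: float) -> None:
    j = random.choice([k for k in range(len(eco)) if k != i])
    parasite = random.uniform(lo, hi)   # in 1-D the modified dimension is the value itself
    if f(parasite) < f(eco[j]):         # parasite kills Xj and takes its place
        eco[j] = parasite
```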
Applying SOS to sample problems demonstrated its ability to generate solutions of
significantly better quality than other meta-heuristic algorithms. Based on mathematical benchmark
function results, SOS precisely identified 22 of 26 benchmark function solutions, surpassing
the performance of GA, DE, BA, PSO, and PBA. SOS was also tested on four practical
structural design problems. The three phases of the SOS algorithm are simple to operate, involving
only simple mathematical operations to code. Further, unlike competing algorithms, SOS
does not use tuning parameters, which enhances performance stability.
CHAPTER 4
4.1 RESULTS AND DISCUSSIONS
4.1.1 Solution by Genetic Algorithm
Consider the convective heat transfer from a spherical reactor of diameter D and temperature
Ts to a fluid at a temperature Ta, with a convective heat transfer coefficient h.
Denoting (Ts - Ta) as θ, h is given by

h = 2 + 0.5θ^0.2/D ---------- (1)

We wish to minimize the heat transfer from the sphere. The objective function is set up in terms of D
and θ, subject to a single constraint, and the genetic algorithm and SOS are employed to obtain the
optimum values of D and θ that minimize the heat transfer.
The heat transfer from the surface area of the spherical reactor by convection is

Q = hAθ ------------ (2)

where h = 2 + 0.5θ^0.2/D and A = πD^2, so that

Q = πθ(2D^2 + 0.5Dθ^0.2)

Converting this to a single-variable problem by substituting θ = 20/D (from the constraint Dθ = 20), it reduces to

Q = 62.83(2D + 0.91D^-0.2)

Boundary conditions: 0 < D < 6.3
String length = 6
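As a sanity check on this setup, the short script below evaluates the single-variable objective Q(D) = 62.83(2D + 0.91D^-0.2) together with an assumed 6-bit decoding in which the decimal value of the string is divided by 10 (so 000011 corresponds to 0.3 m); with these assumptions it closely reproduces the fitness values listed in Table 4.1.

```python
# Objective Q(D) for the spherical reactor and the assumed 6-bit decoding
# (decimal value of the bit string divided by 10, e.g. '000011' -> 0.3 m).

def Q(D: float) -> float:
    """Heat transfer (kW) from the reactor surface as a function of diameter D (m)."""
    return 62.83 * (2.0 * D + 0.91 * D ** -0.2)

def decode(chromosome: str) -> float:
    """6-bit string -> diameter D in metres."""
    return int(chromosome, 2) / 10.0

for s in ("000011", "010100", "011110", "110010"):
    print(s, decode(s), round(Q(decode(s)), 3))
# Prints values close to 110.44, 301.09, 422.88 and 669.74 kW (cf. Table 4.1).
```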
4.1.1.1 Iteration 1
Table 4.1 First Iteration Values of Initial Population
D value | Initial population | Fitness value | Count | Mating pool | Value after crossover & mutation | New D value
0.3 | 000011 | 110.439 | 2 | 000011 | 000010 | 0.2
2.0 | 010100 | 301.094 | 1 | 000011 | 000100 | 0.4
3.0 | 011110 | 422.877 | 1 | 010100 | 010011 | 1.9
5.0 | 110010 | 669.739 | 0 | 011110 | 011111 | 3.3
Mating pairs are selected at random as (1, 4) and (2, 3). The crossover site for the first pair is selected
as 4 and for the second pair as 3. Table 4.1 shows the first-iteration values of the initial population.
4.1.1.2 Iteration2
Table 4.2 Second Iteration Values of Initial Population
Mating pairs are selected at random as (1, 3) and (2, 4). The crossover site for the first pair is selected as
3 and for the second pair as 3. Table 4.2 shows the second-iteration values of the initial population.
4.1.1.3 Iteration3
Table 4.3 Third Iteration Values of Initial Population
4.1.1.4 Iteration 4
Table 4.4 Fourth Iteration Values of Initial Population
From iteration table 4.4, the best value of D is 0.2 m and its fitness value is 104.0184 kW.
The iterations are repeated until the fitness values of the last two iterations are almost the same.
The minimum heat transfer through the surface of the spherical reactor is 102.305 kW, with the diameter of the
reactor being 0.1384 m.
4.2 Solution by Symbiotic Organisms Search (SOS)

h = 2 + 0.5θ^0.2/D

subject to the constraint Dθ = 20.

We wish to minimize the heat transfer from the sphere. The objective function is set up in terms
of D and θ, subject to the single constraint, and the genetic algorithm and SOS are employed to obtain the
optimum values of D and θ that minimize the heat transfer.
The heat transfer from the surface area of the spherical reactor by convection is

Q = hAθ

where h = 2 + 0.5θ^0.2/D and the surface area through which heat transfer occurs is A = 4πr^2 = πD^2, so that

Q = πθ(2D^2 + 0.5Dθ^0.2)

Converting this to a single-variable problem by substituting θ = 20/D:

Q = 62.83(2D + 0.91D^-0.2)
4.2.1 Iteration 1
Step 1: Initialize the ecosystem.
Ecosystem (D values) = [0.3, 2, 3, 5], with D limited to the range 0 to 6.3
Fitness values = [110.43, 301.09, 422.87, 669.73]
Step 4: i = 0
Step 5: i = i + 1
4.2.1.1 Mutualism Phase
Mutual vector = (Xi + Xj)/2 = (X1 + X3)/2 = (0.3 + 3)/2 = 1.65
X1new = 0.3 + (0.8516 × (0.3 - 1.65)) = 0.3 - 1.149 = -0.84, which falls outside the bounds and is set to 0.1
X3new = 3 + (0.623 × (0.3 - 1.65)) = 2.1549
Updated ecosystem = [0.1, 2, 2.15, 5] and the fitness values are [103.18, 301.09, 319.22, 669.73]
4.2.1.2 Commensalism Phase
X1new = 0.1 + (-0.0353 × (0.1 - 5)) = 0.2729
Updated ecosystem = [0.2729, 2, 2.15, 5]
4.2.1.3 Parasitism Phase
Random number = 2, so Xj = X2
X2new = 2 + (0.626 × (0.2729 - 1.136)) = 1.461
Updated ecosystem = [0.2729, 1.461, 2.15, 5] and the fitness values are [108.424, 236.4711, 319.22, 669.73]
4.2.2 Iteration 2
i = 1 + 1 = 2
Xi = X2, Xbest = X1
Mutual vector = (Xi + Xj)/2 = (1.46 + 5)/2 = 3.23
X2new = 1.461 + (0.8516 × (0.2729 - 3.23)) = -1.05, set to the lower bound 0.1
Updated ecosystem = [0.2729, 0.1, 2.15, 5]
The partner organism is updated as Xnew = X + (rand(0,1) × (Xbest - MV × BF)) = 5 + (0.623 × (0.2729 - 3.23)) = 3.155
Updated ecosystem = [0.2729, 0.1, 2.15, 3.15] and the fitness value matrix is [108.424, 103.18, 319.22, 441.28]
X2new = 0.1 + (-0.0353 × (0.1 - 2.15)) = 0.1723
Updated ecosystem = [0.2729, 0.1723, 2.15, 3.15]
Fitness value matrix = [108.424, 102.925, 319.22, 441.28]
Xbest = 0.1723
Mutual vector = (Xi + Xj)/2 = (X2 + X3)/2 = (0.1723 + 2.15)/2 = 1.16
X3new = X3 + (rand(0,1) × (Xbest - MV × BF)) = 2.15 + (0.623 × (0.1723 - 1.16)) = 1.534
Updated ecosystem = [0.2729, 0.1723, 1.534, 3.15] and the fitness matrix is [108.424, 102.925, 245.24, 441.28]
4.2.3 Iteration 3
X3new = 1.534 + (0.8516 × (0.1723 - 2.34)), which falls below the lower bound and is set to 0.1
Updated ecosystem = [0.2729, 0.1723, 0.1, 3.15]
X4new = 3.15 + (0.6236 × (0.1723 - 2.34)) = 1.79
Updated ecosystem = [0.2729, 0.1723, 0.1, 1.79] and the fitness values are [108.424, 102.925, 103.18, 275.82]
Xj = X4
X3new = 0.1 + (-0.0353 × (0.1723 - 1.79)) = 0.1571
Updated ecosystem = [0.2729, 0.1723, 0.1571, 1.79] and the fitness values are [108.424, 102.925, 102.53, 275.82]
Mutual vector = (Xi + Xj)/2 = (X4 + X3)/2 = (0.1571 + 1.79)/2 = 0.97
X4new = 1.79 + (0.6236 × (0.1571 - 0.97))
Updated ecosystem = [0.2729, 0.1723, 0.1571, 1.79] and the fitness matrix is [108.424, 102.925, 102.53, 215.26]
4.2.4 Iteration 4
Select Xj randomly: Xj = X2
Mutual vector = (Xi + Xj)/2 = (X2 + X4)/2 = (0.1723 + 1.79)/2 = 0.98
X4new = 1.79 + (0.6236 × (0.1571 - 0.97))
X2new = 0.1723 + (0.6236 × (0.1571 - 0.98)), which falls below the lower bound and is set to 0.1
Updated ecosystem = [0.2729, 0.1, 0.1571, 1.09] and the updated fitness values are [108.424, 103.18, 102.53, 193.16]
X4new = 1.09 + (-0.0353 × (0.1571 - 0.2729))
Updated ecosystem = [0.2729, 0.1, 0.1571, 1.09] and the updated fitness values are [108.424, 103.18, 102.53, 193.16]
Mutual vector = (0.1 + 1.09)/2 = 0.595
X2new = 0.1 + (0.6236 × (0.1571 - 0.595)) = -0.176, set to the lower bound 0.1
Updated ecosystem = [0.2729, 0.1, 0.1571, 1.09] and the updated fitness values are [108.424, 103.18, 102.53, 193.16]
By the SOS method, the optimum (minimum) value of the heat transfer is 102.53 kW, with the diameter of the
spherical reactor being 0.1571 m.
Comparison of the two methods:
SOS: D = 0.1571 m, Q = 102.53 kW
GA: D = 0.1384 m, Q = 102.305 kW
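As an independent cross-check (not part of the original calculations), a brute-force scan of Q(D) over the allowed range can be used to see how close the GA and SOS answers are to the true minimum of the single-variable objective; the step size below is an arbitrary choice.

```python
# Brute-force scan of Q(D) = 62.83*(2*D + 0.91*D**-0.2) over 0 < D < 6.3.

def Q(D: float) -> float:
    return 62.83 * (2.0 * D + 0.91 * D ** -0.2)

best_D = min((k * 0.0001 for k in range(1, 63000)), key=Q)
print(round(best_D, 4), round(Q(best_D), 2))   # roughly D = 0.136 m, Q = 102.3 kW
```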
CHAPTER 5
5.1 Conclusions
The following can be concluded:
For a spherical reactor whose diameter ranges from 0 to 6.3 m, the optimal
heat removal rate was found to be 102.305 kW at a reactor diameter of
0.138 m using genetic algorithm and symbiotic organisms search optimization with the
same initial population.
The problem of finding the global optimum in a space with many local optima is a
classical problem for all systems that can adapt and learn. Both the genetic algorithm and
symbiotic organisms search provide a comprehensive search methodology for
optimization. GA and SOS are applicable to both continuous and discrete optimization
problems. In global optimization scenarios these optimization techniques often
manifest their strengths: efficient, parallelizable search; the ability to evolve solutions
with multiple objective criteria; and a characterizable and controllable process of
innovation.
Solving a mathematical model using genetic algorithms requires a larger number
of user-assumed process parameters than symbiotic organisms search,
where only a few are required. These process parameters are not constant for every
situation, as they are randomly selected quantities, and a proper selection procedure
(for example, using MATLAB software) must be used for these algorithms to lead to
an optimal solution.
Both methods provide a near-optimal approximation of the solution of the given
problem. This value changes by only very small amounts in subsequent
iterations, and this continues for all further iterations, so the search must be terminated on some
basis such as computational time or rounding of the values of the solution.
Compared to genetic algorithms, symbiotic organisms search is a more modern and
more sophisticated approach, with a powerful way of analysing the
population within its bounds. Hence it can be concluded that the computational time for
symbiotic organisms search is a little less than that of genetic algorithms. Both
methods can be applied to any optimization problem with any number of variables.
CHAPTER 6
6.1 References
Yogesh Jaluria, Design and Optimization of Thermal Systems.
S. Raja Sekaran and G.A. Vijayalakshmi Pai, Neural Networks, Fuzzy Logic and Genetic Algorithms.
F. Askari, M.R. Rahimpour and A. Jahanmiri, Dynamic Simulation and Optimization of a Dual-Type
Methanol Reactor Using Genetic Algorithms, Chemical Engineering and Technology.
Fereshteh Samimi, Mahdi Bayat, M.R. Rahimpour and Peyman Keshavarz, Mathematical Modelling
and Optimization of DME Synthesis in a Two-Stage Spherical Reactor, Journal of Natural Gas and
Engineering.
M.R. Rahimpour and Davood Iranshashi, Dynamic Optimization of a Multi-Stage Spherical, Radial
Flow Reactor for the Naphtha Reforming Process in the Presence of Catalyst Deactivation Using the
Differential Evolution (DE) Method, Journal of Hydrogen Energy.