
Experiment 1: Fuzzy Set Operations

The aim of this experiment is to perform basic operations like union, intersection, Min-Max product,
complement, and difference on fuzzy sets. Fuzzy logic expands classical set theory, allowing partial
membership between 0 and 1. This is essential for real-world scenarios where binary outcomes
aren't sufficient. For instance, union takes the maximum of the two membership values at each
element, while intersection takes the minimum. The complement subtracts each membership value
from 1, and the difference of A and B takes the minimum of A's membership and the complement of
B's. This understanding is foundational for applications in uncertain
environments like weather prediction, decision-making, and machine learning.

Experiment 2: Vehicle Speed Control with Fuzzy Logic


This experiment demonstrates how fuzzy logic can control vehicle speed by fuzzifying inputs such as
speed and distance into linguistic variables like "slow" or "fast." A fuzzy rule base is applied to
determine actions, e.g., "If speed is fast and distance is short, reduce speed." The inference engine
applies these rules, and defuzzification translates results into precise actions. Such systems excel in
adaptive scenarios like autonomous driving, where binary logic fails. The implementation provides
insights into real-time decision-making using fuzzy systems, ensuring smoother, safer vehicle
operations.

Experiment 3: Vector and Matrix Operations on Fuzzy Sets


The focus here is on extending vector and matrix operations to fuzzy systems. Fuzzy vectors and
matrices represent uncertain data, enabling operations like addition (max-based union), scalar
multiplication, and matrix multiplication (the max-min product). These methods enhance decision-
making in uncertain environments, applicable in fields like AI, robotics, and control systems. For
example, fuzzy matrix multiplication aids in modeling relationships between uncertain variables. This
experiment integrates fuzzy logic with linear algebra, paving the way for advanced applications in
uncertain environments.

Experiment 4: Logic Gates Using ANN


The experiment implements logical gates (AND, OR, NAND, NOR, XOR, XNOR) using Artificial Neural
Networks (ANNs). A perceptron model simulates these gates, using weights, biases, and activation
functions. For example, an AND gate outputs 1 only if both inputs are 1. The experiment showcases
how neural networks can simulate fundamental operations, serving as a building block for complex
computational models. This forms the basis for advanced systems like neural network processors and
AI applications, demonstrating the transition from logic-based to learning-based paradigms.

Experiment 5: Linear Regression with One Variable


This experiment involves creating a linear regression model to predict outcomes based on a single
variable. The model follows y = wx + b, where w is the slope and b is the
intercept. The aim is to optimize these parameters to minimize the error between predicted and
actual values. Visualization of the data and predictions aids in understanding the relationship. Such
models are foundational in predictive analytics, where trends from historical data guide decision-
making in domains like finance, healthcare, and marketing.
Experiment 6: Gradient Descent for Weight Optimization
The goal is to implement the gradient descent algorithm to optimize weights and biases in predictive
models. The algorithm calculates gradients to iteratively minimize the cost function, which measures
prediction errors. By adjusting weights and biases, the model improves accuracy. This method is
pivotal in machine learning, especially for training neural networks. Gradient descent's ability to
handle large datasets and complex functions makes it indispensable for applications like
recommendation systems, image recognition, and natural language processing.

Experiment 7: Solving the Knapsack Problem


The Knapsack problem involves selecting items with maximum value while adhering to weight
constraints. Using dynamic programming, the experiment constructs a matrix to track the optimal
value for each weight limit. This classical optimization problem finds applications in resource
allocation, logistics, and investment strategies. Understanding this problem and its solutions fosters
skills in algorithmic thinking, essential for solving real-world constraints efficiently.

Experiment 8: Genetic Algorithm - Crossover and Mutation


Crossover and mutation are genetic algorithm techniques for evolving solutions. Crossover mixes
parent characteristics to create offspring, while mutation introduces variability. Techniques like
single-point, double-point, and multi-point crossovers, along with mutations such as bit-flip or
inversion, maintain genetic diversity. These methods prevent premature convergence and explore
solution spaces effectively, vital for optimization problems like scheduling, routing, and machine
learning.

Experiment 9: Combination Problems with Genetic Algorithm

This experiment uses genetic algorithms to solve combination problems by optimizing chromosomes
through selection, crossover, and mutation. Each chromosome represents a potential solution
evaluated via a fitness function. Iterative improvements yield the best solution. This approach is
robust for complex problems like scheduling, resource optimization, and design challenges where
traditional methods struggle. The experiment illustrates how biological evolution inspires
computational problem-solving.
Index

S.N.  Experiment                                                          Date    Signature    Remarks
1.    To perform basic operations such as union, intersection, Min-Max
      product, complement, difference, etc. on fuzzy sets
2.    Implementation of a fuzzy logic-based application: Vehicle Speed
      Control
3.    To perform vector and matrix operations on fuzzy sets
4.    Implementation of AND, OR, NOR, NAND, XOR and XNOR using ANN
5.    Implementing the model f_w,b for linear regression with one
      variable
6.    Implementing the gradient descent algorithm for finding the
      optimal set of weights and biases
7.    To solve the Knapsack problem
8.    To perform crossover and mutation operations
9.    Use genetic algorithm to solve the problem of combination
Experiment 1

Aim: To perform basic operations such as union, intersection, Min-Max product, complement, difference, etc. on fuzzy sets

Software used: Google Colaboratory

Theory:

Fuzzy sets extend classical set theory by allowing partial membership,
represented by a membership function ranging from 0 to 1. Basic
operations on fuzzy sets include:

● Union: The membership value of the union of two fuzzy sets is the
maximum of the membership values of the individual sets at each
element.
● Intersection: The membership value of the intersection is the
minimum of the membership values.
● Complement: The complement of a fuzzy set is obtained by
subtracting the membership value from 1.
● Difference: The difference between two fuzzy sets is calculated
by taking the minimum of the membership value of the first set
and the complement of the second set.
● Min-Max Product (max-min composition): for two fuzzy relations R
and S, the composed membership at a pair (a, c) is the maximum,
over all intermediate elements b, of min(μR(a, b), μS(b, c)), as
implemented in the composition code later in this experiment.

These operations facilitate reasoning in uncertain environments.

Code:
def union(A, B):
    result = {}
    for i in A:
        if A[i] > B[i]:
            result[i] = A[i]
        else:
            result[i] = B[i]
    print('Union Of Two Sets : ', result)

def intersect(A, B):
    result = {}
    for i in A:
        # intersection keeps the minimum membership value
        # (the original kept the maximum, which duplicated union)
        if A[i] < B[i]:
            result[i] = A[i]
        else:
            result[i] = B[i]
    print('Intersection Of Two Sets : ', result)

def complement(A, B):
    # takes both sets explicitly (the original read B from the global scope)
    result = {}
    result1 = {}
    for i in A:
        result[i] = round(1 - A[i], 2)
    for i in B:
        result1[i] = round(1 - B[i], 2)
    print('Complement of Set A : ', result)
    print('Complement of Set B : ', result1)

def difference(A, B):
    result = {}
    for i in A:
        result[i] = round(min(A[i], 1 - B[i]), 2)
    print('Difference is :', result)

def sum(A, B):  # algebraic sum; note this shadows Python's built-in sum
    result = {}
    for key in A:
        if key in B:
            result[key] = round(A[key] + B[key] - (A[key] * B[key]), 2)
    print('Sum is : ', result)

def dot_product(A, B):
    result = 0
    for key in A:
        if key in B:
            result += A[key] * B[key]
    print("Dot product is:", result)

def scalar_a(A, C):
    result = {}
    for key in A:
        result[key] = A[key] * C
    print("Set A multiplied by", C, "is", result)

def scalar_b(B, C):
    result = {}
    for key in B:
        result[key] = B[key] * C
    print("Set B multiplied by", C, "is", result)

def cart_product(A, B):
    cart_prod = {}
    for elem_A, mem_A in A.items():
        for elem_B, mem_B in B.items():
            cart_prod[(elem_A, elem_B)] = min(mem_A, mem_B)
    print('Cartesian Product is : ', cart_prod)

Taking Input from the user to build fuzzy set
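The input-capture code appears only as a screenshot in the original. A minimal sketch consistent with how A and B are used below, as dictionaries mapping each element to its membership value, might look like this (build_fuzzy_set is an assumed name, not taken from the manual):

def build_fuzzy_set(name):
    # Read 'element membership' pairs until the user types 'done'
    # (assumed input format; the original screen capture is not available)
    fuzzy_set = {}
    print(f"Enter element and membership value for {name} (e.g. 'x1 0.4'). Type 'done' when finished:")
    while True:
        entry = input()
        if entry.lower() == 'done':
            break
        element, value = entry.split()
        fuzzy_set[element] = float(value)
    return fuzzy_set

A = build_fuzzy_set("A")
B = build_fuzzy_set("B")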

Printing the fuzzy set


print("Fuzzy Set A : ",A)
print("Fuzzy Set B : ",B)

Asking the user to choose the operation they want to perform

print('Fuzzy Logic Basic Maths Operations')


print("0 for exit")
print("1.Union")
print("2.Intersection")
print("3.Complement")
print("4.Difference")

5
print("5.Sum")
print("6.Dot Products")
print("7.Product of integer with set A")
print("8.Product of integer with set B")
print("9.Cartesian Product")
print("10.Print All")

choice = int(input("Enter Your Choice : "))

if choice == 1:
    union(A, B)
elif choice == 2:
    intersect(A, B)
elif choice == 3:
    complement(A, B)
elif choice == 4:
    difference(A, B)
elif choice == 5:
    sum(A, B)
elif choice == 6:
    dot_product(A, B)
elif choice == 7:
    alpha = int(input("Enter The Value you want to multiply SET A with : "))
    scalar_a(A, alpha)
elif choice == 8:
    beta = int(input("Enter The Value you want to multiply SET B with : "))
    scalar_b(B, beta)
elif choice == 9:
    import numpy as np
    cart_product(A, B)
    elements_A = list(A.keys())
    elements_B = list(B.keys())
    relation_matrix = np.zeros((len(elements_A), len(elements_B)))
    for i, elem_A in enumerate(elements_A):
        for j, elem_B in enumerate(elements_B):
            relation_matrix[i, j] = min(A[elem_A], B[elem_B])
    print("Relation Matrix:")
    header = "      " + " ".join(f"{elem_B:>4}" for elem_B in elements_B)
    print(header)
    for i, elem_A in enumerate(elements_A):
        row = f"{elem_A:>4}: " + " ".join(f"{relation_matrix[i, j]:>4.1f}" for j in range(len(elements_B)))
        print(row)
elif choice == 10:
    union(A, B)
    intersect(A, B)
    complement(A, B)
    difference(A, B)
    sum(A, B)
    dot_product(A, B)
    alpha = int(input("Enter The Value you want to multiply SET A with : "))
    scalar_a(A, alpha)
    beta = int(input("Enter The Value you want to multiply SET B with : "))
    scalar_b(B, beta)
elif choice == 0:
    exit()
else:
    print("Invalid Choice")

Output:

Implementing Min Max Logic

import numpy as np

def get_fuzzy_relation(name):
    relation = {}
    print(f"Enter pairs and membership values for {name} (format: 'element1 element2 value'). Type 'done' when finished:")
    while True:
        user_input = input()
        if user_input.lower() == 'done':
            break
        try:
            element1, element2, value = user_input.split()
            relation[(element1, element2)] = float(value)
        except ValueError:
            print("Invalid input. Please enter in the format: 'element1 element2 value'")
    return relation

print("Define relation R:")
R = get_fuzzy_relation("R")

print("Define relation S:")
S = get_fuzzy_relation("S")

elements_A = sorted(set(k[0] for k in R.keys()))
elements_B = sorted(set(k[1] for k in R.keys()))
elements_C = sorted(set(k[1] for k in S.keys()))

composition_matrix = np.zeros((len(elements_A), len(elements_C)))

for i, a in enumerate(elements_A):
    for j, c in enumerate(elements_C):
        max_min = 0
        for b in elements_B:
            if (a, b) in R and (b, c) in S:
                max_min = max(max_min, min(R[(a, b)], S[(b, c)]))
        composition_matrix[i, j] = max_min

print("Min-Max Composition Result Matrix:")
header = "      " + " ".join(f"{c:>4}" for c in elements_C)
print(header)
for i, a in enumerate(elements_A):
    row = f"{a:>4}: " + " ".join(f"{composition_matrix[i, j]:>4.1f}" for j in range(len(elements_C)))
    print(row)
Output

Result: Performed basic operations such as union, intersection, Min-Max product, complement, difference, etc. on fuzzy sets

Experiment 2

Aim: Implementation of a fuzzy logic-based application: Vehicle Speed Control

Software used: Google Colaboratory

Theory:
Implementation
Input Variables: Common inputs include the current speed of the
vehicle, the distance to the vehicle ahead, and acceleration. These
inputs are fuzzified using membership functions to categorize them
into linguistic variables like "slow," "medium," and "fast."
Fuzzy Rule Base: A set of rules is defined, such as:

● If speed is "fast" and distance is "short," then "decrease speed significantly."
● If speed is "slow" and distance is "long," then "increase speed slightly."

Inference Engine: This processes the fuzzy rules to determine the
appropriate output actions based on the input values.
Defuzzification: The final step involves converting the fuzzy output
into a precise control action, typically using methods like the centroid
technique, to adjust the vehicle's speed effectively.
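As a reference for the centroid step, here is a minimal sketch of the calculation; centroid is an illustrative name, not taken from the code below, which computes the same quantity inline from per-rule areas and centers:

def centroid(areas, centroids):
    # Crisp output = sum(area_i * center_i) / sum(area_i)
    total = sum(areas)
    return sum(a * c for a, c in zip(areas, centroids)) / total if total else 0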

Code:

import numpy as np

Speed = 80
Acceleration = 105

print("The speed input is: ", Speed)
print("The Acceleration input is: ", Acceleration)
print("\n")

# Functions for open-left and open-right fuzzification
def openLeft(x, alpha, beta):
    if x < alpha:
        return 1
    if alpha <= x <= beta:
        return (beta - x) / (beta - alpha)
    else:
        return 0

def openRight(x, alpha, beta):
    if x < alpha:
        return 0
    if alpha <= x <= beta:
        return (x - alpha) / (beta - alpha)
    else:
        return 1  # full membership beyond beta (was 0, which cut the open set off)

# Function for triangular fuzzification
def triangular(x, a, b, c):
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0)

# Fuzzy partition
def partition(x):
    NL = NM = NS = ZE = PS = PM = PL = 0

    if 0 < x < 60:
        NL = openLeft(x, 30, 60)
    if 30 < x < 90:
        NM = triangular(x, 30, 60, 90)
    if 60 < x < 120:
        NS = triangular(x, 60, 90, 120)
    if 90 < x < 150:
        ZE = triangular(x, 90, 120, 150)
    if 120 < x < 180:
        PS = triangular(x, 120, 150, 180)
    if 150 < x < 210:
        PM = triangular(x, 150, 180, 210)  # was (120, 150, 180), which duplicated PS
    if 180 < x < 240:
        PL = openRight(x, 180, 210)

    return NL, NM, NS, ZE, PS, PM, PL

# Getting fuzzy values for all the inputs for all the fuzzy sets
NLSD, NMSD, NSSD, ZESD, PSSD, PMSD, PLSD = partition(Speed)
NLAC, NMAC, NSAC, ZEAC, PSAC, PMAC, PLAC = partition(Acceleration)

# Display the fuzzy values for all fuzzy sets
outPut = [[NLSD, NMSD, NSSD, ZESD, PSSD, PMSD, PLSD],
          [NLAC, NMAC, NSAC, ZEAC, PSAC, PMAC, PLAC]]
print("The fuzzy values of the crisp inputs")
print(["NL", "NM", "NS", "ZE", "PS", "PM", "PL"])
print(np.round(outPut, 2))

# Rules implementation
def compare(TC1, TC2):
    TC = 0
    if TC1 > TC2 and TC1 != 0 and TC2 != 0:
        TC = TC2
    else:
        TC = TC1
    if TC1 == 0 and TC2 != 0:
        TC = TC2
    if TC2 == 0 and TC1 != 0:
        TC = TC1
    return TC

def rule(NLSD, NMSD, NSSD, ZESD, PSSD, PMSD, PLSD, NLAC, NMAC, NSAC, ZEAC, PSAC, PMAC, PLAC):
    PLTC1 = min(NLSD, ZEAC)
    PLTC2 = min(ZESD, NLAC)
    PLTC = compare(PLTC1, PLTC2)

    PMTC1 = min(NMSD, ZEAC)
    PMTC2 = min(ZESD, NMAC)
    PMTC = compare(PMTC1, PMTC2)

    PSTC1 = min(NSSD, PSAC)
    PSTC2 = min(ZESD, NSAC)
    PSTC = compare(PSTC1, PSTC2)

    NSTC = min(PSSD, NSAC)
    NLTC = min(PLSD, ZEAC)

    return PLTC, PMTC, PSTC, NSTC, NLTC

PLTC, PMTC, PSTC, NSTC, NLTC = rule(NLSD, NMSD, NSSD, ZESD, PSSD, PMSD, PLSD, NLAC, NMAC, NSAC, ZEAC, PSAC, PMAC, PLAC)

print("\n")
# Display the fuzzy values for all rules
outPutRules = [[PLTC, PMTC, PSTC, NSTC, NLTC]]
print("The fuzzy output: ")
print(["PLTC", "PMTC", "PSTC", "NSTC", "NLTC"])
print(np.round(outPutRules, 2))

# De-fuzzification
def areaTR(mu, a, b, c):
    x1 = mu * (b - a) + a
    x2 = c - mu * (c - b)
    d1 = c - a
    d2 = x2 - x1
    aTR = (1 / 2) * mu * (d1 + d2)
    return aTR

def areaOL(mu, alpha, beta):
    xOL = beta - mu * (beta - alpha)
    return 1 / 2 * mu * (beta + xOL), beta / 2

def areaOR(mu, alpha, beta):
    xOR = (beta - alpha) * mu + alpha
    aOR = (1 / 2) * mu * ((beta - alpha) + (beta - xOR))
    return aOR, (beta - alpha) / 2 + alpha

def defuzzyfication(PLTC, PMTC, PSTC, NSTC, NLTC):
    areaPL = areaPM = areaPS = areaNS = areaNL = 0
    cPL = cPM = cPS = cNS = cNL = 0

    if PLTC != 0:
        areaPL, cPL = areaOR(PLTC, 180, 210)

    if PMTC != 0:
        areaPM = areaTR(PMTC, 150, 180, 210)
        cPM = 180

    if PSTC != 0:
        areaPS = areaTR(PSTC, 120, 150, 180)
        cPS = 150

    if NSTC != 0:
        areaNS = areaTR(NSTC, 60, 90, 120)
        cNS = 90

    if NLTC != 0:
        areaNL, cNL = areaOL(NLTC, 30, 60)

    numerator = areaPL * cPL + areaPM * cPM + areaPS * cPS + areaNS * cNS + areaNL * cNL
    denominator = areaPL + areaPM + areaPS + areaNS + areaNL

    if denominator == 0:
        print("No rules exist to give the result")
        return 0
    else:
        crispOutput = numerator / denominator
        return crispOutput

crispOutputFinal = defuzzyfication(PLTC, PMTC, PSTC, NSTC, NLTC)

if crispOutputFinal != 0:
    print("\nThe crisp TC value is: ", crispOutputFinal)

Output

Result: Implemented a fuzzy logic-based vehicle speed control application

Experiment 3

Aim: To perform vector and matrix operations on fuzzy sets

Software used: Google Colaboratory

Theory:

Performing vector and matrix operations on fuzzy sets extends
traditional linear algebra to handle uncertainty and partial truth.
Here’s an overview:

● Fuzzy Vectors: A fuzzy vector is composed of fuzzy sets, where
each component represents a fuzzy value. Operations include:
○ Addition: The fuzzy addition of two vectors is performed
by adding corresponding components using a defined
operation (e.g., max or min for union, and sum for
traditional addition).
○ Scalar Multiplication: Each component of the fuzzy vector
is multiplied by a scalar, affecting the membership values
according to the chosen scaling method.
● Fuzzy Matrices: A fuzzy matrix consists of fuzzy sets in its
entries, allowing for operations like:
○ Addition: Similar to vectors, fuzzy matrix addition
involves adding corresponding entries using max or min
operations.
○ Multiplication: Fuzzy matrix multiplication combines rows
and columns using operations like the min-max product or
the product-sum method.

These operations enable more complex modeling of systems with
uncertainty, enhancing applications such as control systems and
decision-making processes.
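The code that follows reviews general NumPy vector and matrix mechanics. As a complement, here is a minimal sketch of the min-max (max-min) product described above; max_min_product is an assumed name, not part of the manual's code:

import numpy as np

def max_min_product(R, S):
    # Entry (i, j) is the max over k of min(R[i, k], S[k, j]);
    # requires R.shape[1] == S.shape[0]
    m, n = R.shape[0], S.shape[1]
    T = np.zeros((m, n))
    for i in range(m):
        for j in range(n):
            T[i, j] = np.max(np.minimum(R[i, :], S[:, j]))
    return T

# Example: composing two fuzzy relations given as membership matrices
R = np.array([[0.2, 0.8], [0.6, 0.4]])
S = np.array([[0.5, 0.9], [0.7, 0.3]])
print(max_min_product(R, S))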

Code:

import numpy as np  # it is an unofficial standard to use np for numpy
import time

a = np.zeros(4)
print(f"np.zeros(4) : a = {a}, a shape = {a.shape}, a data type = {a.dtype}")
a = np.zeros((4,))
print(f"np.zeros(4,) : a = {a}, a shape = {a.shape}, a data type = {a.dtype}")
a = np.random.random_sample(4)
print(f"np.random.random_sample(4): a = {a}, a shape = {a.shape}, a data type = {a.dtype}")
a = np.arange(4.)
print(f"np.arange(4.): a = {a}, a shape = {a.shape}, a data type = {a.dtype}")
a = np.random.rand(4)
print(f"np.random.rand(4): a = {a}, a shape = {a.shape}, a data type = {a.dtype}")
a = np.array([5,4,3,2])
print(f"np.array([5,4,3,2]): a = {a}, a shape = {a.shape}, a data type = {a.dtype}")
a = np.array([5.,4,3,2])
print(f"np.array([5.,4,3,2]): a = {a}, a shape = {a.shape}, a data type = {a.dtype}")

a = np.arange(10)
print(a)
# access an element
print(f"a[2].shape: {a[2].shape} a[2] = {a[2]}, Accessing an element returns a scalar")
# access the last element; negative indexes count from the end
print(f"a[-1] = {a[-1]}")
# indexes must be within the range of the vector or they will produce an error
try:
    c = a[10]
except Exception as e:
    print("The error message you'll see is:")
    print(e)

a = np.arange(10)
print(f"a = {a}")
# access 5 consecutive elements (start:stop:step)
c = a[2:7:1]; print("a[2:7:1] = ", c)
# access 3 elements separated by two
c = a[2:7:2]; print("a[2:7:2] = ", c)
# access all elements index 3 and above
c = a[3:]; print("a[3:] = ", c)
# access all elements below index 3
c = a[:3]; print("a[:3] = ", c)
# access all elements
c = a[:]; print("a[:] = ", c)

Output

a = np.array([1,2,3,4])
print(f"a : {a}")
# negate elements of a
b = -a
print(f"b = -a : {b}")
# sum all elements of a, returns a scalar
b = np.sum(a)
print(f"b = np.sum(a) : {b}")
b = np.mean(a)
print(f"b = np.mean(a): {b}")
b = a**2
print(f"b = a**2 : {b}")

Output

a = np.array([1, 2, 3, 4])
# multiply a by a scalar
b = 5 * a
print(f"b = 5 * a : {b}")

def my_dot(a, b):
    """
    Compute the dot product of two vectors
    Args:
      a (ndarray (n,)): input vector
      b (ndarray (n,)): input vector with same dimension as a
    Returns:
      x (scalar): dot product of a and b
    """
    x = 0
    for i in range(a.shape[0]):
        x = x + a[i] * b[i]
    return x

# test 1-D
a = np.array([1, 2, 3, 4])
b = np.array([-1, 4, 3, 2])
print(f"my_dot(a, b) = {my_dot(a, b)}")

a = np.array([1, 2, 3, 4])
b = np.array([-1, 4, 3, 2])
c = np.dot(a, b)
print(f"NumPy 1-D np.dot(a, b) = {c}, np.dot(a, b).shape = {c.shape} ")
c = np.dot(b, a)
print(f"NumPy 1-D np.dot(b, a) = {c}, np.dot(a, b).shape = {c.shape} ")

np.random.seed(1)
a = np.random.rand(10000000)  # very large arrays
b = np.random.rand(10000000)
tic = time.time()  # capture start time
c = np.dot(a, b)
toc = time.time()  # capture end time
print(f"np.dot(a, b) = {c:.4f}")
print(f"Vectorized version duration: {1000*(toc-tic):.4f} ms ")
tic = time.time()  # capture start time
c = my_dot(a, b)
toc = time.time()  # capture end time
print(f"my_dot(a, b) = {c:.4f}")
print(f"loop version duration: {1000*(toc-tic):.4f} ms ")
del(a)
del(b)  # remove these big arrays from memory

X = np.array([[1],[2],[3],[4]])
w = np.array([2])
c = np.dot(X[1], w)
print(f"X[1] has shape {X[1].shape}")
print(f"w has shape {w.shape}")
print(f"c has shape {c.shape}")

a = np.zeros((1, 5))
print(f"a shape = {a.shape}, a = {a}")
a = np.zeros((2, 1))
print(f"a shape = {a.shape}, a = {a}")
a = np.random.random_sample((1, 1))
print(f"a shape = {a.shape}, a = {a}")

a = np.array([[5], [4], [3]])
print(f" a shape = {a.shape}, np.array: a = {a}")
a = np.array([[5],   # One can also
              [4],   # separate values
              [3]])  # into separate rows
print(f" a shape = {a.shape}, np.array: a = {a}")

a = np.arange(6).reshape(-1, 2)  # reshape is a convenient way to create matrices
print(f"a.shape: {a.shape}, \na= {a}")
# access an element
print(f"\na[2,0].shape: {a[2, 0].shape}, a[2,0] = {a[2, 0]}, type(a[2,0]) = {type(a[2, 0])} Accessing an element returns a scalar\n")
# access a row
print(f"a[2].shape: {a[2].shape}, a[2] = {a[2]}, type(a[2]) = {type(a[2])}")

a = np.arange(6).reshape(-1, 2)
print(a)

a = np.arange(20).reshape(-1, 10)
print(f"a = \n{a}")
# access 5 consecutive elements (start:stop:step)
print("a[0, 2:7:1] = ", a[0, 2:7:1], ", a[0, 2:7:1].shape =", a[0, 2:7:1].shape, "a 1-D array")
# access 5 consecutive elements (start:stop:step) in two rows
print("a[:, 2:7:1] = \n", a[:, 2:7:1], ", a[:, 2:7:1].shape =", a[:, 2:7:1].shape, "a 2-D array")
# access all elements
print("a[:,:] = \n", a[:,:], ", a[:,:].shape =", a[:,:].shape)
# access all elements in one row (very common usage)
print("a[1,:] = ", a[1,:], ", a[1,:].shape =", a[1,:].shape, "a 1-D array")
# same as
print("a[1] = ", a[1], ", a[1].shape =", a[1].shape, "a 1-D array")

Result: Performed basic matrix and vector operations such as slicing, reshaping, etc.

Experiment 4

Aim: Implementation of AND, OR, NOR, NAND, XOR and XNOR using ANN

Software used: Google Colaboratory

Theory:

Implementing logical operations like AND, OR, NOR, NAND, XOR,
and XNOR using artificial neural networks (ANNs) involves
designing simple feedforward networks with appropriate activation
functions. Here’s a concise overview:

● Network Structure: Each operation can typically be
implemented with a small network consisting of:
○ Input Layer: Two neurons representing binary inputs (0
or 1).
○ Hidden Layer: One or more neurons (depending on the
operation).
○ Output Layer: A single neuron for the output.
● Activation Functions: Commonly, the sigmoid or step
activation function is used to map outputs between 0 and 1.
● Training Data: The networks are trained on binary inputs and
their corresponding logical outputs. For example:
○ AND: Input (0,0) → 0, (0,1) → 0, (1,0) → 0, (1,1) → 1.
○ OR: Input (0,0) → 0, (0,1) → 1, (1,0) → 1, (1,1) → 1.
○ XOR: Input (0,0) → 0, (0,1) → 1, (1,0) → 1, (1,1) → 0.
● Training Process: The ANN is trained using backpropagation
with a mean squared error loss function to adjust weights and
biases until the output matches the expected results for each
logical operation.
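As a quick arithmetic check of the perceptron weights used in the code below (with the AND bias corrected to -1.5): for input (1, 1), v = 1*1 + 1*1 - 1.5 = 0.5 >= 0, so the unit step outputs 1; for (0, 1), v = 1 - 1.5 = -0.5 < 0, so it outputs 0, matching the AND truth table. A bias of -0.5 would fire on a single 1 and implement OR instead. Note that the code below uses such hand-chosen weights rather than backpropagation training, composing AND, OR, and NOT to obtain XOR and XNOR, which are not linearly separable.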

Code:

Implementing AND GATE

# importing Python library
import numpy as np

# define Unit Step Function
def unitStep(v):
    if v >= 0:
        return 1
    else:
        return 0

# design Perceptron Model
def perceptronModel(x, w, b):
    v = np.dot(w, x) + b
    y = unitStep(v)
    return y

# AND Logic Function
# w1 = 1, w2 = 1, b = -1.5
def AND_logicFunction(x):
    w = np.array([1, 1])
    b = -1.5  # was -0.5, which fires on a single 1 and behaves like OR
    return perceptronModel(x, w, b)

# testing the Perceptron Model
test1 = np.array([0, 1])
test2 = np.array([1, 1])
test3 = np.array([0, 0])
test4 = np.array([1, 0])

print("AND({}, {}) = {}".format(0, 1, AND_logicFunction(test1)))
print("AND({}, {}) = {}".format(1, 1, AND_logicFunction(test2)))
print("AND({}, {}) = {}".format(0, 0, AND_logicFunction(test3)))
print("AND({}, {}) = {}".format(1, 0, AND_logicFunction(test4)))

Output of AND Gate

Implementing OR Gate:

# importing Python library
import numpy as np

# define Unit Step Function
def unitStep(v):
    if v >= 0:
        return 1
    else:
        return 0

# design Perceptron Model
def perceptronModel(x, w, b):
    v = np.dot(w, x) + b
    y = unitStep(v)
    return y

# OR Logic Function
# w1 = 1, w2 = 1, b = -0.5
def OR_logicFunction(x):
    w = np.array([1, 1])
    b = -0.5
    return perceptronModel(x, w, b)

# testing the Perceptron Model
test1 = np.array([0, 1])
test2 = np.array([1, 1])
test3 = np.array([0, 0])
test4 = np.array([1, 0])

print("OR({}, {}) = {}".format(0, 1, OR_logicFunction(test1)))
print("OR({}, {}) = {}".format(1, 1, OR_logicFunction(test2)))
print("OR({}, {}) = {}".format(0, 0, OR_logicFunction(test3)))
print("OR({}, {}) = {}".format(1, 0, OR_logicFunction(test4)))  # test4 was defined but never printed

Output of OR Gate

Implementing NOR Gate:


# importing Python library
import numpy as np

# define Unit Step Function
def unitStep(v):
    if v >= 0:
        return 1
    else:
        return 0

# design Perceptron Model
def perceptronModel(x, w, b):
    v = np.dot(w, x) + b
    y = unitStep(v)
    return y

# NOT Logic Function
# wNOT = -1, bNOT = 0.5
def NOT_logicFunction(x):
    wNOT = -1
    bNOT = 0.5
    return perceptronModel(x, wNOT, bNOT)

# OR Logic Function
# w1 = 1, w2 = 1, bOR = -0.5
def OR_logicFunction(x):
    w = np.array([1, 1])
    bOR = -0.5
    return perceptronModel(x, w, bOR)

# NOR Logic Function
# with OR and NOT
# function calls in sequence
def NOR_logicFunction(x):
    output_OR = OR_logicFunction(x)
    output_NOT = NOT_logicFunction(output_OR)
    return output_NOT

# testing the Perceptron Model
test1 = np.array([0, 1])
test2 = np.array([1, 1])
test3 = np.array([0, 0])
test4 = np.array([1, 0])

print("NOR({}, {}) = {}".format(0, 1, NOR_logicFunction(test1)))
print("NOR({}, {}) = {}".format(1, 1, NOR_logicFunction(test2)))
print("NOR({}, {}) = {}".format(0, 0, NOR_logicFunction(test3)))
print("NOR({}, {}) = {}".format(1, 0, NOR_logicFunction(test4)))

Output of NOR Gate

Implementing NAND Gate


# importing Python library
import numpy as np

# define Unit Step Function
def unitStep(v):
    if v >= 0:
        return 1
    else:
        return 0

# design Perceptron Model
def perceptronModel(x, w, b):
    v = np.dot(w, x) + b
    y = unitStep(v)
    return y

# NOT Logic Function
# wNOT = -1, bNOT = 0.5
def NOT_logicFunction(x):
    wNOT = -1  # was 1, which makes the neuron output 1 for both inputs
    bNOT = 0.5
    return perceptronModel(x, wNOT, bNOT)

# AND Logic Function
# w1 = 1, w2 = 1, bAND = -1.5
def AND_logicFunction(x):
    w = np.array([1, 1])
    bAND = -1.5  # was -0.5, which behaves like OR
    return perceptronModel(x, w, bAND)

# NAND Logic Function
# with AND and NOT
# function calls in sequence
def NAND_logicFunction(x):
    output_AND = AND_logicFunction(x)
    output_NOT = NOT_logicFunction(output_AND)
    return output_NOT

# testing the Perceptron Model
test1 = np.array([0, 1])
test2 = np.array([1, 1])
test3 = np.array([0, 0])
test4 = np.array([1, 0])

print("NAND({}, {}) = {}".format(0, 1, NAND_logicFunction(test1)))
print("NAND({}, {}) = {}".format(1, 1, NAND_logicFunction(test2)))
print("NAND({}, {}) = {}".format(0, 0, NAND_logicFunction(test3)))
print("NAND({}, {}) = {}".format(1, 0, NAND_logicFunction(test4)))

Output of NAND gate

Implementation of XOR Gate


# importing Python library
import numpy as np

# define Unit Step Function
def unitStep(v):
    if v >= 0:
        return 1
    else:
        return 0

# design Perceptron Model
def perceptronModel(x, w, b):
    v = np.dot(w, x) + b
    y = unitStep(v)
    return y

# NOT Logic Function
# wNOT = -1, bNOT = 0.5
def NOT_logicFunction(x):
    wNOT = -1
    bNOT = 0.5
    return perceptronModel(x, wNOT, bNOT)

# AND Logic Function
# here w1 = wAND1 = 1,
# w2 = wAND2 = 1, bAND = -1.5
def AND_logicFunction(x):
    w = np.array([1, 1])
    bAND = -1.5  # was -0.5, which behaves like OR
    return perceptronModel(x, w, bAND)

# OR Logic Function
# w1 = 1, w2 = 1, bOR = -0.5
def OR_logicFunction(x):
    w = np.array([1, 1])
    bOR = -0.5
    return perceptronModel(x, w, bOR)

# XOR Logic Function
# XOR(x1, x2) = AND(OR(x1, x2), NAND(x1, x2))
# with AND, OR and NOT function calls in sequence
def XOR_logicFunction(x):
    y1 = AND_logicFunction(x)
    y2 = OR_logicFunction(x)
    y3 = NOT_logicFunction(y1)
    final_x = np.array([y2, y3])
    finalOutput = AND_logicFunction(final_x)
    return finalOutput

# testing the Perceptron Model
test1 = np.array([0, 1])
test2 = np.array([1, 1])
test3 = np.array([0, 0])
test4 = np.array([1, 0])

print("XOR({}, {}) = {}".format(0, 1, XOR_logicFunction(test1)))
print("XOR({}, {}) = {}".format(1, 1, XOR_logicFunction(test2)))
print("XOR({}, {}) = {}".format(0, 0, XOR_logicFunction(test3)))
print("XOR({}, {}) = {}".format(1, 0, XOR_logicFunction(test4)))

Output of XOR Gate

Implementation of XNOR Gate


# importing Python library
import numpy as np

# define Unit Step Function
def unitStep(v):
    if v >= 0:
        return 1
    else:
        return 0

# design Perceptron Model
def perceptronModel(x, w, b):
    v = np.dot(w, x) + b
    y = unitStep(v)
    return y

# NOT Logic Function
# wNOT = -1, bNOT = 0.5
def NOT_logicFunction(x):
    wNOT = -1  # was 1, which makes the neuron output 1 for both inputs
    bNOT = 0.5
    return perceptronModel(x, wNOT, bNOT)

# AND Logic Function
# w1 = 1, w2 = 1, bAND = -1.5
def AND_logicFunction(x):
    w = np.array([1, 1])
    bAND = -1.5  # was -0.5, which behaves like OR
    return perceptronModel(x, w, bAND)

# OR Logic Function
# here w1 = wOR1 = 1,
# w2 = wOR2 = 1, bOR = -0.5
def OR_logicFunction(x):
    w = np.array([1, 1])
    bOR = -0.5
    return perceptronModel(x, w, bOR)

# XNOR Logic Function
# XNOR(x1, x2) = OR(AND(x1, x2), NOR(x1, x2))
# with AND, OR and NOT function calls in sequence
def XNOR_logicFunction(x):
    y1 = OR_logicFunction(x)
    y2 = AND_logicFunction(x)
    y3 = NOT_logicFunction(y1)
    final_x = np.array([y2, y3])
    finalOutput = OR_logicFunction(final_x)
    return finalOutput

# testing the Perceptron Model
test1 = np.array([0, 1])
test2 = np.array([1, 1])
test3 = np.array([0, 0])
test4 = np.array([1, 0])

print("XNOR({}, {}) = {}".format(0, 1, XNOR_logicFunction(test1)))
print("XNOR({}, {}) = {}".format(1, 1, XNOR_logicFunction(test2)))
print("XNOR({}, {}) = {}".format(0, 0, XNOR_logicFunction(test3)))
print("XNOR({}, {}) = {}".format(1, 0, XNOR_logicFunction(test4)))

Output of XNOR

Result: Successfully implemented the basic logic gates using perceptron models

Experiment 5

Aim: Implementing the model f_w,b for linear regression with one variable

Software used: Google Colaboratory

Theory:

Implementing the model f_w,b for linear regression with one


variable involves creating a mathematical representation and
algorithm for predicting an output based on a single input feature.
Here's a concise outline:

Model Representation

● Equation: The linear regression model is represented as
y = wx + b, where:
○ y = predicted output
○ x = input feature (independent variable)
○ w = weight (slope of the line)
○ b = bias (intercept)

Steps for Implementation

1. Data Preparation
2. Initialize Parameters
3. Define the Loss Function (see the cost sketch after this list)
4. Gradient Descent Optimization
5. Iterate
6. Prediction
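For step 3, the manual leaves the loss unspecified here; a common choice, and the one Experiment 6's compute_cost implements, is the squared-error cost (squared_error_cost below is an illustrative name):

def squared_error_cost(x, y, w, b):
    # x, y are 1-D NumPy arrays of m examples
    # J(w, b) = (1 / (2 * m)) * sum over i of (w * x[i] + b - y[i])**2
    m = x.shape[0]
    cost = 0
    for i in range(m):
        cost += (w * x[i] + b - y[i]) ** 2
    return cost / (2 * m)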

Code:
import numpy as np
import matplotlib.pyplot as plt

x_train = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0])
y_train = np.array([300.0, 500.0, 700.0, 900.0, 1100.0, 1300.0, 1500.0, 1700.0, 1900.0])
print(f"x_train = {x_train}")
print(f"y_train = {y_train}")

# m is the number of training examples
print(f"x_train.shape: {x_train.shape}")
m = x_train.shape[0]
print(f"Number of training examples is: {m}")

# m is the number of training examples
m = len(x_train)
print(f"Number of training examples is: {m}")

i = 0  # Change this to 1 to see (x^1, y^1)
x_i = x_train[i]
y_i = y_train[i]
print(f"(x^({i}), y^({i})) = ({x_i}, {y_i})")

# Plot the data points
plt.scatter(x_train, y_train, marker='x', c='r')
# Set the title
plt.title("Housing Prices")
# Set the y-axis label
plt.ylabel('Price (in 1000s of dollars)')
# Set the x-axis label
plt.xlabel('Size (1000 sqft)')
plt.show()

w = 200
b = 100

def compute_model_output(x, w, b):
    """
    Computes the prediction of a linear model
    Args:
      x (ndarray (m,)): Data, m examples
      w,b (scalar): model parameters
    Returns:
      f_wb (ndarray (m,)): model prediction
    """
    m = x.shape[0]
    f_wb = np.zeros(m)
    for i in range(m):
        f_wb[i] = w * x[i] + b

    return f_wb

tmp_f_wb = compute_model_output(x_train, w, b)

# Plot our model prediction
plt.plot(x_train, tmp_f_wb, c='b', label='Our Prediction')

# Plot the data points
plt.scatter(x_train, y_train, marker='x', c='r', label='Actual Values')

# Set the title
plt.title("Housing Prices")
# Set the y-axis label
plt.ylabel('Price (in 1000s of dollars)')
# Set the x-axis label
plt.xlabel('Size (1000 sqft)')
plt.legend()
plt.show()

Output:

Result: Successfully created a model for single variable

Experiment 6

Aim: Implementing the gradient descent algorithm for finding the optimal set of weights and biases

Software used: Google Colaboratory

Theory:
Data Preparation:
Gather the dataset that includes input features and their corresponding
target values. Split the dataset into training and testing sets.

Initialize Parameters:
Start with small random values or zeros for the weights and biases.

Define the Loss Function:
Use a loss function to measure how well the model's predictions
match the actual target values.

Compute Gradients:
Calculate how much the loss function would change if you slightly
adjusted the weights and biases. This gives you the direction to update
them.

Update Parameters:
Adjust the weights and biases based on the gradients, moving them in
the direction that reduces the loss (see the update rule after these steps).

Iterate:
Repeat the gradient calculation and parameter updates for a set
number of iterations or until the changes become very small.

Make Predictions:
Use the final weights and biases to predict outcomes for new input
data.
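For reference, the parameter update that the code below applies on each iteration, with learning rate alpha and in the notation of the code, is:

    w = w - alpha * dj_dw
    b = b - alpha * dj_db

where dj_dw = (1/m) * sum over i of (w*x[i] + b - y[i]) * x[i] and dj_db = (1/m) * sum over i of (w*x[i] + b - y[i]).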

Code:

import math, copy
import numpy as np
import matplotlib.pyplot as plt

x_train = np.array([1.0, 2.0])
y_train = np.array([300.0, 500.0])

def compute_cost(x, y, w, b):
    m = x.shape[0]
    cost = 0

    for i in range(m):
        f_wb = w * x[i] + b
        cost = cost + (f_wb - y[i]) ** 2
    total_cost = 1 / (2 * m) * cost

    return total_cost

def compute_gradient(x, y, w, b):
    # Number of training examples
    m = x.shape[0]

    dj_dw = 0
    dj_db = 0

    for i in range(m):
        f_wb = w * x[i] + b
        dj_dw_i = (f_wb - y[i]) * x[i]
        dj_db_i = f_wb - y[i]
        dj_db += dj_db_i
        dj_dw += dj_dw_i
    dj_dw = dj_dw / m
    dj_db = dj_db / m

    return dj_dw, dj_db

def gradient_descent(x, y, w_in, b_in, alpha, num_iters, cost_function, gradient_function):
    # Arrays to store cost J and parameters at each iteration, primarily for graphing later
    J_history = []
    p_history = []
    b = b_in
    w = w_in

    for i in range(num_iters):
        # Calculate the gradient and update the parameters using gradient_function
        dj_dw, dj_db = gradient_function(x, y, w, b)

        # Update parameters
        b = b - alpha * dj_db
        w = w - alpha * dj_dw

        # Save cost J at each iteration, capped to avoid resource exhaustion
        # (the original appended twice per iteration, duplicating the history)
        if i < 100000:
            J_history.append(cost_function(x, y, w, b))
            p_history.append([w, b])

        if i % math.ceil(num_iters / 10) == 0:
            print(f"Iteration {i:4}: Cost {J_history[-1]:0.2e} ",
                  f"dj_dw: {dj_dw: 0.3e}, dj_db: {dj_db: 0.5e} ")

    return w, b, J_history, p_history  # return w and J, w history for graphing

w_init = 0
b_init = 0
# some gradient descent settings
iterations = 100000
tmp_alpha = 9.0e-2
# run gradient descent
w_final, b_final, J_hist, p_hist = gradient_descent(
    x_train,
    y_train,
    w_init,
    b_init,
    tmp_alpha,
    iterations,
    compute_cost,
    compute_gradient,
)
print(f"(w,b) found by gradient descent: ({w_final:.4f},{b_final:.4f})")

Plotting the curve

fig, (ax1, ax2) = plt.subplots(1, 2, constrained_layout=True, figsize=(12, 6))
ax1.plot(J_hist[:100])
ax2.plot(1000 + np.arange(len(J_hist[1000:])), J_hist[1000:])
ax1.set_title("Cost vs. iteration (start)")
ax2.set_title("Cost vs. iteration (end)")
ax1.set_ylabel('Cost')
ax2.set_xlabel("Iteration")
plt.show()

Output:

Result: Successfully found the optimal set of weights and biases using the gradient descent algorithm

Experiment 7

Aim: To solve the Knapsack problem

Software used: Google Colaboratory

Theory:
The Knapsack problem is a classic optimization problem that involves
selecting a subset of items, each with a given weight and value, to
maximize the total value while staying within a weight limit. Here's a
concise overview of how to solve it:

Problem Definition
Inputs:
● A list of items, each with a weight and value.
● A maximum weight capacity of the knapsack.
Goal: Maximize the total value of items in the knapsack without
exceeding its weight capacity.

Approaches to Solve the Knapsack Problem

Dynamic Programming (0/1 Knapsack):
Suitable for small to medium-sized problems. Builds a table where
each entry represents the maximum value achievable with a certain
weight capacity using the first few items (see the recurrence after
this section).

Greedy Approach (Fractional Knapsack):
Applicable only if items can be divided. Sort items by value-to-weight
ratio and fill the knapsack with the highest ratio until the capacity
is reached.

Backtracking:
Explore all combinations of items to find the optimal solution.
Inefficient for large datasets due to exponential time complexity.
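For the dynamic programming approach used in the code below, each table entry K[i][w] holds the best value achievable with the first i items and capacity w, filled by the recurrence:

    K[i][w] = 0                                                        if i == 0 or w == 0
    K[i][w] = K[i-1][w]                                                if weights[i-1] > w
    K[i][w] = max(values[i-1] + K[i-1][w - weights[i-1]], K[i-1][w])   otherwise

The answer is K[n][max_capacity].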

Code:
import numpy as np

def knapsack(max_capacity, weights, values, n):
    # Initialize the 2D array with zeros
    K = [[0 for x in range(max_capacity + 1)] for x in range(n + 1)]

    # Build the 2D array in bottom-up manner, printing the table
    # after each cell is filled to show its construction
    for i in range(n + 1):
        for w in range(max_capacity + 1):
            if i == 0 or w == 0:
                K[i][w] = 0
            elif weights[i-1] <= w:
                K[i][w] = max(values[i-1] + K[i-1][w - weights[i-1]], K[i-1][w])
            else:
                K[i][w] = K[i-1][w]
            print(np.array(K))

    return K[n][max_capacity]

max_capacity = 10
values = [50, 40, 80, 10]
weights = [3, 4, 6, 2]
n = len(values)
print("Maximum value that can be obtained:", knapsack(max_capacity, weights, values, n))

Output

Result: Successfully solved the Knapsack problem

Experiment 8

Aim: To perform crossover and mutation operations

Software used: Google Colaboratory

Theory:
Crossover is a genetic algorithm operation where two parent solutions
are selected from the population to create offspring. First, a random
crossover point is chosen along the length of the parent solutions,
indicating where genetic material will be exchanged. In a single-point
crossover, the parents are split at this point, and the first part of one
parent is combined with the second part of the other parent to
generate two new offspring. Multi-point crossover involves selecting
multiple points, allowing segments from each parent to be alternated,
further enhancing genetic diversity in the offspring.

Mutation is another crucial operation in genetic algorithms, involving
the introduction of small, random changes to individual solutions.
This process begins by selecting an individual from the population to
undergo mutation. A mutation rate is then defined, determining the
probability that each part of the individual will be mutated. For each
part, a random decision is made to determine if a mutation will occur.
If so, that part is altered, such as flipping a binary value from 0 to 1 or
vice versa. Together, crossover and mutation help maintain diversity
in the population and prevent premature convergence on suboptimal
solutions, allowing the algorithm to explore the solution space
effectively.

Code:
import random

def single_point_crossover(parent1, parent2):
    # Ensure parents are of the same length
    assert len(parent1) == len(parent2)

    # Random crossover point
    crossover_point = random.randint(1, len(parent1) - 1)

    # Create offspring
    offspring1 = parent1[:crossover_point] + parent2[crossover_point:]
    offspring2 = parent2[:crossover_point] + parent1[crossover_point:]

    return offspring1, offspring2

# Example usage
parent1 = "101010"
parent2 = "110011"
offspring1, offspring2 = single_point_crossover(parent1, parent2)
print("Offspring 1:", offspring1)
print("Offspring 2:", offspring2)

def double_point_crossover(parent1, parent2):
    assert len(parent1) == len(parent2)

    # Select two random crossover points
    point1, point2 = sorted(random.sample(range(len(parent1)), 2))

    # Create offspring
    offspring1 = parent1[:point1] + parent2[point1:point2] + parent1[point2:]
    offspring2 = parent2[:point1] + parent1[point1:point2] + parent2[point2:]

    return offspring1, offspring2

# Example usage
parent1 = "101010"
parent2 = "110011"
offspring1, offspring2 = double_point_crossover(parent1, parent2)
print("Offspring 1:", offspring1)
print("Offspring 2:", offspring2)

def multi_point_crossover(parent1, parent2, num_points):
    assert len(parent1) == len(parent2)

    # Randomly select crossover points and ensure they are unique
    points = sorted(random.sample(range(len(parent1)), num_points))

    offspring1 = ''
    offspring2 = ''

    last_point = 0
    for i in range(num_points):
        # Take from parent1 for even indices and from parent2 for odd indices
        if i % 2 == 0:
            offspring1 += parent1[last_point:points[i]]
            offspring2 += parent2[last_point:points[i]]
        else:
            offspring1 += parent2[last_point:points[i]]
            offspring2 += parent1[last_point:points[i]]

        last_point = points[i]  # Update the last point

    # Add the remaining segment after the last crossover point
    if num_points % 2 == 0:
        offspring1 += parent1[last_point:]
        offspring2 += parent2[last_point:]
    else:
        offspring1 += parent2[last_point:]
        offspring2 += parent1[last_point:]

    return offspring1, offspring2

# Example usage
parent1 = "101010"
parent2 = "110011"
offspring1, offspring2 = multi_point_crossover(parent1, parent2, 3)
print("Offspring 1:", offspring1)
print("Offspring 2:", offspring2)

def bit_flip_mutation(chromosome, mutation_rate=0.1):
    mutated_chromosome = ''
    for gene in chromosome:
        if random.random() < mutation_rate:  # Flip with the given mutation rate
            mutated_chromosome += '1' if gene == '0' else '0'
        else:
            mutated_chromosome += gene
    return mutated_chromosome

# Example usage
chromosome = "101010"
mutated_chromosome = bit_flip_mutation(chromosome)
print("Original Chromosome:", chromosome)
print("Mutated Chromosome:", mutated_chromosome)

def swap_mutation(chromosome):
    # Convert string to list for easy swapping
    chromosome_list = list(chromosome)

    # Randomly select two indices to swap
    idx1, idx2 = random.sample(range(len(chromosome_list)), 2)

    # Swap the genes
    chromosome_list[idx1], chromosome_list[idx2] = chromosome_list[idx2], chromosome_list[idx1]

    return ''.join(chromosome_list)

# Example usage
chromosome = "abcdef"
mutated_chromosome = swap_mutation(chromosome)
print("Original Chromosome:", chromosome)
print("Mutated Chromosome:", mutated_chromosome)

def scramble_mutation(chromosome):
    chromosome_list = list(chromosome)

    # Select random start and end indices
    start, end = sorted(random.sample(range(len(chromosome_list)), 2))

    # Scramble the selected genes
    subset = chromosome_list[start:end + 1]
    random.shuffle(subset)

    # Place the scrambled genes back
    chromosome_list[start:end + 1] = subset

    return ''.join(chromosome_list)

# Example usage
chromosome = "abcdef"
mutated_chromosome = scramble_mutation(chromosome)
print("Original Chromosome:", chromosome)
print("Scrambled Chromosome:", mutated_chromosome)

def inversion_mutation(chromosome):
    chromosome_list = list(chromosome)

    # Select random start and end indices
    start, end = sorted(random.sample(range(len(chromosome_list)), 2))

    # Reverse the selected genes
    chromosome_list[start:end + 1] = reversed(chromosome_list[start:end + 1])

    return ''.join(chromosome_list)

# Example usage
chromosome = "abcdef"
mutated_chromosome = inversion_mutation(chromosome)
print("Original Chromosome:", chromosome)
print("Inverted Chromosome:", mutated_chromosome)

Output:

Result: Successfully performed Crossover and Mutation operations

Experiment 9

Aim: Use genetic algorithm to solve the problem of combination

Software used: Google Colaboratory

Theory:
The genetic algorithm process is as follows:
Step 1. Determine the number of chromosomes, the number of
generations, and the mutation rate and crossover rate values.
Step 2. Generate a population of chromosomes, initializing the genes
of each chromosome with random values.
Step 3. Repeat steps 4-7 until the number of generations is met.
Step 4. Evaluate the fitness value of the chromosomes by calculating
the objective function.
Step 5. Chromosome selection.
Step 6. Crossover.
Step 7. Mutation.
Step 8. Solution (best chromosomes).
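In the code below, the concrete combination problem is to find non-negative integers (a, b, c, d) with a + 2b + 3c + 4d = 30. The objective function is the absolute deviation |a + 2b + 3c + 4d - 30|, and the fitness of a chromosome is 1 / (1 + objective), so an exact solution has fitness 1.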

Code:
import random

def objective_function(chromosome):
    return abs((chromosome[0] + 2 * chromosome[1] + 3 * chromosome[2] + 4 * chromosome[3]) - 30)

def fitness_value(f_obj):
    return 1 / (1 + f_obj)

def select_chromosomes(population, fitness_values):
    # Roulette-wheel selection
    total_fitness = sum(fitness_values)
    probabilities = [f / total_fitness for f in fitness_values]
    cumulative_probabilities = []
    cumulative_sum = 0

    for p in probabilities:
        cumulative_sum += p
        cumulative_probabilities.append(cumulative_sum)

    new_population = []
    for _ in range(len(population)):
        R = random.random()
        for i, cp in enumerate(cumulative_probabilities):
            if R <= cp:
                # Copy the chromosome so later crossover/mutation does not
                # modify other selected duplicates in place
                new_population.append(population[i][:])
                break

    return new_population

def crossover(population, crossover_rate):
    new_population = population[:]
    for k in range(0, len(population), 2):
        if k + 1 < len(population):
            R = random.random()
            if R < crossover_rate:
                crossover_point = random.randint(1, 3)  # between 1 and 3
                new_population[k][crossover_point:], new_population[k + 1][crossover_point:] = \
                    new_population[k + 1][crossover_point:], new_population[k][crossover_point:]

    return new_population

def mutate(population, mutation_rate):
    total_genes = 4 * len(population)
    num_mutations = int(mutation_rate * total_genes)

    for _ in range(num_mutations):
        position = random.randint(0, total_genes - 1)
        chromosome_index = position // 4
        gene_index = position % 4
        population[chromosome_index][gene_index] = random.randint(0, 30)

    return population

def genetic_algorithm(num_chromosomes=6, num_generations=50, mutation_rate=0.1, crossover_rate=0.25):
    # Steps 1-2: Initialization
    population = [[random.randint(0, 30) for _ in range(4)] for _ in range(num_chromosomes)]

    best_chromosome_overall = None
    best_objective_value_overall = float('inf')

    for generation in range(num_generations):
        # Step 4: Evaluation
        fitness_values = [fitness_value(objective_function(chrom)) for chrom in population]

        # Update overall best solution (store a copy so later mutation
        # cannot alter the recorded best)
        for chrom in population:
            obj_value = objective_function(chrom)
            if obj_value < best_objective_value_overall:
                best_objective_value_overall = obj_value
                best_chromosome_overall = chrom[:]

        # Output the best solution of the current generation
        print(f"Generation {generation + 1}: Best Chromosome: {best_chromosome_overall}, "
              f"Objective Function Value: {best_objective_value_overall}")

        # Step 5: Selection
        population = select_chromosomes(population, fitness_values)

        # Step 6: Crossover
        population = crossover(population, crossover_rate)

        # Step 7: Mutation
        population = mutate(population, mutation_rate)

    return best_chromosome_overall, best_objective_value_overall

# Run the genetic algorithm with multiple iterations
num_iterations = 10  # Number of iterations
best_solution_overall = None
best_objective_value_overall = float('inf')

for iteration in range(num_iterations):
    print(f"\nIteration {iteration + 1}:")
    best_solution, best_value = genetic_algorithm()
    if best_value < best_objective_value_overall:
        best_objective_value_overall = best_value
        best_solution_overall = best_solution

print("\nOverall Best Chromosome Found:", best_solution_overall)
print("Overall Objective Function Value:", best_objective_value_overall)

Output:

Result: Solved the combination problem using a genetic algorithm
