Theory and Algorithm of Bisection Method
For any continuous function f(x):
• Find two points, say a and b, such that a < b and f(a) · f(b) < 0
• Find the midpoint of a and b, say t
• t is the root of the given function if f(t) = 0; else follow the next step
• Divide the interval [a, b] :
1. If f(t) · f(a) < 0, a root exists between t and a
2. Else if f(t) · f(b) < 0, a root exists between t and b
• Repeat the above three steps until f(t) = 0 or the interval becomes sufficiently small.
The bisection method is an approximation method that finds a root of the given equation by repeatedly halving the interval that brackets it.
The method keeps dividing the interval until the remaining interval is extremely small.
Input: Function f , interval endpoints a, b; Tolerance ϵ
Output: Approximation of the root
while |b − a| > ϵ do
    c ← (a + b)/2 ;              // Calculate midpoint
    Evaluate f(a) and f(c) ;
    if f(a) · f(c) < 0 then
        b ← c ;                  // Root lies in the left subinterval [a, c]
    else
        a ← c ;                  // Root lies in the right subinterval [c, b]
    end
end
return (a + b)/2
Algorithm 1: Bisection Method
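A minimal Python sketch of Algorithm 1 is given below; the function name bisect and the default tolerance are illustrative choices, not part of the assignment code used later.
def bisect(f, a, b, tol=1e-6):
    # assumes f is continuous and f(a), f(b) have opposite signs
    while abs(b - a) > tol:
        c = (a + b) / 2          # midpoint of the current interval
        if f(a) * f(c) < 0:
            b = c                # root lies in the left subinterval [a, c]
        else:
            a = c                # root lies in the right subinterval [c, b]
    return (a + b) / 2

# example: root of x**2 - 2 on [1, 2] (approximately 1.41421)
print(bisect(lambda x: x**2 - 2, 1, 2))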
Analytical Solution of Bisection Method
f(x) = x² − 2 , initial guess of the interval : (a, b) = (1, 2)
• 1st Iteration : f(1) = −1 and f(2) = 2 { ∵ f(1) · f(2) < 0 }
c = (1 + 2)/2 = 1.5 , f(1.5) = 0.25
since f(1) · f(1.5) < 0 new interval will become (1, 1.5) {∵ here b = c}
error = |1.5 − 1| = 0.5
Figure 1: Bisection method
• 2nd Iteration : f(1) = −1 and f(1.5) = 0.25 { ∵ f(1) · f(1.5) < 0 }
c = (1 + 1.5)/2 = 1.25 , f(1.25) = −0.4375
since f(1.25) · f(1.5) < 0 new interval will become (1.25, 1.5) {∵ here a = c}
error = |1.5 − 1.25| = 0.25
• 3rd Iteration : f(1.25) = −0.4375 and f(1.5) = 0.25 { ∵ f(1.25) · f(1.5) < 0 }
c = (1.25 + 1.5)/2 = 1.375 , f(1.375) = −0.109375
since f(1.375) · f(1.5) < 0 new interval will become (1.375, 1.5) {∵ here a = c}
error = |1.5 − 1.375| = 0.125
• 4th Iteration : f(1.375) = −0.109375 and f(1.5) = 0.25 { ∵ f(1.375) · f(1.5) < 0 }
c = (1.375 + 1.5)/2 = 1.4375 , f(1.4375) = 0.06640625
since f(1.375) · f(1.4375) < 0 new interval will become (1.375, 1.4375) {∵ here b = c}
error = |1.4375 − 1.375| = 0.0625
• 5th Iteration : f(1.375) = −0.109375 and f(1.4375) = 0.06640625 { ∵ f(1.375) · f(1.4375) < 0 }
c = (1.375 + 1.4375)/2 = 1.40625 , f(1.40625) = −0.0224609375
since f(1.40625) · f(1.4375) < 0 new interval will become (1.40625, 1.4375) {∵ here a = c}
error = |1.40625 − 1.4375| = 0.03125
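The five iterations above can be reproduced with the short snippet below (a sketch that simply mirrors the worked example; the printed digits match the table up to rounding):
f = lambda x: x**2 - 2
a, b = 1.0, 2.0
for i in range(5):
    c = (a + b) / 2                       # midpoint
    if f(a) * f(c) < 0:
        b = c                             # new interval (a, c)
    else:
        a = c                             # new interval (c, b)
    print(i + 1, c, f(c), abs(b - a))     # iteration, c, f(c), error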
Code and Output of Bisection Method
Below is the Python Code to solve the equation f(x) = tan(x) − x
import math
import numpy as np
import matplotlib.pyplot as plt
def f(x):
    return math.tan(x) - x

def bisection(a, b, tole):
    error, c = abs(a - b), (a + b) / 2       # initialise in case the loop never runs
    while abs(a - b) > tole:
        if f(a) * f(b) > 0:                  # no sign change: the interval does not bracket a root
            print("bisection is not possible ")
            return
        if f(a) * f(b) == 0:                 # one endpoint is already a root
            if f(a) == 0:
                print(a, "is the root ")
            else:
                print(b, "is the root ")
            break
        c = (a + b) / 2                      # midpoint of the current interval
        if f(a) * f(c) < 0:                  # root lies in the left half, keep (a, c)
            b = c
        elif f(a) * f(c) > 0:                # root lies in the right half, keep (c, b)
            a = c
        else:                                # f(c) == 0: the midpoint is the exact root
            print(c, " is the root")
            break
        error = np.abs(a - b)                # width of the bracketing interval
    print("The root is in the interval (", a, ",", b, ") and error is ", error * 100 / c, "%")
    return c
a = float(input("Enter the lower limit "))
b = float(input("Enter the upper limit "))
tole = float(input("Enter the tolerance value = "))
r = bisection(a, b, tole)
print("ans is ", r)

# plot f(x) over the interval [-10, 10]
x = np.linspace(-10, 10, 500)
f = np.vectorize(f)                          # vectorise f so it accepts a NumPy array
plt.plot(x, f(x), color='blue', label='f(x)')
plt.legend()
plt.xlabel("X-Axis")
plt.ylabel("Y-Axis")
plt.grid()
plt.show()
Figure 2: tan(x) − x
Below is the Python Code to solve the equation f(x) = x² − 2
import math
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import fsolve
def f(x):
    return x**2 - 2

def bisection(a, b, tole):
    error, c = np.abs(a - b), (a + b) / 2    # initialise in case the loop never runs
    while np.abs(a - b) > tole:
        if f(a) * f(b) > 0:                  # no sign change: the interval does not bracket a root
            print("bisection is not possible ")
            return
        if f(a) * f(b) == 0:                 # one endpoint is already a root
            if f(a) == 0:
                print(a, "is the root ")
            else:
                print(b, "is the root ")
            break
        c = (a + b) / 2                      # midpoint of the current interval
        if f(a) * f(c) < 0:                  # root lies in the left half, keep (a, c)
            b = c
        elif f(a) * f(c) > 0:                # root lies in the right half, keep (c, b)
            a = c
        else:                                # f(c) == 0: the midpoint is the exact root
            print(c, " is the root")
            break
        error = np.abs(a - b)                # width of the bracketing interval
    print("The root is in the interval (", a, ",", b, ") and error is ", error * 100 / c, "%")
    return c
a = float(input("Enter the lower limit "))
b = float(input("Enter the upper limit "))
tole = float(input("Enter the tolerance value = "))
r = bisection(a, b, tole)
print("Root is ", r)

short = fsolve(f, [-1.5, 1.5])               # compare with SciPy's root finder
print("fsolve gives all the roots of the equation = ", short)

# plot f(x) over the interval [-2, 2]
x = np.linspace(-2, 2, 100)
plt.plot(x, f(x), color='blue', label='f(x)')
plt.legend()
plt.xlabel("X-Axis")
plt.ylabel("Y-Axis")
plt.grid()
plt.show()
Figure 3: x² − 2
Theory and Algorithm of Newton-Raphson Method
The Newton-Raphson method is an iterative root-finding technique that utilizes tangent lines to approximate the root of a
function. By iteratively refining the initial guess, the method converges rapidly to the root. The key idea is to move along the
tangent line of the function’s curve, which brings us closer and closer to the root.
Let’s consider a function f (x) and its derivative f ′ (x). Starting with an initial guess x0 , we can construct a tangent line at the
point (x0 , f (x0 )). The equation of this tangent line is given by:
y = f (x0 ) + f ′ (x0 )(x − x0 )
where y represents the function value and x represents the variable.
To find the x-intercept of the tangent line, we set y = 0, which yields:
0 = f (x0 ) + f ′ (x0 )(x − x0 )
Solving for x, we get:
x = x0 − f(x0)/f′(x0)
This formula provides us with an improved estimate x1 for the root.
We can then repeat this process iteratively, using the updated estimate x1 as the new starting point. The general formula for
the Newton-Raphson iteration is:
xn+1 = xn − f(xn)/f′(xn)
where xn represents the nth estimate of the root.
The pseudocode for the Newton-Raphson method is as follows:
Input: Function f , derivative function f ′ , initial guess x0 , Tolerance ϵ
Output: Approximation of the root
x ← x0 ;
while |f(x)| > ϵ do
    Evaluate f(x) and f′(x) ;
    ∆x ← −f(x)/f′(x) ;        // Calculate the update
    x ← x + ∆x ;              // Update the estimate
end
return x
Algorithm 2: Newton-Raphson Method
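A minimal Python sketch of Algorithm 2; the name newton_raphson, the derivative argument fprime, and the iteration cap are illustrative assumptions rather than part of the code in the next section.
def newton_raphson(f, fprime, x0, eps=1e-8, max_iter=50):
    x = x0
    for _ in range(max_iter):             # cap iterations in case the method diverges
        if abs(f(x)) <= eps:              # stop once |f(x)| is within tolerance
            break
        x = x - f(x) / fprime(x)          # move to the x-intercept of the tangent line
    return x

# example: root of x**2 - 2 starting from x0 = 1.5 (approximately 1.41421)
print(newton_raphson(lambda x: x**2 - 2, lambda x: 2 * x, 1.5))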
Analytical Solution of Newton-Raphson Method
f(x) = x³ − 20 , f′(x) = 3x² , initial guess : x1 = 3
• 1st Iteration : x2 = x1 − f(x1)/f′(x1)
x2 = 3 − 7/27 = 2.74
error = |2.74 − 3|/3 · 100 = 8.67%
Figure 4: Newton-Raphson
• 2nd Iteration : x3 = x2 − f(x2)/f′(x2)
x3 = 2.74 − 0.57/22.52 = 2.72
error = |2.74 − 2.72|/2.74 · 100 = 0.4%
• 3rd Iteration : x4 = x3 − f(x3)/f′(x3)
x4 = 2.72 − 0.12/22.19 = 2.715
error = |2.72 − 2.715|/2.72 · 100 = 0.18%
• 4th Iteration : x5 = x4 − f(x4)/f′(x4)
x5 = 2.715 − 0.01/22.11 = 2.7146
error = |2.715 − 2.7146|/2.715 · 100 = 0.014%
• 5th Iteration : x6 = x5 − f(x5)/f′(x5)
x6 = 2.7146 − 0.004/22.10 = 2.7145
error = |2.7145 − 2.71456|/2.71456 · 100 = 0.0066%
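As a quick check, the iterates above can be regenerated with the snippet below (a sketch; because the hand calculation rounds each iterate, its digits differ slightly from these):
x = 3.0                                   # initial guess x1
for i in range(5):
    x_new = x - (x**3 - 20) / (3 * x**2)  # Newton-Raphson update for f(x) = x^3 - 20
    err = abs(x_new - x) / x * 100        # relative error in percent
    print(i + 1, round(x_new, 4), round(err, 4))
    x = x_new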
Code and Output of Newton-Raphson Method
Below is the Python Code to solve the equation f(x) = x³ − 20
import math

x0 = float(input("enter the value of initial guess = "))
tol = float(input("enter the value of allowed error (%) = "))

def newton(x0, tol):
    def fun(x):
        return x**3 - 20                    # function whose root is sought
    def funp(x):
        return 3 * x**2                     # derivative of the function
    count = 0
    while True:
        x1 = x0 - fun(x0) / funp(x0)        # Newton-Raphson update
        err = abs((x1 - x0) / x0) * 100     # relative error in percent
        x0 = x1
        count += 1
        if err < tol:
            break
    print(x1, " is the closest root after", count, "iterations with relative error", err, "%")

newton(x0, tol)
Figure 5: x³ − 20
Below is the Python Code to solve the equation f(x) = tan(x) − x
import math

x0 = float(input("enter the value of initial guess = "))
tol = float(input("enter the value of allowed error (%) = "))

def newton(x0, tol):
    def fun(x):
        return math.tan(x) - x              # function whose root is sought
    def funp(x):
        return (1 / math.cos(x)**2) - 1     # derivative: sec^2(x) - 1
    count = 0
    while True:
        x1 = x0 - fun(x0) / funp(x0)        # Newton-Raphson update
        err = abs((x1 - x0) / x0) * 100     # relative error in percent
        x0 = x1
        count += 1
        if err < tol:
            break
    print(x1, " is the closest root after", count, "iterations with relative error", err, "%")

newton(x0, tol)
Figure 6: tan(x) − x
Theory and Algorithm of Secant Method
The secant method is a numerical technique for finding the root of a
function without explicitly calculating the derivative. It is based on the idea
of approximating the derivative using the slope of a secant line drawn through
two points on the function’s curve. The secant method utilizes a recurrence
relation to iteratively update the estimate of the root.
Let’s consider two initial guesses, x0 and x1 , which are close to the true
root. We can draw a secant line through the points (x0 , f (x0 )) and (x1 , f (x1 )).
The equation of the secant line can be expressed as:
y = f(x0) + [(f(x1) − f(x0))/(x1 − x0)] · (x − x0)
where y represents the function value and x represents the variable.
To find the x-intercept of the secant line, we set y = 0, which yields:
0 = f(x0) + [(f(x1) − f(x0))/(x1 − x0)] · (x − x0)
Solving for x, we get the updated estimate for the root:
x = x1 − f(x1) · (x1 − x0) / (f(x1) − f(x0))
This formula provides us with an improved estimate x2 for the root.
We can then repeat this process iteratively, using the updated estimates
x1 and x2 as the new points on the secant line. The general formula for the
secant method iteration is:
xn+1 = xn − f(xn) · (xn − xn−1) / (f(xn) − f(xn−1))
where xn represents the nth estimate of the root.
The pseudocode for the secant method is as follows:
Input: Function f , initial guesses x0 and x1 , Tolerance ϵ
Output: Approximation of the root
xprev ← x0 ;
x ← x1 ;
while |f(x)| > ϵ do
    ∆x ← −f(x) · (x − xprev) / (f(x) − f(xprev)) ;    // Calculate the update
    xprev ← x ;
    x ← x + ∆x ;                                      // Update the estimate
end
return x
Algorithm 3: Secant Method
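A minimal Python sketch of Algorithm 3; the name secant_method and the defaults are illustrative assumptions, not taken from the code in the next section.
def secant_method(f, x0, x1, eps=1e-8, max_iter=50):
    x_prev, x = x0, x1
    for _ in range(max_iter):                              # cap iterations in case of stagnation
        if abs(f(x)) <= eps:                               # stop once |f(x)| is within tolerance
            break
        dx = -f(x) * (x - x_prev) / (f(x) - f(x_prev))     # secant update
        x_prev, x = x, x + dx
    return x

# example: root of x**2 - 2 from the guesses 1 and 2 (approximately 1.41421)
print(secant_method(lambda x: x**2 - 2, 1.0, 2.0))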
Analytical Solution of Secant Method
f(x) = x³ − 20 , initial guess : x1 = 4 , x2 = 5.5
• 1st Iteration : x3 = x2 − f(x2) · (x2 − x1) / (f(x2) − f(x1))
x3 = 5.5 − f(5.5) · (5.5 − 4) / (f(5.5) − f(4)) = 5.5 − 219.5625/102.375 = 3.35
error = |3.35 − 5.5|/5.5 = 0.64
• 2nd Iteration : x4 = x3 − f(x3) · (x3 − x2) / (f(x3) − f(x2))
x4 = 3.35 − f(3.35) · (3.35 − 5.5) / (f(3.35) − f(5.5)) = 3.35 − (−37.8)/(−128.7) = 3.05
error = |3.35 − 3.05|/3.35 = 0.09
Figure 7: Secant method
• 3rd Iteration : x5 = x4 − f(x4) · (x4 − x3) / (f(x4) − f(x3))
x5 = 3.05 − f(3.05) · (3.05 − 3.35) / (f(3.05) − f(3.35)) = 3.05 − (−2.511)/(−9.22) = 2.77
error = |3.05 − 2.77|/3.05 = 0.1
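The three iterations above can be regenerated with the snippet below (a sketch; the hand-rounded values and the error convention of the worked example differ slightly from the printed digits):
f = lambda x: x**3 - 20
x1, x2 = 4.0, 5.5                                  # the two initial guesses
for i in range(3):
    x3 = x2 - f(x2) * (x2 - x1) / (f(x2) - f(x1))  # secant update
    print(i + 1, round(x3, 4))                     # iteration number and new estimate
    x1, x2 = x2, x3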
Code and Output of Secant Method
Below is the Python Code to solve the equation f(x) = tan(x) − x
import math

x1 = float(input("enter the value of lower limit = "))
x2 = float(input("enter the value of upper limit = "))
tol = float(input("enter the acceptable maximum error = "))

def f(x):
    return math.tan(x) - x

def secant(x1, x2, tol):
    x3 = x2                                          # fallback if the loop never runs
    count = 0                                        # iteration counter
    error = abs(x2 - x1) / x2
    while error > tol:
        x3 = x2 - f(x2) * (x2 - x1) / (f(x2) - f(x1))   # secant update
        x1, x2 = x2, x3
        error = abs(x1 - x2) / x2                    # relative change between iterates
        count += 1
    print("The root is ", x3, "with possible error of ", error, "after", count, "iterations")

secant(x1, x2, tol)
Figure 8: tan(x) − x
Below is the Python Code to solve the equation f(x) = x³ − 20
import math

x1 = float(input("enter the value of lower limit = "))
x2 = float(input("enter the value of upper limit = "))
tol = float(input("enter the acceptable maximum error = "))

def f(x):
    return x**3 - 20

def secant(x1, x2, tol):
    x3 = x2                                          # fallback if the loop never runs
    count = 0                                        # iteration counter
    error = abs(x2 - x1) / x2
    while error > tol:
        x3 = x2 - f(x2) * (x2 - x1) / (f(x2) - f(x1))   # secant update
        x1, x2 = x2, x3
        error = abs(x1 - x2) / x2                    # relative change between iterates
        count += 1
    print("The root is ", x3, "with possible error of ", error, "after", count, "iterations")

secant(x1, x2, tol)
Figure 9: x³ − 20