
COMPUTATIONAL MATHEMATICS

UNIT I

The Bisection Method is a numerical method used to find the roots (or zeros) of a continuous function.
Specifically, it is used to find a value of x where f(x) = 0. The method works by repeatedly dividing the
interval in half and selecting the subinterval in which the root lies.

Key Concepts:

1. Initial Interval: The method requires two starting values, a and b, such that f(a) and f(b)
have opposite signs, i.e., f(a)⋅f(b) < 0. This guarantees that there is at least one root between a and b
due to the Intermediate Value Theorem.
2. Midpoint: At each step, the method calculates the midpoint of the interval, m = (a + b)/2.
3. Evaluate Function at Midpoint: The function value at the midpoint, f(m), is then computed:

 If f(m) = 0, then m is the root.
 If f(m) has the same sign as f(a), the root lies in the interval [m, b], and we set a = m.
 If f(m) has the same sign as f(b), the root lies in the interval [a, m], and we set b = m.

4. Stopping Criteria: The process is repeated until the interval is sufficiently small or the
function value at the midpoint is close enough to zero.

Steps in the Bisection Method:

1. Choose the initial interval [a, b], where f(a) and f(b) have opposite signs.
2. Calculate the midpoint m = (a + b)/2.
3. Check the value of f(m):

 If f(m) = 0, then m is the root.
 If f(a)⋅f(m) < 0, set b = m.
 If f(m)⋅f(b) < 0, set a = m.

4. Repeat the process until ∣b − a∣ is sufficiently small or ∣f(m)∣ is small.

Example:

Let's say we want to find the root of the equation f(x) = x^2 – 4 using the bisection method. We can
follow these steps:

 Initial interval: [a, b] = [1, 3] (since f(1) = −3 and f(3) = 5, the signs are opposite).
 Midpoint: m = (1 + 3)/2 = 2, and f(2) = 0, so the root x = 2 is found immediately.
If the function did not cross zero exactly at the midpoint, we would continue narrowing down the interval
until we get sufficiently close to the root.
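
To see the interval actually shrink when the midpoint is not the root, take the interval [0, 3] (the one used by the Unit III program); the following iterations are a worked illustration, not part of the original example:

 m = (0 + 3)/2 = 1.5, f(1.5) = −1.75 < 0, same sign as f(0), so a = 1.5 → interval [1.5, 3]
 m = (1.5 + 3)/2 = 2.25, f(2.25) = 1.0625 > 0, same sign as f(3), so b = 2.25 → interval [1.5, 2.25]
 m = (1.5 + 2.25)/2 = 1.875, f(1.875) ≈ −0.4844 < 0, so a = 1.875 → interval [1.875, 2.25]

Each step halves the interval, closing in on the root x = 2.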

Advantages:

 Simple to implement.
 Guaranteed to find a root if the initial interval contains a sign change.

Disadvantages:

 Slow (linear) convergence: the error bound is only halved at each step.


 Requires the initial interval to be chosen such that the function changes signs.

The Method of False Position, also known as Regula Falsi, is a root-finding algorithm similar to the
Bisection Method. It is used to find the root of a continuous function. This method combines the idea of
the bisection method with linear interpolation.

Key Concept:

 The False Position Method iteratively refines an interval in which the root lies. At each step, it
draws the secant line between the points (a, f(a)) and (b, f(b)) and intersects it with the x-axis to
obtain a new point c, the next approximation of the root.
 The sign of f(c) is then used to decide which of the two subintervals ([a, c] or [c, b]) is
retained for the next iteration.

Steps of the False Position Method:

1. Choose initial interval: Start with two values a and b such that f(a) and f(b) have opposite
signs (i.e., f(a)⋅f(b) < 0).
2. Calculate the False Position: Compute the new approximation of the root, c, using the formula:

c = a − f(a)⋅(b − a) / (f(b) − f(a))


3. Check if the root is found: If f(c) = 0, then c is the root.
4. Update the interval: If f(a)⋅f(c) < 0, then the root lies in the interval [a, c], and we set b = c. If
f(b)⋅f(c) < 0, then the root lies in the interval [c, b], and we set a = c.
5. Repeat the process until the difference between a and b is smaller than a predefined
tolerance or the function value f(c) is sufficiently close to zero.

Example:

Let's take the same function f(x) = x² − 4, and find its root using the Method of False Position.

 Initial interval: a = 1, b = 3 (since f(1) = −3 and f(3) = 5, the signs are opposite).
 The root lies between a and b because the function changes sign.

The method would iteratively calculate a new approximation using the false position formula and update
the interval based on the sign of f(c).
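
For instance, the first iteration gives (a worked illustration, not in the original):

c = 1 − (−3)(3 − 1) / (5 − (−3)) = 1 + 6/8 = 1.75, with f(1.75) = −0.9375 < 0,

so f(b)⋅f(c) < 0, the root lies in [1.75, 3], and we set a = 1.75. The next iteration gives
c = 1.75 − (−0.9375)(1.25) / (5 − (−0.9375)) ≈ 1.9474, and the approximations converge toward the root x = 2.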

The Method of Successive Approximation (also known as Fixed-Point Iteration) is a numerical
method for finding the root of a function f(x) = 0. It involves rearranging the equation into a form
x = g(x), then using an initial guess and iteratively applying the function g(x) to generate successive
approximations to the root.

Key Concept:

 The Successive Approximation method works by rewriting the equation f(x)=0 into the form
x=g(x). This gives a recursive relationship that allows you to approximate the root by iterating the
function.
 Starting from an initial guess x0, the next approximation x1 is computed by applying g(x) to x0,
and so on.

Steps for the Method of Successive Approximation:

1. Rearrange the equation: Convert the original equation f(x) = 0 into a form x = g(x).

For example, if the original equation is x² − 4 = 0, it can be rearranged as:

x = √4 or x = 4/x

The choice of g(x) is important for convergence.

2. Initial guess: Choose an initial guess x0 close to the root.
3. Iterate: Compute the next approximation using:

xn+1 = g(xn)

4. Convergence check: Repeat the iteration until the difference between successive approximations
is smaller than a specified tolerance:

∣xn+1 − xn∣ < tolerance

Alternatively, you can stop if ∣f(xn+1)∣ is close enough to zero.

5. Result: The value of xn+1 after the last iteration is taken as the root.

Example:
Let's solve the equation f(x) = x² − 4 = 0 using the method of successive approximation.

1. Rearrange the equation to x = √4.
2. Choose an initial guess, say x0 = 2.
3. Apply the iteration xn+1 = √4 (which gives a constant approximation for x).
4. We already know that the root is x = 2, and the iteration converges directly to that value.

However, more interesting behaviour arises with more complex functions, where the rearranged
equation requires genuine iterative refinement.

Conditions for Convergence:

For the method to converge, the function g(x) must satisfy certain conditions:

1. Fixed-point condition: g(x) must be continuous, and the interval must be such that the iterations
move closer to the root.
2. Convergence criterion: The method converges if the magnitude of the derivative of g(x) near the
root is less than 1: ∣g′(x)∣ < 1. If ∣g′(x)∣ is greater than 1, the method may diverge or fail to
converge to the correct root.
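
Applying this criterion to the two rearrangements above (a quick check, not in the original): for g(x) = √4, g′(x) = 0, so the iteration converges trivially; for g(x) = 4/x, g′(x) = −4/x², and at the root x = 2 we get ∣g′(2)∣ = 1, the borderline case. This is exactly why the walkthrough below oscillates instead of converging.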

Example Walkthrough:

Consider the equation f(x) = x² − 4 = 0. We can rearrange it as:

x = 4/x

Steps:

1. Rearrange f(x) = 0 to x = 4/x.
2. Choose an initial guess: x0 = 3.
3. Iterate using xn+1 = 4/xn:
o First iteration: x1 = 4/3 ≈ 1.333.
o Second iteration: x2 = 4/1.333 ≈ 3.000.
o Third iteration: x3 = 4/3.000 ≈ 1.333.
o And so on...

The values oscillate around the root x = 2 without converging; this is due to the form of g(x)
(∣g′(2)∣ = 1). A rearrangement with ∣g′(x)∣ < 1 near the root, such as g(x) = (x + 4/x)/2, would
converge instead.
UNIT II

System of Linear Algebraic Equations: Direct Methods vs Iterative Methods

In numerical linear algebra, solving a system of linear equations is a fundamental problem. The
general system is given as:
Ax=b
where:
 A is a square matrix of coefficients (n×n),
 x is the vector of unknowns (n×1),
 b is the vector of constants (n×1).
There are two main approaches for solving such systems of equations: Direct Methods and Iterative
Methods.

1. Direct Methods
Direct methods aim to find the exact solution of the system in a finite number of steps (in theory) using
algebraic manipulation. These methods are efficient for small to moderate-sized systems where the matrix
is well-conditioned.

Common Direct Methods

 Gaussian Elimination (or Row Reduction): This method transforms the system Ax=b into an
upper triangular matrix U via a sequence of row operations. Once the system is in this form,
back-substitution is used to solve for the unknowns.

Steps:

1. Perform row operations to reduce the matrix A to upper triangular form U.


2. Back-substitute to solve for the unknowns x.

 LU Decomposition: Decompose the matrix A into a product of a lower triangular matrix L and
an upper triangular matrix U, i.e., A = LU. Then, solve the system in two steps (sketched in code below):
1. Solve Ly = b for y.
2. Solve Ux = y for x.
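
A minimal sketch of this two-step solve, assuming a fixed 3×3 system and no pivoting (the Doolittle variant; the matrix, right-hand side, and the function name luDecompose are illustrative, not from the original):

#include <iostream>
#include <iomanip>
using namespace std;

// Doolittle LU decomposition without pivoting: A = L*U, L unit lower triangular
void luDecompose(double A[3][3], double L[3][3], double U[3][3]) {
    int n = 3;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) {
            L[i][j] = (i == j) ? 1.0 : 0.0;
            U[i][j] = 0.0;
        }
    for (int i = 0; i < n; i++) {
        for (int j = i; j < n; j++) {      // Row i of U
            U[i][j] = A[i][j];
            for (int k = 0; k < i; k++) U[i][j] -= L[i][k] * U[k][j];
        }
        for (int j = i + 1; j < n; j++) {  // Column i of L
            L[j][i] = A[j][i];
            for (int k = 0; k < i; k++) L[j][i] -= L[j][k] * U[k][i];
            L[j][i] /= U[i][i];
        }
    }
}

int main() {
    double A[3][3] = {{2, -1, 1}, {1, 3, 2}, {3, -1, 2}}; // Same matrix as the Unit IV examples
    double b[3] = {3, 9, 8};
    double L[3][3], U[3][3], y[3], x[3];

    luDecompose(A, L, U);

    // Step 1: forward substitution, solve Ly = b
    for (int i = 0; i < 3; i++) {
        y[i] = b[i];
        for (int j = 0; j < i; j++) y[i] -= L[i][j] * y[j];
    }
    // Step 2: back substitution, solve Ux = y
    for (int i = 2; i >= 0; i--) {
        x[i] = y[i];
        for (int j = i + 1; j < 3; j++) x[i] -= U[i][j] * x[j];
        x[i] /= U[i][i];
    }

    cout << "Solution is: ";
    for (int i = 0; i < 3; i++) cout << fixed << setprecision(4) << x[i] << " ";
    cout << endl;
    return 0;
}

The printed solution should match the one produced by Gauss elimination in Unit IV for the same system.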

 Matrix Inversion: If the matrix A is invertible, the solution can be found by directly multiplying
both sides of the equation by A⁻¹, i.e., x = A⁻¹b. This method is computationally expensive for large
matrices due to the cost of matrix inversion.

2. Iterative Methods

Iterative methods are techniques that generate a sequence of approximations that converge to the exact
solution. These methods are particularly useful for large, sparse systems where direct methods may be
inefficient or impractical.
Common Iterative Methods

 Jacobi Method: This method uses the diagonal elements of A to iteratively update the values of
x. Each new approximation is computed from the old one (a worked first sweep follows the steps below).

Steps:

1. Start with an initial guess x(0).
2. Update the components of x using the formula:

xi(k+1) = (bi − ∑j≠i aij xj(k)) / aii

3. Repeat until convergence (when the change in x is below a tolerance).
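
As a worked illustration (not in the original), take the system solved by the Unit V code: A = {{4, −1, 0}, {−1, 4, −1}, {0, −1, 3}}, b = (15, 10, 10), with initial guess x(0) = (0, 0, 0). The first Jacobi sweep gives x1(1) = 15/4 = 3.75, x2(1) = 10/4 = 2.5, and x3(1) = 10/3 ≈ 3.333, every component computed from the old iterate only.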

 Gauss-Seidel Method: This is a refinement of the Jacobi method. In Gauss-Seidel, the updated
values of x are used immediately in the next computations, instead of waiting for the whole vector to
be updated (a worked sweep follows the steps below).

Steps:

1. Start with an initial guess x(0).
2. Update each component xi iteratively, using already-updated components where available:

xi(k+1) = (bi − ∑j<i aij xj(k+1) − ∑j>i aij xj(k)) / aii

3. Repeat until convergence.
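
On the same system, the first Gauss-Seidel sweep also gives x1(1) = 15/4 = 3.75, but then uses it at once: x2(1) = (10 + 3.75)/4 ≈ 3.438 and x3(1) = (10 + 3.438)/3 ≈ 4.479. Reusing fresh values this way is why Gauss-Seidel typically converges in fewer iterations than Jacobi.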

 Successive Over-Relaxation (SOR): This method improves upon Gauss-Seidel by introducing a
relaxation parameter ω that can accelerate convergence (a code sketch follows):

xi(k+1) = (1 − ω) xi(k) + (ω / aii) (bi − ∑j<i aij xj(k+1) − ∑j>i aij xj(k))
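
A minimal SOR sketch under the same assumptions as the Unit V solvers (fixed 3×3 system; the relaxation factor ω = 1.1 is an arbitrary illustrative choice, and ω = 1 reduces to Gauss-Seidel):

#include <iostream>
#include <cmath>
#include <algorithm>
#include <iomanip>
using namespace std;

// Successive Over-Relaxation: a Gauss-Seidel update blended with the old value by omega
void sorMethod(double A[3][3], double b[3], double x[3], double omega, double tol, int maxIter) {
    int n = 3;
    double error;
    int iter = 0;

    do {
        error = 0.0;
        for (int i = 0; i < n; i++) {
            double sum = b[i];
            for (int j = 0; j < n; j++)
                if (i != j) sum -= A[i][j] * x[j]; // x[j] for j < i is already updated
            double x_new = (1.0 - omega) * x[i] + omega * sum / A[i][i];
            error = max(error, fabs(x_new - x[i]));
            x[i] = x_new;
        }
        iter++;
    } while (error > tol && iter < maxIter);

    cout << "Solution: ";
    for (int i = 0; i < n; i++) cout << fixed << setprecision(4) << x[i] << " ";
    cout << endl;
}

int main() {
    double A[3][3] = {{4, -1, 0}, {-1, 4, -1}, {0, -1, 3}};
    double b[3] = {15, 10, 10};
    double x[3] = {0, 0, 0};

    sorMethod(A, b, x, 1.1, 0.0001, 1000);
    return 0;
}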

 Conjugate Gradient Method: This method is used for solving large, sparse systems where A is
symmetric and positive definite. It is an optimization-based approach, iterating toward the
solution while minimizing the error along conjugate directions; a code sketch follows.
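
A compact sketch of the conjugate gradient iteration, assuming the same small symmetric positive definite system used in the Unit V examples (the helper dot and the function name conjugateGradient are illustrative):

#include <iostream>
#include <cmath>
#include <iomanip>
using namespace std;

// Dot product of two 3-vectors
double dot(const double u[3], const double v[3]) {
    return u[0]*v[0] + u[1]*v[1] + u[2]*v[2];
}

// Conjugate Gradient for Ax = b with A symmetric positive definite
void conjugateGradient(double A[3][3], double b[3], double x[3], double tol) {
    double r[3], p[3], Ap[3];
    for (int i = 0; i < 3; i++) {
        x[i] = 0.0;    // Start from x0 = 0, so the residual r0 = b
        r[i] = b[i];
        p[i] = b[i];
    }
    double rsold = dot(r, r);

    for (int k = 0; k < 100 && sqrt(rsold) > tol; k++) { // n steps suffice in exact arithmetic
        for (int i = 0; i < 3; i++)
            Ap[i] = A[i][0]*p[0] + A[i][1]*p[1] + A[i][2]*p[2];
        double alpha = rsold / dot(p, Ap); // Step length along the search direction p
        for (int i = 0; i < 3; i++) {
            x[i] += alpha * p[i];
            r[i] -= alpha * Ap[i];
        }
        double rsnew = dot(r, r);
        for (int i = 0; i < 3; i++)
            p[i] = r[i] + (rsnew / rsold) * p[i]; // Next direction, conjugate to the previous ones
        rsold = rsnew;
    }
}

int main() {
    double A[3][3] = {{4, -1, 0}, {-1, 4, -1}, {0, -1, 3}}; // Symmetric and positive definite
    double b[3] = {15, 10, 10};
    double x[3];

    conjugateGradient(A, b, x, 1e-6);

    cout << "Solution: ";
    for (int i = 0; i < 3; i++) cout << fixed << setprecision(4) << x[i] << " ";
    cout << endl;
    return 0;
}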
UNIT III

1. Bisection Method

The Bisection method is used to find the root of a function within a given interval [a, b], where
the function changes sign at the endpoints. The method repeatedly bisects the interval and then selects
the subinterval in which the root lies.
#include <iostream>
#include <cmath>
#include <iomanip>
using namespace std;

// Define the function whose root is to be found
double f(double x) {
    return x*x - 4; // Example: f(x) = x^2 - 4
}

// Bisection Method
double bisection(double a, double b, double tol) {
    if (f(a) * f(b) > 0) {
        cout << "The function has the same sign at the endpoints." << endl;
        return -1;
    }

    double c;
    while ((b - a) / 2 > tol) {
        c = (a + b) / 2;
        if (f(c) == 0.0) {
            break;
        } else if (f(c) * f(a) < 0) {
            b = c;
        } else {
            a = c;
        }
    }
    return (a + b) / 2;
}

int main() {
    double a = 0, b = 3; // Interval [0, 3]
    double tolerance = 0.001;

    double root = bisection(a, b, tolerance);

    if (root != -1) {
        cout << "Root is: " << fixed << setprecision(4) << root << endl;
    }

    return 0;
}

2. False Position Method (Regula Falsi Method)

The False Position method is similar to the Bisection method but uses a linear interpolation to estimate
the root.
#include <iostream>
#include <cmath>
#include <iomanip>
using namespace std;

// Define the function whose root is to be found
double f(double x) {
    return x*x - 4; // Example: f(x) = x^2 - 4
}

// False Position Method (Regula Falsi)
double falsePosition(double a, double b, double tol) {
    if (f(a) * f(b) > 0) {
        cout << "The function has the same sign at the endpoints." << endl;
        return -1;
    }

    double c;
    do {
        c = a - (f(a) * (b - a)) / (f(b) - f(a)); // Secant-line intersection with the x-axis
        if (f(c) == 0.0) {
            break;
        } else if (f(c) * f(a) < 0) {
            b = c;
        } else {
            a = c;
        }
    } while (fabs(f(c)) > tol); // The bracket width may never shrink below tol, so test |f(c)| instead

    return c;
}

int main() {
    double a = 0, b = 3; // Interval [0, 3]
    double tolerance = 0.001;
    double root = falsePosition(a, b, tolerance);

    if (root != -1) {
        cout << "Root is: " << fixed << setprecision(4) << root << endl;
    }

    return 0;
}

3. Method of Successive Approximation

In this method, the function is rearranged into an iterative form to find the root by successive
approximations.
#include <iostream>
#include <cmath>
#include <iomanip>
using namespace std;

// Define the function in the form x = g(x)
double g(double x) {
    return sqrt(4); // Example: x = sqrt(4) => x = 2
}

// Successive Approximation Method
double successiveApproximation(double x0, double tol) {
    double x1;
    x1 = g(x0); // Start the iteration

    while (fabs(x1 - x0) > tol) {
        x0 = x1;
        x1 = g(x0);
    }
    return x1;
}

int main() {
    double x0 = 1; // Initial guess
    double tolerance = 0.001;

    double root = successiveApproximation(x0, tolerance);

    cout << "Root is: " << fixed << setprecision(4) << root << endl;

    return 0;
}

4. Newton-Raphson Method

The Newton-Raphson method uses the function and its derivative to iteratively refine an estimate of
the root via the update xn+1 = xn − f(xn)/f′(xn).
#include <iostream>
#include <cmath>
#include <iomanip>
using namespace std;

// Define the function whose root is to be found
double f(double x) {
    return x*x - 4; // Example: f(x) = x^2 - 4
}

// Define the derivative of the function
double df(double x) {
    return 2*x; // Example: f'(x) = 2x
}

// Newton-Raphson Method
double newtonRaphson(double x0, double tol) {
    double x1 = x0 - f(x0) / df(x0); // First iteration

    while (fabs(x1 - x0) > tol) {
        x0 = x1;
        x1 = x0 - f(x0) / df(x0);
    }
    return x1;
}

int main() {
    double x0 = 3; // Initial guess (3 gives a non-trivial iteration; 2 is already the root)
    double tolerance = 0.001;

    double root = newtonRaphson(x0, tolerance);

    cout << "Root is: " << fixed << setprecision(4) << root << endl;

    return 0;
}
UNIT IV

1. Secant Method

The Secant method is an iterative method for solving non-linear equations. It is similar to the Newton-
Raphson method but does not require the derivative of the function. Instead, it uses the secant line
through the two most recent iterates to approximate the root.
#include <iostream>
#include <cmath>
#include <iomanip>
using namespace std;

// Define the function whose root is to be found
double f(double x) {
    return x*x - 4; // Example: f(x) = x^2 - 4
}

// Secant Method
double secantMethod(double x0, double x1, double tol) {
    double x2;

    do {
        x2 = x1 - (f(x1) * (x1 - x0)) / (f(x1) - f(x0)); // Secant formula
        x0 = x1;
        x1 = x2;
    } while (fabs(f(x2)) > tol);

    return x2;
}

int main() {
    double x0 = 1.0, x1 = 2.0; // Initial guesses
    double tolerance = 0.001;

    double root = secantMethod(x0, x1, tolerance);

    cout << "Root is: " << fixed << setprecision(4) << root << endl;
    return 0;
}

2. Graeffe's Root Squaring Method

Graeffe's Root Squaring Method is a classical method for finding the roots of a polynomial. Each
squaring step replaces the polynomial with one whose roots are the squares of the original roots; after
a few steps the root magnitudes separate sharply and can be estimated from ratios of successive
coefficients. The sketch below is a corrected illustration: the polynomial x² − 5x + 6 (roots 2 and 3)
and the step count are assumptions, not from the original.

#include <iostream>
#include <cmath>
#include <iomanip>
using namespace std;

// One Graeffe root-squaring step: given p(x) with coefficients a[0..n]
// (a[k] multiplies x^k), replace p by the polynomial whose roots are the
// squares of the roots of p:
// b[k] = (-1)^k * ( a[k]^2 + 2 * sum_{m>=1} (-1)^m * a[k-m] * a[k+m] )
void graeffeStep(double a[], int n) {
    double b[10];
    for (int k = 0; k <= n; k++) {
        double s = a[k] * a[k];
        for (int m = 1; k - m >= 0 && k + m <= n; m++) {
            s += 2.0 * ((m % 2 == 0) ? 1.0 : -1.0) * a[k - m] * a[k + m];
        }
        b[k] = ((k % 2 == 0) ? 1.0 : -1.0) * s;
    }
    for (int k = 0; k <= n; k++) {
        a[k] = b[k];
    }
}

int main() {
    // Example: p(x) = x^2 - 5x + 6, whose roots are 2 and 3
    int n = 2;
    double a[10] = {6, -5, 1}; // a[0] + a[1]*x + a[2]*x^2
    int steps = 5;             // Coefficients grow very fast, so keep the step count small

    for (int s = 1; s <= steps; s++) {
        graeffeStep(a, n);
        // After s squarings, |root_k| ~ |a[n-k] / a[n-k+1]|^(1 / 2^s);
        // the method yields magnitudes only, so signs must be checked separately
        double pw = 1.0 / pow(2.0, s);
        double r1 = pow(fabs(a[1] / a[2]), pw); // Largest root magnitude
        double r2 = pow(fabs(a[0] / a[1]), pw); // Smallest root magnitude
        cout << "Step " << s << ": |roots| = " << fixed << setprecision(4)
             << r1 << ", " << r2 << endl;
    }

    return 0;
}

3. Gauss Elimination Method


The Gauss Elimination method is used to solve a system of linear equations. It involves transforming the
matrix into an upper triangular form and then solving it via back substitution.
#include <iostream>
#include <iomanip>
using namespace std;

// Function to perform Gauss Elimination
void gaussElimination(double A[3][3], double b[3], double x[3]) {
    int n = 3;

    // Forward elimination
    for (int i = 0; i < n; i++) {
        for (int k = i + 1; k < n; k++) {
            if (A[i][i] == 0) {
                cout << "Division by zero detected." << endl; // No row pivoting is performed here
                return;
            }
            double factor = A[k][i] / A[i][i];
            for (int j = 0; j < n; j++) {
                A[k][j] -= factor * A[i][j];
            }
            b[k] -= factor * b[i];
        }
    }

    // Back substitution
    for (int i = n - 1; i >= 0; i--) {
        x[i] = b[i];
        for (int j = i + 1; j < n; j++) {
            x[i] -= A[i][j] * x[j];
        }
        x[i] /= A[i][i];
    }
}

int main() {
    double A[3][3] = {
        {2, -1, 1},
        {1, 3, 2},
        {3, -1, 2}
    };

    double b[3] = {3, 9, 8}; // Right-hand side vector
    double x[3];             // Solution vector

    gaussElimination(A, b, x);

    cout << "Solution is: ";
    for (int i = 0; i < 3; i++) {
        cout << fixed << setprecision(4) << x[i] << " ";
    }
    cout << endl;

    return 0;
}
4. Gauss-Jordan Method
The Gauss-Jordan method is similar to Gauss Elimination but further simplifies the matrix by reducing it
to a diagonal form. This method directly produces the solution.

#include <iostream>
#include <iomanip>
using namespace std;

// Function to perform Gauss-Jordan Elimination
void gaussJordan(double A[3][3], double b[3], double x[3]) {
    int n = 3;
    double aug[3][4]; // Augmented matrix [A | b]; A[3][3] has no room for the extra column

    // Augment the matrix A with b
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            aug[i][j] = A[i][j];
        }
        aug[i][n] = b[i];
    }

    // Reduce to diagonal (reduced row-echelon) form
    for (int i = 0; i < n; i++) {
        double pivot = aug[i][i];
        for (int j = 0; j < n + 1; j++) {
            aug[i][j] /= pivot;
        }
        for (int j = 0; j < n; j++) {
            if (i != j) {
                double factor = aug[j][i];
                for (int k = 0; k < n + 1; k++) {
                    aug[j][k] -= factor * aug[i][k];
                }
            }
        }
    }

    // Extract the solution
    for (int i = 0; i < n; i++) {
        x[i] = aug[i][n];
    }
}

int main() {
    double A[3][3] = {
        {2, -1, 1},
        {1, 3, 2},
        {3, -1, 2}
    };

    double b[3] = {3, 9, 8}; // Right-hand side vector
    double x[3];             // Solution vector

    gaussJordan(A, b, x);
    cout << "Solution is: ";
    for (int i = 0; i < 3; i++) {
        cout << fixed << setprecision(4) << x[i] << " ";
    }
    cout << endl;

    return 0;
}

UNIT V

1. Jacobi Method (for solving a system of linear equations)

The Jacobi method is an iterative method for solving a system of linear equations. It is based on
splitting the matrix into its diagonal part and its off-diagonal part.
#include <iostream>
#include <cmath>
#include <algorithm>
#include <iomanip>
using namespace std;

// Function to implement the Jacobi Method
void jacobiMethod(double A[3][3], double b[3], double x[3], double tol, int maxIter) {
    int n = 3;
    double x_new[3];
    int iter = 0;
    double error;

    do {
        for (int i = 0; i < n; i++) {
            x_new[i] = b[i];
            for (int j = 0; j < n; j++) {
                if (i != j) {
                    x_new[i] -= A[i][j] * x[j];
                }
            }
            x_new[i] /= A[i][i];
        }

        error = 0.0;
        for (int i = 0; i < n; i++) {
            error = max(error, fabs(x_new[i] - x[i]));
        }

        // Copy the new values to x for the next iteration
        for (int i = 0; i < n; i++) {
            x[i] = x_new[i];
        }
        iter++;
        if (iter >= maxIter) {
            cout << "Maximum iterations reached!" << endl;
            break;
        }
    } while (error > tol);

    // Output the results
    cout << "Solution: ";
    for (int i = 0; i < n; i++) {
        cout << fixed << setprecision(4) << x[i] << " ";
    }
    cout << endl;
}

int main() {
    double A[3][3] = {
        {4, -1, 0},
        {-1, 4, -1},
        {0, -1, 3}
    };
    double b[3] = {15, 10, 10}; // Right-hand side vector
    double x[3] = {0, 0, 0};    // Initial guess

    double tol = 0.0001; // Tolerance
    int maxIter = 1000;  // Maximum iterations

    jacobiMethod(A, b, x, tol, maxIter);

    return 0;
}
2. Gauss-Seidel Method
The Gauss-Seidel method is an iterative technique used for solving a system of linear equations. It is
similar to the Jacobi method but updates the values of the variables as soon as they are computed.
#include <iostream>
#include <cmath>
#include <algorithm>
#include <iomanip>
using namespace std;

// Function to implement the Gauss-Seidel Method
void gaussSeidelMethod(double A[3][3], double b[3], double x[3], double tol, int maxIter) {
    int n = 3;
    int iter = 0;
    double error;

    do {
        error = 0.0;

        for (int i = 0; i < n; i++) {
            double x_new = b[i];
            for (int j = 0; j < n; j++) {
                if (i != j) {
                    x_new -= A[i][j] * x[j]; // Uses already-updated values for j < i
                }
            }
            x_new /= A[i][i];

            // Accumulate the error before overwriting the old value,
            // otherwise the error would always be zero
            error = max(error, fabs(x_new - x[i]));
            x[i] = x_new; // Update the value of x directly after calculating it
        }

        iter++;
        if (iter >= maxIter) {
            cout << "Maximum iterations reached!" << endl;
            break;
        }
    } while (error > tol);

    // Output the results
    cout << "Solution: ";
    for (int i = 0; i < n; i++) {
        cout << fixed << setprecision(4) << x[i] << " ";
    }
    cout << endl;
}

int main() {
    double A[3][3] = {
        {4, -1, 0},
        {-1, 4, -1},
        {0, -1, 3}
    };
    double b[3] = {15, 10, 10}; // Right-hand side vector
    double x[3] = {0, 0, 0};    // Initial guess

    double tol = 0.0001; // Tolerance
    int maxIter = 1000;  // Maximum iterations

    gaussSeidelMethod(A, b, x, tol, maxIter);

    return 0;
}
3. Largest Eigenvalue by Power Method
The Power Method is an iterative method used to find the largest eigenvalue (in absolute value) of a
matrix. It works by repeatedly multiplying a starting vector by the matrix and normalizing the result
at each step.
#include <iostream>
#include <cmath>
#include <algorithm>
#include <iomanip>
using namespace std;

// Function to perform the Power Method to find the largest eigenvalue
double powerMethod(double A[3][3], double tol, int maxIter) {
    int n = 3;
    double x[3] = {1, 1, 1}; // Initial guess vector
    double x_new[3];
    double eigenValue = 0.0;
    int iter = 0;
    double error;

    do {
        // Multiply the matrix A with vector x
        for (int i = 0; i < n; i++) {
            x_new[i] = 0.0;
            for (int j = 0; j < n; j++) {
                x_new[i] += A[i][j] * x[j];
            }
        }

        // Find the largest absolute component of x_new (the eigenvalue estimate)
        eigenValue = fabs(x_new[0]);
        for (int i = 1; i < n; i++) {
            if (fabs(x_new[i]) > eigenValue) {
                eigenValue = fabs(x_new[i]);
            }
        }

        // Normalize the new vector by its largest component
        for (int i = 0; i < n; i++) {
            x_new[i] /= eigenValue;
        }

        // Calculate error (difference between old and new vector)
        error = 0.0;
        for (int i = 0; i < n; i++) {
            error = max(error, fabs(x_new[i] - x[i]));
        }

        // Copy x_new to x for the next iteration
        for (int i = 0; i < n; i++) {
            x[i] = x_new[i];
        }

        iter++;
        if (iter >= maxIter) {
            cout << "Maximum iterations reached!" << endl;
            break;
        }
    } while (error > tol);

    return eigenValue;
}

int main() {
    double A[3][3] = {
        {4, -1, 0},
        {-1, 4, -1},
        {0, -1, 3}
    };

    double tol = 0.0001; // Tolerance
    int maxIter = 1000;  // Maximum iterations

    double eigenValue = powerMethod(A, tol, maxIter);

    cout << "Largest Eigenvalue: " << fixed << setprecision(4) << eigenValue << endl;

    return 0;
}
