Petcov Nicolai
29.02.2024
Problem 2.1
a) The Taylor series used to approximate the function is:

$$e^{-t^2} = \sum_{n=0}^{\infty} \frac{(-t^2)^n}{n!} = 1 - t^2 + \frac{t^4}{2!} - \frac{t^6}{3!} + \cdots$$

Integrating this series term by term from 0 to x and multiplying by $\frac{2}{\sqrt{\pi}}$ gives:

$$\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n+1}}{n!\,(2n+1)}$$
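Writing out the first few terms of this sum:

$$\operatorname{erf}(x) \approx \frac{2}{\sqrt{\pi}}\left(x - \frac{x^3}{3} + \frac{x^5}{10} - \frac{x^7}{42} + \cdots\right)$$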
b)
c)
import math

def T(x, n):
    return (2/math.sqrt(math.pi)) * ((-1)**n * x**(2*n+1)) / (math.factorial(n) * (2*n+1))

error = 10**-6
n = 0
x = 3
approx = 0

while abs(math.erf(x) - approx) > error:
    approx += T(x, n)
    n += 1

print(f"The number of terms needed: {n}")
print(f"Approximated erf(3): {approx}")
print(f"Exact erf(3): {math.erf(x)}")
Output:
The number of terms needed: 32
Approximated erf(3): 0.999977372781473
Exact erf(3): 0.9999779095030014
Problem 2.2
The first 50 terms of the sequence Rn, together with the errors φ − Rn, are listed below.
φ = 1.61803398875
{1.0: 0.6180339887499999,
 2.0: -0.38196601125000007,
 1.5: 0.11803398874999993,
 1.6666666666666667: -0.04863267791666681,
 1.6: 0.01803398874999984,
 1.625: -0.00696601125000007,
 1.6153846153846154: 0.002649373365384511,
 1.619047619047619: -0.0010136302976191391,
 1.6176470588235294: 0.00038692992647049174,
 1.6181818181818182: -0.00014782943181823605,
 1.6179775280898876: 5.646066011233408e-05,
 1.6180555555555556: -2.1566805555650603e-05,
 1.6180257510729614: 8.237677038502866e-06,
 1.6180371352785146: -3.1465285146303756e-06,
 1.618032786885246: 1.2018647539413507e-06,
 1.618034447821682: -4.59071682001877e-07,
 1.6180338134001253: 1.7534987462042295e-07,
 1.618034055727554: -6.697755416951168e-08,
 1.6180339631667064: 2.5583293483677494e-08,
 1.6180339985218033: -9.771803366476206e-09,
 1.618033985017358: 3.732641973286377e-09,
 1.6180339901755971: -1.425597195847672e-09,
 1.618033988205325: 5.44674971791892e-10,
 1.618033988957902: -2.0790213994814621e-10,
 1.6180339886704431: 7.955680558779932e-11,
 1.6180339887802426: -3.024269723539419e-11,
 1.618033988738303: 1.16968656982408e-11,
 1.6180339887543225: -4.3225423240755845e-12,
 1.6180339887482036: 1.7963408538435033e-12,
 1.6180339887505408: -5.409006575973763e-13,
 1.6180339887496482: 3.517186542012496e-13,
 1.618033988749989: 1.0880185641326534e-14,
 1.618033988749859: 1.4099832412739488e-13,
 1.6180339887499087: 9.126033262418787e-14,
 1.6180339887498896: 1.1035616864774056e-13,
 1.618033988749897: 1.0302869668521453e-13,
 1.618033988749894: 1.0591527654923993e-13,
 1.6180339887498951: 1.0480505352461478e-13,
 1.6180339887498947: 1.0524914273446484e-13,
 1.618033988749895: 1.0502709812953981e-13}
Format: {Rn: φ − Rn, Rn+1: φ − Rn+1, …}
Code:
import numpy as np n2 = np.float64(nth)
nterms = 51 count += 1
# first two terms count = n = 0
n1, n2 = 0, 1 Rn = {}
count = 0 while count < nterms-1:
number = fib[n+1]/fib[n]
fib = [] Rn[number] =
np.float64(1.61803398875-number)
while count < nterms: n += 1
fib.append(n2) count += 1
nth = n1 + n2 print(Rn)
n1 = n2
The order of convergence is linear, because the ratio of successive errors approaches a constant C < 1:

$$\frac{|x_{n+1} - x^*|}{|x_n - x^*|} \approx 0.61 = C < 1$$
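This can be verified numerically by reusing the Rn dictionary from the code above (a quick sketch; only the early ratios are meaningful, because the later errors sit at the 1e-13 floating-point noise floor):

# Successive error ratios |φ - R_{n+1}| / |φ - R_n|; they settle near 1/φ ≈ 0.618
errors = [abs(err) for err in Rn.values()]
ratios = [errors[i+1] / errors[i] for i in range(len(errors) - 1)]
print(ratios[:10])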
Problem 2.3
from sympy import symbols, diff, log

# Initial guess
R_init = 15000

# Error tolerance
err_tol = 10**-6

R = symbols('R')
f = 1.129241 * 10**-3 + (2.341077 * 10**-4)*log(R) + 8.775468 * (10**-8 * log(R)**3)
f_one = f - (1 / (19.01 + 273.15))
f_one_prime = diff(f_one, R)
f_two = f - (1 / (18.99 + 273.15))
f_two_prime = diff(f_two, R)

def newton_method(f, df, x0, tol):
    while abs(f.subs(R, x0)) > tol:
        x0 = x0 - f.subs(R, x0).evalf() / df.subs(R, x0).evalf()
    return x0

root_one = newton_method(f_one, f_one_prime, R_init, err_tol)
root_two = newton_method(f_two, f_two_prime, R_init, err_tol)
print(f"The obtained range for resistance value is between: "
      f"{min(root_one, root_two)} and {max(root_one, root_two)}")
The obtained range for resistance value is between: 13065.8696230801 and 13077.7714262619
Problem 2.4
a)
b)
import numpy as np
import matplotlib.pyplot as plt
from sympy import cos, symbols, diff, exp, pi

def f(x):
    return np.exp(x - np.pi) + np.cos(x) - x + np.pi

x = np.linspace(0, 6, 1000)
y = f(x)
plt.plot(x, y)
plt.grid(True)
plt.xlabel('x')
plt.ylabel('f(x)')
plt.show()

# Newton's method
x = symbols('x')
f = exp(x - pi) + cos(x) - x + pi
df = diff(f, x)

def newton_method(f, df, x0, e):
    while abs(f.subs(x, x0)) > e:
        x0 = x0 - f.subs(x, x0).evalf() / df.subs(x, x0).evalf()
    return x0

x0 = 0.0         # Initial guess
err_tol = 1e-12  # Error tolerance

print(newton_method(f, df, x0, err_tol))

# Newton's method with quadratic convergence
m = 2

def quadratic_newton_method(f, df, x0, e, m):
    while abs(f.subs(x, x0)) > e:
        x0 = x0 - m * f.subs(x, x0).evalf() / df.subs(x, x0).evalf()
    return x0

print(quadratic_newton_method(f, df, x0, err_tol, m))
Newton’s method: 3.14159337581245
• Newton's method typically converges quadratically, which means that the number of correct digits approximately doubles with each iteration near a simple root. However, the convergence rate may be slower or faster depending on the behavior of the function and the initial guess. In this case we can observe fast convergence, but not necessarily quadratic.
• Reason for the convergence rate: the convergence rate of Newton's method depends on the behavior of the function and its derivative near the root. If the derivative near the root is close to zero, the convergence rate may slow down; here f′(π) = e⁰ − sin(π) − 1 = 0, so x = π is not a simple root, which is why the modified method above uses m = 2. Also, if the initial guess is far from the root or if there are multiple roots nearby, the convergence rate may be affected.
• Improving the convergence rate: several methods can be used to improve the convergence rate of Newton's method:
  - choosing a better initial guess, closer to the root;
  - using methods like the secant method (a sketch follows this list) or the modified Newton's method, which may provide faster convergence in some cases.
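As an illustration of the secant alternative mentioned in the last bullet, here is a minimal sketch applied to the same function; the starting points x0 = 0.0 and x1 = 1.0, the tolerance, and the iteration cap are illustrative choices, not part of the original solution:

import math

def f(x):
    return math.exp(x - math.pi) + math.cos(x) - x + math.pi

def secant_method(f, x0, x1, e, max_iter=200):
    # Secant update: x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:  # avoid division by zero when the secant is flat
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(f(x2)) < e:
            return x2
        x0, x1 = x1, x2
    return x1

print(secant_method(f, 0.0, 1.0, 1e-12))

Like Newton's method, the secant method also slows down near the multiple root at x = π, which is why the iteration cap matters here.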
c) Quadratic method: 3.14159265351430
d)
import math

def f(x):
    return math.exp(x - math.pi) + math.cos(x) - x + math.pi

def g(x):
    return math.exp(x - math.pi) + math.cos(x) + math.pi

def point_iteration(x0, n, e):
    condition = True
    step = 0
    while condition:
        step += 1
        x1 = g(x0)
        print(f'Step: {step}, x = {x0}, f(x) = {f(x0):.6f}')
        x0 = x1
        if step == n:
            print('Is not convergent')
            break
        condition = abs(f(x0)) > e
    else:
        print(f"Found after {step} iterations \t x = {x0}")

N = int(input('STEP: '))
x0 = float(input('Initial guess: '))
e = float(input('Error tolerance: '))

point_iteration(x0, N, e)
The convergence of the fixed-point iteration is not guaranteed here and may be very slow.
Therefore, while fixed-point iterations can be powerful methods for finding roots under certain conditions, they are not always suitable or efficient for every function. In this case, Newton's method or its variants, such as the modified (quadratic) method above, provide faster and more reliable convergence.
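The slow convergence can be traced to the derivative of the iteration function at the fixed point. A quick check (this snippet is an addition, not part of the original code):

import sympy as sp

# Iteration function g(x) = exp(x - pi) + cos(x) + pi from the code above
x = sp.symbols('x')
g = sp.exp(x - sp.pi) + sp.cos(x) + sp.pi

# g'(pi) = exp(0) - sin(pi) = 1, so the contraction condition |g'| < 1
# fails at the fixed point x = pi and convergence is not guaranteed.
print(sp.diff(g, x).subs(x, sp.pi))  # -> 1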
Problem 2.5
a)
import math

def f(x):
    return math.cos(x) - 1

def g(x):
    return math.cos(x) + x - 1

def point_iteration(x0, n, e):
    condition = True
    step = 0
    while condition:
        step += 1
        x1 = g(x0)
        print(f'Step: {step}, x = {x0}, f(x) = {abs(f(x0)):.6f}')
        x0 = x1
        if step == n:
            print('Is not convergent')
            break
        condition = abs(f(x0)) > e
    else:
        print(f"Found after {step} iterations \t x = {x0}")

N = 1000000
x0 = 0.1
e = 1e-8

point_iteration(x0, N, e)
Found after 14116 iterations x = 0.00014141701294478892
b) The fixed-point iteration method converges slowly compared to methods like the bisection method, especially for functions where the derivative of the iteration function is close to 1 near the root. This can be observed from the number of iterations required for convergence.
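Concretely, for the iteration function g(x) = cos(x) + x − 1 used in part a):

$$g'(x) = 1 - \sin x, \qquad g'(0) = 1 - \sin 0 = 1,$$

so the contraction condition |g′(α)| < 1 fails at the root α = 0, which is consistent with the 14116 iterations observed above.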
# Problem 2.5 b)
import math

def f(x):
    return math.cos(x) - 1 + x

def bisection_method(a, b, e):
    c = 0
    step = 0
    condition = True
    while condition:
        step += 1
        c = (a + b) / 2
        print(f"Step: {step}, Root: {c}, f(x) = {f(c)}")
        if f(a) * f(c) < 0:
            b = c
        else:
            a = c
        condition = abs(f(c)) > e
    return c

root = bisection_method(-1.5, 1, 1e-10)
print(root)
c) To speed up the convergence, we can switch to techniques like Newton's method, the secant method, or the false-position method implemented below, which generally converge faster than plain fixed-point iteration.
import math

def f(x):
    return math.cos(x) - 1 + x

def falsePosition(a, b, e):
    iterations = 0
    c = None
    condition = f(a) * f(b) < 0
    while condition:
        c = a - (b - a) * f(a) / (f(b) - f(a))
        iterations += 1
        if f(a) * f(c) < 0:
            b = c
        else:
            a = c
        condition = abs(f(c)) > e
        print(f"Step: {iterations}, f(c) = {f(c)}")
    return c

root = falsePosition(-1.5, 1, 1e-10)
print(root)
Problem 2.6
1) Newton's method has a higher order of convergence in this case, because it converges quadratically, while the bisection method converges only linearly.
2) The required conditions are: α is a simple root, f(x) is twice continuously differentiable, and the initial guess is sufficiently close to the root α.
3) I would consider another approximation method, because there are situations in which the secant method is faster; alternatively, a hybrid method could be applied (a sketch is given below).
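A minimal sketch of one possible hybrid method, assuming an initial bracket [a, b] with f(a) and f(b) of opposite signs; falling back to bisection whenever the Newton step leaves the bracket is one common design choice, not a prescribed algorithm:

def hybrid_newton_bisection(f, df, a, b, e, max_iter=100):
    # Requires f(a) * f(b) < 0 (a valid bracket around the root).
    x = (a + b) / 2
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < e:
            return x
        # Shrink the bracket so it always contains the root.
        if f(a) * fx < 0:
            b = x
        else:
            a = x
        # Try a Newton step; fall back to bisection if it is unusable.
        dfx = df(x)
        x_new = x - fx / dfx if dfx != 0 else None
        if x_new is None or not (a < x_new < b):
            x_new = (a + b) / 2
        x = x_new
    return x

This keeps the robustness of bisection while taking the faster Newton steps whenever they stay inside the bracket.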
Problem 2.7
1) x = e⁻ˣ: this rearrangement can be used with both the fixed-point method and Newton's method.
2) $x = \dfrac{x + e^{-x}}{2}$: this formula can also be used, but its behavior depends on the initial guess.
3) An even better formula comes from Newton's method:

$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$$

Thus, for f(x) = x + ln(x) (equivalent to x = e⁻ˣ for x > 0):

$$x_{n+1} = x_n - \frac{x_n + \ln(x_n)}{1 + \frac{1}{x_n}}$$
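A minimal sketch of this iteration (the starting point x0 = 0.5 and the fixed iteration count are illustrative choices):

import math

# Newton's method for f(x) = x + ln(x), i.e. solving x = e^(-x)
x0 = 0.5
for _ in range(10):
    x0 = x0 - (x0 + math.log(x0)) / (1 + 1/x0)
print(x0)  # approaches 0.567143..., the solution of x = e^(-x)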
Problem 2.8
1) Approximate rates of convergence: α2 ≈ 0.513; α3 ≈ 0.591; α4 ≈ 0.552; α5 ≈ 0.575; α6 ≈ 0.518.
We can observe that 0 < α < 1, so the method is linearly convergent.
2) The average rate of linear convergence is α ≈ 0.5498.
The bisection method has α = 0.5 per iteration. Comparing the bisection method with α ≈ 0.5498, we observe that the given method is slightly slower than the bisection method.
3) To accelerate the convergence of the method, we consider Aitken's delta-squared process:

$$\hat{x}_n = x_n - \frac{(x_{n+1} - x_n)^2}{x_{n+2} - 2x_{n+1} + x_n}$$
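A minimal sketch of this process, assuming xs is a list of iterates produced by the linearly convergent method above:

def aitken(xs):
    # Accelerated value: x_hat_n = x_n - (x_{n+1} - x_n)^2 / (x_{n+2} - 2*x_{n+1} + x_n)
    accelerated = []
    for i in range(len(xs) - 2):
        denom = xs[i+2] - 2*xs[i+1] + xs[i]
        if denom == 0:  # guard against division by zero once the sequence stalls
            break
        accelerated.append(xs[i] - (xs[i+1] - xs[i])**2 / denom)
    return accelerated

For a linearly convergent sequence, the accelerated values typically converge noticeably faster than the original iterates.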