Nonlinear Algebraic Equations Example

Continuous Stirred Tank Reactor (CSTR).

Look for steady state concentrations & temperature.

In: Ns species with concentrations ci^(in),
heat capacities cp,i^(in) and temperature T^(in).

Inside: Nr reactions with stoichiometric coefficients σα,i
and reaction rates rα.

Out: Ns species with concentrations ci
(some c's may be equal to zero),
heat capacities cp,i and temperature T.
November 2001 10.001 Introduction to Computer Methods
Nonlinear Algebraic Equations Example

Mass balance for species i = 1, 2, …, Ns:

(F/V)(ci^(in) − ci) + ∑α=1→Nr σα,i rα = 0

Energy balance:

(F/V) ∑i=1→Ns (ci^(in) cp,i^(in) T^(in) − ci cp,i T) − ∑α=1→Nr ∆Hα rα = 0

Column of unknown variables: x = [c1, c2, …, cNs, T]^T
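The two balances can be assembled into a residual vector f(x) for a numerical solver. A minimal sketch (the rate laws rα(c, T) are left unspecified in the slides, so they are passed in as a user-supplied function; all names here are illustrative):

```python
import numpy as np

def cstr_residual(x, F_over_V, c_in, cp_in, T_in, cp, sigma, rates, dH):
    """Steady-state residuals f(x) for the CSTR, x = [c_1, ..., c_Ns, T].

    sigma is the Ns x Nr matrix of stoichiometric coefficients sigma_{alpha,i},
    rates(c, T) returns the Nr reaction rates r_alpha (user-supplied model),
    dH holds the Nr reaction enthalpies Delta-H_alpha.
    """
    c, T = x[:-1], x[-1]
    r = rates(c, T)
    # Mass balance: (F/V)(c_in_i - c_i) + sum_alpha sigma_{alpha,i} r_alpha = 0
    mass = F_over_V * (c_in - c) + sigma @ r
    # Energy balance:
    # (F/V) sum_i (c_in_i cp_in_i T_in - c_i cp_i T) - sum_alpha dH_alpha r_alpha = 0
    energy = F_over_V * np.sum(c_in * cp_in * T_in - c * cp * T) - dH @ r
    return np.append(mass, energy)
```

With no reactions, c = c^(in) and T = T^(in) makes every residual vanish, which is a quick sanity check on the signs.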

Nonlinear Algebraic Equations Example
Each of the above equations may be written in the general form
fi(x1, x2, …, xN) = 0. In vector form: f(x) = 0.

f1(x1, x2, …, xN) = 0
f2(x1, x2, …, xN) = 0          f(x) = [f1(x), f2(x), …, fN(x)]^T
…
fN(x1, x2, …, xN) = 0


Let x̂ be the solution satisfying f(x̂) = 0.

We do not know x̂ and take x^[0] as an initial guess.
Nonlinear Algebraic Equations

We need to form a sequence of estimates to the solution:
x^[1], x^[2], x^[3], … that will hopefully converge to x̂.

Thus we want:

lim m→∞ x^[m] = x̂

lim m→∞ ‖x̂ − x^[m]‖ = 0

Unlike with linear equations, we can’t say much about
existence or uniqueness of solutions, even for
a single equation.

Single Nonlinear Equation

We assume f(x) is infinitely differentiable at the solution x̂.

Then f(x) may be Taylor expanded around x̂:

f(x) = f(x̂) + (x − x̂)f′(x̂) + (1/2)(x − x̂)² f″(x̂) + (1/3!)(x − x̂)³ f‴(x̂) + …

Assume x^[0] is close to x̂ so that the series may be truncated:

f(x^[0]) ≈ (x^[0] − x̂)f′(x^[0]), or

f(x^[0]) ≈ (x^[0] − x^[1])f′(x^[0])   ⇒   x^[1] = x^[0] − f(x^[0])/f′(x^[0])   - Newton’s method

Example: f(x) = (x − 3)(x − 2)(x − 1) = x³ − 6x² + 11x − 6
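The update x^[1] = x^[0] − f(x^[0])/f′(x^[0]), applied repeatedly, is easy to sketch in code (a minimal illustration using the cubic above; the function names are our own):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=100):
    """Newton's method: iterate x <- x - f(x)/f'(x) until |f(x)| < tol."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / fprime(x)
    return x

f  = lambda x: (x - 3) * (x - 2) * (x - 1)   # = x^3 - 6x^2 + 11x - 6
fp = lambda x: 3 * x**2 - 12 * x + 11
```

Starting from x^[0] = 0.5 the iteration settles on the root x = 1; which root is found depends strongly on the starting point, as the next slides show.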
Single Nonlinear Equation

For f(x) = (x − 3)(x − 2)(x − 1) = x³ − 6x² + 11x − 6 = 0, we see that
Newton’s method converges to the root at x = 2 only if
1.6 < x^[0] < 2.4.

We may look at the direction of the first step and see
why: x^[1] − x^[0] = −f(x^[0])/f′(x^[0]).

So, we have 2 “easy” roots, x = 1 and x = 3, and a more
difficult one.

We may “factor out” the roots we already know.

Single Nonlinear Equation

Introduce g(x) = f(x) / [(x − 3)(x − 1)]

g′(x) = f′(x)/[(x − 3)(x − 1)] − f(x)/[(x − 3)(x − 1)] · [1/(x − 3) + 1/(x − 1)]

We now use Newton’s method to find the roots of g(x):

x^[i+1] = x^[i] − g(x^[i])/g′(x^[i])   →   get x = 2.
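The deflated iteration can be coded directly from the formulas above (a sketch; f and f′ are the cubic from the earlier slide):

```python
f  = lambda x: x**3 - 6 * x**2 + 11 * x - 6   # roots at 1, 2, 3
fp = lambda x: 3 * x**2 - 12 * x + 11

def g(x):
    # f(x) with the known roots x = 3 and x = 1 divided out
    return f(x) / ((x - 3) * (x - 1))

def gp(x):
    # g'(x) from the quotient rule, as on the slide
    d = (x - 3) * (x - 1)
    return fp(x) / d - f(x) / d * (1 / (x - 3) + 1 / (x - 1))

x = 5.0                         # far outside the narrow basin 1.6 < x0 < 2.4
for _ in range(50):
    x -= g(x) / gp(x)
```

Since f is cubic, g(x) here reduces exactly to x − 2, so a single Newton step lands on x = 2 even from x^[0] = 5; for higher-degree polynomials deflation still widens the basin of the remaining roots.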

Systems of Nonlinear Equations
Let’s extend the method to multiple equations:

f1(x1, x2, …, xN) = 0
f2(x1, x2, …, xN) = 0      ⇒  f(x) = 0
…                              Start from initial guess x^[0].
fN(x1, x2, …, xN) = 0

As before, expand each equation at the solution x̂ with f(x̂) = 0:

fi(x) = fi(x̂) + ∑j=1→N [∂fi(x̂)/∂xj](xj − x̂j)
        + (1/2) ∑j=1→N ∑k=1→N [∂²fi(x̂)/∂xj∂xk](xj − x̂j)(xk − x̂k) + …

Assume x^[0] is close to x̂ and discard quadratic terms:

fi(x) ≈ ∑j=1→N [∂fi(x̂)/∂xj](xj − x̂j)

Systems of Nonlinear Equations

Let’s define the Jacobian matrix J(x̂) with the elements:

Jij(x̂) = ∂fi(x̂)/∂xj

Then our approximate expansion may be written as:

fi(x) ≈ ∑j=1→N Jij(x̂)(xj − x̂j) = [J(x̂)(x − x̂)]i

This gives us the linear system:

f(x) ≈ J(x̂)(x − x̂)

Systems of Nonlinear Equations
 ∧ ∧
[i +1] 
f (x ) = J(x )(x − x ) =  J(x)(x − x) 
[i] [i] [i]

 i
Note, Jacobian is evaluated at the position of
an old iteration, not at an unknown solution
Defining ∆x [i] = x [i +1] − x [i] , we rewrite the equation as
J(x [i] ) ∆x [i] = −f (x [i] ) or just J[i] ∆x [i] = −f [i]
The iterations are continued untill some convergence
criteria are met:
relative error f [i] ≤ δ rel f [0]
absolute error f [i] ≤ δ abs
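The scheme J^[i] ∆x^[i] = −f^[i] with the two stopping criteria can be sketched as follows (illustrative code, not from the slides; the helper name and defaults are our own):

```python
import numpy as np

def newton_system(f, jac, x0, delta_rel=1e-12, delta_abs=1e-12, max_iter=50):
    """Newton's method for f(x) = 0: solve J(x) dx = -f(x), update x <- x + dx."""
    x = np.asarray(x0, dtype=float)
    f0 = np.linalg.norm(f(x))
    for _ in range(max_iter):
        fx = f(x)
        # Stop when either the relative or the absolute criterion is met
        if np.linalg.norm(fx) <= max(delta_rel * f0, delta_abs):
            break
        dx = np.linalg.solve(jac(x), -fx)   # J[i] dx[i] = -f[i]
        x = x + dx
    return x
```

Usage: pass callables returning the residual vector and the Jacobian matrix, plus an initial guess; the return value is the last iterate.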
Systems of Nonlinear Equations

Newton’s method does not always converge. If it does,
can we estimate the error?

Let our function f(x) be continuously differentiable in the
vicinity of x, and let the segment connecting x and x + p lie
entirely inside this vicinity.

J(x + sp) = ∂f/∂x^T evaluated at x + sp

f(x + p) = f(x) + ∫₀¹ J(x + sp) p ds

- use a path integral along the line from x to x + p, parametrized by s ∈ [0, 1].

Systems of Nonlinear Equations

Add and subtract J(x)p on the RHS:

f(x + p) = f(x) + J(x)p + ∫₀¹ [J(x + sp) − J(x)] p ds

In Newton’s method we ignore the integral term and
choose p so that f(x) + J(x)p = 0. The error in this case is:

f(x + p) = ∫₀¹ [J(x + sp) − J(x)] p ds

What is the upper bound on this error?

f(x) and J(x) are continuous:
J(x + sp) − J(x) → 0 as p → 0 for all 0 ≤ s ≤ 1.
Systems of Nonlinear Equations

The norm of a matrix is defined as: ‖A‖ = max v≠0 ‖Av‖/‖v‖,

so for any y, ‖A‖ ≥ ‖Ay‖/‖y‖, or ‖Ay‖ ≤ ‖A‖ ‖y‖. Therefore

‖∫₀¹ [J(x + sp) − J(x)] p ds‖ ≤ ∫₀¹ ‖J(x + sp) − J(x)‖ ds · ‖p‖

The error goes down at least as fast as ‖p‖, because for
a continuous Jacobian J(x + sp) − J(x) → 0 as p → 0.
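The inequality ‖Ay‖ ≤ ‖A‖ ‖y‖ is easy to check numerically for the induced 2-norm (a quick illustration; the random test matrix is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
A_norm = np.linalg.norm(A, 2)       # induced 2-norm: max ||Av|| / ||v||
for _ in range(1000):
    y = rng.standard_normal(4)
    # small tolerance factor guards against floating-point rounding
    assert np.linalg.norm(A @ y) <= A_norm * np.linalg.norm(y) * (1 + 1e-12)
```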

Systems of Nonlinear Equations
If we suppose that there exists some L > 0 such that

‖J(y) − J(z)‖ ≤ L ‖y − z‖,

i.e. there is some upper bound on the “stretching” effect of J, then

‖J(x + sp) − J(x)‖ ≤ L ‖sp‖ = L s ‖p‖,

so

∫₀¹ ‖J(x + sp) − J(x)‖ ds ≤ ∫₀¹ L s ‖p‖ ds = (L/2) ‖p‖,

and in this case

‖f(x + p)‖ = ‖∫₀¹ [J(x + sp) − J(x)] p ds‖ ≤ (L/2) ‖p‖ · ‖p‖ = O(‖p‖²)

Systems of Nonlinear Equations
Thus if we are at distance ‖p‖ from the solution, our error
scales as ‖p‖².

What about convergence? How does the error scale with
the number of iterations?

The answer is:

‖x^[i+1] − x̂‖ = O(‖x^[i] − x̂‖²)  - local quadratic convergence,

a very fast one! Works when you are close enough to
the solution.

Systems of Nonlinear Equations

‖x^[i] − x̂‖:   0.1 = 10⁻¹
‖x^[i+1] − x̂‖: 0.01 = 10⁻²
‖x^[i+2] − x̂‖: 0.0001 = 10⁻⁴
‖x^[i+3] − x̂‖: 0.00000001 = 10⁻⁸

Works if we are close enough to an isolated
(non-singular) solution: det J(x̂) ≠ 0.
 
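The doubling of correct digits can be observed on a simple example, f(x) = x² − 2 (our own illustration, not the system from the slides):

```python
x, root = 3.0, 2 ** 0.5
errors = []
for _ in range(6):
    errors.append(abs(x - root))
    x -= (x * x - 2) / (2 * x)     # Newton step for f(x) = x^2 - 2
# errors shrink roughly as e, e^2, e^4, ...: the exponent doubles each step
```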
Systems of Nonlinear Equations, Example
Let’s examine how Newton’s method
works for a simple system:

f1 = 3x1³ + 4x2² − 145 = 0
f2 = 4x1² − x2³ + 28 = 0         x̂ = [3, 4]^T

The Jacobian is:

J = [∂f1/∂x1  ∂f1/∂x2]  =  [9x1²    8x2 ]
    [∂f2/∂x1  ∂f2/∂x2]     [8x1   −3x2²]
Systems of Nonlinear Equations, Example

At each step we have the following system


to solve :

J =
[i]
 ( )
9 x1
[i ] 2
8x 2
[i ] 

 8x1 ( )
[i ] [i ] 2
− 3 x 2 

f =
[i ]
( ) ( )
3 x [i ] 3 + 4 x [i ] 2 − 145
 1 2

( ) ( )
4 x [i ] 2 − x [i ] 3 + 28 
 1 2 
[ i +1]
J ∆x = −f
[i] [i ] [i ]
x = x + ∆x
[i ] [i ]

Systems of Nonlinear Equations, Example

Let us examine the performance of
Newton’s method with the convergence
criterion ‖f‖ = max{|f1|, |f2|} < δabs,
δabs = 10⁻¹⁰.
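Putting the pieces together for this system (the initial guess x^[0] = [2.5, 3.5] is our own choice; the slides do not state one):

```python
import numpy as np

def f(x):
    x1, x2 = x
    return np.array([3 * x1**3 + 4 * x2**2 - 145,
                     4 * x1**2 - x2**3 + 28])

def J(x):
    x1, x2 = x
    return np.array([[9 * x1**2,  8 * x2   ],
                     [8 * x1,    -3 * x2**2]])

x = np.array([2.5, 3.5])                    # initial guess (our choice)
delta_abs = 1e-10
for _ in range(50):
    if np.max(np.abs(f(x))) < delta_abs:    # ||f|| = max(|f1|, |f2|)
        break
    x += np.linalg.solve(J(x), -f(x))       # J dx = -f;  x <- x + dx
```

From this starting point the iteration reaches x̂ = [3, 4] in a handful of steps.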

Newton’s Method

Newton’s method works well close to the solution,
but otherwise takes large erratic steps and shows poor
performance and reliability.

Let’s try to avoid such large steps: employ the reduced
step Newton’s algorithm.

The “full” Newton’s step gives x^[i+1] = x^[i] + p^[i]. We’ll use
only a fraction of the step: x^[i+1] = x^[i] + λi p^[i], 0 < λi < 1.

Newton’s Method

How do we choose λi?

Simplest way - weak line search:
start with λi = 2⁻ᵐ for m = 0, 1, 2, …
As we reduce the value of λi by a factor of 2 at
each step, we accept the first one that satisfies a
descent criterion:

‖f(x^[i] + λi p^[i])‖ < ‖f(x^[i])‖

It can be proven that if the Jacobian is not singular,
the correct solution will be found by the reduced
step Newton’s algorithm.
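A sketch of the reduced-step algorithm with the weak line search (the function name and the lower safeguard on λ are our own additions):

```python
import numpy as np

def damped_newton(f, jac, x0, delta_abs=1e-10, max_iter=100):
    """Reduced-step Newton: x <- x + lam * p with lam = 1, 1/2, 1/4, ..."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx, np.inf) < delta_abs:
            break
        p = np.linalg.solve(jac(x), -fx)       # full Newton step
        lam = 1.0
        # Weak line search: accept the first lam = 2^-m that gives descent,
        # i.e. ||f(x + lam p)|| < ||f(x)||; the lam > 1e-10 guard stops
        # the halving if no descent is found
        while np.linalg.norm(f(x + lam * p)) >= np.linalg.norm(fx) and lam > 1e-10:
            lam /= 2
        x = x + lam * p
    return x
```

Near the solution a full step (λ = 1) already satisfies the descent criterion, so the damping switches itself off and the quadratic convergence of plain Newton is recovered.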