Optimization: GRG Method
Session: 2016-17

Submitted To:
Dr. Rajiv Kumar Dohare
(Assistant Prof., Chemical Engineering Dept.)

Submitted By:
KM Soni (1724)
Harshit Bajpai (1727)
Rajat K. Agrawal (1730)
Komal Dhanda (1744)
Pritam Agarwala (1747)
Introduction
The generalized reduced gradient (GRG) algorithm was first developed in the late 1960s by Jean Abadie and has since been refined by several other researchers.
It is based on extending methods for linear constraints to apply to nonlinear constraints.
Algorithm
1. Compute the gradient of f(x) at the current point xc, giving ∇f(xc).
2. If the current point xc is close enough to being optimal, stop.
3. Compute a search direction dc using the gradient ∇f(xc) and perhaps other information such as the previous search direction.
4. Determine how far to move along the current search direction dc, starting from the current point xc. This distance αc is most often an approximation of the value of α that minimizes the objective function f(xc + α dc), and it determines the next point xn = xc + αc dc (a minimal sketch of this loop follows the list).
5. Replace the current point xc by the next point xn and return to step 1.
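To make the loop concrete, here is a minimal Python sketch of these five steps, with steepest descent as the direction rule and a simple backtracking line search standing in for the unspecified step-length rule (all names are ours, not the slides'):

```python
import numpy as np

def descent(f, grad, x0, tol=1e-6, max_iter=200):
    """Generic descent loop mirroring steps 1-5 above."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)                   # step 1: gradient at x_c
        if np.linalg.norm(g) < tol:   # step 2: close enough to optimal?
            break
        d = -g                        # step 3: steepest-descent direction
        alpha = 1.0                   # step 4: backtracking picks alpha_c
        while alpha > 1e-12 and f(x + alpha * d) > f(x) - 0.5 * alpha * (g @ g):
            alpha *= 0.5
        x = x + alpha * d             # step 5: x_n replaces x_c
    return x

# Toy usage: minimize (x1 - 1)^2 + (x2 + 2)^2
f = lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2
grad = lambda x: np.array([2 * (x[0] - 1), 2 * (x[1] + 2)])
print(descent(f, grad, [0.0, 0.0]))   # approximately [1, -2]
```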
Generalized Reduced Gradient Method
ALGORITHM
Cost function: y = y(x1, x2, ..., xn)
Constraints: φ1(x1, x2, ..., xn) = 0
             φ2(x1, x2, ..., xn) = 0
             ...
             φm(x1, x2, ..., xn) = 0

n: number of variables
m: number of constraints
n − m: number of decision variables

Assumed: n = 4, m = 2

y = y(x1, x2, x3, x4)

subject to

φ1(x1, x2, x3, x4) = 0
φ2(x1, x2, x3, x4) = 0
Differentiating the constraints:

dφ1 = (∂φ1/∂x1)dx1 + (∂φ1/∂x2)dx2 + (∂φ1/∂x3)dx3 + (∂φ1/∂x4)dx4 = 0
dφ2 = (∂φ2/∂x1)dx1 + (∂φ2/∂x2)dx2 + (∂φ2/∂x3)dx3 + (∂φ2/∂x4)dx4 = 0

Writing φij = ∂φi/∂xj, this becomes

[φ11 φ12; φ21 φ22](m×m) [dx1; dx2](m×1) = −[φ13 φ14; φ23 φ24](m×(n−m)) [dx3; dx4]((n−m)×1)

Thus, with J = [φ11 φ12; φ21 φ22] the m×m constraint Jacobian with respect to the dependent variables x1, x2, and k3 = [φ13; φ23], k4 = [φ14; φ24]:

[dx1; dx2] = −J⁻¹ [φ13 φ14; φ23 φ24] [dx3; dx4] = −J⁻¹ (k3 dx3 + k4 dx4)
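For n = 4 and m = 2 this elimination is just a 2×2 linear solve. A minimal NumPy sketch, with made-up values for the φij:

```python
import numpy as np

# Hypothetical partials phi_ij = d(phi_i)/d(x_j) at the current point
J = np.array([[1.0, 2.0],     # [[phi11, phi12],
              [0.5, 3.0]])    #  [phi21, phi22]]  (m x m Jacobian)
K = np.array([[0.2, 1.0],     # [[phi13, phi14],
              [0.4, 0.1]])    #  [phi23, phi24]]  (m x (n-m))
dx_indep = np.array([0.01, -0.02])   # chosen changes in x3, x4

# d(x1, x2) = -J^-1 K d(x3, x4): keeps both constraints satisfied to first order
dx_dep = -np.linalg.solve(J, K @ dx_indep)
print(dx_dep)
```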
For the objective,

y_new = y_old + Σ(i=1 to n) (∂y/∂xi) dxi

Writing yi = ∂y/∂xi and splitting the sum into dependent and decision variables:

y_new = y_old + [y1  y2][dx1; dx2] + [y3  y4][dx3; dx4]

Substituting [dx1; dx2] = −J⁻¹(k3 dx3 + k4 dx4) from above:

y_new = y_old + (y3 − [y1  y2] J⁻¹ k3) dx3 + (y4 − [y1  y2] J⁻¹ k4) dx4
Therefore we define the generalized reduced gradient as

GRGi = yi − [y1 ... ym](1×m) [J⁻¹](m×m) [ki](m×1)

for i = m+1, ..., n, so that

y_new = y_old + Σ(i=m+1 to n) GRGi Δxi
Calculate GRGi:
- If GRGi > 0, take Δxi < 0; if GRGi < 0, take Δxi > 0.
- Update the decision variables, x^(i+1) = x^(i) + Δx^(i), and recompute GRGi.
- When GRGi = 0, stop: xOptimum = x^(i+1).
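Putting the derivation together, here is a sketch of the GRGi computation and the sign-based update for this n = 4, m = 2 structure (the function names, gradients, and fixed step size are illustrative assumptions):

```python
import numpy as np

def reduced_gradient(grad_y, grad_phi):
    """GRGi = yi - [y1 ... ym] J^-1 ki for each decision variable.

    grad_y:   length-n gradient of the cost (here n = 4)
    grad_phi: m x n Jacobian of the constraints (here m = 2);
    variables are ordered so the first m are dependent.
    """
    gy_dep, gy_ind = grad_y[:2], grad_y[2:]
    J = grad_phi[:, :2]          # [[phi11, phi12], [phi21, phi22]]
    K = grad_phi[:, 2:]          # columns k3, k4
    return gy_ind - gy_dep @ np.linalg.solve(J, K)

def grg_step(x_ind, grg, step=0.1):
    # Sign rule from above: dxi < 0 where GRGi > 0, dxi > 0 where GRGi < 0
    return x_ind - step * grg

# Toy usage with made-up gradients at some current point:
grg = reduced_gradient(np.array([1.0, -2.0, 0.5, 3.0]),
                       np.array([[1.0, 2.0, 0.2, 1.0],
                                 [0.5, 3.0, 0.4, 0.1]]))
print(grg, grg_step(np.array([0.0, 0.0]), grg))
```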
EXAMPLE: Minimize

f(x1, x2, x3) = (x1 − x2)² + (x2 − x3)⁴

subject to

g1(X) = x1(1 + x2²) + x3⁴ − 3 = 0
−3 ≤ xi ≤ 3, i = 1, 2, 3
STEP 1:
We choose arbitrarily the independent and dependent variables as

Y = [y1; y2] = [x1; x2],  Z = [z1] = [x3]

Let the starting vector be

X1 = [−2.6; 2; 2]

with f(X1) = 21.16.
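Before iterating, the starting data can be checked with a couple of lines of Python (NumPy assumed):

```python
import numpy as np

f  = lambda x: (x[0] - x[1]) ** 2 + (x[1] - x[2]) ** 4
g1 = lambda x: x[0] * (1 + x[1] ** 2) + x[2] ** 4 - 3

X1 = np.array([-2.6, 2.0, 2.0])
print(f(X1))    # 21.16
print(g1(X1))   # 0.0  (X1 is feasible)
```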
STEP 2: Compute the GRG at X1.

∂f/∂x1 = 2(x1 − x2)
∂f/∂x2 = −2(x1 − x2) + 4(x2 − x3)³
∂f/∂x3 = −4(x2 − x3)³

∂g1/∂x1 = 1 + x2²
∂g1/∂x2 = 2 x1 x2
∂g1/∂x3 = 4 x3³

We find, at X1:

∇Yf = [∂f/∂x1; ∂f/∂x2] = [2(−2.6 − 2); −2(−2.6 − 2) + 4(2 − 2)³] = [−9.2; 9.2]
∇Zf = [∂f/∂x3] = [−4(x2 − x3)³] = [0] at X1

C = [∂g1/∂x1  ∂g1/∂x2] = [5  −10.4] at X1

D = [∂g1/∂x3] = [32] at X1,  D⁻¹ = [1/32]

[D⁻¹C] = (1/32)[5  −10.4] = [0.15625  −0.325]

GR = ∇Yf − [D⁻¹C]ᵀ ∇Zf = [−9.2; 9.2] − [0.15625; −0.325](0) = [−9.2; 9.2]
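These quantities are easy to verify numerically. A minimal NumPy sketch of the step-2 computation at X1 (variable names are our own):

```python
import numpy as np

x1, x2, x3 = -2.6, 2.0, 2.0   # starting vector X1

grad_f = np.array([2 * (x1 - x2),
                   -2 * (x1 - x2) + 4 * (x2 - x3) ** 3,
                   -4 * (x2 - x3) ** 3])                    # (-9.2, 9.2, 0)
grad_g = np.array([1 + x2 ** 2, 2 * x1 * x2, 4 * x3 ** 3])  # (5, -10.4, 32)

gY, gZ = grad_f[:2], grad_f[2:]   # split: Y = (x1, x2), Z = (x3,)
C,  D  = grad_g[:2], grad_g[2:]

GR = gY - (C / D) * gZ            # [D^-1 C]^T gZ, with scalar D
print(C / D, GR)                  # [0.15625, -0.325], [-9.2, 9.2]
```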
Step 4: Since GR ≠ 0, X1 is not optimal (step 3), so we use the steepest descent method and take the search direction as

S = −GR = [9.2; −9.2]
Step 5: We find the optimal step length λ along S.

(a) Considering the design variables, we use eq. (7.111) to obtain:

For y1 = x1: λ = (3 − (−2.6)) / 9.2 = 0.6087
For y2 = x2: λ = (−3 − 2) / (−9.2) = 0.5435

Thus the smaller value gives λ1 = 0.5435.

T = −([D]⁻¹[C])S = −[0.15625  −0.325][9.2; −9.2] = −4.4275

and hence, for z1 = x3: λ = (−3 − 2) / (−4.4275) = 1.1293

Thus λ2 = 1.1293.
(b) The upper bound on λ is given by the smaller of λ1 and λ2, which is equal to 0.5435. By expressing

X = [Y + λS; Z + λT]

and minimizing f(λ) = f(X(λ)) over 0 ≤ λ ≤ 0.5435, we obtain λ* = 0.22, so that

dY = λ*S = [2.024; −2.024],  X = [−2.6 + 9.2λ*; 2 − 9.2λ*; 2 − 4.4275λ*] = [−0.576; −0.024; 1.02595]

At this trial point the constraint is violated: g1(X) = {−2.4684} ≠ 0. Holding Y fixed, we apply Newton's method to the dependent variable, dz = [D]⁻¹(0 − g1) with D = ∂g1/∂x3 = 4(1.02595)³ = 4.3196, and iterate until g1 = 0. This gives

x3 = (2.4237)^0.25 = 1.2477

We have

X_new = [−0.576; −0.024; 1.2477]
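A sketch of this whole step in Python (NumPy assumed): a coarse grid search stands in for whatever interpolation the slides used to locate λ*, and Newton's method then restores g1 = 0. Because it solves the constraint exactly from these formulas, the x3 it prints can differ from the slide's rounded 1.2477:

```python
import numpy as np

f  = lambda x: (x[0] - x[1]) ** 2 + (x[1] - x[2]) ** 4
g1 = lambda x: x[0] * (1 + x[1] ** 2) + x[2] ** 4 - 3

X  = np.array([-2.6, 2.0, 2.0])
ST = np.array([9.2, -9.2, -4.4275])   # combined direction (S; T)

# (a)-(b): largest lambda keeping every variable inside [-3, 3]
lam_max = min(((3.0 if s > 0 else -3.0) - xi) / s for xi, s in zip(X, ST))

# line search for lambda* on [0, lam_max] by coarse sampling
grid = np.linspace(0.0, lam_max, 2001)
lam  = grid[np.argmin([f(X + L * ST) for L in grid])]   # about 0.22

Xt = X + lam * ST          # trial point: g1(Xt) != 0 here
for _ in range(20):        # Newton on the dependent variable x3, Y fixed
    Xt[2] -= g1(Xt) / (4 * Xt[2] ** 3)   # dz = -g1 / (dg1/dx3)
print(lam, Xt, g1(Xt))
```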
Next we go to step 1.
STEP 1:
We do not have to change the set of independent
and dependent variables and hence we go to the next
step.
STEP 2:
We compute the GRG at the current X:

∇Yf = [∂f/∂x1; ∂f/∂x2] = [2(−0.576 + 0.024); −2(−0.576 + 0.024) + 4(−0.024 − 1.2477)³] = [−1.104; −7.1225]

∇Zf = [∂f/∂z1] = [∂f/∂x3] = [−4(−0.024 − 1.2477)³] = [8.2265]

C = [∂g1/∂x1  ∂g1/∂x2] = [1 + (−0.024)²  2(−0.576)(−0.024)] = [1.000576  0.027648]

D = [∂g1/∂x3] = [4 x3³] = [4(1.2477)³] = [7.7694]

[D]⁻¹C = (1/7.7694)[1.000576  0.027648] = [0.128784  0.003558]
GR = ∇Yf − [[D]⁻¹C]ᵀ ∇Zf = [−1.104; −7.1225] − [0.128784; 0.003558](8.2265) = [−2.1634; −7.1518]
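These second-iteration numbers can be reproduced the same way as before. A small sketch (the inputs are the slide's rounded values, so the printed GR agrees only to rounding):

```python
import numpy as np

x1, x2, x3 = -0.576, -0.024, 1.2477   # current point X

gY = np.array([2 * (x1 - x2),
               -2 * (x1 - x2) + 4 * (x2 - x3) ** 3])   # (-1.104, -7.1225)
gZ = np.array([-4 * (x2 - x3) ** 3])                   # (8.2265,)
C  = np.array([1 + x2 ** 2, 2 * x1 * x2])              # (1.000576, 0.027648)
D  = np.array([4 * x3 ** 3])                           # (7.7694,)

GR = gY - (C / D) * gZ    # reduced gradient, about (-2.1634, -7.1518)
print(GR)
```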