Mechanical Engineering Program COPPE / Federal University of Rio de Janeiro Caixa Postal 68503, 21945-970 Rio de Janeiro, BRAZIL
JOSE HERSKOVITS
1. Introduction
Once the concepts of a material object are established, the act of designing
consists of choosing the values of the quantities that prescribe the object,
or dimensioning. These quantities are called Design Variables. A particular
value assumed by the design variables defines a configuration. The design
must meet Constraints given by physical or other limitations. We have a
feasible configuration if all the constraints are satisfied.
A better design is obtained if an appropriate cost or objective function
can be reduced. The objective is a function of the design variables and
quantifies the quality of the material object to be designed. The design is
optimum when the cost is the lowest among all the feasible designs.
If we call x ≡ [x_1, x_2, ..., x_n]^t the design variables, f(x) the objective function
and Ω the feasible set, the optimization problem can be denoted

minimize f(x); subject to x ∈ Ω.   (1.1)
approach for interior point algorithms, proposed by the author [30], [31],
is presented in the last section.
2. Some basic concepts
We deal with the nonlinear optimization problem

minimize f(x)
subject to g_i(x) ≤ 0; i = 1, ..., m
and h_i(x) = 0; i = 1, ..., p,   (2.1)
where f, g and h are smooth functions in Rn and at least one of these functions is nonlinear. Any inequality constraint is said to be Active if g_i(x) = 0
and Inactive if g_i(x) < 0. Denoting g(x) = [g_1(x), g_2(x), ..., g_m(x)]^t and
h(x) = [h_1(x), h_2(x), ..., h_p(x)]^t, we have
minimize f(x)
subject to g(x) ≤ 0
and h(x) = 0.   (2.2)
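As a concrete illustration, a tiny instance of problem (2.2) can be coded directly; the functions and numbers below are illustrative choices of the sketch, not an example taken from the text.

```python
# A toy instance of problem (2.2); data are illustrative, not from the text.
def f(x):                        # objective function
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

def g(x):                        # inequality constraints, g(x) <= 0
    return [x[0] + x[1] - 4.0]

def h(x):                        # equality constraints, h(x) = 0
    return [x[0] - x[1]]

x = [1.0, 1.0]                   # a candidate configuration
feasible = (all(gi <= 0.0 for gi in g(x))
            and all(hi == 0.0 for hi in h(x)))
# here g_1(x) = -2 < 0, so the inequality constraint is inactive at x
```

At this configuration both constraints hold, and the single inequality is inactive in the sense defined above.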
An algorithm is said to converge to x* with order p and rate β when

lim_{k→∞} ‖x^{k+1} − x*‖ / ‖x^k − x*‖^p = β < ∞.
Figure 1.
Figure 2.
Since any d ∈ Rn is a feasible direction, the well known optimality conditions for unconstrained optimization are easily derived from Theorem 3.1.
COROLLARY 3.2. First and second order necessary conditions. If x* is a
local minimum of f over Rn, then
i) ∇f(x*) = 0.
ii) for all d ∈ Rn, d^t ∇²f(x*) d ≥ 0. That is, ∇²f(x*) is positive semidefinite.
A sufficient local optimality condition for this problem is stated below.
It requires the computation of two derivatives of f.
THEOREM 3.3. Sufficient optimality conditions. Let f be a twice continuously differentiable scalar function in Rn and x* such that
i) ∇f(x*) = 0
ii) ∇²f(x*) is positive definite.
Then, x* is a strict local minimum point of f.
Let us now consider the equality constrained problem

minimize f(x)
subject to h(x) = 0.   (3.1)
We define the Lagrangian associated with the equality constraints as

l(x, λ) = f(x) + λ^t h(x).   (3.2)
Then, the Lagrangian plays a role similar to that of the objective function
in unconstrained optimization. This is also true when we consider the following second order conditions.
THEOREM 3.8. Second Order Necessary Conditions. Let x*, a regular
point of the constraints h(x) = 0, be a local minimum of Problem (3.1).
Then there is a vector λ* ∈ Rp such that the result of Theorem 3.7 is true
and the matrix

H(x*, λ*) = ∇²f(x*) + Σ_{i=1}^p λ*_i ∇²h_i(x*)   (3.3)

is positive semidefinite on the tangent space of the constraints.
Σ_{i∈I(x*), i≠l} λ_i ∇g_i(x*) = λ_l ∇g_l(x*).
The optimality conditions discussed above are easily generalized to optimization problem (2.1), with equality and inequality constraints.
Figure 3.
∇f(x*) + ∇g(x*)λ* + ∇h(x*)μ* = 0   (3.5)
G(x*)λ* = 0   (3.6)
h(x*) = 0   (3.7)
g(x*) ≤ 0   (3.8)
λ* ≥ 0.   (3.9)
H(x*, λ*, μ*) = ∇²f(x*) + Σ_{i=1}^m λ*_i ∇²g_i(x*) + Σ_{i=1}^p μ*_i ∇²h_i(x*)
Figure 4.
We consider iterative algorithms to solve the nonlinear system of equations

Φ(y) = 0; y ∈ Rn.   (4.1)

In the linear iterations, each step is as follows:
Step 1. Compute the step d,
d = −Φ(y).   (4.4)
Step 2. Update.
Set
y := y + d.   (4.5)
Step 3. Go back to Step 1.
Figure 5.
Newton's iterations are as follows:
Step 1. Solve the linear system for d,
∇Φ(y)d = −Φ(y).   (4.6)
Step 2. Update.
Set
y := y + d.
Step 3. Go back to Step 1.
The algorithm does not fail if (4.6) has a unique solution. This method's
major advantage comes from its speed of convergence. However, it requires
the evaluation of the Jacobian ∇Φ(y) and the solution of a linear system at
each iteration, which can be very expensive in terms of computer effort.
Moreover, global convergence is not assured. The analytic Jacobian can be
replaced by a finite-difference approximation, but this is also costly, since
n additional evaluations of the function per iteration are required.
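The finite-difference approximation just mentioned can be sketched as follows; each column of the Jacobian costs one extra evaluation of Φ. The test system below is an illustrative choice, not an example from the text.

```python
def fd_jacobian(phi, y, eps=1e-7):
    """Forward-difference approximation of the Jacobian of phi at y.
    Costs n additional evaluations of phi, one per column."""
    n = len(y)
    f0 = phi(y)
    J = [[0.0] * n for _ in range(n)]
    for j in range(n):
        yp = list(y)
        yp[j] += eps          # perturb the j-th coordinate
        fj = phi(yp)
        for i in range(n):
            J[i][j] = (fj[i] - f0[i]) / eps
    return J

# illustrative system: phi(y) = (y1^2, y1*y2); the exact Jacobian at (2, 3)
# is [[4, 0], [3, 2]]
J = fd_jacobian(lambda y: [y[0] ** 2, y[0] * y[1]], [2.0, 3.0])
```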
With the objective of reducing computational effort, quasi-Newton
methods generate an approximation of the Jacobian, or of its inverse, using
information gathered as the iterates progress. Let B^k be the current
approximation of ∇Φ(y^k); a new approximation B^{k+1} is obtained from

B^{k+1} = B^k + ΔB^k.   (4.7)

Since

Φ(y^{k+1}) − Φ(y^k) ≈ ∇Φ^t(y^k)(y^{k+1} − y^k),   (4.8)

ΔB^k can be chosen so that B^{k+1} satisfies the secant condition

B^{k+1}δ = γ,   (4.9)

where δ ≡ y^{k+1} − y^k and γ ≡ Φ(y^{k+1}) − Φ(y^k). Broyden's rule,

B^{k+1} = B^k + (γ − B^kδ)δ^t / δ^tδ,   (4.10)

is one of the best known updating formulae satisfying (4.9).
Step 2. Update.
(i) Set
y := y + d
and
B := B + ΔB.
Step 3. Go back to Step 1.
In Step 2, B can be updated using (4.9) or other rules. The following theorem is proved in [10].
THEOREM 4.3. Let Φ(y) be twice continuously differentiable and y* a solution of Φ(y) = 0. Assume that ∇Φ^{−1}(y*) exists. Then, if started close
enough to y* and if the initial B is close enough to ∇Φ(y*), Algorithm 4.3
does not fail and it generates a sequence that converges to y*. The convergence is superlinear.
Although quasi-Newton methods have the advantage of avoiding the
computation of ∇Φ(y), the initial B must be a good approximation of ∇Φ(y*)
to have local convergence.
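In one dimension the secant condition (4.9) determines B completely, and the quasi-Newton iteration reduces to the classical secant method. The sketch below assumes this scalar setting; the test function is an illustrative choice.

```python
def secant_quasi_newton(phi, y0, y1, tol=1e-12, itmax=100):
    """1-D quasi-Newton iteration: B approximates the derivative of phi
    through the secant condition B*delta = gamma, with
    delta = y1 - y0 and gamma = phi(y1) - phi(y0)."""
    f0, f1 = phi(y0), phi(y1)
    for _ in range(itmax):
        B = (f1 - f0) / (y1 - y0)   # secant approximation of the Jacobian
        d = -f1 / B                 # solve B d = -phi(y)
        y0, f0 = y1, f1
        y1 = y1 + d
        f1 = phi(y1)
        if abs(d) < tol:
            break
    return y1

# illustrative scalar system phi(y) = y^3 - 8, whose solution is y* = 2
r = secant_quasi_newton(lambda y: y ** 3 - 8.0, 1.0, 2.5)
```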
Looking at Algorithms 4.1 to 4.3, we note that they have a similar structure. All of them define a step by the expression

Sd = −Φ(y),

where S ≡ I in the linear iterations, S is an approximation of ∇Φ(y) in
quasi-Newton methods and S ≡ ∇Φ(y) in Newton's method. The rate of
convergence goes from linear to quadratic. We call this kind of iterations
Newton-like algorithms.
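The common structure Sd = −Φ(y) can be sketched for a scalar system; choosing S as the exact derivative gives Newton's method, while a fixed scalar S gives a linear iteration. The functions below are illustrative choices of the sketch.

```python
def phi(y):               # illustrative scalar system: phi(y) = 0 at y = sqrt(2)
    return y * y - 2.0

def newton_like(S, y, tol=1e-12, itmax=100):
    """Generic Newton-like iteration: solve S(y) d = -phi(y), set y := y + d.
    S(y) = phi'(y) recovers Newton's method (quadratic rate); a constant
    S recovers the linear iterations (linear rate)."""
    for _ in range(itmax):
        d = -phi(y) / S(y)
        y += d
        if abs(d) < tol:
            break
    return y

newton_root = newton_like(lambda y: 2.0 * y, 1.0)  # S = Jacobian: Newton
linear_root = newton_like(lambda y: 3.0, 1.0)      # fixed S: linear iteration
```

Both variants reach the same solution; only the number of iterations needed differs.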
5. Unconstrained Optimization
Let us consider the unconstrained optimization problem
minimize f(x); x ∈ Rn.   (5.1)
Figure 6.
Figure 7.
than β = (r − 1)/(r + 1). This result is proved in refs. [19] and [36]. It implies that
the convergence is slower as the conditioning of ∇²f(x*) becomes worse.
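The effect of conditioning can be observed on a quadratic. The sketch below uses a fixed step length, an illustrative simplification of the exact line search assumed in the rate result above: the large eigenvalue forces a small step, so progress along the flat direction is slow.

```python
def gradient_descent(grad, x, t, iters):
    """Fixed-step steepest descent iterations: x := x - t * grad(x)."""
    for _ in range(iters):
        g = grad(x)
        x = [xi - t * gi for xi, gi in zip(x, g)]
    return x

# f(x) = 0.5*(x1^2 + 100*x2^2): eigenvalue ratio r = 100.  Stability limits
# the step, so the x1 component shrinks only by a factor (1 - t) per iteration.
grad = lambda x: [x[0], 100.0 * x[1]]
x = gradient_descent(grad, [1.0, 1.0], t=0.015, iters=1000)
```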
5.4. NEWTON'S METHOD
B^{k+1} = B^k + (γ − B^kδ)(γ − B^kδ)^t / δ^t(γ − B^kδ),   (5.7)
B^{k+1} positive definite. Unfortunately this is not always true. Several Rank
Two Updating Rules overcome this problem. This is the case of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) formula

B^{k+1} = B^k + γγ^t / γ^tδ − B^kδδ^tB^k / δ^tB^kδ.   (5.8)
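The BFGS rule (5.8) can be checked on a small example: by construction the updated matrix satisfies the secant condition B^{k+1}δ = γ. The sketch below uses hand-coded 2×2 linear algebra and illustrative data.

```python
def mat_vec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1]]

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

def outer(a, b):
    return [[a[0] * b[0], a[0] * b[1]], [a[1] * b[0], a[1] * b[1]]]

def bfgs_update(B, delta, gamma):
    """BFGS rule (5.8):
    B+ = B + gamma gamma^t / gamma^t delta - B delta delta^t B / delta^t B delta."""
    Bd = mat_vec(B, delta)
    c1 = dot(gamma, delta)   # curvature; must be > 0 to keep B pos. definite
    c2 = dot(delta, Bd)
    G = outer(gamma, gamma)
    D = outer(Bd, Bd)
    return [[B[i][j] + G[i][j] / c1 - D[i][j] / c2 for j in range(2)]
            for i in range(2)]

# illustrative data: gradient difference of f(x) = x1^2 + 10*x2^2 along delta
B = [[1.0, 0.0], [0.0, 1.0]]
delta = [1.0, 0.5]
gamma = [2.0, 10.0]
B1 = bfgs_update(B, delta, gamma)
v = mat_vec(B1, delta)       # should equal gamma (secant condition)
```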
6. Equality Constrained Optimization

Let us now consider the equality constrained problem

minimize f(x)
subject to h(x) = 0.   (6.1)

Karush-Kuhn-Tucker first order optimality conditions are expressed by the nonlinear system of equations

∇f(x) + ∇h(x)λ = 0,   (6.2)
h(x) = 0,   (6.3)
where the unknowns x and λ are called Primal and Dual Variables respectively. To have a unique λ in (6.2), x must be a regular point of the problem.
Due to this fact, the algorithms that solve first order optimality conditions
require the assumption that all the iterates x^k are regular points of the
problem.
Denoting y ≡ (x, λ) and

Φ(y) ≡ [ ∇f(x) + ∇h(x)λ ]
       [ h(x)            ]   (6.4)

we have

∇Φ(y) = [ H(x, λ)   ∇h(x) ]
        [ ∇h^t(x)   0     ]   (6.5)
where

H(x, λ) = ∇²f(x) + Σ_{i=1}^p λ_i ∇²h_i(x)

is the Hessian of the Lagrangian. A Newton iteration for the system (6.2), (6.3) is then given by
[ H(x^k, λ^k)   ∇h(x^k) ] [ x^{k+1} − x^k ]       [ ∇f(x^k) + ∇h(x^k)λ^k ]
[ ∇h^t(x^k)     0       ] [ λ^{k+1} − λ^k ]  = −  [ h(x^k)               ]
(6.6)
As the system (6.4) is linear in λ, when x* is known Newton's method
gives λ* in one iteration. In fact, taking x^k = x* in (6.6), we have that
(x^{k+1}, λ^{k+1}) = (x*, λ*) for any λ^k. We conclude that λ^{k+1} → λ* when
{x^k} → x*. This remark suggests that a line search concerning only the
primal variables x is enough to get global convergence in (x, λ).
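For a quadratic objective and a linear constraint, a system of the form (6.6) is exact and a single iteration reaches (x*, λ*). The instance below is the sketch's own illustrative choice, with a small Gaussian elimination standing in for a linear solver.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            fac = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= fac * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# minimize x1^2 + x2^2 subject to x1 + x2 = 1 (illustrative instance):
# H = 2I and grad h = (1, 1); start at x = (2, -1) with lam = 0.
x, lam = [2.0, -1.0], 0.0
grad_f = [2.0 * x[0], 2.0 * x[1]]
h = x[0] + x[1] - 1.0
KKT = [[2.0, 0.0, 1.0],
       [0.0, 2.0, 1.0],
       [1.0, 1.0, 0.0]]
rhs = [-(grad_f[0] + lam), -(grad_f[1] + lam), -h]
dx1, dx2, dlam = solve(KKT, rhs)
x = [x[0] + dx1, x[1] + dx2]
lam += dlam
# one iteration yields the minimizer (0.5, 0.5) with multiplier lam = -1
```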
In a similar way as in Section 4, ∇Φ can be substituted by a quasi-Newton approximation or by the identity matrix. Since we need ∇h anyway to
evaluate Φ, it seems more efficient to substitute only H(x^k, λ^k) in ∇Φ by its
quasi-Newton approximation or by the identity. Calling S^k this matrix and
d^k the change in x, we have the linear system of equations

S^k d^k + ∇h(x^k)λ^{k+1} = −∇f(x^k),   (6.7)
∇h^t(x^k)d^k = −h(x^k).   (6.8)
Global convergence can be obtained by means of a line search on the auxiliary function

φ(x, r) = f(x) + Σ_{i=1}^p r_i |h_i(x)|.   (6.9)
In Algorithm 6.1, the search direction d is then obtained from the linear system

Sd + ∇h(x)λ = −∇f(x),   (6.13)
∇h^t(x)d = −h(x).   (6.14)
Assumption 5.1 is also adopted in this algorithm. The same line search
procedures as in Algorithm 5.1 for unconstrained optimization can be employed. However, some precautions must be taken here since φ is nonsmooth
[32].
6.1. FIRST ORDER METHOD
than β = (r − 1)/(r + 1), where now r is the ratio of the largest to the smallest
eigenvalue of ∇²f(x*) as measured on the constraints. This is not surprising,
since the iterates walk on the constraints.
The Projected Gradient Method was extended by Rosen [43], [44] to
problems with nonlinear constraints. Given a point x^k on the constraints, a
better point x̄^k along the projected gradient is obtained. Since the projected
gradient no longer follows the constraint surface, x̄^k is generally infeasible. The new iterate x^{k+1} is the projection of x̄^k on the constraints. This
projection is done in an iterative way, which is very costly.
When the constraints are nonlinear, it is the ratio of the largest to the
smallest eigenvalue of the Hessian of the Lagrangian H(x*, λ*), as measured
on the tangent space, that determines the speed of convergence [36]. The
effect on the rate of convergence due to the curvature of the constraints is
included in H(x*, λ*).
Algorithm 6.1, when applied to nonlinearly constrained problems, is
much more efficient than the projected gradient method. Since the iterates
are not required to be feasible, the projection stage is avoided.
6.2. NEWTON'S METHOD
Taking S ≡ H(x^k, λ^k) we have a Newton equality constrained optimization algorithm. Even though in the majority of applications the computation of the second derivatives of f and h is very expensive, in several
problems they are available or easy to obtain.
The Hessian of the Lagrangian function, H(x, λ), is not necessarily positive definite. Theorem 3.9 only ensures that H(x*, λ*) is positive definite
on the tangent space. To have a positive definite S, a procedure similar to
the Levenberg-Marquardt method for unconstrained optimization can be employed.
6.3. QUASI-NEWTON METHOD
γ = ∇_x l(x^{k+1}, λ^k) − ∇_x l(x^k, λ^k).
If
δ^tγ < 0.2 δ^tBδ,
then it is computed
θ = 0.8 δ^tBδ / (δ^tBδ − δ^tγ)
and taken
γ := θγ + (1 − θ)Bδ.
Finally, BFGS updating rule (5.8) is employed.
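Powell's correction described above can be sketched as follows; it guarantees δ^tγ ≥ 0.2 δ^tBδ, so the subsequent BFGS update keeps B positive definite. The data at the end are illustrative (B = I, with negative curvature along δ).

```python
def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def powell_gamma(delta, gamma, B_delta):
    """Powell's correction of gamma: if delta^t gamma < 0.2 delta^t B delta,
    replace gamma by theta*gamma + (1 - theta)*B*delta with
    theta = 0.8 delta^t B delta / (delta^t B delta - delta^t gamma)."""
    dBd = dot(delta, B_delta)
    dg = dot(delta, gamma)
    if dg < 0.2 * dBd:
        theta = 0.8 * dBd / (dBd - dg)
        gamma = [theta * gi + (1.0 - theta) * bdi
                 for gi, bdi in zip(gamma, B_delta)]
    return gamma

# illustrative data with delta^t gamma < 0; since B = I, B*delta = delta
delta, gamma = [1.0, 0.0], [-1.0, 0.0]
g2 = powell_gamma(delta, gamma, B_delta=delta)
# corrected curvature: delta^t g2 = 0.2 = 0.2 * delta^t B delta
```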
6.4. ABOUT THE SPEED OF CONVERGENCE
Since the techniques studied in this section are based on iterative algorithms
for nonlinear systems, (x^k, λ^k) converges to (x*, λ*) at the same speed as
the original algorithm. Then, the convergence of (x^k, λ^k) is quadratic, superlinear or linear, depending on whether a Newton, a quasi-Newton or a first
order algorithm for equality constrained optimization is employed.
In practice, we are more interested in the speed of convergence of x^k
than of (x^k, λ^k). Several authors studied this point, in particular Powell [41],
Gabay [16], Hoyer [33] and Gilbert [18]. Basically the same results are obtained,
but in two steps; i.e., Powell proved that quasi-Newton algorithms are
two-step superlinearly convergent. That is,

lim_{k→∞} ‖x^{k+2} − x*‖ / ‖x^k − x*‖ = 0.
As in unconstrained optimization, when applying Newton or quasi-Newton algorithms, the step length must equal unity near the solution.
There are examples showing that, taking t = 1 in Step 3) of Algorithm
6.1, it is not always possible to get a reduction of φ near the solution. This
is known as the Maratos effect [37]. Several researchers have been looking for
methods to avoid the Maratos effect [18], [24], [27].
7. Sequential Quadratic Programming Method
To extend the ideas presented above so as to solve the general nonlinear
programming problem (2.1), one major difficulty has to be overcome.
Consider the equality constrained quadratic program in d

minimize ½ d^tSd + ∇f^t(x)d
subject to ∇h^t(x)d + h(x) = 0.   (7.1)
Since (7.1) is a convex problem, the global minimum satisfies the Karush-Kuhn-Tucker optimality conditions

Sd + ∇f(x) + ∇h(x)λ = 0,   (7.2)
∇h^t(x)d + h(x) = 0,   (7.3)

where λ are the Lagrange multipliers. Then, (d, λ) in Algorithm 6.1 can
be obtained by solving the quadratic program (7.1) instead of the linear
system (6.13), (6.14). Based on this fact, to solve the problem
minimize f(x)
subject to g(x) ≤ 0
and h(x) = 0,   (7.4)
Wilson proposed to define the search direction d and new estimates of the
Lagrange multipliers λ and μ by solving at each iteration the quadratic program

minimize ½ d^tSd + ∇f^t(x)d
subject to ∇g^t(x)d + g(x) ≤ 0
and ∇h^t(x)d + h(x) = 0.   (7.5)
A line search is now performed on the auxiliary function

φ(x, s, r) = f(x) + Σ_{i=1}^m s_i sup[0, g_i(x)] + Σ_{i=1}^p r_i |h_i(x)|.   (7.6)

If the penalty coefficients are large enough, in the sense that

s_i > |λ_i|; i = 1, ..., m and r_i > |μ_i|; i = 1, ..., p,   (7.7)

then d that solves (7.5) is a descent direction of φ(x, s, r). This result is
proved in [36].
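The auxiliary function (7.6) is straightforward to evaluate; the sketch below assumes the constraint values have already been computed, and the data are illustrative.

```python
def merit(fx, gx, hx, s, r):
    """Auxiliary function (7.6): f(x) + sum_i s_i*sup[0, g_i(x)]
    + sum_i r_i*|h_i(x)|.  Nonsmooth where a g_i or an h_i crosses zero."""
    return (fx
            + sum(si * max(0.0, gi) for si, gi in zip(s, gx))
            + sum(ri * abs(hi) for ri, hi in zip(r, hx)))

# illustrative values: one violated inequality, one satisfied inequality,
# and one equality residual
val = merit(fx=3.0, gx=[0.5, -1.0], hx=[-0.25], s=[10.0, 10.0], r=[10.0])
# 3.0 + 10*0.5 + 0 + 10*0.25 = 10.5
```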
In Sequential Quadratic Programming algorithms, the matrix S is defined as a quasi-Newton approximation of the Hessian of the Lagrangian.
Most of the optimizers employ the BFGS rule modified by Powell, as explained
in the last section. The SQP algorithm can be stated as follows.
ALGORITHM 7.1 Sequential Quadratic Programming
Parameters. r ∈ Rp and s ∈ Rm positive.
Data. Initialize x ∈ Rn and B ∈ Rn×n symmetric and positive definite.
Step 1. Computation of the search direction d and estimates of the Lagrange multipliers λ and μ.
Solve the quadratic program in d

minimize ½ d^tSd + ∇f^t(x)d
subject to ∇g^t(x)d + g(x) ≤ 0
and ∇h^t(x)d + h(x) = 0.   (7.8)
Step 2. Line search.
Find a step length t producing a satisfactory decrease of the auxiliary function

φ(x + td, s, r) = f(x + td) + Σ_{i=1}^m s_i sup[0, g_i(x + td)] + Σ_{i=1}^p r_i |h_i(x + td)|.
Step 3. Updates.
Set δ = td and γ = ∇_x l(x + td, λ, μ) − ∇_x l(x, λ, μ).
If δ^tγ < 0.2 δ^tBδ, compute
θ = 0.8 δ^tBδ / (δ^tBδ − δ^tγ)
and set
γ := θγ + (1 − θ)Bδ.
Update
B := B + γγ^t/γ^tδ − Bδδ^tB/δ^tBδ
and
x := x + td.
Step 4. Go back to Step 1.
where the Lagrangian l(x, λ) = f(x) + λ^tg(x) and e ≡ [1, 1, ..., 1]^t.
Step 4. Updates.
Let δ = td and γ = ∇_x l(x + td, λ0) − ∇_x l(x, λ0).
i) If δ^tγ < 0.2 δ^tBδ, then compute
θ = 0.8 δ^tBδ / (δ^tBδ − δ^tγ)
and set
γ := θγ + (1 − θ)Bδ.
ii) Update
B := B + γγ^t/γ^tδ − Bδδ^tB/δ^tBδ
and
x := x + td.
Step 5. Go back to Step 1.
where now η1 ∈ (0, 1/2), η2 ∈ (η1, 1) and ν ∈ (0, 1).
Conditions (7.16) and (7.17) define upper bounds on the step length in
both criteria and, in Wolfe's criterion, a lower bound is given by one of the
conditions (7.18) and (7.19). The iterative procedure to find a step length
satisfying Wolfe's criterion is similar to that employed in unconstrained
optimization, but interpolations of the constraints also have to be done.
8. A Newton-like Interior Point Method
In the last section, the Sequential Quadratic Programming Method was presented as a technique to extend Newton-like iterations to inequality constrained optimization. We present now a family of Newton-like algorithms
to solve Karush-Kuhn-Tucker optimality conditions. To obtain these algorithms, Newton-like iterations to solve the nonlinear system in (x, λ) given
by the equalities (3.5)-(3.7) in the KKT conditions are first defined. Then,
these iterations are slightly modified in such a way as to have the inequalities
(3.8) and (3.9) satisfied at each iteration.
The algorithms based on this approach require an initial estimate of x,
at the interior of the feasible region defined by the inequality constraints,
and generate a sequence of points also at the interior of this set. They are
feasible directions algorithms, since at each iteration they define a search direction that is a feasible direction with respect to the inequality constraints
and a descent direction of the objective, or another appropriate function.
When only inequality constraints are considered, the objective is reduced
at each iteration. In the general problem, an increase of the objective may
be necessary to have the equalities satisfied.
The present method is simple to code, strong and efficient. It does not
involve penalty functions, active set strategies or quadratic programming
subproblems. It only requires the solution of two linear systems with the
same matrix at each iteration, followed by an inexact line search.
This approach, proposed by the author, is the basis of the first order
algorithm and of the quasi-Newton one described in [25], [26], [27] and [28].
This is also the case of the quasi-Newton algorithms described in [29] and
Let us first consider the inequality constrained problem

minimize f(x)
subject to g(x) ≤ 0.   (8.1)
Karush-Kuhn-Tucker optimality conditions for this problem are expressed as

∇f(x) + ∇g(x)λ = 0,   (8.2)
G(x)λ = 0,   (8.3)
λ ≥ 0,   (8.4)
g(x) ≤ 0.   (8.5)
A Newton iteration for the nonlinear system of equations (8.2), (8.3) is given by

[ H(x^k, λ^k)     ∇g(x^k) ] [ x^{k+1} − x^k ]       [ ∇f(x^k) + ∇g(x^k)λ^k ]
[ Λ^k ∇g^t(x^k)   G(x^k)  ] [ λ^{k+1} − λ^k ]  = −  [ G(x^k)λ^k            ]
(8.6)

where (x^k, λ^k) is the starting point of the iteration and (x^{k+1}, λ^{k+1}) is the
new estimate, H(x, λ) is the Hessian of the Lagrangian and Λ a diagonal
matrix with Λ_ii ≡ λ_i. Proceeding in a similar way as in Section 6, we define
the linear system in (d^k_0, λ^{k+1}_0)

S^k d^k_0 + ∇g(x^k)λ^{k+1}_0 = −∇f(x^k),   (8.7)
Λ^k ∇g^t(x^k)d^k_0 + G(x^k)λ^{k+1}_0 = 0,   (8.8)
In particular, (8.8) is equivalent to

λ^k_i ∇g^t_i(x^k)d^k_0 + g_i(x^k)λ^{k+1}_{0i} = 0; i = 1, ..., m,   (8.9)

which implies that ∇g^t_i(x^k)d^k_0 = 0 for i such that g_i(x^k) = 0.
To avoid this effect, we define the linear system in d^k and λ^{k+1}

S^k d^k + ∇g(x^k)λ^{k+1} = −∇f(x^k),   (8.10)
Λ^k ∇g^t(x^k)d^k + G(x^k)λ^{k+1} = −ρ^kλ^k,   (8.11)

obtained by adding a negative vector to the right side of (8.8), where the
scalar factor ρ^k is positive. In this case, (8.11) is equivalent to

λ^k_i ∇g^t_i(x^k)d^k + g_i(x^k)λ^{k+1}_i = −ρ^kλ^k_i ; i = 1, ..., m,

and then, ∇g^t_i(x^k)d^k < 0 for the active constraints. Thus, d^k is a feasible
direction.
The addition of a negative vector to the right hand side of (8.8) produces the effect of deflecting d^k_0 into the feasible region, where the deflection
is proportional to ρ^k. As the deflection of d^k_0 is proportional to ρ^k and d^k_0
is a descent direction of f, it is possible to find bounds on ρ^k in a way to
ensure that d^k is also a descent direction. Since d^{kt}_0 ∇f(x^k) < 0, we can get
these bounds by imposing

d^{kt} ∇f(x^k) ≤ α d^{kt}_0 ∇f(x^k),   (8.12)

with α ∈ (0, 1).
Defining (d^k_1, λ^{k+1}_1) by the linear system

S^k d^k_1 + ∇g(x^k)λ^{k+1}_1 = 0,   (8.13)
Λ^k ∇g^t(x^k)d^k_1 + G(x^k)λ^{k+1}_1 = −λ^k,   (8.14)

we have d^k = d^k_0 + ρ^k d^k_1.
ALGORITHM 8.1
Step 1. Computation of the search direction d.
i) Solve the linear system for (d0, λ0)

Sd0 + ∇g(x)λ0 = −∇f(x),   (8.16)
Λ∇g^t(x)d0 + G(x)λ0 = 0.   (8.17)
If d0 = 0, stop.
ii) Solve the linear system for (d1, λ1)

Sd1 + ∇g(x)λ1 = 0,   (8.18)
Λ∇g^t(x)d1 + G(x)λ1 = −λ.   (8.19)
iii) If d^t_1∇f(x) > 0, set

ρ = inf [ φ‖d0‖² ; (α − 1) d^t_0∇f(x) / d^t_1∇f(x) ].   (8.20)

Otherwise, set

ρ = φ‖d0‖².   (8.21)
Figure 8.
Then, compute the search direction

d = d0 + ρd1,   (8.22)

and also

λ̄ = λ0 + ρλ1.   (8.23)
Step 2. Line search.
Find a step length t satisfying

f(x + td) ≤ f(x) + tη d^t∇f(x)   (8.24)

and

g_i(x + td) < 0 if λ̄_i ≥ 0,
or
g_i(x + td) ≤ g_i(x) otherwise; i = 1, ..., m.   (8.25)
Step 3. Updates.
i) Set

x := x + td

and define new values for λ > 0 and S symmetric and positive definite.
ii) Go back to Step 1.
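A single iteration of this scheme can be sketched on an illustrative problem. S = I, the problem data and the parameter value φ = 1 are the sketch's own choices, not the author's test problems; the sketch verifies that the deflected direction is a feasible descent direction.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            fac = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= fac * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# illustrative problem: minimize x1^2 + x2^2  s.t.  g(x) = 1 - x1 - x2 <= 0,
# starting at the interior point x = (1, 1) with lam = 1 and S = I
x, lam, phi_par = [1.0, 1.0], 1.0, 1.0
grad_f = [2.0 * x[0], 2.0 * x[1]]
g = 1.0 - x[0] - x[1]
grad_g = [-1.0, -1.0]
A = [[1.0, 0.0, grad_g[0]],
     [0.0, 1.0, grad_g[1]],
     [lam * grad_g[0], lam * grad_g[1], g]]

# (8.16)-(8.17):  S d0 + grad_g lam0 = -grad_f ;  lam grad_g^t d0 + g lam0 = 0
d01, d02, lam0 = solve(A, [-grad_f[0], -grad_f[1], 0.0])

# (8.18)-(8.19):  S d1 + grad_g lam1 = 0 ;  lam grad_g^t d1 + g lam1 = -lam
d11, d12, lam1 = solve(A, [0.0, 0.0, -lam])

rho = phi_par * (d01 ** 2 + d02 ** 2)        # rule (8.21)
d = [d01 + rho * d11, d02 + rho * d12]       # deflected direction (8.22)

descent = d[0] * grad_f[0] + d[1] * grad_f[1] < 0.0
feasible_step = 1.0 - (x[0] + d[0]) - (x[1] + d[1]) < 0.0   # g(x + d) < 0
```

Here d0 points toward the boundary, and the deflection by ρd1 keeps the full step strictly inside the feasible region while preserving descent.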
Rule 8.1.
Set, for i = 1, ..., m,

λ_i := sup [ λ_{0i} ; ε‖d0‖² ].

If g_i(x) ≥ −ḡ and λ_i < λ^I, set λ_i = λ^I.   (8.26)
This rule is based on Dikin's algorithm for Linear Programming, presented in [13] and also studied in [46]. In the case of linear programs, since
H(x, λ) = 0, we can take S = 0. In [31] it is shown that d0 obtained with
this rule is identical to the search direction given by Dikin's algorithm.
8.2. ASYMPTOTIC CONVERGENCE
In the case when (8.26) and BFGS rules are used for λ and B, the convergence is two-step superlinear, provided that a unit step length is obtained
after a finite number of iterations. This result can be obtained in a similar
way as in Theorem 4.6 in [40], by showing that the search directions of the
present method and of the SQP algorithm locally differ by a term of the
same order as ‖d0‖², and then extending to the present algorithm the
proof of two-step superlinear convergence of the SQP method.
To satisfy the requirement about the step length, d must be such that
Definition 7.1 holds for a step length greater than unity. It is clear that the
deflection increases whenever ρ grows but, unfortunately, in some problems
the upper bound on ρ may not be large enough to allow the step length to
be unitary. This effect, which is similar to the Maratos effect, can in theory
produce rates of convergence lower than superlinear.
In the quasi-Newton algorithm studied in [24], as in Algorithm 7.2,
the Maratos effect is avoided by looking at each iteration for a decrease of an
estimate of the Lagrangian, instead of the objective function.
In view of Algorithms 6.1 and 8.1, a Newton-like algorithm to solve the
general nonlinear program (2.1) is easily derived [31]. This algorithm requires an initial point in the interior of the inequality constraints. It generates a sequence {x^k} of interior points, with respect to the inequalities, that
converges to a Karush-Kuhn-Tucker point of the problem. In general, the
equalities are active only at the limit. Then, to have the equalities satisfied,
the objective function must be allowed to increase. The auxiliary function
φ(x, r), defined in Section 6, is taken as the objective in the line search. The
algorithm is stated below, with d0 a descent direction of φ(x, r) and d
obtained by deflecting d0 with respect to the inequality constraints.
ALGORITHM 8.2 Newton-like Interior Point Algorithm for constrained
optimization.
Parameters. α ∈ (0, 1), φ > 0 and r > 0, r ∈ Rp.
Data. Initialize x ∈ Ω⁰_a, λ > 0 and S ∈ Rn×n symmetric and positive
definite.
Step 1. Computation of the search direction d.
i) Solve the linear system for (d0, λ0, μ0)

Sd0 + ∇g(x)λ0 + ∇h(x)μ0 = −∇f(x),
Λ∇g^t(x)d0 + G(x)λ0 = 0,
∇h^t(x)d0 = −h(x).
If d0 = 0, stop.
ii) Compute d1, λ1 and μ1 and set

ρ = φ‖d0‖²,
d = d0 + ρd1 and λ̄ = λ0 + ρλ1.

Step 2. Line search.
Find a step length t producing a satisfactory decrease of φ(x + td, r).
Step 3. Updates.
i) Set

x := x + td

and define new values for λ > 0 and S symmetric and positive definite.
ii) Go back to Step 1.
Acknowledgements. I thank Prof. Jasbir S. Arora for his careful review of the manuscript and helpful comments.
References
20. Haftka, R. T., Gürdal, Z. and Kamat, M. P., Elements of Structural Optimization, Kluwer Academic Publishers, 1990.
21. Han, S. P., A Globally Convergent Method for Nonlinear Programming, Journal of Optimization Theory and Applications, 22, pp. 297-309, 1977.
22. Haug, E. J. and Arora, J. S., Applied Optimal Design, Wiley Interscience, New York, 1979.
23. Herskovits, J., A Two-Stage Feasible Direction Algorithm for Non-Linear Constrained Optimization, Research Report No. 103, INRIA, 1982.
24. Herskovits, J., A Two-Stage Feasible Directions Algorithm Including Variable Metric Techniques for Nonlinear Optimization, Research Report No. 118, INRIA, 1982.
25. Herskovits, J., Développement d'une Méthode Numérique pour l'Optimisation Non-linéaire (in English), Thèse de Docteur-Ingénieur en Mathématiques de la Décision, Paris IX University, published by INRIA, 1982.
26. Herskovits, J., A Two-Stage Feasible Directions Algorithm for Nonlinear Constrained Optimization Using Quadratic Programming Subproblems, 11th IFIP Conference on System Modeling and Optimization, Copenhagen, Denmark, July 1983.
27. Herskovits, J. N. and Carvalho, L. A. V., A Successive Quadratic Programming Based Feasible Directions Algorithm, Proceedings of the 17th International Conference on Analysis and Optimization of Systems, Antibes, France, Springer-Verlag, 1986.
28. Herskovits, J., A Two-Stage Feasible Directions Algorithm for Nonlinear Constrained Optimization, Mathematical Programming, Vol. 36, pp. 19-38, 1986.
29. Herskovits, J. and Coelho, C. A. B., An Interior Point Algorithm for Structural Optimization Problems, in Computer Aided Optimum Design of Structures: Recent Advances, Brebbia, C. A. and Hernandez, S. eds., Computational Mechanics Publications, Springer-Verlag, 1989.
30. Herskovits, J., An Interior Points Method for Nonlinearly Constrained Optimization, in Optimization of Large Structural Systems, Rozvany, G. ed., Vol. I, pp. 589-608, NATO/ASI Series, Springer-Verlag, 1991.
31. Herskovits, J., An Interior Point Technique for Nonlinear Optimization, Research Report No. 1808, INRIA, BP 105, 78153 Le Chesnay CEDEX, France, 1992.
32. Hiriart-Urruty, J.-B. and Lemaréchal, C., Convex Analysis and Minimization Algorithms, I and II, Springer-Verlag, Berlin Heidelberg, 1993.
33. Hoyer, W., Variants of the Reduced Newton Method for Nonlinear Equality-Constrained Optimization Problems, Optimization, 17, pp. 757-774, 1986.
34. Kirsch, U., Structural Optimization, Fundamentals and Applications, Springer-Verlag, Berlin, 1993.
35. Luenberger, D. G., Optimization by Vector Space Methods, John Wiley and Sons, New York, 1969.
36. Luenberger, D. G., Linear and Nonlinear Programming, 2nd Edition, Addison-Wesley, Reading, 1984.
37. Maratos, N., Exact Penalty Function Algorithms for Finite Dimensional and Control Optimization Problems, Ph.D. Dissertation, Imperial College, University of London, 1978.
38. Mayne, D. Q. and Polak, E., Feasible Directions Algorithms for Optimization Problems with Equality and Inequality Constraints, Mathematical Programming, 11, pp. 67-80, 1976.
39. Minoux, M., Programmation Mathématique: Théorie et Algorithmes, I and II, Dunod, Paris, 1983.
40. Panier, E. R., Tits, A. L. and Herskovits, J., A QP-Free, Globally Convergent, Locally Superlinearly Convergent Algorithm for Inequality Constrained Optimization, SIAM Journal of Control and Optimization, Vol. 26, pp. 788-810, 1988.
41. Powell, M. J. D., The Convergence of Variable Metric Methods for Nonlinearly Constrained Optimization Calculations, in Nonlinear Programming 3, O. L. Mangasarian, R. R. Meyer and S. M. Robinson eds., Academic Press, London, 1978.
42. Powell, M. J. D., Variable Metric Methods for Constrained Optimization, in Mathematical Programming - The State of the Art, A. Bachem, M. Grötschel and B. Korte eds., Springer-Verlag, Berlin, 1983.
43. Rosen, J., The Gradient Projection Method for Nonlinear Programming, I, Linear Constraints, SIAM J., 8, pp. 181-217, 1960.
44. Rosen, J., The Gradient Projection Method for Nonlinear Programming, II, Nonlinear Constraints, SIAM J., 9, pp. 514-532, 1961.
45. Tits, A. L. and Zhou, J. L., A Simple, Quadratically Convergent Interior Point Algorithm for Linear Programming and Convex Quadratic Programming, in Large Scale Optimization: State of the Art, W. W. Hager, D. W. Hearn and P. M. Pardalos eds., Kluwer Academic Publishers B.V., 1993.
46. Vanderbei, R. J. and Lagarias, J. C., I. I. Dikin's Convergence Result for the Affine Scaling Algorithm, Technical Report, AT&T Bell Laboratories, Murray Hill, NJ, 1988.
47. Vanderplaats, G. N., Numerical Optimization Techniques for Engineering Design with Applications, McGraw-Hill, New York, 1984.
48. Vautier, I., Salaün, M. and Herskovits, J., Application of an Interior Point Algorithm to the Modeling of Unilateral Contact Between Spot-Welded Shells, Proceedings of STRUCTURAL OPTIMIZATION 93, Rio de Janeiro, Aug. 1993, J. Herskovits ed.
49. Zouain, N. A., Herskovits, J. N., Borges, L. A. and Feijóo, R. A., An Iterative Algorithm for Limit Analysis with Nonlinear Yield Functions, International Journal of Solids and Structures, Vol. 30, No. 10, pp. 1397-1417, 1993.
50. Wilson, R. B., A Simplicial Algorithm for Concave Programming, Ph.D. Dissertation, Harvard University Graduate School of Business Administration, 1963.