Lecture 1 NLPP: Optimization


Optimization

Nonlinear programming:
One dimensional minimization methods
Introduction
The basic philosophy of most of the numerical methods of
optimization is to produce a sequence of improved approximations
to the optimum according to the following scheme:

1. Start with an initial trial point X1
2. Find a suitable direction Si (i = 1 to start with) which points in
   the general direction of the optimum
3. Find an appropriate step length λi* for movement along the
   direction Si
4. Obtain the new approximation Xi+1 as

   Xi+1 = Xi + λi* Si

5. Test whether Xi+1 is optimum. If Xi+1 is optimum, stop the
   procedure. Otherwise set i = i + 1 and repeat from step (2).
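A minimal Python sketch of this scheme (an illustration, not the lecture's code), assuming a steepest-descent choice of the direction Si and a caller-supplied one-dimensional minimizer for the step length λi*:

```python
import numpy as np

def iterative_minimize(f, grad, x0, line_search, tol=1e-6, max_iter=100):
    """Generic scheme X(i+1) = X(i) + lambda_i* S(i) (illustrative sketch).

    S(i) is taken here as the steepest-descent direction -grad(X(i)), and
    line_search(phi) is any one-dimensional minimizer returning lambda_i*
    for phi(lambda) = f(X(i) + lambda * S(i)).
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        s = -grad(x)                               # step 2: choose a direction S(i)
        if np.linalg.norm(s) < tol:                # step 5: optimality test
            break
        lam = line_search(lambda t: f(x + t * s))  # step 3: find the step length
        x = x + lam * s                            # step 4: new approximation X(i+1)
    return x
```

Any of the one-dimensional methods discussed in this lecture could play the role of line_search; for instance, scipy.optimize.minimize_scalar(phi).x would serve as a drop-in choice.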
Iterative Process of Optimization
Introduction
The iterative procedure indicated is valid for unconstrained as well
as constrained optimization problems.

If f(X) is the objective function to be minimized, the problem of
determining λi* reduces to finding the value λi = λi* that minimizes
f(Xi+1) = f(Xi + λi Si) = f(λi) for fixed values of Xi and Si.

Since f becomes a function of the one variable λi only, the methods
of finding λi* referred to in the previous slide are called
one-dimensional minimization methods.
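As a small illustrative example (not from the lecture): for f(x1, x2) = x1^2 + x2^2 with Xi = (1, 2) and Si = (-1, -2), the restricted function is f(λ) = f(Xi + λ Si) = (1 - λ)^2 + (2 - 2λ)^2 = 5(1 - λ)^2, a function of λ alone, minimized at λi* = 1, which gives Xi+1 = (0, 0).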
One dimensional minimization methods
Analytical methods (differential calculus methods)
Numerical methods
    Elimination methods
        Unrestricted search
        Exhaustive search
        Dichotomous search
        Fibonacci method
        Golden section method
    Interpolation methods
        Requiring no derivatives (quadratic)
        Requiring derivatives
            Cubic
            Direct root
                » Newton
                » Quasi-Newton
                » Secant
One dimensional minimization methods
Differential calculus methods:

Analytical method

Applicable to continuous, twice differentiable functions

Calculation of the numerical value of the objective function is
virtually the last step of the process

The optimal value of the objective function is calculated after
determining the optimal values of the decision variables
One dimensional minimization methods
Numerical methods:

The values of the objective function are first found at various combinations
of the decision variables

Conclusions are then drawn regarding the optimal solution

Elimination methods can be used for the minimization of even
discontinuous functions

The quadratic and cubic interpolation methods involve polynomial
approximations to the given function

The direct root methods are root finding methods that can be considered to
be equivalent to quadratic interpolation
Unimodal function
A unimodal function is one that has only one peak
(maximum) or valley (minimum) in a given interval

Thus a function of one variable is said to be unimodal if, given
that two values of the variable are on the same side of the
optimum, the one nearer the optimum gives the better functional
value (i.e., the smaller value in the case of a minimization
problem). This can be stated mathematically as follows:

A function f(x) is unimodal if
  x1 < x2 < x* implies that f(x2) < f(x1), and
  x2 > x1 > x* implies that f(x1) < f(x2),
where x* is the minimum point.
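As a quick numerical check of this definition (an illustrative example, not from the lecture), take f(x) = (x - 0.75)^2 on [0, 1], so that x* = 0.75. For x1 = 0.2 < x2 = 0.5 < x*, f(x2) = 0.0625 < f(x1) = 0.3025; and for x2 = 1.0 > x1 = 0.9 > x*, f(x1) = 0.0225 < f(x2) = 0.0625. In both cases the point nearer the minimum gives the smaller value.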
Unimodal function
Examples of unimodal functions:

Thus, a unimodal function can be a nondifferentiable or even a
discontinuous function

If a function is known to be unimodal in a given range, the
interval in which the minimum lies can be narrowed down provided
that the function values are known at two different values in the
range.
Unimodal function
For example, consider the normalized interval [0,1] and two function
evaluations within the interval, at points x1 and x2 with x1 < x2,
giving f1 = f(x1) and f2 = f(x2).

There are three possible outcomes:

  f1 < f2
  f1 > f2
  f1 = f2
Unimodal function

If the outcome is f1 < f2, the minimizing x cannot lie to the right
of x2. Thus, that part of the interval, [x2, 1], can be discarded,
and a new, smaller interval of uncertainty, [0, x2], results as
shown in the figure.
Unimodal function

If the outcome is f(x1) > f(x2), the interval [0, x1] can be
discarded to obtain a new, smaller interval of uncertainty, [x1, 1].
Unimodal function

If f1 = f2, the intervals [0, x1] and [x2, 1] can both be discarded
to obtain the new interval of uncertainty as [x1, x2].
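These three cases can be captured in a few lines of Python; the sketch below performs one elimination step (the function name and argument names are illustrative, not from the lecture):

```python
def reduce_interval(f, a, b, x1, x2):
    """One elimination step on [a, b] for a unimodal f, with a < x1 < x2 < b.

    Returns the new, smaller interval of uncertainty.
    """
    f1, f2 = f(x1), f(x2)
    if f1 < f2:
        return a, x2      # minimum cannot lie to the right of x2
    if f1 > f2:
        return x1, b      # minimum cannot lie to the left of x1
    return x1, x2         # f1 = f2: minimum lies between x1 and x2
```

The elimination methods listed earlier (dichotomous, Fibonacci, golden section, and so on) differ only in how the interior points x1 and x2 are placed at each step.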
Unimodal function

Furthermore, if one of the experiments (function evaluations in the
elimination method) remains within the new interval, as will be the
situation in Figs. (a) and (b), only one other experiment needs to
be placed within the new interval for the process to be repeated.

In Fig. (c), two more experiments have to be placed in the new
interval in order to find a reduced interval of uncertainty.
Unimodal function

The assumption of unimodality is made in all the elimination
techniques.

If a function is known to be multimodal (i.e., having several
valleys or peaks), the range of the function can be subdivided into
several parts and the function treated as a unimodal function in
each part.
Elimination methods
In most practical problems, the optimum solution is known to lie within
restricted ranges of the design variables. In some cases, this range is not
known, and hence the search has to be made with no restrictions on the
values of the variables.

UNRESTRICTED SEARCH

Search with fixed step size

Search with accelerated step size


Unrestricted Search
Search with fixed step size
The most elementary approach for such a problem is to use a
fixed step size and move from an initial guess point in a
favorable direction (positive or negative).

The step size used must be small in relation to the final accuracy
desired.

Simple to implement

Not efficient in many cases


Unrestricted Search
Search with fixed step size
1. Start with an initial guess point, say, x1
2. Find f1 = f(x1)
3. Assuming a step size s, find x2 = x1 + s
4. Find f2 = f(x2)
5. If f2 < f1, and if the problem is one of minimization, the
   assumption of unimodality indicates that the desired minimum
   cannot lie at x < x1. Hence the search can be continued further
   along the points x3, x4, ..., using the unimodality assumption
   while testing each pair of experiments. This procedure is
   continued until a point, xi = x1 + (i-1)s, shows an increase in
   the function value.
Unrestricted Search

6. The search is terminated at xi, and either xi or xi-1 can be
   taken as the optimum point.
7. If, initially, f1 < f2, the search should be carried out in the
   reverse direction along the points x-2, x-3, ..., where
   x-j = x1 - (j-1)s.
8. If f2 = f1, the desired minimum lies in between x1 and x2, and
   the minimum point can be taken as either x1 or x2.
9. If it happens that both f2 and f-2 are greater than f1, it
   implies that the desired minimum will lie in the double interval
   x-2 < x < x2.
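Steps 1 to 9 can be sketched in Python as follows (an illustrative sketch, not the lecture's code; ties and boundary cases are handled only in the simplest way):

```python
def fixed_step_search(f, x1, s):
    """Unrestricted search with a fixed step size s from the guess x1.

    Returns two points between (or at) which the minimum is expected to lie.
    """
    f1 = f(x1)
    f_plus, f_minus = f(x1 + s), f(x1 - s)
    if f_plus > f1 and f_minus > f1:
        return x1 - s, x1 + s          # step 9: minimum lies in the double interval
    if f_plus > f1:
        s = -s                         # step 7: search in the reverse direction
    x_prev, f_prev = x1, f1
    x, fx = x1 + s, f(x1 + s)
    while fx < f_prev:                 # step 5: keep stepping while f decreases
        x_prev, f_prev = x, fx
        x, fx = x + s, f(x + s)
    return x_prev, x                   # step 6: take x_prev (or x) as the optimum
```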
Unrestricted Search
Search with accelerated step size
Although the search with a fixed step size appears to be very
simple, its major limitation is the unrestricted nature of the
region in which the minimum can lie.

For example, if the minimum point for a particular function happens
to be xopt = 50,000 and, in the absence of knowledge about the
location of the minimum, x1 and s are chosen as 0.0 and 0.1,
respectively, we have to evaluate the function 500,001 times to
find the minimum point. This involves a large amount of
computational work.
Unrestricted Search

An obvious improvement can be achieved by increasing the step size
gradually until the minimum point is bracketed.

A simple method consists of doubling the step size as long as the
move results in an improvement of the objective function.

One possibility is to reduce the step length after bracketing the
optimum in (xi-1, xi). Starting either from xi-1 or xi, the basic
procedure can be applied with a reduced step size. This procedure
can be repeated until the bracketed interval becomes sufficiently
small.
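A sketch of the accelerated-step variant in Python (illustrative only; here each trial point is measured from x1 and the step is doubled after every improvement, which is the pattern used in the example that follows):

```python
def accelerated_step_search(f, x1, s):
    """Bracket the region of the minimum by doubling the step size.

    Trial points are x1 + s, x1 + 2s, x1 + 4s, ...; the search stops at the
    first point where the function value increases, and returns the best
    point found together with that first worsening point.
    """
    if f(x1 + s) > f(x1):        # no improvement forward: reverse the direction
        s = -s
    x_prev, f_prev = x1, f(x1)
    x, fx = x1 + s, f(x1 + s)
    while fx < f_prev:
        x_prev, f_prev = x, fx
        s *= 2.0                 # accelerate: double the step size
        x, fx = x1 + s, f(x1 + s)
    return x_prev, x
```

Once the search stops, the basic procedure can be restarted from the best point with a reduced step size, as described above.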
Example
Find the minimum of f = x(x - 1.5) by starting from 0.0 with an
initial step size of 0.05.
Solution:
The function value at x1 = 0.0 is f1 = 0.0. If we try to start moving
in the negative x direction, we find that x-2 = -0.05 and
f-2 = 0.0775. Since f-2 > f1, the assumption of unimodality indicates
that the minimum cannot lie toward the left of x-2. Thus, we start
moving in the positive x direction and obtain the following results:
  i    Value of s    xi = x1 + s    fi = f(xi)    Is fi > fi-1?
  1        -            0.0            0.0             -
  2       0.05          0.05          -0.0725          No
  3       0.10          0.10          -0.140           No
  4       0.20          0.20          -0.260           No
  5       0.40          0.40          -0.440           No
  6       0.80          0.80          -0.560           No
  7       1.60          1.60          +0.160           Yes
Example
Solution:
From these results, the optimum point can be seen to be xopt ≈ x6 = 0.8.

In this case, the points x6 and x7 do not really bracket the minimum
point but provide information about it.

If a better approximation to the minimum is desired, the procedure
can be restarted from x5 with a smaller step size.
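The numbers in the table can be reproduced with a few lines of Python (a small illustrative check, not part of the lecture):

```python
f = lambda x: x * (x - 1.5)
xs = [0.0] + [0.05 * 2**k for k in range(6)]   # 0.0, 0.05, 0.10, 0.20, 0.40, 0.80, 1.60
for x in xs:
    print(f"x = {x:4.2f}   f(x) = {f(x):+.4f}")
# f(x) first increases at x = 1.60, so the search stops with xopt ≈ x6 = 0.80;
# restarting from x5 = 0.40 with a smaller step refines this toward the true
# minimum at x* = 0.75.
```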
