УДК 519.853.32
© Т. Баяртугс, А. Энхболор, Р. Энхбат
In this paper we consider quadratic programming problems consisting of convex quadratic maximization and convex quadratic minimization. Based on optimality conditions, we propose algorithms for solving these problems.
Keywords: quadratic maximization, quadratic minimization, algorithm, convergence.
1. Introduction
T. Bayartugs, A. Enkhbolor, R. Enkhbat. Quadratic minimization and maximization
Clearly,
‖y − u‖ > 0 (2.3)
holds because u ∉ L_{f(z)}(f). Moreover, this y can be considered as a
solution of the following convex minimization problem:
g(x) = (1/2) ‖x − u‖² → min,  x ∈ E_{f(z)}(f).
Applying the Lagrange method to this latter problem, we obtain the
following optimality condition at the point y:
ВЕСТНИК БУРЯТСКОГО ГОСУДАРСТВЕННОГО УНИВЕРСИТЕТА 1/2014
λ₀ ≥ 0,  λ ≥ 0,  λ₀ + λ > 0,
λ₀ g′(y) + λ f′(y) = 0, (2.4)
λ (f(y) − f(z)) = 0.
If λ₀ = 0, then (2.4) implies that λ > 0, f(y) = f(z) and f′(y) = 0. Hence we conclude that z must be a global minimizer of f over Rⁿ, which contradicts the assumption in the theorem. If λ = 0, then we have λ₀ > 0 and g′(y) = y − u = 0, which also contradicts (2.3). So, without loss of generality, we can set λ₀ = 1 and λ > 0 in (2.4) to obtain
y − u + λ f′(y) = 0,  λ > 0,
f(y) = f(z).
From this we conclude that ⟨f′(y), u − y⟩ > 0, which contradicts (2.2). This last contradiction implies that the assumption that z is not a solution of problem (2.1) must be false. This completes the proof.
Sometimes it is also useful to reformulate Theorem 2.1 in the following form.
Theorem 2.2. Let z ∈ D and rank(C) ≠ rank(C | dᵗ). Then z is a solution of problem (2.1) if and only if
⟨f′(y), x − y⟩ ≤ 0  for all y ∈ E_{f(z)}(f) and x ∈ D, (2.5)
where (C | dᵗ) is the matrix C extended by the column dᵗ.
The following lemma shows that finding a point on the level set of f(x) is computationally possible.
Lemma 2.1. Let a point z ∈ D and a vector h ∈ Rⁿ satisfy ⟨f′(z), h⟩ < 0. Then there exists a positive number α such that z + αh ∈ E_{f(z)}(f).
Proof. Note that ⟨Ch, h⟩ > 0, and
⟨2Cz + d, h⟩ < 0. (2.8)
Construct a point y_α for α > 0 defined by
y_α = z + αh.
Solve the equation f(y_α) = f(z) with respect to α. In fact, we have
⟨Cy_α, y_α⟩ + ⟨d, y_α⟩ + q = f(z),
or equivalently,
⟨C(z + αh), z + αh⟩ + ⟨d, z + αh⟩ + q = ⟨Cz, z⟩ + ⟨d, z⟩ + q,
which yields
α = − ⟨2Cz + d, h⟩ / ⟨Ch, h⟩.
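As a quick numerical illustration of the lemma, the step length α can be computed and checked directly. The quadratic data below (C, d, q, z) are made up for the demonstration, not taken from the paper:

```python
import numpy as np

# Numerical check of Lemma 2.1 on an illustrative quadratic
# f(x) = <Cx, x> + <d, x> + q (the data below are made up for the demo).
C = np.array([[2.0, 0.5],
              [0.5, 1.0]])          # symmetric positive definite
d = np.array([1.0, -1.0])
q = 3.0

f = lambda x: x @ C @ x + d @ x + q
grad = lambda x: 2 * C @ x + d      # f'(x) = 2Cx + d

z = np.array([1.0, 2.0])
h = -grad(z)                        # any h with <f'(z), h> < 0 will do
assert grad(z) @ h < 0

# Step length from the lemma: alpha = -<2Cz + d, h> / <Ch, h>
alpha = -(grad(z) @ h) / (h @ C @ h)
y = z + alpha * h                   # lands on the level set E_{f(z)}(f)

print(alpha > 0, abs(f(y) - f(z)) < 1e-9)
```

Expanding f(z + αh) = f(z) + α⟨2Cz + d, h⟩ + α²⟨Ch, h⟩ shows why this α solves f(y_α) = f(z) exactly.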
that
f(uᵏ) − f(z) = f(uᵏ) − f(yᵏ) ≥ ⟨f′(yᵏ), uᵏ − yᵏ⟩ > 0.
Let z = (z₁, z₂, ..., zₙ) be a local maximizer of problem (2.6). Then, due to [10], zᵢ ∈ {aᵢ, bᵢ}, i = 1, 2, ..., n. In order to construct an approximation set, take the following steps.
Define points v¹, v², ..., vⁿ⁺¹ by the formulas
v^k_i = z_i if i ≠ k;  v^k_k = a_k if z_k = b_k;  v^k_k = b_k if z_k = a_k,  k = 1, 2, ..., n, (2.12)
and
v^{n+1}_i = b_i if z_i = a_i;  v^{n+1}_i = a_i if z_i = b_i,  i = 1, 2, ..., n. (2.13)
Clearly,
‖vⁿ⁺¹ − z‖ > ‖vᵏ − z‖,  k = 1, 2, ..., n,
∑_{i=1}^{n} (a_i − b_i)² = ‖vⁿ⁺¹ − z‖².
Denote by hⁱ the vectors hⁱ = vⁱ − z, i = 1, 2, ..., n + 1. Note that
⟨hᵏ, hʲ⟩ = 0,  k ≠ j,  k, j = 1, 2, ..., n.
Define the approximation set A_z^{n+1} by
A_z^m = { y¹, y², ..., yᵐ | yⁱ ∈ E_{f(z)}(f),  yⁱ = z − αᵢ hⁱ,  i = 1, 2, ..., m }, (2.14)
where αᵢ = ⟨2Cz + d, hⁱ⟩ / ⟨Chⁱ, hⁱ⟩,  i = 1, 2, ..., n + 1.
Then an algorithm for solving (2.6) is described in the following.
Algorithm MAX
Input: A convex quadratic function f and a box set D.
Output: An approximate solution x to problem (2.6); i.e., an approximate global maximizer of f over D.
Step 1. Choose a point x⁰ ∈ D. Set k := 0.
Step 2. Find a local maximizer zᵏ ∈ D by the projected gradient method starting with the initial approximation point xᵏ.
Step 3. Construct an approximation set A_{zᵏ}^{n+1} at the point zᵏ by formulas (2.12), (2.13) and (2.14).
Step 4. For each yⁱ ∈ A_{zᵏ}^{n+1}, i = 1, 2, ..., n + 1, solve the problems
⟨f′(yⁱ), x⟩ → max,  x ∈ D,
which have analytical solutions uⁱ, i = 1, 2, ..., n + 1, found as
u^i_s = b_s if (2Cyⁱ + d)_s > 0;  u^i_s = a_s if (2Cyⁱ + d)_s ≤ 0,  s = 1, 2, ..., n.
y = zᵏ − (⟨2Czᵏ + d, uʲ − zᵏ⟩ / ⟨2C(uʲ − zᵏ), uʲ − zᵏ⟩) (uʲ − zᵏ).
Theorem 2.2. If θ_{n+1}^k > 0 for k = 1, 2, ..., then Algorithm MAX converges to a global solution in a finite number of steps.
The proof is immediate from Lemma 2.2 and the fact that a convex function attains its local and global maxima at vertices of the box set D.
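The scheme above can be sketched in code. The paper's implementation is in Matlab; the following is a minimal Python sketch in which a simple vertex-flipping local search stands in for the projected gradient method of Step 2 (an assumption, not the authors' routine), and the θ test of the theorem is replaced by a direct improvement check:

```python
import numpy as np

def grad(C, d, x):
    return 2 * C @ x + d                      # f'(x) = 2Cx + d

def local_max_vertex(C, d, a, b, x):
    # A local maximizer of a convex quadratic over a box lies at a vertex;
    # move to the vertex indicated by the gradient sign until it stabilizes.
    while True:
        x_new = np.where(grad(C, d, x) > 0, b, a)
        if np.array_equal(x_new, x):
            return x
        x = x_new

def algorithm_max(C, d, a, b, x0, max_iter=100):
    f = lambda x: x @ C @ x + d @ x
    z = local_max_vertex(C, d, a, b, x0)      # Step 2
    for _ in range(max_iter):
        n = len(z)
        # Steps 3-4: flip coordinates of z as in (2.12)-(2.13), project each
        # direction h^i onto the level set E_{f(z)}(f) via (2.14), then solve
        # max <f'(y^i), x> over D analytically at each y^i.
        flips = np.where(z == a, b, a)
        H = [np.eye(n)[k] * (flips[k] - z[k]) for k in range(n)]
        H.append(flips - z)
        best = z
        for h in H:
            alpha = (grad(C, d, z) @ h) / (h @ C @ h)
            y = z - alpha * h                 # f(y) = f(z)
            u = np.where(grad(C, d, y) > 0, b, a)
            if f(u) > f(best):
                best = u
        if f(best) <= f(z):                   # no improvement: stop
            return z
        z = local_max_vertex(C, d, a, b, best)
    return z
```

For example, with C = I, d = 0 and the box [−1, 2] × [−1, 3], the sketch escapes the local vertex (−1, −1) and returns the global maximizer (2, 3).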
‖x − y‖² → min,  x ∈ D.
We can solve this problem analytically to obtain its solution as follows:
(P_D(y))_i = a_i if y_i ≤ a_i;  y_i if a_i < y_i < b_i;  b_i if y_i ≥ b_i,  i = 1, 2, ..., n. (3.3)
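Formula (3.3) is exactly componentwise clipping, so the projection is a one-liner in, say, NumPy (an illustrative sketch; the paper's code is in Matlab):

```python
import numpy as np

# The projection (3.3) onto the box D = [a, b] is componentwise clipping.
a = np.array([0.0, 0.0, 0.0])
b = np.array([1.0, 2.0, 3.0])
y = np.array([-0.5, 1.5, 4.0])

p = np.clip(y, a, b)          # (P_D(y))_i per formula (3.3)
print(p.tolist())             # [0.0, 1.5, 3.0]
```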
α ⟨f′(xᵏ), xᵏ − x_α^k⟩ ≥ ‖x_α^k − xᵏ‖². (3.6)
f(x_α^k) − f(xᵏ) ≤ ⟨f′(xᵏ), x_α^k − xᵏ⟩ + L ‖x_α^k − xᵏ‖².
Taking into account the inequality (3.6), we have
f(x_α^k) − f(xᵏ) ≤ −(1/α) ‖x_α^k − xᵏ‖² + L ‖x_α^k − xᵏ‖² = (−1/α + L) ‖x_α^k − xᵏ‖². (3.7)
Set in (3.7) α_k = α with 0 < α < 1/L. In this case
C = −1/α + L < 0,  f(x_α^k) − f(xᵏ) ≤ C ‖x_α^k − xᵏ‖² < 0.
It is clear that the sequence {f(xᵏ)} is decreasing. On the other hand, as a strictly convex quadratic function, f is bounded from below, so the limit lim_{k→∞} f(xᵏ) exists. Consequently,
lim_{k→∞} (f(xᵏ⁺¹) − f(xᵏ)) = 0. (3.8)
For x* ∈ S and xᵏ ∈ D we have
0 ≤ f(xᵏ) − f(x*) ≤ ⟨f′(xᵏ), xᵏ − x*⟩ = ⟨f′(xᵏ), xᵏ − x_α^k + x_α^k − x*⟩ = ⟨f′(xᵏ), xᵏ − x_α^k⟩ + ⟨f′(xᵏ), x_α^k − x*⟩,  α > 0. (3.9)
From here, we obtain
α ⟨f′(xᵏ), x_α^k − x*⟩ ≤ ⟨x_α^k − xᵏ, x* − x_α^k⟩,  α > 0.
Setting α_k = α with 0 < α < 1/L in (3.9), we obtain
0 ≤ f(xᵏ) − f(x*) ≤ ⟨(1/α)(x* − xᵏ⁺¹) − f′(xᵏ), xᵏ⁺¹ − xᵏ⟩ ≤ ((1/α) ‖x* − xᵏ⁺¹‖ + ‖f′(xᵏ)‖) ‖xᵏ⁺¹ − xᵏ‖.
Since M(x⁰) is bounded, we have
‖x* − xᵏ⁺¹‖ ≤ sup_{x, y ∈ M(x⁰)} ‖x − y‖ = K < +∞.
Moreover,
‖f′(xᵏ)‖ = ‖f′(xᵏ) − f′(x⁰) + f′(x⁰)‖ ≤ ‖f′(xᵏ) − f′(x⁰)‖ + ‖f′(x⁰)‖ ≤ L ‖xᵏ − x⁰‖ + ‖f′(x⁰)‖ ≤ L·K + N,
where N = ‖f′(x⁰)‖ and xᵏ ∈ M(x⁰). Then
0 ≤ f(xᵏ) − f(x*) ≤ (K/α + L·K + N) ‖xᵏ⁺¹ − xᵏ‖.
Taking into account (3.8), we obtain
lim_{k→∞} ‖xᵏ⁺¹ − xᵏ‖ = 0,
and hence
lim_{k→∞} f(xᵏ) = f(x*).
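The iteration analyzed above can be sketched as follows. The stopping rule and the particular step α = 1/(2L) are illustrative assumptions, not the paper's Matlab settings:

```python
import numpy as np

# Sketch of the projected gradient method analyzed above, for
# min f(x) = <Cx, x> + <d, x> over the box [a, b] with a fixed step
# 0 < alpha < 1/L, where L = 2||C|| is a Lipschitz constant of f'(x) = 2Cx + d.
def projected_gradient(C, d, a, b, x0, tol=1e-10, max_iter=100000):
    L = 2 * np.linalg.norm(C, 2)
    alpha = 0.5 / L                                  # satisfies 0 < alpha < 1/L
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        # x_alpha^k = P_D(x^k - alpha f'(x^k)); projection by clipping, (3.3)
        x_next = np.clip(x - alpha * (2 * C @ x + d), a, b)
        if np.linalg.norm(x_next - x) < tol:         # ||x^{k+1}-x^k|| -> 0, cf. (3.8)
            return x_next
        x = x_next
    return x
```

For instance, with C = I and d = (−2, −2) the unconstrained minimizer is (1, 1); on the box [2, 5]² the iteration settles at the boundary point (2, 2).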
Numerical Experiments
The proposed algorithms for the quadratic maximization and minimization problems have been tested on the following types of problems. The algorithms were coded in Matlab. The dimensions of the problems ranged from 200 up to 5000. Computational times and global solutions are given in the table below.
Problem 1.
max_{x∈D} (⟨Ax, x⟩ + ⟨B, x⟩)
subject to
D = { x ∈ Rⁿ | −(n − i + 1) ≤ x_i ≤ n + 0.5i, i = 1, 2, ..., n },
where
A = ( n    n−1  ...  1 )
    ( n−1  n    ...  2 )
    ( ...  ...  ...  ...)
    ( 1    2    ...  n ),   B = (1, 1, ..., 1).
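For reference, the data of Problem 1 can be generated programmatically. The closed form A_ij = n − |i − j| is our reading of the displayed matrix (1-based indices), not stated explicitly in the paper:

```python
import numpy as np

# Building the data of Problem 1 for a given n; the entry pattern
# A_ij = n - |i - j| (1-based i, j) is inferred from the displayed matrix.
n = 5
idx = np.arange(1, n + 1)
A = n - np.abs(idx[:, None] - idx[None, :])
B = np.ones(n)

# Box constraints: -(n - i + 1) <= x_i <= n + 0.5 i
lower = -(n - idx + 1)
upper = n + 0.5 * idx

f = lambda x: x @ A @ x + B @ x      # objective <Ax, x> + <B, x>
```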
Problem 2.
max_{x∈D} ∑_{i=1}^{n} (n − 1 − 0.1·i) x_i²
subject to
D = { x ∈ Rⁿ | −1 − i ≤ x_i ≤ 1 + 5i, i = 1, 2, ..., n }.
Problem 3.
min_{x∈D} ( (1/2) ⟨Ax, x⟩ − ⟨B, x⟩ )
subject to
D = { x ∈ Rⁿ | 10 ≤ x_i ≤ 30, i = 1, 2, ..., n },
where
A = ( n    n−1  ...  1 )
    ( n−1  n    ...  2 )
    ( ...  ...  ...  ...)
    ( 1    2    ...  n ),   B = (1, 1, ..., 1).
Problem 4.
max_{x∈D} ∑_{i=1}^{n} (x_i − 1)²
subject to
D = { x ∈ Rⁿ | i + 1 ≤ x_i ≤ i + 10, i = 1, 2, ..., n }.
Table

Problem  n     Initial value             Global value              Computing time (sec)
1        200   2.66690000000000e+006     335.337841669956e+009     4.166710
1        500   41.6672500000000e+006     32.7084036979206e+012     29.082187
1        1000  333.334500000000e+006     1.04625056270796e+015     202.615012
1        2000  2.66666900000000e+009     33.4733378341612e+015     1579.706486
1        5000  41.6666725000000e+009     3.26848965365074e+018     22248.485127
2        200   37.7900000000000e+003     12.3936575899980e+009     4.766626
2        500   236.975000000000e+003     482.716617724994e+009     30.115697
2        1000  948.950000000000e+003     7.71590814447147e+012     196.977634
2        2000  3.79790000000000e+006     123.393965944640e+012     1786.474928
2        5000  23.7447500000000e+006     3.5660855482763485e+18    27342.423215
3        200   600.004500000000e+006     266.668000000000e+006     9.625021
3        500   9.37501125000000e+009     4.16667000000000e+009     69.149144
3        1000  75.0000225000000e+009     33.3333400000000e+009     408.735783
3        2000  600.000045000000e+009     266.666680000000e+009     3035.512940
3        5000  937.500011250000e+012     416.6666800000000e+12     21432.421674
4        200   500                       200.00                    9.227003
4        500   12500                     500.00                    56.679240
4        1000  25000                     1000.00                   365.62471
4        2000  50000                     2000.00                   2904.275652
4        5000  125000                    5000.00                   22134.532145
Conclusion
To provide a unified view, we considered the quadratic programming problem consisting of convex quadratic maximization and convex quadratic minimization. Based on the global optimality conditions of Strekalovsky [11, 12] and classical local optimality conditions [1, 7], we proposed algorithms for solving the above problems. Under appropriate conditions we have shown that the proposed algorithms converge to a global solution in a finite number of steps. Algorithm MAX generates a sequence of local maximizers and uses linear programming at each iteration, which makes the algorithm easy to implement numerically.
References
1. Bertsekas D.P. Nonlinear Programming. 2nd ed. Belmont, MA: Athena Scientific, 1999.
2. Bomze I., Danninger G. A Finite Algorithm for Solving General Quadratic Problems // Journal of Global Optimization. 1994. Vol. 4. P. 1-16.
3. Enkhbat R. An Algorithm for Maximizing a Convex Function over a Simple Set // Journal of Global Optimization. 1996. Vol. 8. P. 379-391.
4. Horst R. On the Global Minimization of a Concave Function: Introduction and Survey // Operations Research Spectrum. 1984. Vol. 6. P. 195-200.
5. Horst R. A General Class of Branch and Bound Methods in Global Optimization with some New Approaches for Concave Minimization // Journal of Optimization Theory and Applications. 1986. Vol. 51. P. 271-291.
6. Horst R., Tuy H. Global Optimization. New York; London; Tokyo: Springer-Verlag, 1990.
7. Horst R., Pardalos P.M., Thoai N.V. Introduction to Global Optimization. Dordrecht; Boston: Kluwer Academic, 2000.
8. Karmanov V.G. Mathematical Programming. Moscow: Mir Publishers, 1989.
9. Pshenichnyi B.N., Danilin Yu.M. Numerical Methods in Extremal Problems. Moscow: Nauka, 1975.
10. Rockafellar R.T. Convex Analysis. Princeton: Princeton University Press, 1970.
11. Strekalovsky A.S. On the Global Extremum Problem // Soviet Math. Doklady. 1987. Vol. 292(5). P. 1062-1066.
12. Strekalovsky A.S. Global Optimality Conditions for Nonconvex Optimization // Journal of Global Optimization. 1998. Vol. 12. P. 415-434.
13. Vasiliev O.V. Optimization Methods. Atlanta: World Federation Publishers, 1996.