BULLETIN OF THE BURYAT STATE UNIVERSITY 1/2014

1. Controlled Systems and Optimization Methods

UDC 519.853.32
© T. Bayartugs, A. Enkhbolor, R. Enkhbat

QUADRATIC MINIMIZATION AND MAXIMIZATION

In this paper we consider quadratic programming, which consists of quadratic
maximization and quadratic minimization. Based on optimality conditions, we
propose algorithms for solving these problems.
Keywords: quadratic maximization, quadratic minimization, algorithm,
convergence.

T. Bayartugs, A. Enkhbolor, R. Enkhbat

QUADRATIC MINIMIZATION AND MAXIMIZATION

In this paper we consider the quadratic programming which consists of convex
quadratic maximization and convex quadratic minimization. Based on optimality
conditions, we propose algorithms for solving these problems.
Keywords: quadratic maximization, quadratic minimization, algorithm,
convergence.

1. Introduction

Consider an extremum problem of a quadratic function over a box $D \subset R^n$:

$$f(x) = \langle Cx, x \rangle + \langle d, x \rangle + q \to \max(\min), \quad x \in D, \qquad (1.1)$$

where $C$ is an $n \times n$ matrix, $d, x \in R^n$, and $D$ is a box set of $R^n$. Here
$\langle \cdot, \cdot \rangle$ denotes the scalar product of two vectors.
Quadratic programming plays an important role in mathematical
programming. For example, quadratic programs serve as auxiliary problems in
nonlinear programming, arising from linearized subproblems or from optimization
problems approximated by quadratic functions. Quadratic programming also has many
applications in science, technology, statistics and economics. Moreover, many
combinatorial optimization problems are formulated as quadratic programs,
including integer programming, quadratic assignment problems, linear
complementarity problems and network flow problems [7]. There are a number
of methods for solving problem (1.1) as a convex problem, such as interior
point methods, the projected gradient method, the conditional gradient method,
the proximal algorithm, penalty methods, finite step algorithms and so on [1, 9,
13]. A well-known optimality condition for problem (1.1) is given in Rockafellar
[10]. The quadratic maximization problem is known to be NP-hard. There are
many methods and algorithms [3, 11, 12] devoted to the solution of
quadratic maximization over convex sets.
The paper is organized as follows. In Section 2 we consider the quadratic
convex maximization problem and apply the global optimality condition [11] to
it. We propose finite algorithms based on approximating the level sets of
the objective function by a finite number of points and solving linear
programs as auxiliary problems. In Section 3 we consider the quadratic
minimization problem over box constraints and recall the gradient projection
method for solving it. In the last section we present numerical
results obtained by the proposed algorithms for quadratic maximization and
minimization problems.

2. Quadratic Convex Maximization Problem

2.1. Problem Statement and Global Maximality Condition


Consider the quadratic maximization problem:

$$f(x) = \langle Cx, x \rangle + \langle d, x \rangle + q \to \max, \quad x \in D, \qquad (2.1)$$

where $C$ is a positive semidefinite $n \times n$ matrix and $D \subset R^n$ is a box set.
A vector $d \in R^n$ and a number $q \in R$ are given. The optimality
conditions of [12] can be applied by proving the following theorem.

Theorem 2.1 [12]. Let $z \in D$ be such that $f'(z) \neq 0$. Then $z$ is a
solution of problem (2.1) if and only if

$$\langle f'(y), x - y \rangle \le 0 \quad \text{for all } y \in E_{f(z)}(f) \text{ and } x \in D, \qquad (2.2)$$

where $E_c(f) = \{ y \in R^n \mid f(y) = c \}$.

Proof. Necessity. Assume that $z$ is a global maximizer of problem (2.1)
and let $y \in E_{f(z)}(f)$ and $x \in D$. It is clear that $f'(y) \neq 0$. Then the
convexity of $f$ implies that

$$0 \ge f(x) - f(z) = f(x) - f(y) \ge \langle f'(y), x - y \rangle.$$

Sufficiency. Suppose, on the contrary, that $z$ is not a solution to problem
(2.1); i.e., there exists a $u \in D$ such that $f(u) > f(z)$. It is clear that the
closed set $L_{f(z)}(f) = \{ x \in R^n \mid f(x) \le f(z) \}$ is convex. Since
$\operatorname{int} L_{f(z)}(f) \neq \emptyset$, there is a projection $y$ of $u$ on $L_{f(z)}(f)$ such that

$$\| y - u \| = \min_{x \in E_{f(z)}(f)} \| x - u \|.$$

Clearly,

$$\| y - u \| > 0 \qquad (2.3)$$

holds because $u \notin L_{f(z)}(f)$. Moreover, this $y$ can be considered as a
solution of the following convex minimization problem:

$$g(x) = \tfrac{1}{2} \| x - u \|^2 \to \min, \quad x \in E_{f(z)}(f).$$

Applying the Lagrange method to this latter problem, we obtain the
following optimality condition at the point $y$:

$$\begin{cases} \lambda_0 \ge 0, \quad \lambda \ge 0, \quad \lambda_0 + \lambda > 0, \\ \lambda_0 g'(y) + \lambda f'(y) = 0, \\ \lambda \left( f(y) - f(z) \right) = 0. \end{cases} \qquad (2.4)$$

If $\lambda_0 = 0$, then (2.4) implies that $\lambda > 0$, $f(y) = f(z)$ and $f'(y) = 0$.
Hence $y$, and with it $z$ (since $f(z) = f(y)$), must be a global minimizer of $f$
over $R^n$, so $f'(z) = 0$, which contradicts the assumption of the theorem. If
$\lambda = 0$, then we have $\lambda_0 > 0$ and $g'(y) = y - u = 0$, which contradicts (2.3).
So, without loss of generality, we can set $\lambda_0 = 1$ and $\lambda > 0$ in (2.4) to obtain

$$y - u + \lambda f'(y) = 0, \quad \lambda > 0, \qquad f(y) = f(z).$$

From this we conclude that $\langle f'(y), u - y \rangle = \frac{1}{\lambda} \| u - y \|^2 > 0$,
which contradicts (2.2). This last contradiction implies that the assumption that
$z$ is not a solution of problem (2.1) must be false. This completes the proof.
Sometimes it is also useful to reformulate Theorem 2.1 in the following form.

Theorem 2.2. Let $z \in D$ and $\operatorname{rank}(C) \neq \operatorname{rank}(C \mid d^T)$. Then $z$ is a
solution of problem (2.1) if and only if

$$\langle f'(y), x - y \rangle \le 0 \quad \text{for all } y \in E_{f(z)}(f) \text{ and } x \in D, \qquad (2.5)$$

where $(C \mid d^T)$ is the matrix $C$ extended by the column $d^T$.

2.2. Approximation of the Level Set


Furthermore, to construct a numerical method for solving problem
(2.1) based on the optimality condition (2.2), we assume that $C$ is a
symmetric positive definite $n \times n$ matrix. Then problem (2.1) can be
written as follows:

$$f(x) = \langle Cx, x \rangle + \langle d, x \rangle + q \to \max, \quad x \in D, \qquad (2.6)$$

where $D = \{ x \in R^n \mid a \le x \le b \}$, and $a, b, d \in R^n$, $q \in R$.
Now we introduce some definitions.

Definition 2.1. The set $E_{f(z)}(f) = \{ y \in R^n \mid f(y) = f(z) \}$ is called
the level set of $f$ at $z$.

Definition 2.2. The set $A_z^m = \{ y^1, y^2, \dots, y^m \mid y^i \in E_{f(z)}(f) \}$
is called an approximation set to the level set $E_{f(z)}(f)$ at the point $z$.

Note that checking the optimality condition (2.2) requires solving the
linear programming problems

$$\langle f'(y), x - y \rangle \to \max, \quad x \in D$$

for each $y \in E_{f(z)}(f)$. This is a hard problem. So we need to find an
appropriate approximation set such that one can check the optimality
condition at a finite number of points.

The following lemma shows that finding a point on the level set of
$f(x)$ is computationally possible.

Lemma 2.1. Let a point $z \in D$ and a vector $h \in R^n$ satisfy
$\langle f'(z), h \rangle < 0$. Then there exists a positive number $\alpha$ such that
$z + \alpha h \in E_{f(z)}(f)$.

Proof. Note that $\langle Ch, h \rangle > 0$ and

$$\langle 2Cz + d, h \rangle < 0. \qquad (2.8)$$

Construct a point $y_\alpha$ for $\alpha > 0$ defined by

$$y_\alpha = z + \alpha h, \qquad (2.7)$$

and solve the equation $f(y_\alpha) = f(z)$ with respect to $\alpha$. In fact, we have

$$\langle Cy_\alpha, y_\alpha \rangle + \langle d, y_\alpha \rangle + q = f(z),$$

or equivalently,

$$\langle C(z + \alpha h), z + \alpha h \rangle + \langle d, z + \alpha h \rangle + q = \langle Cz, z \rangle + \langle d, z \rangle + q,$$

which yields

$$\alpha = -\frac{\langle 2Cz + d, h \rangle}{\langle Ch, h \rangle}.$$

By (2.8), we have $\alpha > 0$ and consequently $y_\alpha \in E_{f(z)}(f)$.
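In code, the step length of Lemma 2.1 is available in closed form. A minimal sketch, assuming NumPy (the function name `level_set_point` is illustrative, not from the paper):

```python
import numpy as np

def level_set_point(C, d, z, h):
    """Return y = z + alpha*h on the level set E_{f(z)}(f) as in Lemma 2.1.

    Assumes <f'(z), h> = <2Cz + d, h> < 0 and <Ch, h> > 0.
    """
    alpha = -np.dot(2 * C @ z + d, h) / np.dot(C @ h, h)  # root of f(z + alpha*h) = f(z)
    return z + alpha * h
```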


For each $y^i \in A_z^m$, $i = 1, 2, \dots, m$, solve the problem

$$\langle f'(y^i), x \rangle \to \max, \quad x \in D. \qquad (2.9)$$

Let $u^i$, $i = 1, 2, \dots, m$, be solutions of these problems, which always
exist due to the compactness of $D$:

$$\langle f'(y^i), u^i \rangle = \max_{x \in D} \langle f'(y^i), x \rangle. \qquad (2.10)$$

We refer to the problems generated by (2.9) as the auxiliary problems of
$A_z^m$. Define $\theta_m$ as follows:

$$\theta_m = \max_{j = 1, 2, \dots, m} \langle f'(y^j), u^j - y^j \rangle. \qquad (2.11)$$

The value $\theta_m$ is called the approximate global condition value.
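Each auxiliary problem (2.9) is a linear program over a box, so it decouples coordinatewise: the maximizer takes $b_s$ where the corresponding gradient component is positive and $a_s$ otherwise. This is exactly the analytic solution used in Step 4 of Algorithm MAX below. A one-line sketch, assuming NumPy:

```python
import numpy as np

def box_lp_argmax(g, a, b):
    """Maximize <g, x> over the box {a <= x <= b}: pick b_s if g_s > 0, else a_s."""
    return np.where(g > 0, b, a)
```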


Some properties of $A_z^m$ and $\theta_m$ are as follows.

Lemma 2.2. If for $z \in D$ there is a point $y^k \in A_z^m$ such that
$\langle f'(y^k), u^k - y^k \rangle > 0$, then

$$f(u^k) > f(z)$$

holds, where $u^k \in D$ satisfies $\langle f'(y^k), u^k \rangle = \max_{x \in D} \langle f'(y^k), x \rangle$.

Proof. By the definition of $u^k$, we have

$$\max_{x \in D} \langle f'(y^k), x - y^k \rangle = \langle f'(y^k), u^k - y^k \rangle.$$

Since $f$ is convex, we have

$$f(u) - f(v) \ge \langle f'(v), u - v \rangle$$

for all $u, v \in R^n$ [8, 13]. Therefore, the assumption in the lemma implies
that

$$f(u^k) - f(z) = f(u^k) - f(y^k) \ge \langle f'(y^k), u^k - y^k \rangle > 0.$$
Let $z = (z_1, z_2, \dots, z_n)$ be a local maximizer of problem (2.6). Then, due to
[10], $z_i = a_i$ or $z_i = b_i$ for each $i = 1, 2, \dots, n$; that is, $z$ is a vertex
of the box. In order to construct an approximation set, take the following steps.

Define points $v^1, v^2, \dots, v^{n+1}$ by the formulas

$$v_i^k = \begin{cases} z_i & \text{if } i \neq k, \\ b_k & \text{if } z_k = a_k, \\ a_k & \text{if } z_k = b_k, \end{cases} \quad i, k = 1, 2, \dots, n, \qquad (2.12)$$

and

$$v_i^{n+1} = \begin{cases} b_i & \text{if } z_i = a_i, \\ a_i & \text{if } z_i = b_i, \end{cases} \quad i = 1, 2, \dots, n. \qquad (2.13)$$

Clearly,

$$\| v^{n+1} - z \| \ge \| v^k - z \|, \quad k = 1, 2, \dots, n, \qquad \sum_{i=1}^n (a_i - b_i)^2 = \| v^{n+1} - z \|^2.$$

Denote by $h^i$ the vectors $h^i = v^i - z$, $i = 1, 2, \dots, n + 1$. Note that

$$\langle h^k, h^j \rangle = 0, \quad k \neq j, \quad k, j = 1, 2, \dots, n.$$

Define the approximation set $A_z^{n+1}$ by

$$A_z^{n+1} = \{ y^1, y^2, \dots, y^{n+1} \mid y^i \in E_{f(z)}(f), \ y^i = z - \alpha_i h^i, \ i = 1, 2, \dots, n + 1 \}, \qquad (2.14)$$

where $\alpha_i = \dfrac{\langle 2Cz + d, h^i \rangle}{\langle Ch^i, h^i \rangle}$, $i = 1, 2, \dots, n + 1$.
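A sketch of this construction, assuming NumPy (the name `approximation_set` is illustrative; $z$ is assumed to be a vertex with $z_i \in \{a_i, b_i\}$, $a_i < b_i$, and $C$ positive definite):

```python
import numpy as np

def approximation_set(C, d, z, a, b):
    """Build A_z^{n+1} = {y^1, ..., y^{n+1}} at a vertex local maximizer z
    via formulas (2.12)-(2.14)."""
    n = len(z)
    opposite = np.where(z == a, b, a)        # coordinatewise opposite vertex, cf. (2.13)
    flip = opposite - z                      # h^{n+1} = v^{n+1} - z
    H = [np.eye(n)[k] * flip[k] for k in range(n)] + [flip]   # h^1, ..., h^{n+1}
    ys = []
    for h in H:                              # y^i = z - alpha_i h^i, see (2.14)
        alpha = np.dot(2 * C @ z + d, h) / np.dot(C @ h, h)
        ys.append(z - alpha * h)
    return ys
```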
Then an algorithm for solving (2.6) is described in the following.
Algorithm MAX
Input: A convex quadratic function $f$ and a box set $D$.
Output: An approximate solution $x$ to problem (2.6), i.e., an
approximate global maximizer of $f$ over $D$.
Step 1. Choose a point $x^0 \in D$. Set $k := 0$.
Step 2. Find a local maximizer $z^k \in D$ by the projected gradient method
starting from the initial approximation point $x^k$.
Step 3. Construct an approximation set $A_{z^k}^{n+1}$ at the point $z^k$ by
formulas (2.12), (2.13) and (2.14).
Step 4. For each $y^i \in A_{z^k}^{n+1}$, $i = 1, 2, \dots, n + 1$, solve the problem

$$\langle f'(y^i), x \rangle \to \max, \quad x \in D,$$

which has the analytical solution $u^i$ found as

$$u_s^i = \begin{cases} b_s & \text{if } (2Cy^i + d)_s > 0, \\ a_s & \text{if } (2Cy^i + d)_s \le 0, \end{cases}$$

where $i = 1, 2, \dots, n + 1$ and $s = 1, 2, \dots, n$.

Step 5. Find a number $j \in \{1, 2, \dots, n + 1\}$ such that

$$\theta_{n+1} = \langle f'(y^j), u^j - y^j \rangle = \max_{i = 1, 2, \dots, n + 1} \langle f'(y^i), u^i - y^i \rangle.$$

Step 6. If $f(u^j) > f(z^k)$, then set $x^{k+1} := u^j$, $k := k + 1$ and go to Step 2.

Step 7. Otherwise, find a $y \in E_{f(z^k)}(f)$ as

$$y = z^k - \frac{\langle 2Cz^k + d, u^j - z^k \rangle}{\langle C(u^j - z^k), u^j - z^k \rangle} (u^j - z^k).$$

Step 8. Solve the problem

$$\langle f'(y), x \rangle \to \max, \quad x \in D.$$

Let $v$ be its solution, i.e., $\langle f'(y), v \rangle = \max_{x \in D} \langle f'(y), x \rangle$.
Step 9. If $\langle f'(y), v - y \rangle > 0$, then set $x^{k+1} := v$, $k := k + 1$ and go to
Step 2. Otherwise, $z^k$ is an approximate global maximizer; terminate.
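Assembling Steps 1-6 gives the following condensed sketch in Python with NumPy (the paper's own implementation is in Matlab). The inner `local_maximizer` is only a stand-in for the projected gradient method of Step 2 (a conditional-gradient-style vertex ascent is used instead), and Steps 7-8 are omitted for brevity; all names are illustrative:

```python
import numpy as np

def algorithm_max(C, d, q, a, b, x0, max_iter=100):
    """Sketch of Algorithm MAX for f(x) = <Cx,x> + <d,x> + q -> max over a box."""
    f = lambda x: x @ C @ x + d @ x + q
    grad = lambda x: 2 * C @ x + d

    def local_maximizer(x):
        # Stand-in for Step 2: repeatedly jump to the vertex maximizing the
        # linearization of f; for convex f this never decreases f and
        # terminates because vertex values strictly increase.
        z = x.copy()
        while True:
            v = np.where(grad(z) > 0, b, a).astype(float)
            if f(v) <= f(z):
                return z
            z = v

    n = len(x0)
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        z = local_maximizer(x)                             # Step 2
        flip = np.where(z == a, b, a) - z                  # Step 3: directions h^i
        H = [np.eye(n)[k] * flip[k] for k in range(n)] + [flip]
        improved = False
        for h in H:
            denom = h @ C @ h
            if denom <= 0:                                 # degenerate side, skip
                continue
            y = z - ((grad(z) @ h) / denom) * h            # y^i on the level set, (2.14)
            u = np.where(grad(y) > 0, b, a).astype(float)  # Step 4: analytic LP solution
            if grad(y) @ (u - y) > 0 and f(u) > f(z):      # Steps 5-6
                x, improved = u, True
                break
        if not improved:
            return z                                       # approximate global maximizer
    return x
```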

Theorem 2.3. If $\theta_{n+1}^k > 0$ for $k = 1, 2, \dots$, then Algorithm MAX converges
to a global solution in a finite number of steps.
The proof is immediate from Lemma 2.2 and the fact that a convex function
attains its local and global maxima at vertices of the box set $D$, which are
finite in number.

3. Quadratic Convex Minimization Problem

3.1. Problem Statement and Global Minimality Condition


Consider the quadratic minimization problem over a box constraint:

$$f(x) = \langle Cx, x \rangle + \langle d, x \rangle + q \to \min, \quad x \in D, \qquad (3.1)$$

$$D = \{ x \in R^n \mid a_i \le x_i \le b_i, \ i = 1, 2, \dots, n \},$$

where $C$ is a symmetric positive semidefinite $n \times n$ matrix and
$a, b, d \in R^n$, $q \in R$.

Theorem 3.1 [1]. Let $z \in D$. Then $z$ is a solution of problem (3.1) if and
only if

$$\langle f'(z), x - z \rangle \ge 0 \quad \text{for all } x \in D. \qquad (3.2)$$
We propose the gradient projection method for solving this problem.
Before describing the algorithm, denote by $P_D(y)$ the projection of a point
$y \in R^n$ onto the box set $D$, which is the solution to the following quadratic
programming problem:

$$\| x - y \|^2 \to \min, \quad x \in D.$$

We can solve this problem analytically to obtain its solution as follows:

$$(P_D(y))_i = \begin{cases} a_i & \text{if } y_i \le a_i, \\ y_i & \text{if } a_i < y_i < b_i, \\ b_i & \text{if } y_i \ge b_i, \end{cases} \quad i = 1, 2, \dots, n. \qquad (3.3)$$
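Since (3.3) is exactly coordinatewise clipping, the projection is a one-liner in NumPy (a sketch; the function name is illustrative):

```python
import numpy as np

def project_box(y, a, b):
    """P_D(y) for D = {x : a <= x <= b}, computed coordinatewise as in (3.3)."""
    return np.clip(y, a, b)
```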

It is well known that the projection $P_D$ satisfies the following condition [1]:

$$\langle P_D(y) - y, x - P_D(y) \rangle \ge 0 \quad \text{for all } x \in D. \qquad (3.4)$$

We now show how to apply the gradient projection method to problem (3.1).
It can easily be checked that the function $f(x)$ defined in (3.1) is a convex
quadratic function (strictly convex when $C$ is positive definite). Its gradient is

$$f'(x) = 2Cx + d.$$

Lemma 3.1. The gradient $f'(x)$ satisfies the Lipschitz condition with the
constant $L = 2\|C\|$.

Proof. Compute $\| f'(u) - f'(v) \|$ for arbitrary points $u, v \in D$. We have
$\| f'(u) - f'(v) \| = \| 2C(u - v) \| \le 2\|C\| \|u - v\|$, which completes the proof.
The gradient projection algorithm generates a sequence $\{x^k\}$ in the
following way:

$$x^{k+1} = P_D(x^k - \alpha_k f'(x^k)), \quad k = 0, 1, 2, \dots, \quad x^0 \in D,$$

where the step sizes $\alpha_k$ are chosen so that $f(x^{k+1}) < f(x^k)$.
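A sketch of the resulting iteration, assuming NumPy; the constant step $\alpha = 0.9/L$ is one admissible choice with $0 < \alpha < 1/L$, not a value prescribed by the paper:

```python
import numpy as np

def gradient_projection(C, d, q, a, b, x0, tol=1e-10, max_iter=100000):
    """Gradient projection method for problem (3.1) with a constant step."""
    L = 2 * np.linalg.norm(C, 2)    # Lipschitz constant of f' (Lemma 3.1)
    alpha = 0.9 / L                 # any 0 < alpha < 1/L works by Theorem 3.2
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        # x^{k+1} = P_D(x^k - alpha f'(x^k))
        x_new = np.clip(x - alpha * (2 * C @ x + d), a, b)
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x
```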
Theorem 3.2. Assume that a sequence $\{x^k\} \subset D$ is constructed by the
gradient projection algorithm with $\alpha_k = \alpha$, $k = 0, 1, 2, \dots$, where
$0 < \alpha < \frac{1}{L}$. Then $\{x^k\}$ is a minimizing sequence, i.e.,
$\lim_{k \to \infty} f(x^k) = \min_{x \in D} f(x)$.

Proof. By the algorithm, we have

$$x_\alpha^k = P_D(x^k - \alpha f'(x^k)), \quad \alpha > 0, \qquad x^{k+1} = x_\alpha^k, \quad 0 < \alpha < \frac{1}{L}. \qquad (3.5)$$

By (3.4) and (3.5), we can write

$$\langle x_\alpha^k - x^k + \alpha f'(x^k), x - x_\alpha^k \rangle \ge 0, \quad \forall x \in D, \ \alpha > 0.$$

If we set $x = x^k$ in the above, then it becomes

$$\langle x_\alpha^k - x^k, x^k - x_\alpha^k \rangle + \alpha \langle f'(x^k), x^k - x_\alpha^k \rangle \ge 0,$$

i.e.,

$$\alpha \langle f'(x^k), x^k - x_\alpha^k \rangle \ge \| x_\alpha^k - x^k \|^2. \qquad (3.6)$$

By virtue of the Lipschitz condition on the gradient and the boundedness of
the remainder term,

$$f(x_\alpha^k) - f(x^k) \le \langle f'(x^k), x_\alpha^k - x^k \rangle + L \| x_\alpha^k - x^k \|^2.$$

Taking into account inequality (3.6), we have

$$f(x_\alpha^k) - f(x^k) \le -\frac{\| x_\alpha^k - x^k \|^2}{\alpha} + L \| x_\alpha^k - x^k \|^2 = \left( -\frac{1}{\alpha} + L \right) \| x_\alpha^k - x^k \|^2. \qquad (3.7)$$

Set $\alpha_k = \alpha$, $0 < \alpha < \frac{1}{L}$, in (3.7). In this case

$$c = -\frac{1}{\alpha} + L < 0, \qquad f(x_\alpha^k) - f(x^k) \le c \| x_\alpha^k - x^k \|^2 < 0.$$
It is clear that the sequence $\{ f(x^k) \}$ is decreasing. On the other hand,
$f$ is continuous on the compact set $D$ and hence bounded from below, so the
limit $\lim_{k \to \infty} f(x^k)$ exists. Consequently,

$$\lim_{k \to \infty} \left( f(x^{k+1}) - f(x^k) \right) = 0. \qquad (3.8)$$

For a given $x^0$, the set $M(x^0) = \{ x \in D \mid f(x) \le f(x^0) \}$ is bounded. Since

$$f(x^0) > f(x^1) > \dots > f(x^k) > f(x^{k+1}) > \dots,$$

we have

$$\{x^k\} \subset M(x^0), \quad k = 0, 1, 2, \dots$$

By the Weierstrass theorem, the set $S = \{ x \in D \mid f(x) = f(x^*) \}$ is
nonempty, where $f(x^*) = \min_{x \in D} f(x)$. Since $f(x)$ is convex, for any
$x^* \in S$ and $x^k \in D$ we have

$$0 \le f(x^k) - f(x^*) \le \langle f'(x^k), x^k - x^* \rangle = \langle f'(x^k), x^k - x_\alpha^k \rangle + \langle f'(x^k), x_\alpha^k - x^* \rangle, \quad \alpha > 0. \qquad (3.9)$$

Setting $x = x^*$ in the projection inequality above, we obtain

$$\alpha \langle f'(x^k), x_\alpha^k - x^* \rangle \le \langle x_\alpha^k - x^k, x^* - x_\alpha^k \rangle, \quad \alpha > 0.$$

Setting $\alpha_k = \alpha$, $0 < \alpha < \frac{1}{L}$, so that $x^{k+1} = x_\alpha^k$, we obtain
from (3.9)

$$0 \le f(x^k) - f(x^*) \le \langle f'(x^k), x^k - x^{k+1} \rangle + \frac{1}{\alpha} \langle x^{k+1} - x^k, x^* - x^{k+1} \rangle \le \left( \| f'(x^k) \| + \frac{1}{\alpha} \| x^* - x^{k+1} \| \right) \| x^{k+1} - x^k \|.$$

Since $M(x^0)$ is bounded and $x^*, x^{k+1} \in M(x^0)$,

$$\| x^* - x^{k+1} \| \le \sup_{x, y \in M(x^0)} \| x - y \| = K < +\infty.$$

Moreover,

$$\| f'(x^k) \| = \| f'(x^k) - f'(x^0) + f'(x^0) \| \le \| f'(x^k) - f'(x^0) \| + \| f'(x^0) \| \le L \| x^k - x^0 \| + \| f'(x^0) \| \le L \cdot K + N,$$

where $N = \| f'(x^0) \|$ and $x^k \in M(x^0)$. Then

$$0 \le f(x^k) - f(x^*) \le \left( \frac{K}{\alpha} + L \cdot K + N \right) \| x^{k+1} - x^k \|.$$

Taking into account (3.8) and the inequality $f(x^{k+1}) - f(x^k) \le c \| x^{k+1} - x^k \|^2$
with $c < 0$, we get

$$\lim_{k \to \infty} \| x^{k+1} - x^k \| = 0,$$

and therefore

$$\lim_{k \to \infty} f(x^k) = f(x^*),$$

which proves the assertion.

Numerical Experiments
The proposed algorithms for quadratic maximization and minimization have
been tested on the following types of problems. The algorithms were coded in
Matlab. The dimensions of the problems ranged from 200 up to 5000. Computing
times and global values are given in the table below.

Problem 1.

$$\max_{x \in D} \left( \langle Ax, x \rangle + \langle B, x \rangle \right)$$

subject to

$$D = \{ -(n - i + 1) \le x_i \le n + 0.5i, \ i = 1, 2, \dots, n \},$$

where

$$A = \begin{pmatrix} n & n-1 & \cdots & 1 \\ n-1 & n & \cdots & 2 \\ \vdots & \vdots & \ddots & \vdots \\ 1 & 2 & \cdots & n \end{pmatrix}, \qquad B = (1, 1, \dots, 1).$$
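For reference, the data of Problem 1 can be generated as follows (a sketch assuming NumPy; note that the matrix above has entries $A_{ij} = n - |i - j|$):

```python
import numpy as np

def problem1_data(n):
    """Matrix A, vector B, and box bounds of Problem 1 (1-based index i)."""
    i = np.arange(1, n + 1)
    A = n - np.abs(i[:, None] - i[None, :])   # A[i][j] = n - |i - j|
    B = np.ones(n)
    a = -(n - i + 1.0)                        # lower bounds -(n - i + 1)
    b = n + 0.5 * i                           # upper bounds n + 0.5 i
    return A, B, a, b
```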

Problem 2.

$$\max_{x \in D} \sum_{i=1}^n (n - 1 - 0.1 \cdot i) x_i^2$$

subject to

$$D = \{ x \in R^n \mid -1 - i \le x_i \le 1 + 5i, \ i = 1, 2, \dots, n \}.$$

Problem 3.

$$\min_{x \in D} \left( \tfrac{1}{2} \langle Ax, x \rangle - \langle B, x \rangle \right)$$

subject to

$$D = \{ 10 \le x_i \le 30, \ i = 1, 2, \dots, n \},$$

where

$$A = \begin{pmatrix} n & n-1 & \cdots & 1 \\ n-1 & n & \cdots & 2 \\ \vdots & \vdots & \ddots & \vdots \\ 1 & 2 & \cdots & n \end{pmatrix}, \qquad B = (1, 1, \dots, 1).$$

Problem 4.

$$\max_{x \in D} \sum_{i=1}^n (x_i - 1)^2$$

subject to

$$D = \{ x \in R^n \mid i + 1 \le x_i \le i + 10, \ i = 1, 2, \dots, n \}.$$

Table

Problem   n      Initial value              Global value               Computing time (sec)
1         200    2.66690000000000e+006      335.337841669956e+009      4.166710
1         500    41.6672500000000e+006      32.7084036979206e+012      29.082187
1         1000   333.334500000000e+006      1.04625056270796e+015      202.615012
1         2000   2.66666900000000e+009      33.4733378341612e+015      1579.706486
1         5000   41.6666725000000e+009      3.26848965365074e+018      22248.485127
2         200    37.7900000000000e+003      12.3936575899980e+009      4.766626
2         500    236.975000000000e+003      482.716617724994e+009      30.115697
2         1000   948.950000000000e+003      7.71590814447147e+012      196.977634
2         2000   3.79790000000000e+006      123.393965944640e+012      1786.474928
2         5000   23.7447500000000e+006      3.5660855482763485e+18     27342.423215
3         200    600.004500000000e+006      266.668000000000e+006      9.625021
3         500    9.37501125000000e+009      4.16667000000000e+009      69.149144
3         1000   75.0000225000000e+009      33.3333400000000e+009      408.735783
3         2000   600.000045000000e+009      266.666680000000e+009      3035.512940
3         5000   937.500011250000e+012      416.6666800000000e+12      21432.421674
4         200    500                        200.00                     9.227003
4         500    12500                      500.00                     56.679240
4         1000   25000                      1000.00                    365.62471
4         2000   50000                      2000.00                    2904.275652
4         5000   125000                     5000.00                    22134.532145

Conclusion
To provide a unified view, we considered the quadratic programming
problem consisting of convex quadratic maximization and convex quadratic
minimization. Based on the global optimality conditions of Strekalovsky [11,
12] and classical local optimality conditions [1, 7], we proposed algorithms
for solving the above problems. Under appropriate conditions we have shown
that the proposed algorithms converge to a global solution in a finite number
of steps. Algorithm MAX generates a sequence of local maximizers and uses
linear programming at each iteration, which makes the algorithm easy to
implement numerically.
References
1. Bertsekas D.P. Nonlinear Programming. 2nd ed. Belmont, MA: Athena Scientific, 1999.
2. Bomze I., Danninger G. A Finite Algorithm for Solving General Quadratic Problems // Journal of Global Optimization. 1994. Vol. 4. P. 1-16.
3. Enkhbat R. An Algorithm for Maximizing a Convex Function over a Simple Set // Journal of Global Optimization. 1996. Vol. 8. P. 379-391.
4. Horst R. On the Global Minimization of a Concave Function: Introduction and Survey // Operations Research Spectrum. 1984. Vol. 6. P. 195-200.
5. Horst R. A General Class of Branch and Bound Methods in Global Optimization with Some New Approaches for Concave Minimization // Journal of Optimization Theory and Applications. 1986. Vol. 51. P. 271-291.
6. Horst R., Tuy H. Global Optimization. New York; London; Tokyo: Springer-Verlag, 1990.
7. Horst R., Pardalos P.M., Thoai N.V. Introduction to Global Optimization. Dordrecht; Boston: Kluwer Academic, 2000.
8. Karmanov V.G. Mathematical Programming. Moscow: Mir Publishers, 1989.
9. Pshenichnyi B.N., Danilin Yu.M. Numerical Methods in Extremal Problems. Moscow: Nauka, 1975.
10. Rockafellar R.T. Convex Analysis. Princeton: Princeton University Press, 1970.
11. Strekalovsky A.S. On the Global Extremum Problem // Soviet Math. Doklady. 1987. Vol. 292, No. 5. P. 1062-1066.
12. Strekalovsky A.S. Global Optimality Conditions for Nonconvex Optimization // Journal of Global Optimization. 1998. Vol. 12. P. 415-434.
13. Vasiliev O.V. Optimization Methods. Atlanta: World Federation Publishers, 1996.

T. Bayartugs, lecturer, Mongolian University of Science and Technology, Ulaanbaatar,
Mongolia, e-mail: bayart1969@yahoo.com
A. Enkhbolor, Inje University, Korea.
R. Enkhbat, professor, National University of Mongolia, Ulaanbaatar, Mongolia,
210646, e-mail: renhkbat46@yahoo.com
