Chapter 8
FRACTIONAL PROGRAMMING
Johannes B.G. Frenk
Econometric Institute,
Erasmus University,
3000 DR Rotterdam, The Netherlands
frenk@few.eur.nl
Siegfried Schaible
A.G. Anderson Graduate School of Management
University of California,
Riverside, CA 92521-0203, USA
siegfried.schaible@ucr.edu
Abstract
Single-ratio and multi-ratio fractional programs in applications are often generalized convex programs. We begin with a survey of applications of single-ratio fractional programs, min-max fractional programs and sum-of-ratios fractional programs. Given the limited advances for the latter class of problems, we focus on an analysis of min-max fractional programs. A parametric approach is employed to develop both theoretical and algorithmic results.
Keywords: Single-ratio fractional programs, min-max fractional programs, sum-of-ratios fractional programs, parametric approach.
1. Introduction.
In various applications of nonlinear programming a ratio of two functions is to be maximized or minimized. In other applications the objective function involves more than one ratio of functions. Ratio optimization problems are commonly called fractional programs. One of
the earliest fractional programs (though not called so) is an equilibrium
model for an expanding economy introduced by von Neumann (cf. [74])
336
GENERALIZED CONVEXITY AND MONOTONICITY
in 1937. The model determines the growth rate of an economy as the
maximum of the smallest of several output-input ratios. At a time when
linear programming hardly existed, von Neumann already proposed a duality theory for this nonconvex program. However, apart from a few isolated papers like von Neumann's, a systematic study of fractional programming began much later.
In 1962 Charnes and Cooper (cf. [18]) published their classical paper in which they show that a linear fractional program with one ratio
can be reduced to a linear program using a nonlinear variable transformation. Separately, Martos [49] in 1964 (from his Ph.D. dissertation
in Hungarian in 1960) showed that linear fractional programs can be
solved with an adjacent vertex-following procedure, the same way as linear programs are solved with the simplex method. He recognized that
generalized convexity properties (pseudolinearity) of linear ratios enable such a technique, which is used so successfully in linear programming.
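The Charnes-Cooper reduction can be sketched as follows; this is the standard textbook form of the transformation, with generic symbols rather than the original notation of [18]:

```latex
% Linear fractional program over a polyhedron, with d^T x + \beta > 0
% on the feasible set:
%   \inf \{ (c^T x + \alpha)/(d^T x + \beta) : A x \le b, \; x \ge 0 \}.
% The substitution y = t x, t = 1/(d^T x + \beta) > 0 gives the linear program
\inf_{y,\,t} \; c^{T} y + \alpha t
\quad \text{s.t.} \quad A y - b t \le 0, \qquad
d^{T} y + \beta t = 1, \qquad y \ge 0, \; t \ge 0,
% and an optimal x is recovered as x = y/t whenever t > 0 at the optimum.
```

The transformation is exact because the objective and constraints are positively homogeneous in $(y,t)$, so the normalization $d^{T}y+\beta t=1$ costs no generality.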
The study of fractional programs with only one ratio has largely dominated the literature in this field until about 1980. Many of the results known then are presented in the first monograph on fractional programming (cf. [65]) which the second author published in 1978. Since
then two other monographs solely devoted to fractional programming
appeared, one in 1988 authored by Craven (cf. [21]) and one in 1997
by Stancu-Minasian (cf. [71]). An overview of solution methods for
single-ratio and multi-ratio fractional location problems appeared in the
monograph by Barros (cf. [5]).
Fractional programs with one or more ratios have often been studied in
the broader context of generalized convex programming (cf. [4]). Ratios
of convex and concave functions as well as composites of such ratios are
not convex in general, even in the case of linear ratios. But often they
are generalized convex in some sense. From the beginning, fractional
programming has benefited from advances in generalized convexity, and
vice versa (cf. [50]).
Fractional programming also overlaps with global optimization. Several types of ratio optimization problems have local, nonglobal optima.
An extensive survey of fractional programming with one or more ratios
appeared in the Handbook of Global Optimization [61]. The survey also
contains the largest bibliography on fractional programming with one
or multiple ratios so far, with almost twelve hundred entries. For a separate, rich bibliography, [71] may be consulted.
Very recently two surveys have appeared updating some of the developments reviewed in [61]. The single-ratio and min-max case is treated
in [59] and the sum-of-ratios case in [60].
2. Classification of Fractional Programs.
To start with single-ratio fractional programs, let $B \subseteq \mathbb{R}^n$ be a nonempty closed set and $f, g: \mathbb{R}^n \to [-\infty, \infty]$ be extended real-valued functions which are finite-valued on $B$. Assuming $g(x) > 0$ for every $x \in B$, consider the nonlinear program

$$(P_1) \qquad \inf\Big\{\frac{f(x)}{g(x)} : x \in B\Big\}.$$

The problem $(P_1)$ is called a single-ratio fractional program. In most applications the nonempty feasible region $B$ has more structure and is given by

$$B = \{x \in C : h_j(x) \le 0,\ 1 \le j \le m\}$$

with $C \subseteq \mathbb{R}^n$ and $h_j$, $1 \le j \le m$, some set of real-valued continuous functions. So far, the functions in the numerator and denominator were not specified. If $f$ and $g$ are affine functions (linear plus a constant) and $C$ denotes the nonnegative orthant of $\mathbb{R}^n$, then the optimization problem $(P_1)$ is called a single-ratio linear fractional program. Moreover, we call $(P_1)$ a single-ratio quadratic fractional program if the functions $f$ and $g$ are quadratic and the functions $h_j$ are affine. The minimization problem $(P_1)$ is called a single-ratio convex fractional program if $C$ is a convex set, $f$ and $h_j$, $1 \le j \le m$, are convex functions and $g$ is a positive concave function on $B$. In addition it is assumed that $f$ is nonnegative on $B$ if $g$ is not affine. In case of a maximization problem the single-ratio fractional program is called a single-ratio concave fractional program if $f$ is concave and $g$ is convex. Under these restrictive convexity/concavity assumptions the minimization problem $(P_1)$ is in general still a nonconvex problem.
In some applications more than one ratio appears in the objective function. One form of such an optimization problem is the nonlinear programming problem

$$(P_2) \qquad \inf\Big\{\max_{1 \le i \le p} \frac{f_i(x)}{g_i(x)} : x \in B\Big\}$$

with extended real-valued functions $f_i, g_i$, $1 \le i \le p$, which are finite-valued on $B$ with $g_i(x) > 0$ for every $x \in B$ and $1 \le i \le p$. The problem $(P_2)$ is often called a generalized fractional program. As for single-ratio fractional programs we can specify the functions and make a distinction between multi-ratio linear fractional programs and multi-ratio convex fractional programs. If one of the functions $g_i$ is not affine, we need to assume that all functions $f_i$ are nonnegative. Clearly both problems $(P_1)$ and $(P_2)$ are special cases of the following problem.
Let $A \subseteq \mathbb{R}^m$ and $B \subseteq \mathbb{R}^n$ be nonempty closed sets and $f: A \times B \to \mathbb{R}$ be a finite-valued function on $A \times B$. In case $g: A \times B \to \mathbb{R}$ is a finite-valued positive function on $A \times B$, consider the min-max nonlinear programming problem

$$(P) \qquad \inf_{x \in B} \sup_{a \in A} \frac{f(a,x)}{g(a,x)}.$$

Problem (P) is called a (primal) min-max fractional program. In order to unify the theory for single-ratio and multi-ratio fractional programs, we consider in Section 6 the so-called parametric approach applied to problem (P) and derive from this approach duality results and algorithmic procedures for problem (P). This yields immediately duality results and algorithmic procedures for problems $(P_1)$ and $(P_2)$.

Another multi-ratio fractional program we encounter in applications is the so-called sum-of-ratios fractional program given by

$$(P_3) \qquad \inf\Big\{\sum_{i=1}^{p} \frac{f_i(x)}{g_i(x)} : x \in B\Big\}$$

with $g_i(x) > 0$ for every $x \in B$ and $1 \le i \le p$. It is a more challenging problem than $(P_2)$, as recent studies have shown. We also encounter in applications the so-called multi-objective fractional program

$$(P_4) \qquad \min\Big\{\Big(\frac{f_1(x)}{g_1(x)}, \dots, \frac{f_p(x)}{g_p(x)}\Big) : x \in B\Big\},$$

which is related to $(P_2)$ and $(P_3)$.

In Sections 3 and 4 we will review applications of fractional programs $(P_1)$ and $(P_2)$, respectively. Section 5 focuses on applications of the fractional program $(P_3)$. In addition we review here some of the solution procedures for this rather challenging problem. Finally in Section 6 we return to problems $(P_1)$ and $(P_2)$. In a joint treatment of both involving the more general problem (P) a parametric approach is used for the analysis and development of solution procedures of (P).
3. Applications of Single-Ratio Fractional Programs
Single-ratio fractional programs $(P_1)$ arise in management decision making as well as outside of it. They also occur sometimes indirectly in modelling where initially no ratio is involved. The purpose of the following overview is to demonstrate the diversity of problems which can be cast in the form of a single-ratio fractional program. A more comprehensive coverage which also includes additional references for the models below is contained in [61]. For other surveys of applications of single-ratio fractional programs see [21, 59, 62, 64, 65, 71].
Economic Applications.
The efficiency of a system is sometimes characterized by a ratio of technical and/or economic terms. Maximizing the efficiency then leads
to a fractional program. Some applications are given below.
Maximization of Productivity.
Gilmore and Gomory [37] discuss a stock cutting problem in the
paper industry for which, under the given circumstances, it is more appropriate to minimize the ratio of wasted to used raw material rather than just minimizing the amount of wasted
material. This stock cutting problem is formulated as a linear
fractional program. In a case study, Hoskins and Blom [43] use
fractional programming to optimize the allocation of warehouse
personnel. The objective is to minimize the ratio of labor cost to
the volume entering and leaving the warehouse.
Maximization of Return on Investment.
In some resource allocation problems the ratio profit/capital or
profit/revenue is to be maximized. A related objective is return
per cost maximization. Resource allocation problems with this
objective are discussed in more detail by Mjelde in [53]. In these
models the term cost may either be related to actual expenditure
or may stand, for example, for the amount of pollution or the probability of disaster in nuclear energy production. Depending on the
nature of the functions describing return, profit, cost or capital,
different types of fractional programs are encountered. For example, if the price per unit depends linearly on the output and cost
and capital are affine functions, then maximization of the return
on investment gives rise to a concave quadratic fractional program
(assuming linear constraints). In location analysis maximizing the
profitability index (rate of return) is in certain situations preferred
to maximizing the net present value, according to [5] and [6] and
the cited references.
Maximization of Return/Risk.
Some portfolio selection problems give rise to a concave nonquadratic fractional program of the form (8.3) below which expresses the
maximization of the ratio of expected return and risk. For related
concave and nonconcave fractional programs arising in financial
planning see [61]. Markov decision processes may also lead to the
maximization of the ratio of mean and standard deviation. A very
recent application of fractional programming in portfolio theory is
given in [48]. The authors argue that the ratio of two variances
gives sophisticated forecasting models with significant predictive
power.
Minimization of Cost/Time.
In several routing problems a cycle in a network is to be determined
which minimizes the cost-to-time ratio or maximizes the profit-to-time ratio. Some of these models are combinatorial fractional
programs (cf. [56]). Also the average cost objective used within
the theory of stochastic regenerative processes (cf. [2]) leads to the
minimization of cost per unit time. A particular example occurring
within this framework is the determination of the optimal ordering policy of the classical periodic and continuous review single
item inventory control models (cf. [12, 13, 34]). Other examples
of this framework are maintenance and replacement models. Here
the ratio of the expected cost of inspection, maintenance and replacement to the expected time between two inspections is to be minimized (cf. [7, 35]).
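The cost-per-unit-time structure is easy to see in a generic EOQ-style sketch (made-up numbers, not a model taken from [12, 13, 34]): with a fixed cost $K$ per cycle, holding cost rate $h$ and cycle length $T$, the average cost $(K + hT^2/2)/T$ is exactly a ratio of expected cycle cost to cycle length, minimized at $T^* = \sqrt{2K/h}$.

```python
import math

K, h = 100.0, 2.0    # hypothetical fixed cost per cycle and holding cost rate

def avg_cost(T):
    # Ratio of expected cost per cycle (K + h*T^2/2) to the cycle length T.
    return (K + 0.5 * h * T * T) / T

# A grid search over cycle lengths confirms the closed-form minimizer sqrt(2K/h).
grid = [T / 100.0 for T in range(1, 5001)]
T_best = min(grid, key=avg_cost)
print(T_best, avg_cost(T_best), math.sqrt(2 * K / h))
```

Minimizing such a ratio of two functions of $T$ is precisely a single-ratio fractional program in one variable.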
Maximization of Output/Input.
Charnes and Cooper use a linear fractional program as a model to
evaluate the efficiency of decision making units (Data Envelopment
Analysis (DEA)). Given a collection of decision making units, the
efficiency of each unit is obtained from the maximization of a ratio
of weighted outputs and inputs subject to the condition that similar ratios for every decision making unit are less than or equal to
unity. The variable weights are then the efficiency of each member
relative to that of the others. For an extensive recent treatment
of DEA see [17]. In the management literature there has been
an increasing interest in optimizing relative terms such as relative
profit. No longer are these terms merely used to monitor past
economic behavior. Instead the optimization of rates is receiving
more attention in decision making processes for future projects (cf.
[5, 42]). We mention here a case study in which the effectiveness of medical institutions in the area of trauma and burn management was analyzed with the help of linear fractional programming (cf. [24]).
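In symbols, the ratio model just described takes the following well-known CCR-type form; the notation is generic, assumed here rather than quoted from [17]:

```latex
% Efficiency of decision making unit k, with outputs y_{rj}, inputs x_{ij}
% and variable weights u \ge 0, v \ge 0:
\max_{u,v}\; \frac{\sum_{r} u_r\, y_{rk}}{\sum_{i} v_i\, x_{ik}}
\quad \text{s.t.} \quad
\frac{\sum_{r} u_r\, y_{rj}}{\sum_{i} v_i\, x_{ij}} \le 1
\ \text{ for every unit } j, \qquad u \ge 0,\ v \ge 0.
% A Charnes-Cooper transformation turns this linear fractional program
% into an ordinary linear program.
```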
Non-Economic Applications.
In information theory the capacity of a communication channel can be defined as the maximal transmission rate over all input probability distributions. This is a
concave nonquadratic fractional program. Also the eigenvalue problem
in numerical mathematics can be reduced to the maximization of the
Rayleigh quotient, and hence gives rise to a quadratic fractional program
which is generally not concave. An example of a fractional program in
physics is given by Falk (cf. [25]). He maximizes the signal-to-noise ratio
of an optical filter which is a concave quadratic fractional program.
Indirect Applications.
There are a number of management science problems that indirectly
give rise to a concave fractional program. We begin with a recent study
which shows that the sensitivity analysis of general decision systems
leads to linear fractional programs (cf. [52]). The developed software
was used in the appraisal of Hungarian hotels. A concave quadratic fractional program arises in location theory as the dual of a Euclidean multifacility min-max problem. In large scale mathematical programming,
decomposition methods reduce the given linear program to a sequence of
smaller problems. In some of these methods the subproblems are linear
fractional programs. The ratio originates in the minimum-ratio rule of
the simplex method.
Fractional programs are also met indirectly in stochastic programming,
as first shown by Charnes and Cooper [19] and by Bereanu [14]. This
will be illustrated by two models below (cf. [65, 71]).
Consider the following stochastic mathematical program
$$\sup\{c^{T}x : x \in B\},$$
where the coefficient vector $c$ is a random vector with a multivariate normal distribution and $B$ is a (deterministic) convex feasible region. It is assumed that the decision maker replaces the above optimization problem by the maximum probability model
$$\sup_{x \in B} P(c^{T}x \ge k), \tag{8.2}$$
i.e., he wants to maximize the probability that the random variable $c^{T}x$ attains at least a value equal to a prescribed level $k$. Then the optimization problem listed in (8.2) reduces to
$$\sup_{x \in B} \frac{\mu^{T}x - k}{\sqrt{x^{T}Vx}}, \tag{8.3}$$
where $\mu$ is the mean vector of the random vector $c$ and $V$ its variance-covariance matrix. Hence the maximum probability model (8.2) of the concave program gives rise to a fractional program. If in problem (8.2) the
linear objective function is replaced by other types of nonlinear functions,
then the maximum probability model leads to various other fractional
programs as demonstrated in [65] and [71].
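For the normal case just described, $P(c^{T}x \ge k) = \Phi\big((\mu^{T}x - k)/\sqrt{x^{T}Vx}\big)$ with $\Phi$ the standard normal distribution function, so ranking feasible points by the ratio ranks them by probability as well. A small numerical check, with entirely made-up data:

```python
import math

def normal_cdf(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def ratio(x, mu, V, k):
    # Objective of the deterministic equivalent: (mu^T x - k)/sqrt(x^T V x).
    mean = sum(m * xi for m, xi in zip(mu, x))
    var = sum(x[i] * V[i][j] * x[j]
              for i in range(len(x)) for j in range(len(x)))
    return (mean - k) / math.sqrt(var)

def success_probability(x, mu, V, k):
    # P(c^T x >= k) for c ~ N(mu, V).
    return normal_cdf(ratio(x, mu, V, k))

# Hypothetical data: two feasible points and a prescribed level k.
mu = [1.0, 2.0]
V = [[1.0, 0.2], [0.2, 2.0]]   # positive definite covariance matrix
k = 1.0
x1, x2 = [1.0, 0.0], [0.5, 0.5]

# The point with the larger ratio also has the larger success probability.
better = x1 if ratio(x1, mu, V, k) > ratio(x2, mu, V, k) else x2
print(better, success_probability(better, mu, V, k))
```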
Consider a second stochastic program
$$\sup\{f(x) - t\, g(x) : x \in B\}, \tag{8.4}$$
where $f$ and $g$ are concave functions on the convex feasible region $B$, with $g$ positive, and $t$ is a random variable with a continuous cumulative distribution function $G$. Then the maximum probability model for (8.4) gives rise to the fractional program
$$\sup_{x \in B} P\big(f(x) - t\, g(x) \ge 0\big) = \sup_{x \in B} G\Big(\frac{f(x)}{g(x)}\Big), \tag{8.5}$$
which, since $G$ is nondecreasing, amounts to maximizing the ratio $f(x)/g(x)$ over $B$. For a linear program (8.4) the deterministic equivalent (8.5) becomes a linear fractional program. If $f$ is concave and $g$ linear, then (8.5) is still a concave fractional program. However, if $g$ is also a (nonlinear) concave function, then (8.5) is no longer a concave fractional program. Obviously a quadratic program (8.4) reduces to a quadratic fractional program. For more details on (8.4) and (8.5) see [65, 71].
Stochastic programs (8.2) and (8.4) are met in a wide variety of planning problems. Whenever the maximum probability model is used as a
deterministic equivalent, such decision problems lead to a fractional program of one type or another. Hence fractional programs are encountered
indirectly in many different applications of mathematical programming,
although initially the objective function is not a ratio.
4. Applications of Min-Max Fractional Programs
In mathematical economics the multi-ratio fractional program $(P_2)$ arises when the growth rate of an expanding economy is defined as follows (cf. [74]):
$$\sup_{x \in B} \min_{1 \le i \le p} \frac{f_i(x)}{g_i(x)},$$
where $x$ denotes a feasible production plan of the economy.
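For a toy illustration (entirely made-up numbers, not taken from [74]), the growth factor of a small linear economy with output matrix A and input matrix Bm can be approximated by scanning normalized production plans and taking the largest of the smallest output-input ratios:

```python
# Toy von Neumann growth model: outputs A x and inputs Bm x for a plan x.
# Growth factor = max over plans x >= 0 of min_i (A x)_i / (Bm x)_i.
A  = [[3.0, 1.0],   # output of good i per unit intensity of process j
      [1.0, 4.0]]
Bm = [[1.0, 1.0],   # input of good i per unit intensity of process j
      [1.0, 1.0]]

def matvec(M, x):
    return [sum(M[i][j] * x[j] for j in range(len(x))) for i in range(len(M))]

def smallest_ratio(x):
    out, inp = matvec(A, x), matvec(Bm, x)
    return min(o / i for o, i in zip(out, inp))

# Crude grid search over normalized plans x = (t, 1 - t), t in [0, 1].
best = max((smallest_ratio([t / 100.0, 1.0 - t / 100.0]), t / 100.0)
           for t in range(101))
growth_factor, t_star = best
print(growth_factor, t_star)
```

For these numbers the two output-input ratios are $1+2t$ and $4-3t$, so the smallest ratio is maximal where they cross, at $t = 0.6$ with growth factor $2.2$.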
In management science simultaneous maximization of rates such as those discussed in the previous section can also lead to a multi-ratio fractional program. This is the case if either in a worst-case approach the model
$$\sup_{x \in B} \min_{1 \le i \le p} \frac{f_i(x)}{g_i(x)}$$
is used or, with the help of prescribed ratio goals $d_i$, the model
$$\inf_{x \in B} \max_{1 \le i \le p} \Big|\frac{f_i(x)}{g_i(x)} - d_i\Big|$$
is employed. Examples of the second approach are found in financial planning with different fractional goals or in the allocation of funds under equity considerations. Financial planning with fractional goals is discussed in [38]. Furthermore, multi-facility location-queueing problems giving rise to $(P_2)$ are introduced in [5].
A third area of application of min-max fractional programs is numerical mathematics (cf. [41]). Given the values $w(t_1), \dots, w(t_q)$ of a function $w$ in finitely many points $t_1, \dots, t_q$ of an interval, an approximating ratio of two polynomials $n(c, t)$ and $d(e, t)$ with coefficient vectors $c$ and $e$ is sought. If the best approximation is defined in the sense of the Chebyshev norm, then the following problem is to be solved:
$$\inf_{c,\,e} \max_{1 \le i \le q} \Big| w(t_i) - \frac{n(c, t_i)}{d(e, t_i)} \Big|$$
with variables $c$ and $e$.
At the end of this section on applications of $(P_2)$ we point out that in case of infinitely many ratios $(P_2)$ is related to a fractional semi-infinite program (cf. [41]). Several applications in engineering give rise to such a problem when a lower bound for the smallest eigenvalue of an elliptic differential operator is to be determined (cf. [40]).
For further applications of $(P_2)$ we refer to the very recent survey [59].
5. Sum-of-Ratios Fractional Programs
Problem $(P_3)$ arises naturally in decision making when several rates are to be optimized simultaneously and a compromise is sought which optimizes a weighted sum of these rates. In light of the applications of single-ratio fractional programming, numerators and denominators may represent output, input, profit, cost, capital, risk or time, for example. A multitude of applications of the sum-of-ratios problem can be envisioned in this way. Included is the case where some of the ratios are not proper quotients. This describes situations where a compromise is sought between absolute and relative terms like profit and return on investment (profit/capital) or return and return/risk, for example.
Almogy and Levin (cf. [1]) analyze a multistage stochastic shipping
problem. A deterministic equivalent of this stochastic problem is formulated which turns out to be a sum-of-ratios problem.
Rao (cf. [57]) discusses various models in cluster analysis. The problem of optimal partitioning of a given set of entities into a number of
mutually exclusive and exhaustive groups (clusters) gives rise to various
mathematical programming problems depending on which optimality
criterion is used. If the objective is to minimize the sum of the squared
distances within groups, then a minimum of a sum of ratios is to be
determined.
The minimization of the mean response time in queueing-location problems gives rise to $(P_3)$ as well, as shown by Drezner et al. (cf. [23]); see also [75].
Furthermore we mention an inventory model analyzed in [63] which is
designed to determine simultaneously optimal lot sizes and an optimal
storage allocation in a warehouse. The total cost to be minimized is the
sum of fixed cost per unit, storage cost per unit and material handling
cost per unit.
In [46] Konno and Inori formulate a bond portfolio optimization problem as a sum-of-ratios problem.
More recently other applications of the sum-of-ratios problem have
been identified. Mathis and Mathis [51] formulate a hospital fee optimization problem in this way. The model is used by hospital administrators in the State of Texas to decide on relative increases of charges
for different medical procedures in various departments.
According to [20] a number of geometric optimization problems give
rise to the sum-of-ratios problem. These often occur in layered manufacturing, for instance in material layout and cloth manufacturing. Quite
in contrast to other applications of the sum-of-ratios problem mentioned
before, the number of variables is very small (one, two or three), but the
number of ratios is large; often there are hundreds or even thousands of
ratios involved.
Our current understanding of the structural properties of the sum-of-ratios problem is rather limited. In [36] Freund and Jarre showed that
this problem is essentially NP-hard, even in the case of one concave ratio
and a concave function. Hence $(P_3)$ is a global optimization problem, in contrast to $(P_1)$ and $(P_2)$.
Given the small theoretical basis, it is not surprising that algorithmic
advances have been rather limited too. However in recent years some
progress has been made. Some of the proposed algorithms have been
computationally tested. Typically execution times grow very rapidly in
the number of ratios. At this time problems up to about ten ratios can
be handled. We refer to the algorithms by Konno and Fukaishi (cf. [45])
(see also [44]) and by Kuno (cf. [47]). The former is superior to several
earlier methods (cf. [45]) while the latter is seemingly faster than the
former. Clearly a more thorough testing of various proposed algorithms
is needed before further conclusions can be drawn. Also some of the
applications call for methods which can handle a large number of ratios;
e.g., fifty (cf. [1]). Currently such methods are not available.
For a special class of sum-of-ratios problems with up to about one
thousand ratios, but only very few variables an algorithm is given in
[20]. This method by Chen et al. is superior to the other algorithms on the particular class of problems in manufacturing. These are geometric optimization problems arising in layered manufacturing. In contrast to general-purpose algorithms for $(P_3)$, the method in [20] is rather robust with regard to the number of ratios.
The focus of the remainder of this review of fractional programming will be the min-max fractional program (P). It includes as special cases $(P_1)$ and $(P_2)$. For a very recent survey of applications, theoretical results and solution methods for $(P_1)$ and $(P_2)$ since [61] was published we refer to [59]. A corresponding survey for $(P_3)$ since [61] appeared is given in [60]. For a survey of some recent developments for multi-objective fractional programs $(P_4)$ we refer to [33].
6. Analysis of Min-Max Fractional Programs.
In this section we will analyze min-max fractional programs by means of a parametric approach. Although other approaches are also available, this one makes it possible to derive duality results for the (primal) min-max fractional program (P) and at the same time to construct an algorithm which solves problem (P). As already mentioned in Section 2, let $A \subseteq \mathbb{R}^m$ and $B \subseteq \mathbb{R}^n$ be some nonempty closed sets and $f: A \times B \to \mathbb{R}$ a finite-valued function on $A \times B$. Moreover, consider the function $g: A \times B \to \mathbb{R}$, which is a finite-valued positive function on $A \times B$. For the related functions $\underline{g}: B \to [0, \infty)$ and $\overline{g}: B \to (0, \infty]$ given by
$$\underline{g}(x) := \inf_{a \in A} g(a, x) \quad \text{and} \quad \overline{g}(x) := \sup_{a \in A} g(a, x),$$
we assume, unless stated otherwise, that the following condition holds.

Condition 8.1 For every $x \in B$ we have $0 < \underline{g}(x) \le \overline{g}(x) < \infty$.

For every $x \in B$ we now consider the single-ratio fractional program
$$(P_x) \qquad \varphi(x) := \sup_{a \in A} \frac{f(a, x)}{g(a, x)}.$$
This optimization problem is well-defined, and its objective function value satisfies $\varphi(x) > -\infty$. A more complicated optimization problem is given by the already introduced (primal) min-max fractional program
$$(P) \qquad v(P) := \inf_{x \in B} \sup_{a \in A} \frac{f(a, x)}{g(a, x)}.$$
Clearly $v(P) = \inf_{x \in B} \varphi(x)$. It is not assumed beforehand that the optimization problems (P) and $(P_x)$ have an optimal solution. Therefore we cannot replace sup by max or inf by min. The simpler optimization problem $(P_x)$ is introduced since it will be part of the so-called primal Dinkelbach-type approach discussed in subsection 6.2 to solve the (primal) min-max fractional program (P).
Another optimization problem is to consider for every $a \in A$ the single-ratio fractional program
$$(D_a) \qquad \psi(a) := \inf_{x \in B} \frac{f(a, x)}{g(a, x)}.$$
Also this problem is well-defined, and it satisfies $\psi(a) < \infty$. Clearly for every $a \in A$ and $x \in B$ we obtain $\psi(a) \le \varphi(x)$. Similarly as for the (primal) min-max fractional program we introduce the more complicated optimization problem
$$(D) \qquad v(D) := \sup_{a \in A} \inf_{x \in B} \frac{f(a, x)}{g(a, x)}.$$
This problem is called a (dual) max-min fractional program. Clearly its optimal objective function value satisfies $v(D) \le v(P)$. Like for the (primal) min-max fractional program we introduce the functions $\underline{h}: A \to [0, \infty)$ and $\overline{h}: A \to (0, \infty]$ given by
$$\underline{h}(a) := \inf_{x \in B} g(a, x) \quad \text{and} \quad \overline{h}(a) := \sup_{x \in B} g(a, x).$$
Analyzing the so-called dual Dinkelbach-type approach to solve problem (D), we need the following counterpart of Condition 8.1.

Condition 8.2 For every $a \in A$ we have $0 < \underline{h}(a) \le \overline{h}(a) < \infty$.

The simpler optimization problem $(D_a)$ is introduced since it will be part of the dual Dinkelbach-type approach discussed in subsection 6.4 to solve the (dual) max-min fractional program (D). If we consider a single-ratio fractional program, $A$ consists of one element and the optimization problems (P) and (D) are identical. For a classical multi-ratio fractional program (generalized fractional program) $A$ is a finite set consisting of more than one element; hence optimization problems (P) and (D) are different from each other. If programs (P) and (D) are different and additionally $v(P) = v(D)$, both the primal and dual Dinkelbach-type
approach can be used to solve optimization problem (P). As already observed before, many results (cf. [4, 5, 21]) were derived for generalized fractional programs. In this section we will consider the more general (primal) min-max and (dual) max-min fractional programs and derive structural properties for these problems similar to those obtained before for the more specialized primal and dual generalized fractional programs. We selected these more general optimization problems, not often considered in the fractional programming literature, since one can use similar parametric techniques as for generalized fractional programs and at the same time unify the existing theory for single-ratio and multi-ratio fractional programs. Using the parametric approach one can reduce the max-min and min-max fractional programs to so-called (semi-infinite) max-min and min-max programs. Unfortunately, solving these semi-infinite optimization problems efficiently on a computer is very difficult. For an extensive discussion of some of the procedures used the reader should consult [55]. However, for special cases there is still room for improvement, and this seems to be a new area of research (cf. [15]). In the theoretical analysis of max-min and min-max fractional programs it will turn out that convexity plays a major role, not only in establishing the equality $v(P) = v(D)$ (a so-called strong duality result), but also in the rate of convergence analysis for the primal and dual Dinkelbach-type parametric approach. Due to symmetry arguments, similar types of convergence results hold for these two algorithms.
In case we analyze the primal Dinkelbach-type approach, not all the results are valid under Condition 8.1, and we sometimes need the following stronger condition.

Condition 8.3 The set $A$ is compact, the function $g$ is positive on $A \times B$ and for every $x \in B$ the functions $a \mapsto f(a, x)$ and $a \mapsto g(a, x)$ are finite-valued and continuous on some open set containing $A$.

If Condition 8.3 holds, then it follows from Corollary 1.2 of [3] that $0 < \underline{g}(x) \le \overline{g}(x) < \infty$ for every $x \in B$, and so this condition implies Condition 8.1. Moreover, the single-ratio fractional program $(P_x)$ has an optimal solution and $\varphi(x)$ is finite for every $x \in B$.
In case we also analyze the dual Dinkelbach-type approach, not all results are valid under Condition 8.2, and so we sometimes need the following counterpart of Condition 8.3.

Condition 8.4 The set $B$ is compact, the function $g$ is positive on $A \times B$ and for every $a \in A$ the functions $x \mapsto f(a, x)$ and $x \mapsto g(a, x)$ are finite-valued and continuous on some open set containing $B$.

Again, if Condition 8.4 holds, it follows from Corollary 1.2 of [3] that $0 < \underline{h}(a) \le \overline{h}(a) < \infty$ for every $a \in A$, and so this condition implies Condition 8.2. Moreover, the single-ratio fractional program $(D_a)$ has an optimal solution, and $\psi(a)$ is finite for every $a \in A$.
Before analyzing in the next subsection the parametric approach applied to (P), we will derive an alternative representation of a generalized fractional program. This alternative representation automatically satisfies Condition 8.3. For a generalized fractional program the set $A$ is given by $A = \{1, \dots, p\}$, and the functions $f$ and $g$ are replaced by the functions $f(i, x) := f_i(x)$ and $g(i, x) := g_i(x)$, $1 \le i \le p$. This means
$$v(P) = \inf_{x \in B} \max_{1 \le i \le p} \frac{f_i(x)}{g_i(x)}.$$
In this case the optimization problem $(P_x)$ can be solved trivially by evaluating the finitely many ratios.

To obtain a different representation of a generalized fractional program, we introduce the unit simplex
$$\Delta := \Big\{\lambda \in \mathbb{R}^p_+ : \sum_{i=1}^p \lambda_i = 1\Big\}.$$
If the vector $\lambda$ belongs to $\mathbb{R}^p_{++}$, the strictly positive orthant of $\mathbb{R}^p$, it is well-known (cf. [4]) that the function $\rho_x$ given by
$$\rho_x(\lambda) := \frac{\sum_{i=1}^p \lambda_i f_i(x)}{\sum_{i=1}^p \lambda_i g_i(x)}$$
is quasiconvex on $\mathbb{R}^p_{++}$ for every $x \in B$. Since the functions $g_i$ are positive on $B$, the denominator of $\rho_x$ is positive for every $\lambda \in \Delta$, so $\rho_x$ is well-defined on all of $\Delta$. Because a weighted mediant of finitely many ratios never exceeds the largest of these ratios, and each ratio is obtained at a unit vector of $\Delta$, we have
$$\max_{1 \le i \le p} \frac{f_i(x)}{g_i(x)} = \max_{\lambda \in \Delta} \frac{\sum_{i=1}^p \lambda_i f_i(x)}{\sum_{i=1}^p \lambda_i g_i(x)} \tag{8.10}$$
for every $x \in B$. Applying relation (8.10) yields
$$v(P) = \inf_{x \in B} \max_{\lambda \in \Delta} \frac{\sum_{i=1}^p \lambda_i f_i(x)}{\sum_{i=1}^p \lambda_i g_i(x)}.$$
With this we have found another representation of a generalized fractional program. Using this representation, the corresponding (dual) generalized fractional program is given by
$$v(D) = \sup_{\lambda \in \Delta} \inf_{x \in B} \frac{\sum_{i=1}^p \lambda_i f_i(x)}{\sum_{i=1}^p \lambda_i g_i(x)}.$$
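The weighted-mediant property behind relation (8.10) is easy to check numerically; the sketch below (arbitrary made-up data for the ratios at a fixed point $x$) verifies that every weighted mediant lies below the largest ratio and that the maximum over the simplex is attained at a unit vector:

```python
import random

# Made-up data for p = 4 ratios f_i(x)/g_i(x) at a fixed point x.
f = [3.0, -1.0, 2.5, 0.7]
g = [1.5, 2.0, 0.5, 1.0]       # denominators positive, as required

max_ratio = max(fi / gi for fi, gi in zip(f, g))

def mediant(lam):
    # Weighted mediant (sum lam_i f_i) / (sum lam_i g_i), lam in the simplex.
    num = sum(li * fi for li, fi in zip(lam, f))
    den = sum(li * gi for li, gi in zip(lam, g))
    return num / den

random.seed(0)
for _ in range(1000):
    w = [random.random() for _ in f]
    lam = [wi / sum(w) for wi in w]
    assert mediant(lam) <= max_ratio + 1e-12   # never exceeds the largest ratio

# The maximum over the simplex is attained at a unit vector:
vertex_values = [mediant([1.0 if j == i else 0.0 for j in range(len(f))])
                 for i in range(len(f))]
print(max(vertex_values), max_ratio)
```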
In subsection 6.3 we will give sufficient conditions to guarantee that
the primal and dual optimal objective function values coincide. However
before discussing this, we will first consider in the next subsection the
so-called primal parametric approach for solving the (primal) min-max
fractional program (P).
6.1 The Primal Parametric Approach.
To analyze the properties of the (primal) min-max fractional program (P) and at the same time construct some generic algorithm to solve this problem, we introduce the function $p: B \times \mathbb{R} \to (-\infty, \infty]$ given by
$$p(x, \lambda) := \sup_{a \in A} \{f(a, x) - \lambda g(a, x)\}$$
and consider for every $x \in B$ and $\lambda \in \mathbb{R}$ the optimization problem behind this supremum. By the finiteness of $f$ and $g$ on $A \times B$ it is obvious that $p(x, \lambda) > -\infty$ for every $x \in B$ and $\lambda \in \mathbb{R}$. Since $g$ is positive on $A \times B$, the function $\lambda \mapsto p(x, \lambda)$ is the supremum of affine functions of $\lambda$ with negative slopes, and so it is a decreasing lower semicontinuous convex function. Its so-called effective domain is defined by (cf. [58])
$$\operatorname{dom}(p(x, \cdot)) := \{\lambda \in \mathbb{R} : p(x, \lambda) < \infty\}.$$
A more difficult optimization problem is the parametric min-max optimization problem
$$F(\lambda) := \inf_{x \in B} p(x, \lambda) = \inf_{x \in B} \sup_{a \in A} \{f(a, x) - \lambda g(a, x)\}.$$
Since $F$ is the infimum of the decreasing functions $p(x, \cdot)$, $x \in B$, the function $F: \mathbb{R} \to [-\infty, \infty]$ is decreasing as well, and its so-called effective domain is given by
$$\operatorname{dom}(F) := \{\lambda \in \mathbb{R} : F(\lambda) < \infty\}.$$
By the definition of the functions $p$ and $F$ it is easy to verify that $\operatorname{dom}(F) = \bigcup_{x \in B} \operatorname{dom}(p(x, \cdot))$. In the next result we identify the effective domains of the functions $p(x, \cdot)$ and $F$.
Lemma 8.1 Assume Condition 8.1 holds. Then for every $x \in B$ the effective domain $\operatorname{dom}(p(x, \cdot))$ equals $\mathbb{R}$ or $\emptyset$, and $p(x, \lambda)$ is finite for every $\lambda \in \mathbb{R}$ if and only if $\varphi(x) < \infty$. Consequently, $\operatorname{dom}(F)$ equals $\mathbb{R}$ or $\emptyset$, and $\operatorname{dom}(F) = \mathbb{R}$ if and only if $\varphi(x) < \infty$ for some $x \in B$.

Proof. Suppose $p(x, \lambda_0) < \infty$ for some $\lambda_0 \in \mathbb{R}$. For an arbitrary $\lambda \in \mathbb{R}$ we obtain for every $a \in A$ that
$$f(a, x) - \lambda g(a, x) = f(a, x) - \lambda_0 g(a, x) + (\lambda_0 - \lambda)\, g(a, x) \le p(x, \lambda_0) + |\lambda_0 - \lambda|\, \overline{g}(x).$$
Since $\overline{g}(x)$ is finite by Condition 8.1, this yields $p(x, \lambda) < \infty$, and so $\operatorname{dom}(p(x, \cdot))$ equals $\mathbb{R}$ or $\emptyset$. Moreover, dividing $f(a, x) - \lambda_0 g(a, x) \le p(x, \lambda_0)$ by $g(a, x)$ shows
$$\frac{f(a, x)}{g(a, x)} \le \lambda_0 + \frac{\max\{p(x, \lambda_0), 0\}}{\underline{g}(x)}$$
for every $a \in A$, and since $\underline{g}(x) > 0$ this implies $\varphi(x) < \infty$. Conversely, if $\varphi(x) < \infty$, then for every $\lambda \in \mathbb{R}$ and $a \in A$ we obtain
$$f(a, x) - \lambda g(a, x) = g(a, x)\Big(\frac{f(a, x)}{g(a, x)} - \lambda\Big) \le \overline{g}(x)\, \max\{\varphi(x) - \lambda, 0\},$$
and so $p(x, \lambda) < \infty$ for every $\lambda \in \mathbb{R}$. Since $A$ is nonempty and $f$ and $g$ are finite-valued, we always have $p(x, \lambda) > -\infty$, which completes the proof of the first part. The second part follows immediately from the identity $\operatorname{dom}(F) = \bigcup_{x \in B} \operatorname{dom}(p(x, \cdot))$, which is a direct consequence of $F(\lambda) = \inf_{x \in B} p(x, \lambda)$.
Using similar algebraic manipulations as in [22], applied there to a generalized fractional program, one can show the following important result for the optimal value function $F$ of the parametric min-max problem. The validity of the so-called parametric approach to solve problem (P) is based on this result.

Theorem 8.1 Assume Condition 8.1 holds. Then $F(\lambda) < 0$ if and only if $v(P) < \lambda$. Moreover, for every $x \in B$ we have $p(x, \lambda) < 0$ if and only if $\varphi(x) < \lambda$.

Proof. If $v(P) < \lambda$, then there exist some $x \in B$ and $\delta > 0$ satisfying $\varphi(x) \le \lambda - \delta$. Since $g$ is positive, this implies for every $a \in A$ that
$$f(a, x) - \lambda g(a, x) = g(a, x)\Big(\frac{f(a, x)}{g(a, x)} - \lambda\Big) \le -\delta\, g(a, x) \le -\delta\, \underline{g}(x).$$
Since $\underline{g}(x) > 0$ by Condition 8.1, it follows that
$$F(\lambda) \le p(x, \lambda) \le -\delta\, \underline{g}(x) < 0.$$
Conversely, if $F(\lambda) < 0$, then there exist some $x \in B$ and $\epsilon > 0$ satisfying $p(x, \lambda) \le -\epsilon$. This implies $f(a, x) - \lambda g(a, x) \le -\epsilon$ for every $a \in A$, and we obtain for every $a \in A$ that
$$\frac{f(a, x)}{g(a, x)} \le \lambda - \frac{\epsilon}{g(a, x)} \le \lambda - \frac{\epsilon}{\overline{g}(x)}.$$
Since $\overline{g}(x) < \infty$ by Condition 8.1, it follows that
$$v(P) \le \varphi(x) \le \lambda - \frac{\epsilon}{\overline{g}(x)} < \lambda,$$
and the proof of the first part is completed. By identifying $B$ with $\{x\}$ the second part follows from the first part.
A useful implication of Theorem 8.1 is given by the following result.

Lemma 8.2 Assume Condition 8.1 holds and $v(P)$ is finite. Then $F(\lambda) \ge 0$ for every $\lambda \le v(P)$.

Proof. By the definition of $v(P)$ we obtain $\varphi(x) \ge v(P) \ge \lambda$ for every $x \in B$. This implies that the inequality $v(P) < \lambda$ does not hold. From Theorem 8.1 it follows that $F(\lambda) \ge 0$, and this shows the desired result.
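Theorem 8.1 is what drives the primal Dinkelbach-type approach: given the current point $x$, compute the best ratio value $\lambda = \sup_a f(a,x)/g(a,x)$, then replace $x$ by a minimizer of the parametric objective $\sup_a \{f(a,x) - \lambda g(a,x)\}$, and repeat until the parametric value is no longer negative. The sketch below uses hypothetical finite index sets, so every inf and sup is an exact min and max; it illustrates the iteration scheme only, not the algorithm of subsection 6.2 verbatim:

```python
# Primal Dinkelbach-type iteration for a discrete min-max fractional program:
# v(P) = min over x in B of max over a in A of f(a, x) / g(a, x),
# using p(x, lam) = max over a of (f(a, x) - lam * g(a, x)).
# Hypothetical data on finite index sets A and B.
A = range(3)
B = range(5)
f = [[(a + 1) * (x - 2) ** 2 + a for x in B] for a in A]   # arbitrary numerators
g = [[1.0 + 0.5 * a + 0.1 * x for x in B] for a in A]      # positive denominators

def phi(x):
    # Single-ratio value phi(x) = max_a f(a,x)/g(a,x).
    return max(f[a][x] / g[a][x] for a in A)

def argmin_parametric(lam):
    # Minimizer over B of p(x, lam) = max_a (f(a,x) - lam * g(a,x)).
    return min(B, key=lambda x: max(f[a][x] - lam * g[a][x] for a in A))

x = 0                          # arbitrary starting point in B
for _ in range(50):
    lam = phi(x)               # current ratio value; decreases monotonically
    x_next = argmin_parametric(lam)
    if phi(x_next) >= lam:     # no point improves the ratio: lam equals v(P)
        break
    x = x_next

v_P = phi(x)
print(x, v_P)
```

For finite A and B, Condition 8.1 holds automatically, and the stopping test is exactly the sign characterization of Theorem 8.1: the iteration halts once $F(\lambda) \ge 0$, i.e. once $\lambda \le v(P)$.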
If Condition 8.1 holds and
we obtain from Theorem 8.1 and
Lemma 8.1 that
and
is finite for every
In case
we only assume that is positive on A B it is easy to verify that
for every
and
implies
However as shown
by the following single-ratio fractional program satisfying Condition 8.1
and
it may happen that
for every
and
(cf. [22]).
Example 8.1 For A = {1},
it follows that the optimization problem
(P) reduces to
and so
Also
for every
and
for every
and
Moreover, the
optimal solution set of the optimization problem
equals B, and
To derive some other properties of the so-called parametric approach
we need to investigate in detail the functions
and
We first observe
that the positivity of the function on A × B implies that the functions
and
are decreasing. In the next result it is shown that
the decreasing function
is upper semicontinuous.
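On a small finite instance the monotonicity just noted can be checked directly; the following Python sketch is our own illustration (the functions `f`, `g` and the grids are hypothetical choices, not from the chapter):

```python
def F(lam, A, B, f, g):
    """Optimal value of the parametric min-max problem at parameter lam:
    F(lam) = min over x in B of max over y in A of [f(y, x) - lam * g(y, x)].
    Solved by plain enumeration, which is adequate for finite grids."""
    return min(max(f(y, x) - lam * g(y, x) for y in A) for x in B)

# A hypothetical instance with positive numerator and denominator.
A = [0.0, 1.0]
B = [i / 4 for i in range(9)]          # grid on [0, 2]
f = lambda y, x: (x - y) ** 2 + 1      # positive on A x B
g = lambda y, x: x + 1                 # positive on A x B

# Since g > 0 on A x B, lam -> F(lam) is strictly decreasing.
values = [F(lam, A, B, f, g) for lam in (0.0, 0.5, 1.0, 1.5, 2.0)]
```

Because 1 ≤ g ≤ 3 on this grid, consecutive values drop by between 0.5 and 1.5 for a parameter step of 0.5, in line with the Lipschitz property established later in Lemma 8.4.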
Theorem 8.2 Assume Condition 8.1 holds. Then the function
is upper semicontinuous.
Proof. To prove that the function
is upper semicontinuous, let
and consider the upper level set
If
then this set is closed. So we assume that
To show that this set is closed consider some accumulation point
of the set
Hence there exists some sequence
satisfying
If for some
it holds
that
then by the monotonicity of the function
we obtain
and so
Therefore we may assume
without loss of generality that
for every
Observe now for
every
and
that
for every
This implies using
for every
and hence
and
that
Since
we obtain for every
that
By relation (8.15),
and
this yields for every
that
Hence
and so
Applying Theorem 1.7 of [29] yields that
is upper semicontinuous.
By Theorem 8.2 and Lemma 1.30 of [29] we obtain
Since for every
we know that
this yields
Again by the monotonicity of
it follows that
exists.
But this limit might not be equal to
Therefore the function
is left-continuous with right-hand limits.
An important consequence of Theorem 8.2 is given by the next result.
To show this result we first introduce a so-called set-valued mapping
(cf. [3]) with
denoting the set of all subsets of the nonempty
set
and X a nonempty closed subset of
If
is
a set-valued mapping, it is always assumed that
is nonempty
for every
The graph of a set-valued mapping
is given
by
An important subclass of set-valued mappings is introduced in the
next definition (cf. [11]).
Definition 8.1 The set-valued mapping
where X is a
closed set, is called closed if its graph is a closed set.
By the definition of a closed set it is immediately clear that the set-valued mapping
is closed if and only if for any sequence
and
it follows that
Examples of set-valued mappings occurring in min-max optimization
are the set-valued mappings
and
given
by
and
The set
represents the set of optimal solutions of the optimization problem
while the set
denotes the set of optimal
solutions in B of the optimization problem
Also we consider the
set-valued mapping
given by
This set represents the set of optimal solutions of the optimization
problem
For the above set-valued mappings one can show the
following result. It is always assumed in the next result that the sets
and
are nonempty on their domain.
Lemma 8.3 Assume Condition 8.1 holds and the functions and are
finite-valued and continuous on some open set
containing
A × B. Then the set-valued mappings
and
are closed.
Proof. We first show that the set-valued mapping
is closed. To start
with this, consider some sequence
satisfying
and
Since A and B are closed sets, this yields
and
and by
the definition of
we obtain
Since the function is continuous on
it is easy to verify
using Theorem 1.7 of [29] that the function
is lower semicontinuous
on
Using this together with Lemma 1.30 of [29] and
we obtain
Then by relation (8.19) it follows that
This shows
that the set
is closed. To prove that the set-valued mapping
is
closed we consider some sequence
satisfying
and
By Theorem 8.2 and Lemma
1.30 of [29] we obtain
Since
is lower semicontinuous on
it follows that
Hence by relation (8.20) we obtain
Using
this shows that
Hence we have verified
that
is closed.
Finally, to show that
is closed, consider a sequence
satisfying
and
Since
it follows that
using the fact that
is closed. This shows
Moreover, since
we obtain
using the fact that
is closed. Hence
Therefore
is an optimal solution of the min-max fractional program (P). This completes the proof.
We will now consider for every
the decreasing convex function
introduced in relation (8.12). In the next result it is shown
for
finite that this function is Lipschitz continuous with Lipschitz
constant
Lemma 8.4 Assume Condition 8.1 holds and
is finite for
Then the function
is strictly decreasing and
Lipschitz continuous with Lipschitz constant
and this function
satisfies
and
Proof. If
is finite for some
then we know by Lemma 8.1 that
is finite for every
Selecting some
using
and the fact that
is finite, it is easy to verify that
for every
Hence
is a Lipschitz continuous convex function
with Lipschitz constant
Also it is easy to verify using
that
for every
This shows that
is strictly decreasing on
Again by relation (8.22) we obtain for a given
and
that
and for a given and
that
If
is finite, it follows from Lemma 8.4 and Theorem 1.13 of [29]
that the finite-valued convex function
has a nonempty subgradient
set
for every
Hence for every
and
the subgradient inequality
holds. Applying relation (8.21) and the fact that
ing we obtain
for every
for every
that
is strictly decreas-
Furthermore, applying relation (8.22) yields
Hence by relations (8.23) and (8.24) it follows
To give a more detailed representation of the subgradient set
it is convenient to assume that the set
introduced in relation
(8.16) is nonempty. As already observed, this set represents the set of
optimal solutions of the parametric problem
It is easy to see that
for every
Since
is a closed convex set, this implies
Although it is possible for a finite
to give a complete representation
of the subgradient set
for every
we only consider the
following important subcase.
Lemma 8.5 Assume Condition 8.3 holds. Then it follows for every
that
is finite,
is a nonempty compact set for every
and
Also for every
that
and
and
it holds
Proof. Since the functions
and
are continuous,
on A × B and A is compact, we obtain that
is finite. By
the same argument it also follows that
is nonempty for every
Also by the continuity of the function
the set
is closed and hence compact. Using now the
proof of Lemma 3.2 in [8] and the fact that
is a compact set
yields the desired representation of the subgradient set
To show
the last part we observe by the subgradient inequality that
Moreover, applying the same argument it follows
that
Adding these two inequalities yields
and since
it follows that
Looking at the proof of the last inequality it is only needed that
the subgradient sets
and
are nonempty. In view of
Lemma 8.1 this is true if
is finite and Condition 8.1 holds. By
relation (8.11) the above conditions are clearly satisfied for a generalized
fractional program.
In the next lemma we show the following important improvement of
Lemma 8.1 and Lemma 8.2.
Lemma 8.6 Assume Condition 8.1 holds. Then the set
is nonempty if and only if
Moreover, if this
set is nonempty, it only contains the finite value
Proof. If the set
is nonempty, then it follows
for any element belonging to this set that
for every
This shows by the positivity of the function on A × B that
Also
by Lemma 8.2 we obtain for
finite that
This
proves the first part of the above result. To prove the second part, we
observe that by Lemma 8.4 the function
is strictly decreasing. This
completes the proof.
Up to now we did not assume that there exists some
satisfying
i.e., that the min-max fractional program (P) has an
optimal solution in B. In the next theorem we show the implications of
this assumption. To do so, consider the (possibly empty) set
given by
It is now possible to prove the following theorem.
Theorem 8.3 If Condition 8.1 holds, then
if and only if
Moreover, if
some
then
Proof. By Lemma 8.2 it follows for
Since
this shows that
for some
for
that
Using relation (8.28) with replaced by
it follows for
that
belongs to
Hence we still need to show that
only contains
Consider therefore an arbitrary element belonging to
By the
definition of
in relation (8.27) one can find some
satisfying
and this implies by Lemma 8.6 that
Since
it follows by Theorem 8.1 that
and this shows
that
Hence
and we have verified
that
only contains
To prove the converse we obtain for
that
for some
Applying Lemma 8.6 yields
is finite
and
which proves the only if implication. To verify the
second part it follows by relation (8.28) that belongs to
for
every
satisfying
and so
To prove the reverse inclusion, let
Since
for some
it follows from relation (8.28) with
that
Since
this implies
Applying now Lemma 8.6 with
replaced by
yields
If we introduce the (possibly empty) set
given by
then without Condition 8.1 one can show, using similar techniques as
before, the following result. Note the vector
is an optimal solution
of the (primal) min-max problem (P) if and only if
and
Theorem 8.4 The (primal) min-max fractional program (P) has an
optimal solution if and only if
Moreover, if (P) has an
optimal solution, then the set
listed in relation (8.18) is nonempty
and
For the moment this concludes our discussion of some of the theoretical properties related to the parametric approach. In the next subsection
we will consider the (primal) Dinkelbach-type algorithm and use the previously derived properties to show its convergence.
6.2
The Primal Dinkelbach-Type Algorithm.
In this section we will introduce the so-called primal Dinkelbach-type
algorithm to solve the (primal) min-max fractional program (P). A
similar approach for a slightly different min-max fractional program satisfying some compactness assumptions on the feasible sets A and B was
considered by
(cf. [72, 73]). Contrary to [72] the feasible set
A in this section does not depend on
Due to this our assumptions
are less restrictive. Using Lemma 8.1 and the fact that the (primal)
Dinkelbach-type algorithm is based on solving a sequence of parametric
optimization problems
for
it is natural to assume that the
(primal) min-max fractional program (P) satisfies the next condition.
Condition 8.5
Condition 8.1 holds and
is finite for every
If
is finite, then for every
the set
is nonempty
while for
the set
is nonempty for every
Contrary to the analysis in [22] for generalized fractional programs we
do not assume that the min-max fractional program (P) has an optimal
solution. Also for generalized fractional programs the first part of Condition 8.5 is automatically satisfied. If Condition 8.5 holds, then one
can execute the following so-called primal Dinkelbach-type algorithm.
The geometrical interpretation of this algorithm is as follows. By Theorem 8.3 we need to find the zero point
of the optimal value function
Starting at a given point
it follows by Theorem 8.1 that
Since the function
is nonconvex and it is too ambitious to
compute in one step its zero point
we replace this function by the
easier convex function
with belonging to
We know by
the definition of
and
that
and
For the function
it is easy to compute its zero point. By Lemma
8.2 this is given by
We now replace the original point
in the
parametric problem
by the smaller value
and repeat the
procedure.
Primal Dinkelbach-type algorithm.
1 Select
and
and compute
2 Determine
If
stop and return
Otherwise compute
let
and
and go to step 1.
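For finite A and B the scheme can be sketched in Python as follows; this is our own illustrative implementation (the callables `f`, `g`, the enumeration of the inner problems, and the tolerance are assumptions, since the chapter leaves A and B abstract and step 2 may in general require a semi-infinite min-max solver):

```python
def dinkelbach_minmax(A, B, f, g, tol=1e-10, max_iter=100):
    """Primal Dinkelbach-type scheme for min over x in B of
    max over y in A of f(y, x) / g(y, x), with g > 0 on A x B.
    A and B are finite iterables; the inner problems are solved
    by enumeration (in general, step 2 is the bottleneck)."""
    # Select a starting point and compute the initial ratio value.
    x = next(iter(B))
    lam = max(f(y, x) / g(y, x) for y in A)
    for _ in range(max_iter):
        # Determine a minimizer of the parametric problem
        # F(lam) = min_x max_y [f(y, x) - lam * g(y, x)].
        x = min(B, key=lambda xb: max(f(y, xb) - lam * g(y, xb) for y in A))
        F_lam = max(f(y, x) - lam * g(y, x) for y in A)
        if abs(F_lam) <= tol:
            # F(lam) = 0, so lam is the optimal value and x a solution.
            return lam, x
        # Otherwise update lam to the strictly smaller ratio value at x.
        lam = max(f(y, x) / g(y, x) for y in A)
    return lam, x
```

The generated parameter values decrease strictly (this is Lemma 8.7 below), and the loop stops once the optimal value of the parametric problem is numerically zero.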
To determine
in steps 1 and 2 one has to solve a single-ratio
fractional program. If A is a finite set, then this is easy. Also in order
to select
one has to solve for A finite a finite min-max
problem. Algorithms for such a problem can be found in part 2 of
[55]. In case A is not finite, one needs to solve a much more difficult
problem, a semi-infinite min-max problem (cf. [27, 55]). Therefore to
apply the above generic primal Dinkelbach-type algorithm in practice
one needs to have an efficient algorithm to determine an element of the
set
and this is in most cases the bottleneck. In general one
cannot expect that an efficient and fast algorithm exists. But for special
cases this might be the case. The construction of approximate
solutions of the problem
by using smooth approximations of the
max operator, thus speeding up the computations and at the same time
bounding the errors (cf. [16]), seems to be an important topic for future
research.
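A standard smooth approximation of the max operator is the log-sum-exp function; the sketch below is our own illustration of the idea (it is not the specific construction of [16]), including the usual uniform error bound log(n)/mu:

```python
import math

def smooth_max(values, mu=100.0):
    """Log-sum-exp approximation (1/mu) * log(sum_i exp(mu * v_i)).

    For n values this overestimates max(values) by at most log(n)/mu,
    so increasing mu tightens the approximation at the cost of a
    steeper, less smooth surrogate."""
    m = max(values)  # shift the exponents for numerical stability
    return m + math.log(sum(math.exp(mu * (v - m)) for v in values)) / mu
```

Replacing the inner max by such a surrogate makes the parametric objective differentiable, which is what permits faster smooth solvers while keeping the approximation error controlled.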
By Lemma 8.6 it is sufficient to find in step 2 of the primal Dinkelbach-type algorithm the solution of the equation
As already
observed, we can give an easy geometrical interpretation of the above
algorithm (cf. [5, 16]). The next result shows that the sequence
generated by the primal Dinkelbach-type algorithm is strictly decreasing.
Lemma 8.7 If Condition 8.5 holds, then the sequence
generated by
the primal Dinkelbach-type algorithm is strictly decreasing and satisfies
for every
Proof. If the algorithm stops at
then by the stopping rule we
know that
This implies by Theorem 8.1 for
that
which shows that
If the algorithm does not
stop at the first step, then
Since
is nonempty, the
algorithm finds some
Hence
Thus for every
and so
we obtain
for every
This shows
To verify that
we assume by contradiction that
Since
this
yields by relation (8.29) and Lemma 8.6 that
and we obtain a contradiction.
and by the definition
of
it is obvious that
Applying now the same argument
iteratively shows the desired result.
By Lemma 8.7 it follows that the sequence
generated by the primal
Dinkelbach-type algorithm converges to some limit
In case the
generated sequence is finite, it is easy to show the following result.
Lemma 8.8 If Condition 8.5 holds and the primal Dinkelbach-type algorithm stops at
then
and
Proof. Since Condition 8.5 holds, we obtain
Also by the stopping rule of the Dinkelbach-type algorithm it follows that
This implies by Theorem 8.1 that
Since always
we obtain
To show that
with
and
we observe by Lemma 8.6 and by using
that
Hence it follows that
Applying again Lemma
8.6 we obtain
which completes the proof.
In the remainder of this subsection we only consider the case that the
primal Dinkelbach-type algorithm generates an infinite sequence
By Lemma 8.7 it follows that
exists. Imposing
some additional condition it will be shown in Lemma 8.9 that this limit
equals
To simplify the notation in the following lemmas we introduce
for the sequence
generated by the
primal Dinkelbach-type algorithm the sequence
with
and for
finite the sequence
with
By the observation after Lemma 8.4 these subgradient sets are nonempty. It is now possible to derive the next result.
Lemma 8.9 If Condition 8.5 holds and there exists a subsequence
satisfying
then
Moreover, for
finite it follows that
Proof. By Lemma 8.7 the sequence
is strictly decreasing,
and so
exists. If
we obtain using
for every
that
and so for
the result is proved. Therefore assume that
is finite. Since
and the function
and the sequence
are
decreasing, it follows that the sequence
is increasing
and
exists. If we assume that
then one can find some
satisfying
for every
By Lemma 8.2 we also know that
Applying the
subgradient inequality to the convex function
we obtain for every
that
with
Since by relation (8.25) it follows that
the above inequality shows
This yields by
our assumption and
finite that
and so
it follows that
This contradicts that
is finite, and we have shown
that
Applying now Theorem 8.2 and Lemma 1.30
of [29] yields
Then by Theorem 8.1
Since by Lemma 8.7 it is obvious that
we obtain
completing the proof.
By relation (8.25) it follows that
for every
and so one can apply Lemma 8.9 in case
To achieve a rate of convergence result for the
sequence
generated by the primal Dinkelbach-type algorithm, we need
to assume in the proof that
To apply our procedure we always impose that
is nonempty for
finite. Then it follows by
Theorem 8.3 that
if and only if the min-max fractional program (P) has an optimal solution in B or equivalently there exists some
satisfying
However, if the condition of Lemma 8.9
holds, we conjecture for
finite that the min-max fractional program
(P) might not have an optimal solution in B, and so
is not equal to
zero. Using a stronger condition than in Lemma 8.9, we show in the next
lemma for finite
that the sequence
generated by the
primal Dinkelbach-type algorithm satisfies
This sufficient condition implies the existence of an optimal solution of
the (primal) min-max fractional program (P) in B.
Lemma 8.10 If Condition 8.5 holds,
is finite and there exists a subsequence
satisfying
then
and
Proof. By the convexity of the function
and the subgradient inequality
we obtain for every
that
with
Since
it follows by our assumption and
the monotonicity of the subgradient sets as shown in Lemma 8.5 that
one can find some finite M satisfying
for every
and every sequence
and
This
shows
for every
and so
Hence by Lemma 8.9 we
obtain
Using relations (8.34) and (8.33) yields
Since by Theorem 8.1 we know that
the proof is completed.
By relation (8.25) it follows in case
that the
condition of Lemma 8.10 is satisfied. A similar condition is also given in
[22] for a generalized fractional program. In the next lemma we consider
the generated sequence
and show for B compact
and some additional topological properties on the functions and that
this sequence contains a converging subsequence.
Lemma 8.11 If Condition 8.5 holds, the functions and are finite-valued and continuous on some open set
containing A × B,
the set B is compact and there exists a subsequence
satisfying
then the sequence
has a converging subsequence and every limit point
of the sequence
satisfies
with
finite. Additionally, if
there exists a unique
satisfying
then
Moreover, for A × B compact, the generated sequence
has a converging subsequence and every limit point
of the sequence
is an optimal solution of problem (P).
If the optimization problem (P) has a unique optimal solution
then
and
Proof. To verify that
is finite we obtain by Condition 8.5 and
continuous that the finite-valued function
is lower semicontinuous. By the compactness of B this implies, using Corollary 1.2 of
[3], that there exists some
satisfying
and so
is
finite. Again by the compactness of B it is also obvious that the sequence
contains a convergent subsequence. To show that
every limit point
of the sequence
satisfies
we observe by Lemma 8.9 that
This implies by Lemma
8.3 that
Using now Theorem 8.3 we obtain
If there exists a unique
satisfying
then again by
Theorem 8.3 we obtain
Since every converging subsequence of the sequence
converges to an element of
it follows that every convergent subsequence converges to the element
By contradiction and B compact we obtain
and the
proof of the first part is completed. If A × B is compact, then by the
continuity of the function we obtain
Again by the observation after Lemma 8.10 we obtain
By
Lemma 8.3 the set-valued mapping
is closed and using a similar proof
as for the first part one can show the last part.
If we consider a generalized fractional program, then clearly A is
compact, and if additionally the conditions of Lemma 8.11 hold, then
the second part of this lemma applies. Unfortunately it is not clear
to the authors whether in the first part of this lemma the condition
can be omitted.
We now want to investigate how fast the sequence
converges to
Before discussing this in detail, we list for
finite the following inequality for the sequence
generated by the primal Dinkelbach-type algorithm. A similar inequality can also be derived for the dual
Dinkelbach-type algorithm to be discussed in subsection 6.4.
Theorem 8.5 If Condition 8.5 holds and there exists some
satisfying
then it follows for every
and
that
Proof. Since
for some
we obtain by Lemma 8.6 that
Applying now the subgradient inequality
to the function
at the point
it follows for
that
Hence
Moreover, for every
and
we obtain
again by Lemma 8.6 that
Applying now the subgradient inequality to the function
at the point
yields for
that
Hence by relations (8.35) and (8.36) we obtain
Since
this implies
Using relation (8.37) it follows that
and this completes the proof.
In case of a single-ratio fractional program the function
reduces to
and so for every
it follows that
Hence we obtain that the inequality in Theorem
8.5 reduces to
for any optimal solution
of the optimization problem
(cf. [67]).
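For a concrete single-ratio instance the two steps can even be written in closed form. The example below is our own (the objective (x² + 2)/(x + 1) over x ≥ 0 is a hypothetical choice): the parametric problem is a quadratic with minimizer x = λ/2, and the iterates decrease to the optimal value 2(√3 − 1):

```python
import math

def dinkelbach_single_ratio(lam0=2.0, tol=1e-12, max_iter=50):
    """Classical Dinkelbach iteration for min over x >= 0 of
    (x**2 + 2) / (x + 1), whose optimal value is 2 * (sqrt(3) - 1).

    The parametric problem min_x [x**2 + 2 - lam * (x + 1)] has the
    closed-form minimizer x = lam / 2, and lam is updated to the new
    ratio value until the parametric optimal value is numerically zero."""
    lam = lam0
    history = [lam]
    for _ in range(max_iter):
        x = lam / 2.0                      # minimizer of the parametric problem
        F = x * x + 2 - lam * (x + 1)      # parametric optimal value F(lam)
        if abs(F) <= tol:
            break
        lam = (x * x + 2) / (x + 1)        # next ratio value
        history.append(lam)
    return lam, history
```

Starting from λ₀ = 2 the iterates 2, 1.5, 1.4643, … decrease strictly and converge rapidly, in line with the single-ratio discussion above.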
Before introducing convergence results for the primal Dinkelbach-type
algorithm, we need the following definition (cf. [54]).
Definition 8.2 A sequence
with limit
converges
Q-linearly if there exists some
such that
The sequence
converges Q-superlinearly if
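The two notions can be inspected numerically by forming the successive error quotients; the helper and the synthetic sequences below are our own illustration (they are not sequences produced by the algorithm):

```python
def q_ratios(seq, limit):
    """Quotients |s_{k+1} - limit| / |s_k - limit| of a convergent sequence.

    Eventually bounded above by some q < 1: Q-linear convergence;
    tending to 0: Q-superlinear convergence."""
    errs = [abs(s - limit) for s in seq]
    return [e1 / e0 for e0, e1 in zip(errs, errs[1:]) if e0 > 0]

# 2**(-k) converges Q-linearly to 0 (every quotient equals 1/2),
# while 2**(-k*k) converges Q-superlinearly (quotients 2**(-(2k+1)) -> 0).
linear = [2.0 ** -k for k in range(10)]
superlinear = [2.0 ** -(k * k) for k in range(10)]
```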
If a slightly stronger condition than the one used in Lemma 8.10 holds, then
one can show that the sequence
generated by the primal
Dinkelbach-type algorithm converges Q-linearly. The same result was
shown for a generalized fractional program in [22].
Theorem 8.6 If Condition 8.5 holds,
is finite and the sequence
satisfies
and
then
converges Q-linearly.
Proof. By Lemma 8.10 we obtain
Since Condition 8.5 holds,
one can find some
satisfying
and this
shows by Lemma 8.6 that
Hence the set
is nonempty, and for every belonging to this set it follows by
Theorem 8.5 that
with
and
Since
is strictly
decreasing and
it follows by Lemma 8.5 that the sequence
is decreasing and satisfies
with
This shows that
exists. To identify
we observe in view of
that
for every
and
Since the function
is continuous, this yields using
that
for every
and so
Therefore
and we have
identified this limit. Also by our assumption we obtain that there exists
some
and this shows
Applying now relation (8.39) yields the desired result.
If the conditions of Theorem 8.5 hold and additionally we assume that
then it can be shown in view of the proof of Theorem
8.6 that the sequence
converges Q-linearly to
This condition is
slightly weaker than the one used in Theorem 8.6. Observe that the condition
was used in the proof of Lemma 8.10 to show that
and this implies as shown in the first part of the proof of
Theorem 8.6 that
for some
Therefore, if there exists
some
satisfying
and
then, assuming
Condition 8.5 holds, the sequence
converges Q-linearly to
A disadvantage of the first part of the previous assumption is that in general we
do not know, looking at a min-max problem, whether there exists some
satisfying
Hence we imposed some stronger algorithmic condition on the sequence
implying this result. In case the
(primal) min-max fractional program (P) has a unique optimal solution
and some additional topological properties are satisfied, then one can
show that the sequence
converges superlinearly.
Theorem 8.7 If Condition 8.5 holds, the functions and are continuous on some open set
containing the compact set A × B
and the min-max fractional program (P) has a unique optimal solution
then
and
and
the sequence
converges Q-superlinearly.
Proof. Using Lemmas 8.9 and 8.11 the first part follows, and so we only
have to show that
converges superlinearly. Considering the proof of
Theorem 8.6 it follows that
with
and
Since
is uniformly bounded by the compactness of A B and the function
is continuous, there exists a converging subsequence
satisfying
To identify
we observe for every
that
with
Since B is compact and the function continuous, it follows
by Proposition 1.7 of [3] that
is upper semicontinuous, and
this implies by relation (8.40) that
Since
we obtain
by relation (8.41) that
and this shows
By the uniqueness of the optimal
solution and Lemma 8.5 we obtain
This shows the desired result.
In case we consider a single-ratio fractional program with B compact
and the functions
continuous it follows by Lemma 8.11 that
with
an optimal solution of this fractional programming problem. Replacing now in relation (8.38)
by
we obtain for a single-ratio fractional program with B compact and
continuous that the sequence
always converges Q-superlinearly. Clearly in practice the
(primal) Dinkelbach-type algorithm stops in a finite number of steps,
and so we need to derive a practical stopping rule. Such a rule is constructed in the next lemma. For other practical stopping rules yielding
so-called
solutions the reader may consult [16].
Lemma 8.12 If Condition 8.5 holds and there exists some subsequence
satisfying
and some
satisfying
then the sequence
is
decreasing and its limit equals 0. Moreover, it follows for every
that
Proof. By Lemma 8.7 the sequence
is strictly decreasing, and this
implies by Lemma 8.5 that the negative sequence
is decreasing. Also,
since
is decreasing, we obtain that the negative sequence
is increasing and so the positive sequence
is decreasing. Applying
now Lemma 8.9 it follows that
while the listed inequality is an immediate consequence of Lemma 8.9 and relation (8.35).
Using Lemma 8.12 a stopping rule for the (primal) Dinkelbach-type
algorithm is given by
for some predetermined
Finally we observe that the (primal) Dinkelbach-type algorithm applied to
a generalized fractional program can be regarded as a cutting plane algorithm (cf. [10]). This generalizes a similar observation by Sniedovich (cf.
[70]) who showed the result for the (primal) Dinkelbach-type algorithm
applied to a single-ratio fractional program.
In the next section we investigate the dual max-min fractional program (D) and its relation to the primal min-max fractional program
(P).
6.3
Duality Results for Primal Min-Max Fractional Programs.
In this subsection we first investigate under which conditions the optimal objective function values of the primal min-max fractional program
(P) and the dual max-min fractional program (D) coincide. To start
with this analysis, we introduce the following class of bifunctions.
Definition 8.3 The function
is called a concave/convex bifunction on the convex set
with
and
if for every
the function
is concave on
and for every
the function
is convex on
Moreover, a function
is called a convex/concave
bifunction on
if
is a concave/convex bifunction on the same
set. It is called an affine/affine bifunction if it is both a concave/convex
and a convex/concave bifunction.
To guarantee that
equals
we introduce the following sufficient
condition.
Condition 8.6 The set
is a closed convex set and
is a
compact convex set. Moreover, there exists some open convex set
containing A × B such that is a positive finite-valued convex/concave
bifunction and
a positive finite-valued concave/convex bifunction on
If the function is a positive affine/affine bifunction, then
is a finite-valued concave/convex bifunction.
If the set B is given by relation (8.1), one can also introduce another
dual max-min fractional program. To guarantee that for this problem
strong duality holds, we need the following slightly stronger condition.
Condition 8.7 The set
is a closed convex set and
is a
compact convex set. Moreover, there exists some open convex set
containing A × C such that is a positive finite-valued convex/concave
bifunction and
a positive finite-valued concave/convex bifunction on
If the function is a positive affine/affine bifunction, then
is a finite-valued concave/convex bifunction.
If Condition 8.6 holds, then by Theorem 1.15 of [29] we obtain that
the function
is continuous on
for every
and
is continuous on
for every
The same property
also holds for the function
By the compactness of A this implies
for every
and so Condition 8.6 implies Condition 8.1. Also, since
for every
the function
is continuous on
A and the set A is compact, we obtain that
is finite for every
implying
For
we derive in Theorem 8.8 that
the optimal objective function value of the (primal) min-max fractional
program (P) equals the optimal objective function value of the (dual)
max-min fractional program (D). Contrary to the proof of the same
result in [5] for generalized fractional programs based on Sion's minimax
result (cf. [31, 69]), the present proof is an easy consequence of the easier-to-prove minimax result by Ky Fan (cf. [26, 27, 32]) and Theorem 8.1.
Note that we do not assume that there exists some
satisfying
Theorem 8.8 If Condition 8.6 holds, then there exists some
satisfying
Proof. Since we know that
it follows for
that
for every
This shows the desired result for
If
is finite, then we need to verify that
Since
is
finite, we obtain by Condition 8.6 that the function
is
a concave/convex bifunction on A × B and for every
the function
is continuous on
Applying now Theorem 3.2 of [32]
(see also [27]) we obtain
This shows by Theorem 8.1 and the remark after Condition 8.6 that
for some
Since
for every
we obtain
Hence
for every
Using now relation (8.43) the desired result follows.
Since there are rather general necessary and sufficient conditions on
the bifunctions such that for those functions min-max equals max-min
(cf. [28, 30]), the above result holds for a much larger class than the
class of concave/convex bifunctions. However, since the class of concave/convex bifunctions is the best known, we have restricted ourselves to
this well-known class. An easy consequence of Theorem 8.8 is given by
the next result.
Lemma 8.13 If Condition 8.6 holds and there exists some
satisfying
and some
satisfying
then the
vector
is an optimal solution of the (primal) min-max fractional
program (P) and an optimal solution of the (dual) max-min fractional
program (D).
Proof. By the definition of
and
it is clear that for every
vector
that
This implies by Theorem 8.8 for the given vector
that
Hence
is an optimal solution of the (primal) min-max fractional program (P) and an optimal solution of the (dual) max-min fractional program (D).
If the (dual) max-min fractional program (D) has a unique optimal
solution and the optimal solution set of the (primal) min-max fractional
program (P) is nonempty, then by Lemma 8.13 the unique optimal solution of (D) is an optimal solution of (P). If Condition 8.6 holds and
we use the so-called dual Dinkelbach-type algorithm to be discussed in
subsection 6.4 for identifying
this observation will be useful. To analyze the properties of the optimization problem (D) and at the same
time construct some generic algorithm to solve problem (D), we introduce similar parametric optimization problems as done for problem (P)
at the beginning of subsection 6.1. For every
consider the
parametric optimization problem
For every
the function
is now given by
Since
on A × B and
is the infimum of affine functions, it is
obvious that
is a decreasing upper semicontinuous concave function.
The so-called effective domain
is defined by
By the finiteness of on
it is obvious for every
that actually
A more difficult
optimization problem than problem
is now given by the parametric
optimization problem
As for the concave function
we also introduce the effective domain
of the function
given by
It should be clear to the reader that we actually apply the Dinkelbach-type approach to the (dual) max-min fractional program (D) while at
the beginning of subsection 6.1 we applied the same approach to the
(primal) min-max fractional program (P). It is easy to show that
and so we obtain
for every
If the optimization problem (P) is a single-ratio fractional program, then the set A consists of one element, and as already observed there is no difference between the representation of the (primal) min-max fractional program (P) and the (dual) max-min fractional program (D). Hence for A consisting of one element it is not surprising that the functional representations of the two parametric value functions are the same as well. If the set A consists of more than one element, then we want to know, despite the different functional representations of these functions, under which conditions they agree at a given parameter value. It should come as no surprise that this equality holds under the same conditions as used in Theorem 8.8. Note that in the next result we do not assume that the set in question is nonempty.
Theorem 8.9 Assume Condition 8.6 holds, where the bifunction is convex/concave on A × B. Then it follows for every admissible parameter value that there exists some point satisfying the stated equality. Moreover, if the bifunction is affine/affine, the same result holds for every parameter value.
Proof. By Lemma 8.1 we obtain the first inequality for every parameter value. Also, for a convex/concave bifunction it follows by Condition 8.6 that the associated parametric function is a concave/convex bifunction on A × B which is continuous in its second argument for every parameter value; a similar observation holds if the bifunction is affine/affine. Since A is compact, we can now apply Theorem 3.2 of [32], which yields the desired minimax equality. Hence by relation (8.45) there exists, for every admissible parameter value and a convex/concave bifunction, or for every parameter value and an affine/affine bifunction, some point satisfying the stated equality. This completes the proof.
Applying proofs similar to those of Lemma 8.1 and Theorem 8.1, one can verify the following results.
Lemma 8.14 Assume Condition 8.2 holds. Then the two equivalences corresponding to those of Lemma 8.1 hold for the dual parametric value function.
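In the classical single-ratio setting, with $F(\mu) = \inf_{x \in X}(f(x) - \mu g(x))$, $g > 0$ and $\theta = \inf_{x \in X} f(x)/g(x)$ finite, equivalences of this type take the familiar form (a standard fact, stated here in our own notation rather than the chapter's):

```latex
F(\mu) < 0 \iff \mu > \theta,
\qquad
F(\mu) \ge 0 \iff \mu \le \theta .
```

Indeed, $F(\mu) < 0$ means some $x$ satisfies $f(x) < \mu g(x)$, i.e. $f(x)/g(x) < \mu$, which happens exactly when $\mu$ exceeds the optimal value.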
Clearly Lemma 8.14 can be compared with Lemma 8.1 while the next
result is the counterpart of Theorem 8.1.
Theorem 8.10 Assume Condition 8.2 holds. Then the first equivalence holds and, under the additional assumption, the second equivalence holds as well.
A direct consequence of the above results is given by the following.
Theorem 8.11 Assume Condition 8.6 holds, where the bifunction is positive and convex/concave on A × B. Then the two parametric value functions coincide for every parameter value and are finite-valued on their common effective domain. Moreover, if the bifunction is positive and affine/affine on A × B and the optimal value is finite, then the same equality holds for every parameter value and the functions are again finite-valued on this domain.
Proof. If the bifunction is a positive convex/concave bifunction on A × B, then by Condition 8.6 the associated bifunction must be a positive concave/convex bifunction on A × B, and the first relation holds automatically. Also, by Theorems 8.8 and 8.9 we obtain the corresponding equalities for every parameter value. Since Condition 8.6 implies Condition 8.1, it follows by the remark after Theorem 8.1 that the primal parametric value is finite for every parameter value, and so the function is finite-valued on its effective domain. Using the monotonicity of these functions, the stated relation holds for every parameter value. Hence the first part follows. The second part can be proved similarly, and its proof is therefore omitted.
If Condition 8.6 holds (and hence also Condition 8.1) and the optimal value is finite, then it might happen, as shown in Example 8.1, that the corresponding parametric value is not equal to zero. If additionally there exists some point satisfying the stated condition, then by Theorems 8.3 and 8.11 we know the desired equality holds, and we need this assumption in combination with Condition 8.6 to identify the optimal value by the so-called dual Dinkelbach-type algorithm discussed in the next subsection. The next result is the counterpart of Theorem 8.2 and can be proved by similar techniques.
Theorem 8.12 Assume Condition 8.2 holds. Then the decreasing parametric value function is lower semicontinuous.
Similarly as in subsection 6.1, it follows by Theorem 8.12 that the parametric value function is right-continuous with left-hand limits.
As in subsection 6.1 we now introduce the corresponding set-valued mappings. The first represents the set of optimal solutions of the parametric optimization problem, while the second denotes the set of optimal solutions in A of the associated parametric optimization problem. Also we consider the set-valued mapping whose value is the set of optimal solutions in A × B of the parametric optimization problem. In the next result it is assumed that these sets are nonempty on their domains. Applying Theorem 8.12 and using a similar proof as in Lemma 8.3, we obtain the following counterpart of Lemma 8.3.
Lemma 8.15 Assume Condition 8.2 holds and the functions are finite-valued and continuous on some open set containing A × B. Then the above set-valued mappings are closed.
Considering now the auxiliary function introduced in analogy with subsection 6.1, one can show as in Lemma 8.4 the following result.
Lemma 8.16 Assume Condition 8.2 holds and the parametric value is finite. Then the function is strictly decreasing and Lipschitz continuous, and the auxiliary function satisfies the stated properties.
As in subsection 6.1, it follows in case Condition 8.2 holds that the subgradient set of the convex strictly increasing function is nonempty for every parameter value, and the corresponding subgradient inequality holds for all such values. Also one can show the following counterpart of Lemma 8.5.
Lemma 8.17 Assume Condition 8.4 holds. Then it follows for every parameter value that the parametric value is finite, the corresponding solution set is a nonempty compact set, and the stated identity holds. Also, for every parameter value the listed subgradient relations hold.
The next result can be compared with Lemma 8.6.
Lemma 8.18 Assume Condition 8.2 holds. Then the set in question is nonempty if and only if the stated condition holds. Moreover, if this set is nonempty, then it contains only the finite optimal value.
Up to now we did not assume that there exists some point satisfying the stated condition, or equivalently that the dual max-min fractional program (D) has an optimal solution in B. In the next result the implications of this assumption are discussed. To do so, consider the (possibly empty) set introduced in analogy with subsection 6.1. The counterpart of Theorem 8.3 is given by the following result.
Theorem 8.13 Assume Condition 8.2 holds. Then the stated equality holds for some point of B if and only if the corresponding condition is satisfied. Moreover, if it holds for some point of B, then the above set is nonempty and the stated identity holds.
If we introduce the (possibly empty) set defined analogously, then without Condition 8.2 one can show the following counterpart of Theorem 8.4. Remember that a vector is an optimal solution of (D) if and only if it satisfies the two defining conditions.
Theorem 8.14 The (dual) max-min fractional program (D) has an optimal solution if and only if the stated condition holds. Moreover, if (D) has an optimal solution, then the above set is nonempty and the stated identity holds.
Finally we consider in this section another dual max-min fractional program for the case that the nonempty set B is given by finitely many inequality constraints over a set C (see also relation (8.1)). In case the set B is specified as in relation (8.51), we always assume for the corresponding primal min-max fractional program (P) that the relevant function is positive on A × C. Introducing now the vector-valued constraint function, we consider for every multiplier vector the associated single-ratio fractional program. A more complicated optimization problem is now introduced by the so-called partial dual of the (primal) min-max fractional program. Again this is a max-min fractional program, and using only the positivity on A × C it is easy to show the following result.
Lemma 8.19 If the function is positive on A × C, then the stated chain of inequalities holds.
Proof. Using the positivity of the function on A × C, we obtain the required inequalities for all feasible points, and so the first inequality is verified. The remaining inequality was already shown, and hence the proof is complete.
To verify the reverse inequality, it is clear from Lemma 8.19 what remains to be shown for every parameter value. If the optimal value is finite and we want to ensure equality, then the following so-called Slater-type condition on the nonempty set B should be considered. Before introducing this condition, we assume throughout the remainder of this section that the (possibly empty) index set collects those indices for which the corresponding constraint function is affine, and the relative interior of the set C is denoted as usual (cf. [29, 58]).
Condition 8.8 There exists some point of the relative interior of the closed convex set C at which the non-affine constraint functions are strictly negative and the affine constraint functions are nonpositive. Moreover, all constraint functions are convex.
To show under which conditions the desired equality and the finiteness of the optimal value hold, we first need to prove the following Lagrangean duality result.
Lemma 8.20 Assume Condition 8.8 holds and, for a given parameter value, the first of the two relevant functions is convex on C and the second is concave on C. Then there exists some multiplier vector turning the Lagrangean inequality into an equality, with B defined in relation (8.51). Moreover, the same result holds for every parameter value if the first function is convex and the second is affine.
Proof. Using the definition of the set B, it is easy to see that the Lagrangean inequality holds. Moreover, for either a convex and concave pair of functions or a convex and affine pair, we see that the resulting Lagrangean function is convex on C. Applying now Theorem 28.2 of [58] or Theorem 1.25 of [29], we obtain that there exists some dual solution such that the above inequality is actually an equality.
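In standard form, the Lagrangean duality fact used here reads as follows (notation ours): for a closed convex set $C$, a convex function $h$ on $C$ and convex constraints $g_i$ admitting a Slater point,

```latex
\inf\{\, h(x) : x \in C,\ g_i(x) \le 0,\ i = 1,\dots,m \,\}
\;=\;
\max_{\lambda \ge 0}\ \inf_{x \in C}\Bigl( h(x) + \sum_{i=1}^{m} \lambda_i\, g_i(x) \Bigr),
```

where the maximum on the right-hand side is attained by some dual solution $\lambda$ (cf. Theorem 28.2 of [58]).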
Using Lemma 8.20 it is now possible to show that the optimal objective function value of the partial dual equals that of the primal problem.
Theorem 8.15 Assume Conditions 8.7 and 8.8 hold. Then there exists some multiplier vector for which this common value is attained.
Proof. In case the optimal value is not finite, we know by the remark after Lemma 8.19 that the result holds. Hence we only need to verify the result for a finite optimal value. To start, we observe by relation (8.42) that the optimal value is attained along some point satisfying the stated condition. Applying now Lemma 8.20, one can find a suitable multiplier vector, and this shows one inequality. By relation (8.52) the reverse inequality holds for every multiplier vector, which completes the proof.
In case we apply the same construction to the single-ratio fractional program, it follows that its partial dual with B given by relation (8.51) takes a similar form. Thus for this (Lagrangean) dual (cf. [66, 68]) the single-ratio fractional program and its dual have different representations. If Theorem 8.15 holds, one can always apply a Dinkelbach-type algorithm to the partial dual to find the optimal value. This is discussed in detail in [6] and [9]. In the next subsection we will introduce a similar Dinkelbach-type algorithm applied to the (dual) max-min problem (D).
6.4 The Dual Dinkelbach-Type Algorithm.
In this section we apply the Dinkelbach-type approach to the (dual)
max-min fractional program (D). Parallel to subsection 6.2 we assume
that the next condition holds. Note that this condition is the counterpart
of Condition 8.5 used in the primal Dinkelbach-type algorithm which was
applied to the (primal) min-max fractional program (P).
Condition 8.9 Condition 8.2 holds and the parametric value is finite for every parameter value. If the optimal value is finite, then the corresponding solution set is nonempty for every relevant parameter value, while for an infinite optimal value this solution set is nonempty for every parameter value.
If Condition 8.9 holds, then one can execute the following so-called dual Dinkelbach-type algorithm. As for the (primal) Dinkelbach-type algorithm introduced in subsection 6.2, one can give a similar geometrical interpretation of the next algorithm.
Dual Dinkelbach-type algorithm.
1 Take the current point of B (initially an arbitrarily selected one) and compute the associated parameter value by solving a single-ratio fractional program.
2 Determine an optimal solution of the parametric optimization problem at this parameter value. If the optimal objective value equals zero, stop and return the current parameter value and solution. Otherwise take the computed solution as the new current point and go to 1.
Observe that in Steps 1 and 2 one has to solve a single-ratio fractional program. If B is a finite set, then solving such a problem is easy. Moreover, by Lemma 8.18 it is sufficient to find in Step 2 of the dual Dinkelbach-type algorithm the solution of the corresponding equation. As already observed, this yields an easy geometrical interpretation of the above algorithm (see also [5]). The next result shows that the sequence generated by the dual Dinkelbach-type algorithm is strictly increasing. The proof of this result is similar to the proof of the corresponding result for the primal Dinkelbach-type algorithm in Lemma 8.7. This also shows that the primal Dinkelbach-type algorithm approaches the optimal objective function value from above, while the dual Dinkelbach-type algorithm approaches it from below.
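The parametric scheme behind both algorithms can be sketched in code. The following is a minimal illustration of a Dinkelbach-type iteration for a min-max fractional program over a finite grid, assuming positive denominators; it is our own sketch, not the chapter's exact primal or dual algorithm, and it approaches the optimal value from above in the primal fashion described here.

```python
# Dinkelbach-type iteration for theta = min over X of max over A of f_a(x)/g_a(x),
# with X a finite grid and every g_a positive on X. Illustrative sketch only.

def dinkelbach_minmax(X, ratios, tol=1e-10, max_iter=100):
    # ratios: list of (f, g) pairs of callables, g > 0 on X
    x = X[0]
    mu = max(f(x) / g(x) for f, g in ratios)        # initial parameter value
    for _ in range(max_iter):
        # parametric subproblem: minimize over X of max_a (f_a(x) - mu * g_a(x))
        x = min(X, key=lambda z: max(f(z) - mu * g(z) for f, g in ratios))
        value = max(f(x) - mu * g(x) for f, g in ratios)
        if abs(value) <= tol:                       # root of the parametric value function
            return mu, x
        mu = max(f(x) / g(x) for f, g in ratios)    # next parameter: max ratio at new iterate
    return mu, x

# toy instance: minimize max((x^2 + 1)/x, (x + 2)/(x + 1)) over a grid of [0.5, 3]
X = [0.5 + 0.005 * i for i in range(501)]
ratios = [(lambda x: x * x + 1, lambda x: x),
          (lambda x: x + 2, lambda x: x + 1)]
mu_star, x_star = dinkelbach_minmax(X, ratios)
```

On this toy instance the generated parameter values decrease and the iteration stops after a few steps with the optimal value 2, attained at x = 1; the dual algorithm described above would instead produce an increasing sequence converging to the same value.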
Lemma 8.21 If Condition 8.9 holds, then the sequence generated by the dual Dinkelbach-type algorithm is strictly increasing and satisfies the stated bound for every iteration.
By Lemma 8.21 we obtain that the sequence generated by the dual Dinkelbach-type algorithm converges to some limit. Using a similar proof as in Lemma 8.8, one can show the following result in case the generated sequence is finite. If strong duality holds, one can therefore also use this algorithm to approximate the common optimal value.
Lemma 8.22 If Condition 8.9 holds and the dual Dinkelbach-type algorithm stops at some iteration, then the returned parameter value equals the optimal objective value of (D) and the returned point is an optimal solution of (D).
In the remainder of this subsection we only consider the case where the dual Dinkelbach-type algorithm generates an infinite sequence. By Lemma 8.21 it follows that the limit of this sequence exists. Imposing some additional condition, it will be shown in Lemma 8.23 that this limit equals the optimal value. To simplify the notation in the following lemmas, we introduce for the sequence generated by the dual Dinkelbach-type algorithm the associated sequences of subgradients. By the observation after Lemma 8.16 these subgradient sets are nonempty. Using a similar proof as in Lemma 8.9, it is possible to verify the next result.
Lemma 8.23 If Condition 8.9 holds and there exists a subsequence satisfying the stated condition, then the limit of the generated sequence equals the desired value. Moreover, for a finite optimal value the stated additional conclusion holds.
By relation (8.49) the required bound holds for every iteration, and hence one can apply Lemma 8.23 in this case. To show the corresponding convergence in general, we can follow the proof of Lemma 8.10 and obtain the following result.
Lemma 8.24 If Condition 8.9 holds, the optimal value is finite and there exists a subsequence satisfying the stated condition, then the generated sequence converges to this optimal value.
By relation (8.55) it follows that in the relevant case the condition of Lemma 8.24 is satisfied. The next result should be contrasted with Lemma 8.11.
Lemma 8.25 If Condition 8.9 holds, the functions are finite-valued and continuous on some open set containing A × B, the set A is compact and there exists a subsequence satisfying the stated condition, then the generated sequence has a converging subsequence and every limit point of this sequence satisfies the stated relation, with the corresponding value finite. Additionally, if there exists a unique point satisfying this relation, then the whole sequence converges to it. Moreover, for A × B compact the generated sequence of pairs has a converging subsequence and every limit point of this sequence is an optimal solution of problem (D). If the optimization problem (D) has a unique optimal solution, then the generated sequences converge to it.
We now want to investigate how fast the generated sequence converges. Before discussing this in detail, we list for a finite optimal value the following inequality for the sequence generated by the dual Dinkelbach-type algorithm. The proof is similar to the proof of the corresponding result listed in Theorem 8.5 for the primal Dinkelbach-type algorithm.
Theorem 8.16 If Condition 8.9 holds and there exists some point satisfying the stated condition, then the listed inequalities hold for every iteration.
If a slightly stronger condition than the one used in Lemma 8.24 holds, then one can show that the sequence generated by the dual Dinkelbach-type algorithm converges Q-linearly. The same result was shown for the dual generalized fractional program in [5] and [8]. The proof of the next result is similar to the proof of the corresponding result for the primal Dinkelbach-type algorithm given in Theorem 8.6.
Theorem 8.17 If Condition 8.9 holds, the optimal value is finite and the generated sequence satisfies the stated condition, then its limit equals the optimal value and the sequence converges Q-linearly.
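These convergence rates are easy to observe numerically. The following sketch (ours; the continuous subproblems are solved by brute force on a fine grid) runs the classical single-ratio Dinkelbach iteration for minimizing (x^2 + 1)/x over [0.5, 3], whose optimal value 2 is attained at x = 1, and records the parameter sequence so that successive error ratios can be inspected.

```python
# Watch the error of the classical single-ratio Dinkelbach iteration
# mu_{k+1} = f(x_k)/g(x_k) shrink; illustrative sketch on a grid.

def dinkelbach_single_ratio(f, g, X, steps=5):
    mus = []
    x = X[0]
    mu = f(x) / g(x)
    for _ in range(steps):
        mus.append(mu)
        x = min(X, key=lambda z: f(z) - mu * g(z))   # parametric subproblem
        if abs(f(x) - mu * g(x)) < 1e-12:            # root reached
            break
        mu = f(x) / g(x)
    mus.append(mu)
    return mus

X = [0.5 + 0.0001 * i for i in range(25001)]         # fine grid on [0.5, 3]
mus = dinkelbach_single_ratio(lambda x: x * x + 1, lambda x: x, X)
errors = [m - 2.0 for m in mus]                      # optimal value is 2
```

The successive error ratios decrease from one iteration to the next, which is the superlinear behaviour known for the single-ratio case; under the weaker assumptions of Theorem 8.17 one only obtains a Q-linear bound.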
Finally we show that, in case the dual (max-min) fractional program (D) has a unique optimal solution and some additional topological conditions hold, the generated sequence converges Q-superlinearly. In case strong duality also holds, we know by the remark after Lemma 8.13 that this unique optimal solution of (D) is also an optimal solution of the primal min-max fractional program (P), assuming its set of optimal solutions is nonempty. By the compactness of A × B in the next result the set of optimal solutions of (P) is nonempty.
Theorem 8.18 If Condition 8.9 holds, the functions are continuous on some open set W containing the compact set A × B and the max-min fractional program (D) has a unique optimal solution, then the generated sequences converge to this solution and the corresponding optimal value, and the sequence of parameter values converges Q-superlinearly.
If strong duality holds, then it is clear that one can also use the dual Dinkelbach-type algorithm to determine the optimal value of (P). This is the main use of this algorithm in the literature (cf. [8, 9]). One could also combine the dual and primal approaches in case strong duality holds and use both simultaneously. An example of such an approach applied to a generalized fractional program, with an easy geometrical interpretation, is discussed by Gugat (cf. [39, 41]). In [39] it is shown under slightly stronger conditions that a Q-superlinear convergence rate always holds. This concludes our discussion of the parametric approach used in min-max fractional programming, which was a major emphasis in this chapter on fractional programming.
References
[1] Almogy, Y. and O. Levin, Parametric analysis of a multistage stochastic shipping problem, in Operational Research 69, J.
Lawrence, ed., Tavistock Publications, London, 1970, 359-370.
[2] Asmussen, S., Applied Probability and Queues, Wiley, New York,
1987.
[3] Aubin, J.-P., Optima and Equilibria (An Introduction to Nonlinear
Analysis), Graduate Texts in Mathematics Vol. 140, Springer Verlag, Berlin, 1993.
[4] Avriel, M., Diewert, W.E., Schaible, S. and I. Zang, Generalized
Concavity, Plenum Press, New York, 1988.
[5] Barros, A., Discrete and Fractional Programming Techniques for
Location Models, Kluwer Academic Publishers, Dordrecht, 1998.
[6] Barros, A.I., Frenk, J.B.G. and J. Gromicho, Fractional location
problems, Location Science 5, 1997, 47-58.
[7] Barros, A.I., Dekker, R., Frenk, J.B.G. and S. van Weeren, Optimizing a general optimal replacement model by fractional programming
techniques, J. Global Optim. 10, 1997, 405-423.
[8] Barros, A.I., Frenk, J.B.G., Schaible, S. and S. Zhang, A new algorithm for generalized fractional programming, Math. Program. 72,
1996, 147-175.
[9] Barros, A.I., Frenk, J.B.G., Schaible, S. and S. Zhang, Using duality to solve generalized fractional programming problems, J. Global
Optim. 8, 1996, 139-170.
[10] Barros, A.I. and J.B.G. Frenk, Generalized fractional programming
and cutting plane algorithms, J. Optim. Theory Appl. 87, 1995,
103-120.
[11] Bazaraa, M.S., Sherali, H.D. and C.M. Shetty, Nonlinear Programming (Theory and Applications), Wiley, New York, 1993.
[12] Bázsa, E., Decision Support for Inventory Systems with Complete Backlogging, PhD Thesis, Tinbergen Institute Research Series No. 282, Econometric Institute, Erasmus University, Rotterdam, 2002.
[13] Bázsa, E., den Iseger, P.W. and J.B.G. Frenk, Modeling of inventory control with regenerative processes, International Journal of Production Economics 71, 2001, 263-276.
[14] Bereanu, B., Decision regions and minimum risk solutions in linear
programming, in: A. Prékopa (ed.), Colloquium on Applications of
Mathematics to Economics, Budapest 1963 (Publication House of
the Hungarian Academy of Sciences, Budapest 1965), 37-42.
[15] Frenk, J.B.G. and S. Zhang, A progressive finite representation approach to minimax optimization, 2004, in preparation.
[16] Frenk, J.B.G. and S. Zhang, Generalized fractional programming with user interaction, 2004, submitted.
[17] Charnes, A., Cooper, W.W., Levin, A.Y and L.M. Seiford, eds.,
Data Envelopment Analysis: Theory, Methodology and Applications,
Kluwer Academic Publishers, Dordrecht, 1994.
[18] Charnes, A. and W.W. Cooper, Programming with linear fractional
functionals, Naval Research Logistics Quarterly 9, 1962, 181-186.
[19] Charnes, A. and W.W. Cooper, Deterministic equivalents for optimizing and satisficing under chance constraints, Operations Research 11, 1963, 18-39.
[20] Chen, D.Z., Daescu, O., Dai, Y., Katoh, N., Wu, X. and J. Xu,
Optimizing the sum of linear fractional functions and applications,
in Proceedings of the Eleventh Annual ACM-SIAM Symposium on
Discrete Algorithms, ACM, New York, 2000, 707-716.
[21] Craven, B.D., Fractional Programming, Heldermann Verlag, Berlin,
1988.
[22] Crouzeix, J.P., Ferland, J.A. and S. Schaible, An algorithm for generalized fractional programs, J. Optim. Theory Appl. 47, 1985, 35-49.
[23] Drezner, Z., Schaible, S. and D. Simchi-Levi, Queueing-location
problems on the plane, Naval Research Logistics 37, 1990, 929-935.
[24] Falk, J.E., Palocsay, S.W., Sacco, W.J., Copes, W.S., and H.R.
Champion, Bounds on a trauma outcome function via optimization,
Oper. Res. 20 (Supp.1), 1992, 86-95.
[25] Falk, J.E., Maximization of signal-to-noise ratio in an optical filter,
SIAM J. Appl. Math. 7, 1969, 582-592.
[26] Fan, K., Minimax theorems, Proc. Nat. Acad. Sc. U.S.A. 39, 1953,
42-47.
[27] Frenk, J.B.G., Kassay, G. and J. Kolumbán, On equivalent results
in minimax theory, Eur. J. Oper. Res. 157, 2004, 46-58.
[28] Frenk, J.B.G., Protassov,V. and G. Kassay, On Borel probability
measures and noncooperative game theory, Optimization, 2004, to
appear.
[29] Frenk, J.B.G. and G. Kassay, Introduction to convex and quasiconvex analysis, Chapter 1 of this volume.
[30] Frenk, J.B.G., Kas, P. and G. Kassay, On linear programming duality and necessary and sufficient conditions in minmax theory,
Econometric Institute Report E.I 2004-14, Econometric Institute,
Erasmus University, 2004, submitted.
[31] Frenk, J.B.G. and G. Kassay, The level set method of Joó and its
use in minimax theory, Econometric Institute Report E.I 2003-03,
Econometric Institute, Erasmus University, 2003, submitted.
[32] Frenk, J.B.G. and G. Kassay, Minimax results and finite dimensional minimization, J. Optim. Theory Appl. 113, 2002, 409-421.
[33] Frenk, J.B.G. and S. Schaible, Fractional programming: introduction
and applications, in Encyclopedia of Optimization, Vol II, E-Integer,
C.A. Floudas and P.M. Pardalos, eds., Kluwer Academic Publishers,
Dordrecht, 2001, 234-240.
[34] Frenk, J.B.G. and M.J. Kleijn, On regenerative processes and inventory control, Pure Appl. Math. 9, 1998, 61-94.
[35] Frenk, J.B.G., Dekker, R. and M.J. Kleijn, On the marginal cost approach in maintenance, J. Optim. Theory Appl. 94, 1998, 771-781.
[36] Freund, R.W. and F. Jarre, Solving the sum-of-ratios problem by an
interior point method, J. Global Optim. 19, 2001, 83-102.
[37] Gilmore, P.C. and R.E. Gomory, A linear programming approach to
the cutting stock problem - Part II, Operations Research 11, 1963,
863-888.
[38] Goedhart, M.H and J. Spronk, Financial planning with fractional
goals, Eur. J. Oper. Res. 82, 1995, 111-124.
[39] Gugat, M., A fast algorithm for a class of generalized fractional
programs, Management Science 42, 1996, 1493-1499.
[40] Gugat, M., Computation of lower bounds for spectra via fractional
semi-infinite programming, Approximation Optimization 8, 1995,
379-391.
[41] Gugat, M., Fractional Semi-infinite Programming, PhD Thesis,
Mathematical Institute, University of Trier, Trier, 1994.
[42] Heinen, E., Grundlagen betriebswirtschaftlicher Entscheidungen,
Das Zielsystem der Unternehmung, 2.Auflage, Gabler, 1971.
[43] Hoskins, J.A. and R. Blom, Optimal allocation of warehouse personnel: a case study using fractional programming, FOCUS (U.K.)
3 (2), 1984, 13-21.
[44] Konno, H., Minimization of the sum of several linear fractional
functions, in Generalized Convexity and Generalized Monotonicity,
N. Hadjisavvas, J.E. Martínez-Legaz and J.P. Penot, eds., Lecture
Notes in Economics and Mathematical Systems Vol. 502, Springer
Verlag, Berlin, 2001, 3-20.
[45] Konno, H. and K. Fukaishi, A branch and bound algorithm for solving
low rank linear multiplicative and fractional programming problems,
J. Global Optim. 18, 2000, 283-299.
[46] Konno, H. and M. Inori, Bond portfolio optimization by bilinear fractional programming, J. Oper. Res. Soc. Japan 32, 1989, 143-158.
[47] Kuno, T., A branch-and-bound algorithm for maximizing the sum
of several linear ratios, J. Global Optim. 22, 2002, 155-174.
[48] Lo, A. and C. MacKinlay, Maximizing predictability in the stock and
bond markets, Macroeconomic Dynamics 1, 1997, 102-134.
[49] Martos, B., Hyperbolic programming, Naval Research Logistics
Quarterly 11, 1964, 135-155; originally published in Math. Institute
of Hungarian Academy of Sciences 5, 1960, 383-406 (Hungarian).
[50] Martos, B., Nonlinear Programming, Theory and Applications,
North Holland, Amsterdam, 1975.
[51] Mathis, F.H. and L.J. Mathis, A nonlinear programming algorithm
for hospital management, SIAM Review 37, 1995, 230-234.
[52] Mészáros, Cs. and T. Rapcsák, On sensitivity analysis for a class of decision systems, Decision Support Systems 16, 1996, 231-240.
[53] Mjelde, K.M., Methods of the Allocation of Limited Resources, Wiley, New York, 1983.
[54] Nocedal, J. and S.J. Wright, Numerical Optimization, Springer Series in Operations Research, Springer Verlag, Berlin, 1999.
[55] Polak, E., Optimization: Algorithms and Consistent Approximations, Applied Mathematical Sciences Vol. 124, Springer Verlag,
Berlin, 1997.
[56] Radzik, T., Fractional combinatorial optimization, in Handbook of
Combinatorial Optimization, Vol. 1, D.Z. Du and P.M. Pardalos,
eds., Kluwer Academic Publishers, Dordrecht, 1998, 429-478.
[57] Rao, M.R., Cluster analysis and mathematical programming, J.
Amer. Statist. Assoc. 66, 1971, 622-626.
[58] Rockafellar, R.T., Convex Analysis, Princeton Mathematical Series
Vol. 28, Princeton University Press, Princeton, New Jersey, 1970.
[59] Schaible, S. and J. Shi, Recent developments in fractional programming: single-ratio and max-min case, in Proceedings of the 3rd International Conference in Nonlinear Analysis and Convex Analysis,
Aug.25-29, 2003, Takahashi, W. and T. Tanaka, eds., Yokohama
Publishing, 2004, to appear.
[60] Schaible, S. and J. Shi, Fractional programming: the sum-of-ratios
case, Optimization Methods and Software 18, 2003, 219-229.
[61] Schaible, S., Fractional programming, in Handbook of Global Optimization, R. Horst and P.M. Pardalos, eds., Kluwer Academic
Publishers, Dordrecht, 1995, 495-608.
[62] Schaible, S. and T. Ibaraki, Fractional programming (invited review), Eur. J. Oper. Res. 12, 1983, 325-338.
[63] Schaible, S. and T. Lowe, A note on a material control problem, IIE
Transactions 15, 1983, 177-179.
[64] Schaible, S., Fractional programming: applications and algorithms,
Eur. J. Oper. Res. 7, 1981, 111-120.
[65] Schaible, S., Analyse und Anwendungen von Quotientenprogrammen, Ein Beitrag zur Planung mit Hilfe der nichtlinearen Programmierung, Mathematical Systems in Economics Vol. 42, Hain Verlag,
Meisenheim, 1978.
[66] Schaible, S., Fractional programming, I, Duality, Management Science 22, 1976, 858-867.
[67] Schaible, S., Fractional programming, II, On Dinkelbach's algorithm, Management Science 22, 1976, 868-873.
[68] Schaible, S., Duality in fractional programming: a unified approach,
Oper. Res. 24, 1976, 452-461.
[69] Sion, M., On general minimax theorems, Pacific J. Math. 8, 1958,
171-176.
[70] Sniedovich, M., Fractional programming revisited, Eur. J. Oper. Res. 33, 1988, 334-341.
[71] Stancu-Minasian, I.M., Fractional Programming: Theory, Methods
and Applications, Kluwer Academic Publishers, Dordrecht, 1997.
[72] On some procedures for solving fractional max-min problems, Mathematica - Revue d'Analyse Numérique et de Théorie de l'Approximation 17, 1988, 73-91.
[73] A parametrical method for max-min nonlinear fractional problems, Seminarul Itinerant de Ecuaţii Funcţionale, Aproximare şi Convexitate, Cluj-Napoca, 1983, 175-184.
[74] Von Neumann, J., Über ein ökonomisches Gleichungssystem und eine Verallgemeinerung des Brouwerschen Fixpunktsatzes, in Ergebnisse eines mathematischen Kolloquiums (8), Menger, K., ed., Leipzig und Wien, 1937, 73-83.
[75] Zhang, S., Stochastic Queue Location Problems, Tinbergen Institute
Research Series No. 14, Econometric Institute, Erasmus University,
Rotterdam, 1991.
II
GENERALIZED MONOTONICITY