
Noname manuscript No.

(will be inserted by the editor)

Uncertain random geometric programming problems

Tapas Mondal* · Akshay Kumar Ojha · Sabyasachi Pani

Received: date / Accepted: date


arXiv:2310.01848v1 [math.OC] 3 Oct 2023

Abstract In this paper, we introduce a deterministic formulation for the geometric programming prob-
lem, wherein the coefficients are represented as independent linear-normal uncertain random variables.
To address the challenges posed by this combination of uncertainty and randomness, we introduce the
concept of an uncertain random variable and present a novel framework known as the linear-normal un-
certain random variable. Our main focus in this work is the development of three distinct transformation
techniques: the optimistic value criteria, pessimistic value criteria, and expected value criteria. These ap-
proaches allow us to convert a linear-normal uncertain random variable into a more manageable random
variable. This transition facilitates the transformation from an uncertain random geometric program-
ming problem to a stochastic geometric programming problem. Furthermore, we provide insights into an
equivalent deterministic representation of the transformed geometric programming problem, enhancing
the clarity and practicality of the optimization process. To demonstrate the effectiveness of our proposed
approach, we present a numerical example.

Keywords Stochastic programming · Uncertainty modelling · Linear-normal uncertain random variable · Geometric programming
Mathematics Subject Classification (2010) 90C15 · 90C30 · 90C46 · 90C47 · 49K45

1 Introduction

Geometric programming (GP) is a highly efficient approach for addressing nonlinear optimization prob-
lems when the objective and constraint functions are in posynomial form. In 1967, Duffin et al. [13] first
introduced the fundamental theories of the GP problem. In most cases, the GP method is employed
for objective and constraint functions, which are in posynomial form. When all parameters in the GP
problem are positive, except for the exponents, it is referred to as the posynomial problem. The GP
approach transforms the original problem into a dual problem, making it more readily solvable.
In modern days, the GP problem is used in engineering design problems like circuit design [12, 21],
inventory modeling [40, 49, 59], production planning [8, 11, 22, 23, 25, 28], risk management [51], chemical
processing [44, 50, 57], information theory [10], and structural design [20].
The classical GP problem is considered under the assumption that parameters are well-defined and
fixed. Many researchers [2, 4, 13, 14, 16, 26, 27, 41, 46–48] have devised effective and functional algorithms
designed for this traditional GP problem, where parameters are exact and specific. In reality, though,

Tapas Mondal* (Corresponding author)


School of Basic Sciences, Indian Institute of Technology Bhubaneswar
Tel.: +918345939869
E-mail: tm19@iitbbs.ac.in
Akshay Kumar Ojha
School of Basic Sciences, Indian Institute of Technology Bhubaneswar
E-mail: akojha@iitbbs.ac.in
Sabyasachi Pani
School of Basic Sciences, Indian Institute of Technology Bhubaneswar
E-mail: spani@iitbbs.ac.in

the values associated with GP parameters often lack certainty and precision. Consequently, various
approaches have been crafted to address the challenge of GP in scenarios where parameter values are
uncertain and less precise. Avriel and Wilde [3], for instance, introduced a new dimension with the
stochastic GP problem, wherein they explored a GP scenario featuring objective and constraint function
coefficients as nonnegative random variables. Dupačová [15] developed a metal cutting model in GP form
and presented a stochastic sensitivity analysis. In 2016, Liu et al. [32] solved the stochastic GP problem
with joint probabilistic constraints. Liu et al. [33] also proposed an approach for the joint rectangular
chance-constrained GP problem whose random parameters are elliptically distributed and mutually
independent. In 2021, Shiraz et al. [52] solved the stochastic GP problem with joint chance constraints
based on copula theory; in that work, the coefficients of each term in the objective and constraint
functions of the GP problem are taken as dependent random variables. Furthermore, the
GP problem has been developed when the parameters are taken as interval types. Liu [35] developed
a technique for determining the possible range of objective values in posynomial GP problems when
coefficients and exponents have varying intervals. Additionally, Mahapatra and Mandal [39] tackled the
GP problem using interval-valued coefficients within a parametric framework.
Over the past few decades, the GP problem has been developed in fuzzy environments. In 1993, Cao [5]
extended the GP problem to an imprecise environment, along with interval and fuzzy coefficients. Later
on, the same author [6] proposed the GP problem with T-fuzzy coefficients. Mandal and Roy [40] solved
the GP problem with L-R fuzzy coefficients. Yang and Cao [60] made significant contributions to the
fuzzy relational GP under monomial. Liu [34] developed the GP problem with fuzzy parameters. Shiraz
et al. [54] used possibility, necessity, and credibility approaches to solve the fuzzy chance-constrained GP
problem. Under the rough-set theory, Shiraz and Fukuyama [55] developed the GP problem.
Recently, Liu [31] pioneered uncertainty theory, a new and developing field of mathematics. To solve
the GP problem, several researchers developed an uncertainty-based framework based on uncertainty the-
ory. Shiraz et al. [53] first considered the GP problem in an uncertain environment. Based on uncertainty
theory, the authors developed the deterministic form of an uncertain GP problem under normal, linear,
and zigzag uncertainty distributions. In 2022, Mondal et al. [43] developed a procedure for solving the
GP problem with uncertainty. The authors derived the equivalent deterministic form of the GP problem
under triangular and trapezoidal uncertainty distributions. In addition, the same authors [42] proposed
triangular and trapezoidal two-fold uncertainty distributions and developed three reduction methods to
reduce two-fold uncertainty distributions into single-fold uncertainty distributions. Chassein and Goerigk [7]
introduced a robust GP problem with polyhedral and interval uncertainty. In 2023, Fontem [17]
demonstrated how to set bounds for a worst-case chance-constrained GP problem with random exponent
parameters.
It is observed that the GP problem is taken into account under uncertainty distributions whenever
the parameters of uncertainty distributions are taken in deterministic form. However, in reality, the
parameters of an uncertainty distribution are not always deterministic. So the obvious question is: how
can we deal with an uncertain variable when its parameters are random variables instead of constant
values? If the parameters of an uncertain variable are random variables instead of constant values, then
it is called an uncertain random variable. In 2013, Liu [37] proposed an uncertain random variable
that is a mixture of uncertainty and randomness. Further, the same author [38] applied the concept
of uncertain random variables to an optimization problem called uncertain random programming. In
2021, Liu et al. [36] developed portfolio optimization for uncertain random returns. In addition, Chen et
al. [9] and Li et al. [29] solved portfolio optimization with uncertain random parameters. Zhou et al. [62]
introduced multi-objective optimization in uncertain random environments. Wang et al. [58] developed
an inventory model with uncertain random demand. Ke et al. [24] solved a decentralized decision-making
production control problem with uncertain random parameters. In 2018, Qin [45] introduced uncertain
goal programming. Gao et al. [19] proposed the concept of an uncertain random variable in a complex
form. Ahmadzade et al. [1] and Gao et al. [18] introduced the convergence concept of distribution for an
uncertain random variable.
In most of the optimization problems with uncertain random parameters, the authors considered some
of the parameters to be uncertain variables and some to be random variables. From the literature survey,
it is observed that there has been a lot of research on the crisp and fuzzy GP problem. In addition,
optimization problems have been developed in which some of the parameters are uncertain variables and
some are random variables. To the best of our knowledge, no previous work on the GP problem with uncertain random
coefficients has been done. As a result, we attempt to investigate the GP problem with uncertain random
coefficients. The following are the main contributions we made to this study:

I. We introduce a novel deterministic formulation for the GP problem, where coefficients are uniquely
represented as independent linear-normal uncertain random variables.
II. To tackle uncertainty and randomness, we introduce the concept of an uncertain random variable
and propose a novel framework called the linear-normal uncertain random variable.
III. We develop three distinct transformation methods, each with its own unique approach: the optimistic
value criteria, pessimistic value criteria, and expected value criteria. These methods are instrumental
in converting complex linear-normal uncertain random variables into more manageable, tractable
random variables.
IV. The transformation methods enable us to convert a linear-normal uncertain random variable into a
more tractable random variable, facilitating the transition from an uncertain random GP problem to
a stochastic GP problem.
V. We provide insights into an equivalent deterministic representation of the transformed GP problem,
adding clarity and practicality to the optimization process.
VI. To demonstrate the effectiveness and real-world applicability of our proposed approach, we present a
comprehensive numerical example, offering tangible evidence of its efficacy in addressing GP problems.
In essence, our research presents a paradigm shift in addressing GP problems, introducing innovative
concepts and methods that enhance understanding, clarity, and practicality in optimization processes.
The rest of the paper is organized as follows: A linear-normal uncertain random variable is proposed
in Section 2. Transformation methods are developed in Section 3. The GP problem with linear-normal
uncertain variable coefficients is considered, and the deterministic formulation is developed in Section 4.
In Section 5, a numerical example is given. Finally, a conclusion on this work is incorporated in Section
6.

2 Linear-normal uncertain random variable

In this section, we introduce the concept of a linear-normal uncertain random variable. To begin, let us
delve into some foundational ideas from both probability theory and uncertainty theory.

2.1 Probability theory

Probability theory is a field of mathematics focused on exploring the characteristics of unpredictable events. Within probability theory, the concept of a random variable plays an important role. To grasp
the essence of a random variable, it is essential to familiarize ourselves with the notions of probability
measure and probability space.
Definition 1 [30] Consider the σ−algebra Lp defined over a nonempty set Ω. To establish the concept
of a random variable, we must initially introduce the probability measure and probability space. The
probability measure, denoted as P r : Lp → [0, 1], adheres to the following axioms:
I. (Normality Axiom) P r{Ω} = 1.
II. (Nonnegativity Axiom) P r{A} ≥ 0 for any event A ∈ Lp .
III. (Additivity Axiom) $\Pr\{\bigcup_{i=1}^{\infty} A_i\} = \sum_{i=1}^{\infty} \Pr\{A_i\}$ for every countable sequence of mutually disjoint events $\{A_i\}_{i=1}^{\infty}$.

Furthermore, the trio denoted as (Ω, Lp , P r) constitutes the probability space. The notion of a random
variable proves highly valuable in the context of probability description. Here, we provide the definition
of a random variable.
Definition 2 [30] A measurable function, denoted as ω : (Ω, Lp , P r) → R, is referred to as a random
variable. In this context, (Ω, Lp , P r) represents the probability space, and R is the set of real numbers.
In simpler terms, for any Borel set B, the set ω ∈ B is considered an event.
On the other hand, one can distinctly define a random variable through its associated probability
distribution, which holds essential information about the random variable. In many instances, having
knowledge of the probability distribution is more than enough, obviating the need to know the random
variable directly. Below is the mathematical definition of probability distribution.
Definition 3 [30] If Φp stands as the probability distribution associated to a random variable denoted
as ω, then Φp is formally defined as
Φp (x) = P r{ω ≤ x}, ∀x ∈ R.

Additionally, the following theorem supports the requirement that is both necessary and sufficient
for a probability distribution function.
Theorem 1 [30] A function Φp : R → [0, 1] qualifies as a probability distribution if and only if it satisfies
the conditions of being monotonically increasing, right-continuous, and having the limits lim Φp (x) = 0
x→−∞
and lim Φp (x) = 1.
x→+∞

Normal random variables play an important role in order to define the linear-normal uncertain random
variable. Here is the definition of the normal random variable.
Definition 4 [30] Let Φp be the probability distribution corresponding to a random variable ω. The
random variable ω is said to be normal if and only if Φp is defined as
  
$$\Phi_p(x) = \frac{1}{2}\left(1 + \operatorname{erf}\left(\frac{x-\mu}{\sigma\sqrt{2}}\right)\right), \quad x \in \mathbb{R},$$
where $\mu$ and $\sigma$ are real numbers with $\sigma > 0$, and $\operatorname{erf}(z) = \frac{2}{\sqrt{\pi}}\int_0^z e^{-t^2}\,dt$. If $\omega$ is a normal random variable with the parameters $\mu$ and $\sigma$, then we write it as $\omega \sim N(\mu, \sigma)$.

Remark 1 If µ = 0 and σ = 1, then the random variable ω ∼ N (0, 1) is known as a standard normal
random variable. Its distribution function is
  
$$\Phi(x) = \frac{1}{2}\left(1 + \operatorname{erf}\left(\frac{x}{\sqrt{2}}\right)\right), \quad x \in \mathbb{R}.$$
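As a quick numerical sanity check (ours, not part of the original text), the distribution in Definition 4 can be evaluated directly through the error function and compared with a standard statistics library; the parameter values below are arbitrary.

```python
import math
from scipy.stats import norm

def normal_cdf(x, mu, sigma):
    # Phi_p(x) = (1/2) * (1 + erf((x - mu) / (sigma * sqrt(2)))), as in Definition 4
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Agreement with scipy's normal distribution function at a few arbitrary points
for x in (-1.0, 0.0, 2.5):
    assert abs(normal_cdf(x, mu=3.0, sigma=1.0) - norm.cdf(x, loc=3.0, scale=1.0)) < 1e-12
```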

2.2 Uncertainty theory

Uncertainty theory is an emerging mathematical discipline that employs uncertain variables to depict the
attributes of uncertainties and represent degrees of belief. In order to introduce the concept of uncertain
variables, it is essential to grasp the notions of uncertain measure and uncertainty space.
Definition 5 [31] Consider the σ−algebra Lu defined over a nonempty set Γ . To introduce the concept
of an uncertain variable, we must initially define the uncertain measure and the concept of an uncertainty
space. The uncertain measure is represented as the set function M : Lu → [0, 1], which satisfies the following axioms:
I. (Normality Axiom) M{Γ } = 1.
II. (Duality Axiom) M{A} + M{Ac } = 1 for any event A ∈ Lu .
III. (Subadditivity Axiom) $\mathcal{M}\{\bigcup_{i=1}^{\infty} A_i\} \le \sum_{i=1}^{\infty} \mathcal{M}\{A_i\}$ for every countable sequence of events $\{A_i\}_{i=1}^{\infty}$.

Furthermore, the trio denoted as (Γ, Lu , M) constitutes the uncertainty space. The concept of the
uncertain variable is highly valuable in the context of describing uncertainties. Below is the formal
definition of the uncertain variable.
Definition 6 [31] A measurable function denoted as ξ : (Γ, Lu , M) → R is referred to as an uncertain
variable. In this context, (Γ, Lu , M) represents the uncertainty space, and R corresponds to the set of
real numbers. Put simply, for any Borel set B, the set ξ ∈ B is considered an event.
Alternatively, an uncertain variable is identified through its associated uncertainty distribution, which
carries uncertain information regarding the uncertain variable. In many instances, having knowledge of
the uncertainty distribution is more than sufficient, obviating the need to know the uncertain variable
directly. Here is the mathematical definition of uncertainty distribution.
Definition 7 [31] If Φu represents the uncertainty distribution corresponding to an uncertain variable
ξ, then Φu is defined as
Φu (x) = M{ξ ≤ x}, ∀x ∈ R.
In addition, the requisite and complete condition for an uncertainty distribution function is provided
in the following theorem.
Theorem 2 [31] A function Φu : R → [0, 1] qualifies as an uncertainty distribution if and only if it is a monotonically increasing function, other than the degenerate cases Φu (x) ≡ 0 and Φu (x) ≡ 1.

Describing uncertain information geometrically in a two-dimensional space using the uncertainty distribution is straightforward. Real-world decision-making systems often employ various uncertain variables, such as normal, linear, zigzag, log-normal, triangular, and trapezoidal. Linear uncertain variables
are particularly significant in defining linear-normal uncertain random variables. Here is the definition
of a linear uncertain variable.
Definition 8 [31] Let Φu represent the uncertainty distribution corresponding to an uncertain variable ξ. The uncertain variable ξ is classified as linear if and only if Φu is of the following form:
$$\Phi_u(x) = \begin{cases} 0, & x \le a; \\ \dfrac{x-a}{b-a}, & a \le x \le b; \\ 1, & x \ge b, \end{cases}$$
where a and b are real numbers with a < b. When ξ is a linear uncertain variable with parameters a and b, we denote it as ξ ∼ L(a, b).
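For concreteness, here is a minimal sketch (ours) of the linear uncertainty distribution of Definition 8; the values a = 2, b = 3 are arbitrary.

```python
def linear_uncertainty_cdf(x, a, b):
    """Uncertainty distribution of a linear uncertain variable xi ~ L(a, b), a < b (Definition 8)."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    return (x - a) / (b - a)

print(linear_uncertainty_cdf(2.5, 2, 3))  # 0.5, i.e. M{xi <= 2.5} for xi ~ L(2, 3)
```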

2.3 Uncertain random variable

An uncertain random variable is an uncertain variable whose parameters are random variables instead
of constant values. For clarity, in a linear uncertain variable ξ ∼ L(a, b), the parameters a
and b are taken as constants. In this scenario, the obvious question is: how can we deal with the linear
uncertain variable ξ ∼ L(a, b) if its parameters a and b are random variables instead of constant values?
In order to describe this kind of variable, we need to define an uncertain random variable.
Definition 9 An uncertain random variable ξ̃ is a function from the uncertainty space (Γ, Lu , M) to the set of random variables such that Pr{ξ̃(ω) ∈ B} is a measurable function of ω for any Borel set B of R.

Remark 2 In broad terms, an uncertain random variable can be described as a measurable function that
operates within the uncertainty space and maps to the set of random variables. In simpler terms, it is an
element characterized by uncertainty, which can take on values that belong to the category of random
variables.

In this research, we propose a linear-normal uncertain random variable. The proposed definition is
given below.
Definition 10 A linear-normal uncertain random variable is a linear uncertain variable whose parame-
ters are normal random variables. More precisely, the uncertain random variable ξ˜ ∼ L(A, B) is said to
be a linear-normal uncertain random variable if and only if the parameters A and B are normal random
variables.

Remark 3 In Definition 10, the parameters A and B of the linear-normal uncertain random variable
ξ˜ ∼ L(A, B) are taken as independent normal random variables.

3 Transformation of uncertain random variable to random variable

In this section, we demonstrate the process of converting an uncertain random variable into a conventional
random variable. To accomplish this, we employ three critical value criteria: optimistic, pessimistic, and
expected value criteria. We begin by introducing the idea of critical values for uncertain variables, which
Liu [31] first introduced in 2015. Here is the definition of these critical values.
Definition 11 [31,61] If ξ is an uncertain variable, then the optimistic, pessimistic, and expected values
of ξ are defined as follows:
I. ξsup (α) = sup{r|M{ξ ≥ r} ≥ α}, α ∈ (0, 1);
II. ξinf (α) = inf{r|M{ξ ≤ r} ≥ α}, α ∈ (0, 1);
III. $\xi_{\exp} = \int_0^{\infty} \mathcal{M}\{\xi \ge r\}\,dr - \int_{-\infty}^{0} \mathcal{M}\{\xi \le r\}\,dr$.

Alternatively, Definition 11 can be expressed using uncertainty distribution. The subsequent theorem,
commonly referred to as the measure inversion theorem, proves to be highly valuable in this context.

Theorem 3 [31, 61] Consider an uncertain variable ξ with an associated uncertainty distribution Φu .
For any real number r, the following relationships hold: M{ξ ≤ r} = Φu (r) and M{ξ ≥ r} = 1 − Φu (r).

Based on Theorem 3, Definition 11 can be redefined as

I. ξsup (α) = sup{r|1 − Φu (r) ≥ α}, α ∈ (0, 1);


II. ξinf (α) = inf{r|Φu (r) ≥ α}, α ∈ (0, 1);
III. $\xi_{\exp} = \int_0^{\infty} (1 - \Phi_u(r))\,dr - \int_{-\infty}^{0} \Phi_u(r)\,dr$.

Remark 4 Integrating by parts, we have $\int_0^{\infty} (1 - \Phi_u(r))\,dr - \int_{-\infty}^{0} \Phi_u(r)\,dr = \int_{-\infty}^{\infty} r\,\Phi_u'(r)\,dr$. Therefore, the expected value of an uncertain variable ξ can be determined by the formula $\xi_{\exp} = \int_{-\infty}^{\infty} r\,\Phi_u'(r)\,dr$.

Based on the above operations, the following useful theorem is developed.

Theorem 4 [31, 61] Let ξ ∼ L(a, b) be a linear uncertain variable. The optimistic, pessimistic, and
expected values of ξ are as follows:

I. ξsup (α) = αa + (1 − α)b, α ∈ (0, 1);


II. ξinf (α) = (1 − α)a + αb, α ∈ (0, 1);
III. $\xi_{\exp} = \frac{a+b}{2}$.
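The critical values of Theorem 4 are straightforward to compute; the following small sketch (ours, with arbitrary inputs) illustrates them.

```python
def linear_critical_values(a, b, alpha):
    """Optimistic, pessimistic and expected values of xi ~ L(a, b) from Theorem 4."""
    xi_sup = alpha * a + (1 - alpha) * b   # optimistic value
    xi_inf = (1 - alpha) * a + alpha * b   # pessimistic value
    xi_exp = (a + b) / 2                   # expected value
    return xi_sup, xi_inf, xi_exp

print(linear_critical_values(2, 3, 0.9))   # (2.1, 2.9, 2.5)
```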

It is observed that if the parameters of an uncertain variable are random variables instead of constant
values, then the critical values of the uncertain variable are random variables instead of constant values.
Based on this concept, we transform the uncertain random variable into a random variable using the
critical values of uncertainty. In particular, if ξ˜ ∼ L(A, B) is a linear-normal uncertain random variable,
then based on Theorem 4, the transformed random variables using optimistic, pessimistic, and expected
value criteria are as follows:

I. ξ˜sup (α) = αA + (1 − α)B, α ∈ (0, 1);


II. ξ˜inf (α) = (1 − α)A + αB, α ∈ (0, 1);
III. $\tilde{\xi}_{\exp} = \frac{A+B}{2}$.

The main aim is to characterize the random variables ξ˜sup (α), ξ˜inf (α), and ξ˜exp . The characteristic
function of a random variable is an important concept and plays a powerful role in this regard. Here is
the definition of the characteristic function of a random variable.

Definition 12 [30] If ω is a random variable with probability distribution $\Phi_p$, then its characteristic function is defined as $\chi_\omega(t) = E[e^{it\omega}] = \int_{-\infty}^{\infty} e^{itr}\,\Phi_p'(r)\,dr$, $t \in \mathbb{R}$, $i = \sqrt{-1}$.

Using the characteristic function, we characterize the transformed random variables ξ˜sup (α), ξ˜inf (α),
and ξ˜exp . The following theorem is helpful in this regard.

Theorem 5 If ξ̃ ∼ L(A, B) is a linear-normal uncertain random variable where A ∼ N(µA, σA) and B ∼ N(µB, σB) are two independent normal random variables, then the following results hold:
I. $\tilde{\xi}_{\sup}(\alpha) \sim N\left(\alpha\mu_A + (1-\alpha)\mu_B,\ \sqrt{\alpha^2\sigma_A^2 + (1-\alpha)^2\sigma_B^2}\right)$, $\alpha \in (0, 1)$;
II. $\tilde{\xi}_{\inf}(\alpha) \sim N\left((1-\alpha)\mu_A + \alpha\mu_B,\ \sqrt{(1-\alpha)^2\sigma_A^2 + \alpha^2\sigma_B^2}\right)$, $\alpha \in (0, 1)$;
III. $\tilde{\xi}_{\exp} \sim N\left(\frac{\mu_A+\mu_B}{2},\ \frac{\sqrt{\sigma_A^2+\sigma_B^2}}{2}\right)$.

Proof: We prove only (I). The proofs of (II) and (III) are similar to the proof of (I).
We have $\tilde{\xi}_{\sup}(\alpha) = \alpha A + (1-\alpha)B$, where A ∼ N(µA, σA) and B ∼ N(µB, σB) are two independent normal random variables. So, the characteristic functions of A and B are $\chi_A(t) = E[e^{itA}]$ and $\chi_B(t) = E[e^{itB}]$. Using Definition 12, we get $\chi_A(t) = e^{i\mu_A t - \frac{1}{2}\sigma_A^2 t^2}$ and $\chi_B(t) = e^{i\mu_B t - \frac{1}{2}\sigma_B^2 t^2}$. Now, the characteristic function of the random variable $\tilde{\xi}_{\sup}(\alpha)$ is calculated as
$$\begin{aligned}
\chi_{\tilde{\xi}_{\sup}}(t; \alpha) &= E\left[e^{it\tilde{\xi}_{\sup}(\alpha)}\right] \\
&= E\left[e^{it(\alpha A + (1-\alpha)B)}\right] \\
&= E\left[e^{it\alpha A} \cdot e^{it(1-\alpha)B}\right] \\
&= E\left[e^{it\alpha A}\right] E\left[e^{it(1-\alpha)B}\right] \\
&= \chi_A(\alpha t) \cdot \chi_B((1-\alpha)t) \\
&= e^{i\mu_A(\alpha t) - \frac{1}{2}\sigma_A^2(\alpha t)^2} \cdot e^{i\mu_B((1-\alpha)t) - \frac{1}{2}\sigma_B^2((1-\alpha)t)^2} \\
&= e^{i(\alpha\mu_A + (1-\alpha)\mu_B)t - \frac{1}{2}(\alpha^2\sigma_A^2 + (1-\alpha)^2\sigma_B^2)t^2}.
\end{aligned}$$
Therefore, the characteristic function of $\tilde{\xi}_{\sup}(\alpha)$ is $\chi_{\tilde{\xi}_{\sup}}(t; \alpha) = e^{i(\alpha\mu_A + (1-\alpha)\mu_B)t - \frac{1}{2}(\alpha^2\sigma_A^2 + (1-\alpha)^2\sigma_B^2)t^2}$. This shows that $\tilde{\xi}_{\sup}(\alpha) \sim N\left(\alpha\mu_A + (1-\alpha)\mu_B,\ \sqrt{\alpha^2\sigma_A^2 + (1-\alpha)^2\sigma_B^2}\right)$, which completes the proof. □
It is observed that the parameters of the transformed random variables $\tilde{\xi}_{\sup}(\alpha)$, $\tilde{\xi}_{\inf}(\alpha)$, and $\tilde{\xi}_{\exp}$ are known. So, we can visualize their corresponding probability distribution functions. The probability distributions of $\tilde{\xi}_{\sup}(\alpha)$, $\tilde{\xi}_{\inf}(\alpha)$, and $\tilde{\xi}_{\exp}$ are as follows:
I. $\Phi_{\tilde{\xi}_{\sup}}(x; \alpha) = \frac{1}{2}\left(1 + \operatorname{erf}\left(\frac{x - \alpha\mu_A - (1-\alpha)\mu_B}{\sqrt{2\alpha^2\sigma_A^2 + 2(1-\alpha)^2\sigma_B^2}}\right)\right)$, $x \in \mathbb{R}$, $\alpha \in (0, 1)$;
II. $\Phi_{\tilde{\xi}_{\inf}}(x; \alpha) = \frac{1}{2}\left(1 + \operatorname{erf}\left(\frac{x - (1-\alpha)\mu_A - \alpha\mu_B}{\sqrt{2(1-\alpha)^2\sigma_A^2 + 2\alpha^2\sigma_B^2}}\right)\right)$, $x \in \mathbb{R}$, $\alpha \in (0, 1)$;
III. $\Phi_{\tilde{\xi}_{\exp}}(x) = \frac{1}{2}\left(1 + \operatorname{erf}\left(\frac{2x - \mu_A - \mu_B}{\sqrt{2\sigma_A^2 + 2\sigma_B^2}}\right)\right)$, $x \in \mathbb{R}$.

To understand the efficiency of the transformation methods, we provide an example of a linear-normal uncertain random variable below.
Example 1 If ξ̃ ∼ LN(A, B) is a linear-normal uncertain random variable where A ∼ N(3, 1) and B ∼ N(2, 1) are two independent normal random variables, then
I. the transformed random variable via α optimistic value criteria is $\tilde{\xi}_{\sup}(\alpha) \sim N\left(\alpha + 2,\ \sqrt{\alpha^2 + (1-\alpha)^2}\right)$ and its corresponding probability distribution is
$\Phi_{\tilde{\xi}_{\sup}}(x; \alpha) = \frac{1}{2}\left(1 + \operatorname{erf}\left(\frac{x - \alpha - 2}{\sqrt{2\alpha^2 + 2(1-\alpha)^2}}\right)\right)$, $x \in \mathbb{R}$, $\alpha \in (0, 1)$;
II. the transformed random variable via α pessimistic value criteria is $\tilde{\xi}_{\inf}(\alpha) \sim N\left(3 - \alpha,\ \sqrt{\alpha^2 + (1-\alpha)^2}\right)$ and its corresponding probability distribution is
$\Phi_{\tilde{\xi}_{\inf}}(x; \alpha) = \frac{1}{2}\left(1 + \operatorname{erf}\left(\frac{x + \alpha - 3}{\sqrt{2\alpha^2 + 2(1-\alpha)^2}}\right)\right)$, $x \in \mathbb{R}$, $\alpha \in (0, 1)$;
III. the transformed random variable via expected value criteria is $\tilde{\xi}_{\exp} \sim N\left(\frac{5}{2},\ \frac{\sqrt{2}}{2}\right)$ and its corresponding probability distribution is
$\Phi_{\tilde{\xi}_{\exp}}(x) = \frac{1}{2}\left(1 + \operatorname{erf}\left(\frac{2x - 5}{2}\right)\right)$, $x \in \mathbb{R}$.
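A Monte Carlo experiment (our illustration, not from the paper) makes Example 1 concrete: sampling A and B and forming αA + (1 − α)B reproduces the normal parameters stated above; the sample size and the value α = 0.3 are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, n = 0.3, 200_000

# A ~ N(3, 1) and B ~ N(2, 1) as in Example 1 (second argument = standard deviation)
A = rng.normal(3.0, 1.0, n)
B = rng.normal(2.0, 1.0, n)

# Optimistic value transform: xi_sup(alpha) = alpha*A + (1 - alpha)*B
xi_sup = alpha * A + (1 - alpha) * B

print(xi_sup.mean(), alpha + 2)                          # empirical vs. theoretical mean
print(xi_sup.std(), np.sqrt(alpha**2 + (1 - alpha)**2))  # empirical vs. theoretical std
```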
Utilizing the optimistic, pessimistic, and expected value criteria, we depict the transformed probability
distribution of the linear-normal uncertain random variable ξ˜ ∼ LN (A, B) in Figure 1, Figure 2, and
Figure 3, correspondingly, where A ∼ N (3, 1) and B ∼ N (2, 1). Figure 1 and Figure 2 are presented as
solid illustrations because they essentially represent bivariate functions involving both x and α. To offer a
clear visualization for different values of α ∈ (0, 1), we present the transformed probability distributions
using the optimistic and pessimistic value criteria in Figure 4 and Figure 5 as two-dimensional plots.
Remark 5 The optimistic value criteria transformation method for a fixed α ∈ (0, 1) is equivalent to the
pessimistic value criteria transformation method for (1 − α). In particular, at α = 0.5, the optimistic
value criteria and pessimistic value criteria transformation methods are equivalent to the expected value
criteria transformation method.

Figure 1: Transformed probability distribution of a linear-normal uncertain random variable via α opti-
mistic value criteria.

Figure 2: Transformed probability distribution of a linear-normal uncertain random variable via α pes-
simistic value criteria.

4 Geometric programming problem with linear-normal uncertain random variable coefficients

A conventional GP problem involves addressing a minimization problem where both the objective func-
tion and the constraints exhibit posynomial characteristics. Problems within the GP framework, in which
all coefficients and decision variables are positive except for the exponents, are categorized as posynomial
problems. A deterministic GP problem is outlined as
$$\begin{aligned}
\min \quad & f_0(x) = \sum_{i=1}^{N_0} \beta_{i0} \prod_{j=1}^{n} x_j^{\alpha_{0ij}} \\
\text{s.t.} \quad & f_k(x) = \sum_{i=1}^{N_k} \beta_{ik} \prod_{j=1}^{n} x_j^{\alpha_{kij}} \le 1, \quad k = 1, 2, \ldots, K,
\end{aligned} \tag{1}$$
where $\beta_{ik} > 0$, $x_j > 0$, $\alpha_{kij} \in \mathbb{R}$, $\forall i, j, k$. In this context, we have $N_0$ representing the total number of terms in the objective function, and $N_k$ representing the total number of terms in the $k$th constraint of Problem 1. To formulate the dual for Problem 1, let us introduce the variable $N$ as the total number of terms present in Problem 1, given by $N = \sum_{k=0}^{K} N_k$. Additionally, we use $\delta_{ik}$ as the dual variables, and $\lambda_k$

Figure 3: Transformed probability distribution of a linear-normal uncertain random variable via expected
value criteria.

Figure 4: Transformed probability distributions of a linear-normal uncertain random variable via opti-
mistic value criteria for different values of α.

as the sum of the dual variables present in the $k$th constraint, defined as $\lambda_k = \sum_{i=1}^{N_k} \delta_{ik}$, where $k = 0, 1, 2, \ldots, K$.
Consequently, the dual problem for Problem 1 can be expressed as
$$\begin{aligned}
\max \quad & V(\delta) = \prod_{i=1}^{N} \left(\frac{\beta_{ik}}{\delta_{ik}}\right)^{\delta_{ik}} \lambda_k^{\lambda_k} \\
\text{s.t.} \quad & \sum_{i=1}^{N_0} \delta_{i0} = \lambda_0 = 1, \quad \text{(Normality condition)} \\
& \sum_{i=1}^{N} \delta_{ik} \alpha_{kij} = 0, \quad j = 1, 2, \ldots, n. \quad \text{(Orthogonality conditions)}
\end{aligned} \tag{2}$$

In a GP problem, the primal problem is converted into its dual problem, which is easier to solve
than the primal problem. The relationship between primal and dual problems is useful to find the
primal decision variables. The relationship between primal and dual GP problems due to strong duality

Figure 5: Transformed probability distributions of a linear-normal uncertain random variable via pes-
simistic value criteria for different values of α.

theorem [13, 14] is as follows:
$$\begin{aligned}
\sum_{j=1}^{n} \alpha_{0ij} \ln(x_j) &= \ln\left(\frac{\delta_{i0}\, f_0(x)}{\beta_{i0}}\right), \quad i = 1, 2, \ldots, N_0, \\
\sum_{j=1}^{n} \alpha_{kij} \ln(x_j) &= \ln\left(\frac{\delta_{ik}}{\lambda_k \beta_{ik}}\right), \quad i = 1, 2, \ldots, N_k, \ k = 1, 2, \ldots, K.
\end{aligned} \tag{3}$$

Remark 6 In a GP problem, the total number of terms minus the total number of decision variables
minus one is known as the degree of difficulty. In Problem 1 and its dual given in Problem 2, the degree
of difficulty is D = N − n − 1. Depending on D, we have the following two cases.
I. If D ≥ 0, then Problem 2 is feasible, because in this case the number of equations is less than or equal to the number of dual variables, which guarantees the existence of a solution to Problem 2.
II. On the other hand, if D < 0, then Problem 2 is inconsistent, because in this case the number of equations is greater than the number of dual variables, so there is no analytical solution to the dual problem. However, Sinha et al. [56] found an approximate solution to the dual problem with a negative degree of difficulty using the least squares method and the linear programming method.
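The objective-term relations in (3) form a linear system in ln(xj). The following minimal sketch (ours; the helper and its inputs are hypothetical, assuming a dual solution δ* and the optimal value f0(x*) = V(δ*) are already available) recovers the primal variables by least squares.

```python
import numpy as np

def recover_primal_from_dual(alpha0, delta0, beta0, f0_star):
    """Recover primal variables x from the objective-term relations in (3) by least squares.

    alpha0  : (N0, n) exponent matrix of the objective terms
    delta0  : (N0,)   optimal dual weights of the objective terms
    beta0   : (N0,)   objective coefficients
    f0_star : optimal objective value, equal to V(delta*) by strong duality
    All inputs are assumed to come from an already solved dual problem.
    """
    rhs = np.log(delta0 * f0_star / beta0)               # ln(delta_i0 * f0(x) / beta_i0)
    log_x, *_ = np.linalg.lstsq(alpha0, rhs, rcond=None)
    return np.exp(log_x)
```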

In this section, our main aim is to develop an equivalent deterministic form of a GP problem in which
the coefficients of the objective and constraint functions are linear-normal uncertain random variables.
First, we consider the uncertain random GP problem as
$$\begin{aligned}
\min \quad & \tilde{f}_0(x) = \sum_{i=1}^{N_0} \tilde{\beta}_{i0} \prod_{j=1}^{n} x_j^{\alpha_{0ij}} \\
\text{s.t.} \quad & \tilde{f}_k(x) = \sum_{i=1}^{N_k} \tilde{\beta}_{ik} \prod_{j=1}^{n} x_j^{\alpha_{kij}} \le 1, \quad k = 1, 2, \ldots, K,
\end{aligned} \tag{4}$$

where the coefficients β̃ik ∼ LN (Aik , Bik ) are independent linear-normal uncertain random variables,
Aik ∼ N (µAik , σAik ) and Bik ∼ N (µBik , σBik ) are independent normal random variables, xj > 0,
αkij ∈ R, ∀i, j, k. To develop the equivalent deterministic form of Problem 4, we first transform uncertain
random variable coefficients β̃ik ∼ LN (Aik , Bik ) into random variable coefficients based on Theorem
5. For instance, Problem 4 is converted into stochastic GP problems via optimistic, pessimistic, and
expected value criteria. Here are the transformations of Problem 4 into stochastic GP problems.

I. The transformation of Problem 4 into the stochastic GP problem via α optimistic value criteria is
$$\begin{aligned}
\min \quad & \sum_{i=1}^{N_0} \tilde{\beta}_{i0\sup}(\alpha) \prod_{j=1}^{n} x_j^{\alpha_{0ij}} \\
\text{s.t.} \quad & \sum_{i=1}^{N_k} \tilde{\beta}_{ik\sup}(\alpha) \prod_{j=1}^{n} x_j^{\alpha_{kij}} \le 1, \quad k = 1, 2, \ldots, K,
\end{aligned} \tag{5}$$
where $\tilde{\beta}_{ik\sup}(\alpha) \sim N\left(\alpha\mu_{A_{ik}} + (1-\alpha)\mu_{B_{ik}},\ \sqrt{\alpha^2\sigma_{A_{ik}}^2 + (1-\alpha)^2\sigma_{B_{ik}}^2}\right)$ are independent normal random variables, $\alpha \in (0, 1)$.
II. The transformation of Problem 4 into the stochastic GP problem via α pessimistic value criteria is
$$\begin{aligned}
\min \quad & \sum_{i=1}^{N_0} \tilde{\beta}_{i0\inf}(\alpha) \prod_{j=1}^{n} x_j^{\alpha_{0ij}} \\
\text{s.t.} \quad & \sum_{i=1}^{N_k} \tilde{\beta}_{ik\inf}(\alpha) \prod_{j=1}^{n} x_j^{\alpha_{kij}} \le 1, \quad k = 1, 2, \ldots, K,
\end{aligned} \tag{6}$$
where $\tilde{\beta}_{ik\inf}(\alpha) \sim N\left((1-\alpha)\mu_{A_{ik}} + \alpha\mu_{B_{ik}},\ \sqrt{(1-\alpha)^2\sigma_{A_{ik}}^2 + \alpha^2\sigma_{B_{ik}}^2}\right)$ are independent normal random variables, $\alpha \in (0, 1)$.
III. The transformation of Problem 4 into the stochastic GP problem via expected value criteria is
$$\begin{aligned}
\min \quad & \sum_{i=1}^{N_0} \tilde{\beta}_{i0\exp} \prod_{j=1}^{n} x_j^{\alpha_{0ij}} \\
\text{s.t.} \quad & \sum_{i=1}^{N_k} \tilde{\beta}_{ik\exp} \prod_{j=1}^{n} x_j^{\alpha_{kij}} \le 1, \quad k = 1, 2, \ldots, K,
\end{aligned} \tag{7}$$
where $\tilde{\beta}_{ik\exp} \sim N\left(\frac{\mu_{A_{ik}} + \mu_{B_{ik}}}{2},\ \frac{\sqrt{\sigma_{A_{ik}}^2 + \sigma_{B_{ik}}^2}}{2}\right)$ are independent normal random variables.

We handle the random objective and constraints by means of probabilistic constraints with a tolerance level ϵ ∈ (0, 0.5], introducing an auxiliary variable x0 that bounds the objective value. Then Problem 5 becomes
$$\begin{aligned}
\min \quad & x_0 \\
\text{s.t.} \quad & \Pr\left\{\sum_{i=1}^{N_k} \tilde{\beta}_{ik\sup}(\alpha) \prod_{j=1}^{n} x_j^{\alpha_{kij}} \le 1\right\} \ge 1 - \epsilon, \quad k = 1, 2, \ldots, K, \\
& \Pr\left\{\sum_{i=1}^{N_0} \tilde{\beta}_{i0\sup}(\alpha) \prod_{j=1}^{n} x_j^{\alpha_{0ij}} x_0^{-1} \le 1\right\} \ge 1 - \epsilon,
\end{aligned} \tag{8}$$
Problem 6 becomes
$$\begin{aligned}
\min \quad & x_0 \\
\text{s.t.} \quad & \Pr\left\{\sum_{i=1}^{N_k} \tilde{\beta}_{ik\inf}(\alpha) \prod_{j=1}^{n} x_j^{\alpha_{kij}} \le 1\right\} \ge 1 - \epsilon, \quad k = 1, 2, \ldots, K, \\
& \Pr\left\{\sum_{i=1}^{N_0} \tilde{\beta}_{i0\inf}(\alpha) \prod_{j=1}^{n} x_j^{\alpha_{0ij}} x_0^{-1} \le 1\right\} \ge 1 - \epsilon,
\end{aligned} \tag{9}$$
and Problem 7 becomes
$$\begin{aligned}
\min \quad & x_0 \\
\text{s.t.} \quad & \Pr\left\{\sum_{i=1}^{N_k} \tilde{\beta}_{ik\exp} \prod_{j=1}^{n} x_j^{\alpha_{kij}} \le 1\right\} \ge 1 - \epsilon, \quad k = 1, 2, \ldots, K, \\
& \Pr\left\{\sum_{i=1}^{N_0} \tilde{\beta}_{i0\exp} \prod_{j=1}^{n} x_j^{\alpha_{0ij}} x_0^{-1} \le 1\right\} \ge 1 - \epsilon.
\end{aligned} \tag{10}$$

Now, the main focus is to find an equivalent deterministic form of a probabilistic constraint. The following
theorem is very useful in this regard.

Theorem 6 If ξl ∼ N (µl , σl ) with l = 1, 2, . . . , L are independent normal random variables, and Ul with
l = 1, 2, . . . , L are nonnegative variables, then for every tolerance level ϵ ∈ (0, 0.5],
$$\Pr\left\{\sum_{l=1}^{L} \xi_l U_l \le 1\right\} \ge 1 - \epsilon$$
is equivalent to
$$\sum_{l=1}^{L} \mu_l U_l + \Phi^{-1}(1-\epsilon)\sqrt{\sum_{l=1}^{L} \sigma_l^2 U_l^2} \le 1,$$
where $\Phi^{-1}(1-\epsilon) = \sqrt{2}\,\operatorname{erf}^{-1}(1-2\epsilon)$ is the inverse of the standard normal distribution function.

Proof: Let $Z = \sum_{l=1}^{L} \xi_l U_l - 1$. Then we have $\mu_Z = E[Z] = E\left[\sum_{l=1}^{L} \xi_l U_l - 1\right] = \sum_{l=1}^{L} E[\xi_l] U_l - 1 = \sum_{l=1}^{L} \mu_l U_l - 1$ and $\sigma_Z^2 = \mathrm{var}(Z) = \mathrm{var}\left(\sum_{l=1}^{L} \xi_l U_l - 1\right) = \sum_{l=1}^{L} \mathrm{var}[\xi_l] U_l^2 = \sum_{l=1}^{L} \sigma_l^2 U_l^2$. Therefore, we have
$$\begin{aligned}
\Pr\left\{\sum_{l=1}^{L} \xi_l U_l \le 1\right\} \ge 1 - \epsilon
&\Leftrightarrow \Pr(Z \le 0) \ge 1 - \epsilon \\
&\Leftrightarrow \Pr\left(\frac{Z - \mu_Z}{\sigma_Z} \le -\frac{\mu_Z}{\sigma_Z}\right) \ge 1 - \epsilon \\
&\Leftrightarrow \Phi\left(-\frac{\mu_Z}{\sigma_Z}\right) \ge 1 - \epsilon \qquad \left(\because \frac{Z - \mu_Z}{\sigma_Z} \sim N(0, 1)\right) \\
&\Leftrightarrow -\frac{\mu_Z}{\sigma_Z} \ge \Phi^{-1}(1 - \epsilon) \\
&\Leftrightarrow \mu_Z + \sigma_Z \Phi^{-1}(1 - \epsilon) \le 0 \\
&\Leftrightarrow \sum_{l=1}^{L} \mu_l U_l - 1 + \Phi^{-1}(1 - \epsilon)\sqrt{\sum_{l=1}^{L} \sigma_l^2 U_l^2} \le 0 \\
&\Leftrightarrow \sum_{l=1}^{L} \mu_l U_l + \Phi^{-1}(1 - \epsilon)\sqrt{\sum_{l=1}^{L} \sigma_l^2 U_l^2} \le 1.
\end{aligned}$$
This completes the proof. □
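A quick Monte Carlo check of Theorem 6 (ours, with arbitrary data): when the variables U_l are scaled so that the deterministic left-hand side equals 1, the chance constraint holds with probability close to 1 − ϵ.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
mu = np.array([0.3, 0.2])        # arbitrary means mu_l
sigma = np.array([0.10, 0.08])   # arbitrary standard deviations sigma_l
U = np.array([1.0, 1.5])         # arbitrary nonnegative U_l
eps = 0.05

# Deterministic left-hand side of Theorem 6 for these U
lhs = mu @ U + norm.ppf(1 - eps) * np.sqrt((sigma**2) @ (U**2))

# Rescale U so the deterministic constraint is active (left-hand side = 1);
# the probabilistic constraint should then hold with probability about 1 - eps.
U_active = U / lhs
xi = rng.normal(mu, sigma, size=(500_000, 2))
print(np.mean(xi @ U_active <= 1.0))   # approximately 0.95
```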


Based on Theorem 6, we convert each probabilistic constraint into an equivalent deterministic constraint. After simplification, the deterministic forms of Problem 8, Problem 9, and Problem 10 are derived as Problem 11, Problem 12, and Problem 13, respectively. Problem 11 is
$$\begin{aligned}
\min \quad & \sum_{i=1}^{N_0} \left(\alpha\mu_{A_{i0}} + (1-\alpha)\mu_{B_{i0}}\right) \prod_{j=1}^{n} x_j^{\alpha_{0ij}} + \Phi^{-1}(1-\epsilon)\sqrt{\sum_{i=1}^{N_0} \left(\alpha^2\sigma_{A_{i0}}^2 + (1-\alpha)^2\sigma_{B_{i0}}^2\right) \prod_{j=1}^{n} x_j^{2\alpha_{0ij}}} \\
\text{s.t.} \quad & \sum_{i=1}^{N_k} \left(\alpha\mu_{A_{ik}} + (1-\alpha)\mu_{B_{ik}}\right) \prod_{j=1}^{n} x_j^{\alpha_{kij}} + \Phi^{-1}(1-\epsilon)\sqrt{\sum_{i=1}^{N_k} \left(\alpha^2\sigma_{A_{ik}}^2 + (1-\alpha)^2\sigma_{B_{ik}}^2\right) \prod_{j=1}^{n} x_j^{2\alpha_{kij}}} \le 1,
\end{aligned} \tag{11}$$
where α ∈ (0, 1), ϵ ∈ (0, 0.5], k = 1, 2, . . . , K.

Problem 12 is
$$\begin{aligned}
\min \quad & \sum_{i=1}^{N_0} \left((1-\alpha)\mu_{A_{i0}} + \alpha\mu_{B_{i0}}\right) \prod_{j=1}^{n} x_j^{\alpha_{0ij}} + \Phi^{-1}(1-\epsilon)\sqrt{\sum_{i=1}^{N_0} \left((1-\alpha)^2\sigma_{A_{i0}}^2 + \alpha^2\sigma_{B_{i0}}^2\right) \prod_{j=1}^{n} x_j^{2\alpha_{0ij}}} \\
\text{s.t.} \quad & \sum_{i=1}^{N_k} \left((1-\alpha)\mu_{A_{ik}} + \alpha\mu_{B_{ik}}\right) \prod_{j=1}^{n} x_j^{\alpha_{kij}} + \Phi^{-1}(1-\epsilon)\sqrt{\sum_{i=1}^{N_k} \left((1-\alpha)^2\sigma_{A_{ik}}^2 + \alpha^2\sigma_{B_{ik}}^2\right) \prod_{j=1}^{n} x_j^{2\alpha_{kij}}} \le 1,
\end{aligned} \tag{12}$$
where α ∈ (0, 1), ϵ ∈ (0, 0.5], k = 1, 2, . . . , K.

Problem 13 is
$$\begin{aligned}
\min \quad & \sum_{i=1}^{N_0} \left(\frac{\mu_{A_{i0}} + \mu_{B_{i0}}}{2}\right) \prod_{j=1}^{n} x_j^{\alpha_{0ij}} + \frac{\Phi^{-1}(1-\epsilon)}{2}\sqrt{\sum_{i=1}^{N_0} \left(\sigma_{A_{i0}}^2 + \sigma_{B_{i0}}^2\right) \prod_{j=1}^{n} x_j^{2\alpha_{0ij}}} \\
\text{s.t.} \quad & \sum_{i=1}^{N_k} \left(\frac{\mu_{A_{ik}} + \mu_{B_{ik}}}{2}\right) \prod_{j=1}^{n} x_j^{\alpha_{kij}} + \frac{\Phi^{-1}(1-\epsilon)}{2}\sqrt{\sum_{i=1}^{N_k} \left(\sigma_{A_{ik}}^2 + \sigma_{B_{ik}}^2\right) \prod_{j=1}^{n} x_j^{2\alpha_{kij}}} \le 1,
\end{aligned} \tag{13}$$
where ϵ ∈ (0, 0.5], k = 1, 2, . . . , K.
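To make the structure of Problem 11 explicit, the left-hand side of one of its posynomial blocks can be evaluated as follows (a sketch of our own; the function and argument names are hypothetical). With the data of the numerical example in Section 5 it reproduces the constraint of Problem 15.

```python
import numpy as np
from scipy.stats import norm

def lhs_problem11_block(x, mu_A, mu_B, sig_A, sig_B, expo, alpha, eps):
    """Left-hand side of one block of Problem 11 (optimistic value criteria).

    x                        : (n,) positive decision variables
    mu_A, mu_B, sig_A, sig_B : (N,) parameters of A_ik ~ N(mu_A, sig_A), B_ik ~ N(mu_B, sig_B)
    expo                     : (N, n) exponent matrix alpha_kij
    """
    mono = np.prod(x**expo, axis=1)                          # prod_j x_j^{alpha_kij} per term
    mean = alpha * mu_A + (1 - alpha) * mu_B                 # transformed means
    var = alpha**2 * sig_A**2 + (1 - alpha)**2 * sig_B**2    # transformed variances
    return mean @ mono + norm.ppf(1 - eps) * np.sqrt(var @ mono**2)
```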

5 Numerical example

In this section, we solve a numerical example to show the efficiency and efficacy of the procedures. We
consider a GP problem with linear-normal uncertain random variable coefficients as
$$\begin{aligned}
\min \quad & \tilde{f}_0(x) = \tilde{\beta}_{10}\, x_1^{-1} x_2^{-1} x_3^{-1} + \tilde{\beta}_{20}\, x_1 x_3 \\
\text{s.t.} \quad & \tilde{f}_1(x) = \tilde{\beta}_{11}\, x_1 x_2 + \tilde{\beta}_{21}\, x_2 x_3 \le 1, \\
& x_1, x_2, x_3 > 0,
\end{aligned} \tag{14}$$
where β̃10 ∼ LN(A10, B10), β̃20 ∼ LN(A20, B20), β̃11 ∼ LN(A11, B11), β̃21 ∼ LN(A21, B21) are independent linear-normal uncertain random variables, A10 ∼ N(50, 3), B10 ∼ N(40, 2), A20 ∼ N(45, 2), B20 ∼ N(40, 1), A11 ∼ N(1, 1/3), B11 ∼ N(3/2, 2/3), A21 ∼ N(2/3, 1/3), B21 ∼ N(4/3, 1) are independent normal random variables. Based on an α optimistic value point of view, the deterministic form of Problem 14 with a tolerance level ϵ is
$$\begin{aligned}
\min \quad & (40 + 10\alpha)\, x_1^{-1} x_2^{-1} x_3^{-1} + (40 + 5\alpha)\, x_1 x_3 + \Phi^{-1}(1-\epsilon)\sqrt{\left(9\alpha^2 + 4(1-\alpha)^2\right) x_1^{-2} x_2^{-2} x_3^{-2} + \left(4\alpha^2 + (1-\alpha)^2\right) x_1^2 x_3^2} \\
\text{s.t.} \quad & \frac{3-\alpha}{2}\, x_1 x_2 + \frac{4-2\alpha}{3}\, x_2 x_3 + \Phi^{-1}(1-\epsilon)\sqrt{\left(\frac{1}{9}\alpha^2 + \frac{4}{9}(1-\alpha)^2\right) x_1^2 x_2^2 + \left(\frac{1}{9}\alpha^2 + (1-\alpha)^2\right) x_2^2 x_3^2} \le 1, \\
& x_1, x_2, x_3 > 0, \quad \alpha \in (0, 1), \quad \epsilon \in (0, 0.5].
\end{aligned} \tag{15}$$

To solve Problem 15, let us introduce two new variables, $x_4$ and $x_5$, defined as $x_4 = \left(9\alpha^2 + 4(1-\alpha)^2\right) x_1^{-2} x_2^{-2} x_3^{-2} + \left(4\alpha^2 + (1-\alpha)^2\right) x_1^2 x_3^2$ and $x_5 = \left(\frac{1}{9}\alpha^2 + \frac{4}{9}(1-\alpha)^2\right) x_1^2 x_2^2 + \left(\frac{1}{9}\alpha^2 + (1-\alpha)^2\right) x_2^2 x_3^2$. Then Problem 15 becomes
$$\begin{aligned}
\min \quad & (40 + 10\alpha)\, x_1^{-1} x_2^{-1} x_3^{-1} + (40 + 5\alpha)\, x_1 x_3 + \Phi^{-1}(1-\epsilon)\, x_4^{1/2} \\
\text{s.t.} \quad & \frac{3-\alpha}{2}\, x_1 x_2 + \frac{4-2\alpha}{3}\, x_2 x_3 + \Phi^{-1}(1-\epsilon)\, x_5^{1/2} \le 1, \\
& \left(9\alpha^2 + 4(1-\alpha)^2\right) x_1^{-2} x_2^{-2} x_3^{-2} x_4^{-1} + \left(4\alpha^2 + (1-\alpha)^2\right) x_1^2 x_3^2 x_4^{-1} \le 1, \\
& \left(\frac{1}{9}\alpha^2 + \frac{4}{9}(1-\alpha)^2\right) x_1^2 x_2^2 x_5^{-1} + \left(\frac{1}{9}\alpha^2 + (1-\alpha)^2\right) x_2^2 x_3^2 x_5^{-1} \le 1, \\
& x_1, x_2, x_3, x_4, x_5 > 0, \quad \alpha \in (0, 1), \quad \epsilon \in (0, 0.5].
\end{aligned} \tag{16}$$
In Problem 16, the total number of terms is 10 and the total number of variables is 5, so the degree of difficulty is 10 − 5 − 1 = 4. Let δ1, δ2, . . . , δ10 be the corresponding dual variables. The dual of Problem 16 is
$$\begin{aligned}
\max \quad V(\delta) = {} & \left(\frac{40+10\alpha}{\delta_1}\right)^{\delta_1} \left(\frac{40+5\alpha}{\delta_2}\right)^{\delta_2} \left(\frac{\Phi^{-1}(1-\epsilon)}{\delta_3}\right)^{\delta_3} \left(\frac{3-\alpha}{2\delta_4}\right)^{\delta_4} \left(\frac{4-2\alpha}{3\delta_5}\right)^{\delta_5} \left(\frac{\Phi^{-1}(1-\epsilon)}{\delta_6}\right)^{\delta_6} \\
& \left(\frac{9\alpha^2 + 4(1-\alpha)^2}{\delta_7}\right)^{\delta_7} \left(\frac{4\alpha^2 + (1-\alpha)^2}{\delta_8}\right)^{\delta_8} \left(\frac{\alpha^2 + 4(1-\alpha)^2}{9\delta_9}\right)^{\delta_9} \left(\frac{\alpha^2 + 9(1-\alpha)^2}{9\delta_{10}}\right)^{\delta_{10}} \\
& (\delta_4+\delta_5+\delta_6)^{\delta_4+\delta_5+\delta_6} (\delta_7+\delta_8)^{\delta_7+\delta_8} (\delta_9+\delta_{10})^{\delta_9+\delta_{10}} \\
\text{s.t.} \quad & \delta_1 + \delta_2 + \delta_3 = 1, \\
& -\delta_1 + \delta_2 + \delta_4 - 2\delta_7 + 2\delta_8 + 2\delta_9 = 0, \\
& -\delta_1 + \delta_4 + \delta_5 - 2\delta_7 + 2\delta_9 + 2\delta_{10} = 0, \\
& -\delta_1 + \delta_2 + \delta_5 - 2\delta_7 + 2\delta_8 + 2\delta_{10} = 0, \\
& \tfrac{1}{2}\delta_3 - \delta_7 - \delta_8 = 0, \\
& \tfrac{1}{2}\delta_6 - \delta_9 - \delta_{10} = 0.
\end{aligned} \tag{17}$$
Solving Problem 17 for a fixed α ∈ (0, 1) with the tolerance level ϵ = 0.05, we obtain the dual solution. Consequently, by the primal-dual relationship, we find the primal solution. When we set the parameter α to discrete values from 0.1 to 0.9 in increments of 0.1, the corresponding solutions are reported in Table 1.

Table 1: Optimal solutions

α x∗1 x∗2 x∗3 x∗4 x∗5 Objective value


0.1 1.454 0.167 1.233 39.882 0.056 219.893
0.2 1.409 0.183 1.219 31.919 0.051 213.316
0.3 1.361 0.201 1.207 27.584 0.047 206.732
0.4 1.310 0.223 1.199 26.036 0.042 200.180
0.5 1.254 0.247 1.195 26.533 0.038 193.715
0.6 1.194 0.274 1.197 28.480 0.034 187.493
0.7 1.132 0.304 1.208 31.494 0.031 181.885
0.8 1.075 0.332 1.225 35.514 0.030 177.570
0.9 1.033 0.354 1.242 40.852 0.032 175.417
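As an independent cross-check of Table 1 (our sketch, not the authors' solution procedure, which goes through the dual Problem 17), the deterministic Problem 15 can also be handed directly to a general-purpose nonlinear solver; the starting point and solver choice below are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

alpha, eps = 0.5, 0.05
phi = norm.ppf(1 - eps)

def objective(x):
    x1, x2, x3 = x
    mean = (40 + 10 * alpha) / (x1 * x2 * x3) + (40 + 5 * alpha) * x1 * x3
    var = (9 * alpha**2 + 4 * (1 - alpha)**2) / (x1 * x2 * x3)**2 \
        + (4 * alpha**2 + (1 - alpha)**2) * (x1 * x3)**2
    return mean + phi * np.sqrt(var)

def constraint(x):
    x1, x2, x3 = x
    mean = (3 - alpha) / 2 * x1 * x2 + (4 - 2 * alpha) / 3 * x2 * x3
    var = (alpha**2 / 9 + 4 * (1 - alpha)**2 / 9) * (x1 * x2)**2 \
        + (alpha**2 / 9 + (1 - alpha)**2) * (x2 * x3)**2
    return 1.0 - (mean + phi * np.sqrt(var))   # feasibility requires this to be >= 0

res = minimize(objective, x0=[1.0, 0.3, 1.0], method="SLSQP",
               bounds=[(1e-6, None)] * 3,
               constraints=[{"type": "ineq", "fun": constraint}])
print(res.x, res.fun)   # compare with the alpha = 0.5 row of Table 1
```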

We derived the deterministic form of Problem 14 based on the optimistic value point of view and solved the transformed deterministic problem for different values of α ∈ (0, 1); the results are given in Table 1. The solution of Problem 14 based on the pessimistic value point of view and the expected value point of view can be obtained in the same way. Moreover, by Remark 5, Table 1 already reflects these cases: the pessimistic solution for a given α coincides with the optimistic solution for 1 − α, and the row α = 0.5 gives the solution based on the expected value point of view. Figure 6 shows the objective values with respect to the parameter α based on the optimistic value point of view and the pessimistic value point of view.

Figure 6: Optimal objective values with respect to the parameter α.

6 Conclusion

In conclusion, our work delves into the realm of GP from a unique perspective by incorporating indepen-
dent linear-normal uncertain random variables into the problem. We introduce and define the concept
of an uncertain random variable and propose a novel approach in the form of linear-normal uncertain
random variables. We are able to transform these uncertain random variables into their stochastic coun-
terparts by using three different criteria: optimistic value, pessimistic value, and expected value. This
turns the uncertain random GP problem into a manageable stochastic GP problem.
One of the primary advantages of our approach is its ability to model and address uncertainty
in a structured manner, which is particularly useful in real-world scenarios where data is not always
deterministic. This method allows decision-makers to make informed choices that consider the inherent
uncertainty and randomness in the input coefficients, ultimately leading to more reliable solutions.
Furthermore, we present an equivalent deterministic representation of the transformed GP problem,
which simplifies the decision-making process for practitioners who may prefer working with determin-
istic models. This duality between stochastic and deterministic representations provides flexibility in
addressing the problem based on the decision-maker’s preference and data availability.
However, it is essential to acknowledge the limitations of our work. Firstly, the computational com-
plexity of solving stochastic GP problems can be substantial, especially for large-scale instances, and may
require specialized optimization techniques or approximation methods. Secondly, our approach relies on
the assumption that the coefficients are independent linear-normal uncertain random variables, which
might not always hold in practical applications. Addressing correlated uncertainties remains an avenue
for future research. Additionally, the transformation of uncertain variables into stochastic forms may
introduce approximations, potentially impacting the accuracy of the results.
In summary, our work offers a valuable contribution to the field of GP problems by providing a
systematic approach to handling uncertainty and randomness through linear-normal uncertain random
variables. While it offers practical benefits in decision-making under uncertainty, decision-makers should
be aware of the computational challenges and the assumptions inherent in our approach. Future research
could explore more sophisticated techniques to handle correlated uncertainties and further improve the
applicability of stochastic GP problems in real-world scenarios.

Acknowledgement The first author is thankful to CSIR for financial support of this work through file No: 09\1059(0027)\2019-
EMR-I.

Data availability Data sharing not applicable to this article as no datasets were generated. A random
data set is taken for the numerical example given in this paper.
Conflict of interest The authors have no relevant financial or non-financial interests to disclose. The authors declare that they have no conflict of interest.

References

1. Ahmadzade, H., Gao, R., Naderi, H., Farahikia, M.: Partial divergence measure of uncertain random variables and
its application. Soft Comput. 24, 501–512 (2020)
2. Avriel, M., Dembo, R., Passy, U.: Solution of generalized geometric programs. Int. J. Numer. Method Eng. 9,
149–168 (1975)
3. Avriel, M., Wilde, D.J.: Engineering design under uncertainty. I&EC Process Des. Dev. 8, 127–131 (1969)
4. Beightler, C.S., Phillips, D.T.: Applied Geometric Programming. Wiley, New York (1976)
5. Cao, B.Y.: Extended fuzzy geometric programming. J. Fuzzy Math. 1, 285–293 (1993)
6. Cao, B.Y.: Research for a geometric programming model with T-fuzzy variable. J. Fuzzy Math. 5, 625–632 (1997)
7. Chassein, A., Goerigk, M.: On the complexity of robust geometric programming with polyhedral uncertainty. Oper.
Res. Lett. 47, 21–24 (2019)
8. Cheng, T.C.E.: An economic order quantity model with demand-dependent unit production cost and imperfect
production process. IIE Trans. 23, 23–28 (1991)
9. Chen, L., Gao, R., Bian, Y., Di, H.: Elliptic entropy of uncertain random variables with application to portfolio
selection. Soft Comput. 25, 1925–1939 (2021)
10. Chiang, M.: Geometric programming for communication systems. Found. Trends Commun. Inf. Theory. 2, 1–154
(2005)
11. Choi, J.C., Bricker, D.L.: Effectiveness of a geometric programming algorithm for optimization of machining eco-
nomics models. Comput. Oper. Res. 10, 957–961 (1996)
12. Chu, C., Wong, D.F.: VLSI circuit performance optimization by geometric programming. Ann. Oper. Res. 105,
37–60 (2001)
13. Duffin, R.J., Peterson, E.L., Zener, C.M.: Geometric Programming Theory and Applications. Wiley, New York
(1967)
14. Duffin, R.J., Peterson, E.L.: Geometric programming with signomials. J. Optim. Theory Appl. 11, 3–35 (1973)
15. Dupačová, J.: Stochastic geometric programming with an application. Kybernetika 46, 374–386 (2010)
16. Fang, S.C., Peterson, E.L., Rajasekera, J.R.: Controlled dual perturbations for posynomial programs. Eur. J. Oper.
Res. 35, 111–117 (1988)
17. Fontem, B.: Robust chance-constrained geometric programming with application to demand risk mitigation. J.
Optim. Theory Appl. 197, 765–797 (2023)
18. Gao, R., Ralescu, D.A.: Convergence in distribution for uncertain random variables. IEEE Trans. Fuzzy Syst. 26,
1427–1434 (2018)
19. Gao, R., Zhang, Z., Ahmadzade, H., Ralescu, D.A.: Complex uncertain random variables. Soft Comput. 22, 5817–
5824 (2018)
20. Gupta, N.C.D., Paul, H., Yu, C.H.: An application of geometric programming to structural design. Computers &
Structures. 22, 965–971 (1986)
21. Hershenson, M.D., Boyd, S.P., Lee, T.H.: Optimal design of a CMOS op-amp via geometric programming. IEEE
Trans. Comput. Aid. Design. 20, 1–21 (2001)
22. Islam, S., Roy, T.K.: Fuzzy multi-item economic production quantity model under space constraint: a geometric
programming approach. Appl. Math. Comput. 184, 326–335 (2007)
23. Jung, H., Klein, C.M.: Optimal inventory policies under decreasing cost functions via geometric programming. Eur.
J. Oper. Res. 132, 628–642 (2001)
24. Ke, H., Su, T., Ni, Y.: Uncertain random multilevel programming with application to production control problem.
Soft Comput. 19, 1739–1746 (2015)
25. Kim, D., Lee, W.J.: Optimal joint pricing and lot sizing with fixed and variable capacity. Eur. J. Oper. Res. 109,
212–227 (1998)
26. Kortanek, K.O., No, H.: A second order affine scaling algorithm for the geometric programming dual with logarithmic
barrier. Optimization 23, 303–322 (1992)
27. Kortanek, K.O., Xu, X., Ye, Y.: An infeasible interior-point algorithm for solving primal and dual geometric pro-
grams. Math. Program. 76, 155–181 (1997)
28. Lee, W.J.: Determining order quantity and selling price by geometric programming. Optimal solution, bounds, and
sensitivity. Decis. Sci. 24, 76–87 (1993)
29. Li, B., Li, X., Teo, K.L., Zheng, P.: A new uncertain random portfolio optimization model for complex systems with
downside risks and diversification. Chaos Solit. Fractals. 160, 11-22 (2022)
30. Liu, B.: Uncertainty theory: An Introduction to Its Axiomatic Foundation, Physica-Verlag, Heidelberg (2004)
31. Liu, B.: Uncertainty Theory, 4th edn. Springer, Berlin (2015)
32. Liu, J., Lisser, A., Chen, Z.: Stochastic geometric optimization with joint probabilistic constraints. Oper. Res. Lett.
44, 687–691 (2016)
33. Liu, J., Peng, S., Lisser, A., Chen, Z.: Rectangular chance constrained geometric optimization. Optim. Eng. 21,
537–566 (2020)
34. Liu, S.T.: Geometric programming with fuzzy parameters in engineering optimization. Int. J. Approx. Reason. 46,
484–498 (2007)
35. Liu, S.T.: Posynomial geometric programming with parametric uncertainty. Eur. J. Oper. Res. 168, 345–353 (2006)
36. Liu, Y., Ahmadzade, H., Farahikia, M.: Portfolio selection of uncertain random returns based on value at risk. Soft
Comput. 25, 6339–6346 (2021)
37. Liu, Y.: Uncertain random variables: a mixture of uncertainty and randomness. Soft Comput. 17, 625–634 (2013)

38. Liu, Y.: Uncertain random programming with applications. Fuzzy Optim. Decis. Mak. 12, 153–169 (2013)
39. Mahapatra, G.S., Mandal, T.K.: Posynomial parametric geometric programming with interval valued coefficient. J.
Optim. Theory Appl. 154, 120–132 (2012)
40. Mandal, N.K., Roy, T.K.: A displayed inventory model with L-R fuzzy number. Fuzzy Optim. Decis. Mak. 5, 227–243
(2006)
41. Maranas, C.D., Floudas, C.A.: Global optimization in generalized geometric programming. Comput. Chem. Eng.
21, 351–369 (1997)
42. Mondal, T., Ojha, A.K., Pani, S.: Geometric programming problems with triangular and trapezoidal two-fold un-
certainty distributions. arXiv preprint arXiv:2302.01710 (2023)
43. Mondal, T., Ojha, A.K., Pani, S.: Solving geometric programming problems with triangular and trapezoidal uncer-
tainty distributions. RAIRO–Oper. Res. 56, 2833–2851 (2022)
44. Passy, U., Wilde, D.J.: A geometric programming algorithm for solving chemical equilibrium problems. SIAM J.
Appl. Math. 16, 363–373 (1968)
45. Qin, Z.: Uncertain random goal programming. Fuzzy Optim. Decis. Mak. 17, 375–386 (2018)
46. Rajgopal, J.: An alternative approach to the refined duality theory of geometric programming. J. Math. Anal. Appl.
167, 266–288 (1992)
47. Rajgopal, J., Bricker, D.L.: Posynomial geometric programming as a special case of semi-infinite linear programming.
J. Optim. Theory Appl. 66, 455–475 (1990)
48. Rajgopal, J., Bricker, D.L.: Solving posynomial geometric programming problems via generalized linear program-
ming. Comput. Optim. Appl. 21, 95–109 (2002)
49. Roy, T.K., Maiti, M.: A fuzzy EOQ model with demand-dependent unit cost under limited storage capacity. Eur.
J. Oper. Res. 99, 425–432 (1997)
50. Ruckaert, M.J., Martens, X.M., Desarnauts, J.: Ethylene plant optimization by geometric programming. Comput.
& Chem. Eng. 2, 93–97 (1978)
51. Scott, C.H., Jefferson, T.R.: Allocation of resources in project management. Int. J. Syst. Sci. 26, 413–420 (1995)
52. Shiraz, R.K., Khodayifar, S., Pardalos, P.M.: Copula theory approach to stochastic geometric programming. J. Glob.
Optim. 81, 435–468 (2021)
53. Shiraz, R.K., Tavana, M., Di Caprio, D., Fukuyama, H.: Solving geometric programming problems with normal,
linear and zigzag uncertainty distributions. J. Optim. Theory Appl. 170, 243–265 (2016)
54. Shiraz, R.K., Tavana, M., Fukuyama, H., Di Caprio, D.: Fuzzy chance-constrained geometric programming: the
possibility, necessity and credibility approaches. Oper. Res. Int. J. 17, 67–97 (2017)
55. Shiraz, R.K., Fukuyama, H.: Integrating geometric programming with rough set theory. Oper. Res. Int. J. 18, 1–32
(2018)
56. Sinha, S.B., Biswas, A., Biswal, M.P.: Geometric programming problems with negative degrees of difficulty. Eur. J.
Oper. Res. 28, 101–103 (1987)
57. Wall, T.W., Greening, D., Woolsey, R.E.D.: OR practice—solving complex chemical equilibria using a geometric-
programming based technique. Oper. Res. 34, 345–493 (1986)
58. Wang, D., Qin, Z., Kar, S.: A novel single-period inventory problem with uncertain random demand and its appli-
cation. Appl. Math. Comput. 269, 133–145 (2015)
59. Worrall, B.M., Hall, M.A.: The analysis of an inventory control model using posynomial geometric programming.
Int. J. Prod. Res. 20, 657–667 (1982)
60. Yang, J.H., Cao, B.Y.: Monomial geometric programming with fuzzy relation equation constraints. Fuzzy Optim.
Decis. Mak. 6, 337–349 (2007)
61. Yang, L., Liu, P., Li, S., Gao, Y., Ralescu, D.A.: Reduction methods of type-2 uncertain variables and their appli-
cations to solid transportation problem. Inf. Sci. 291, 204–237 (2015)
62. Zhou, J., Yang, F., Wang, K.: Multi-objective optimization in uncertain random environments. Fuzzy Optim. Decis.
Mak. 13, 397–413 (2014)
