
ISSN 0040-5795, Theoretical Foundations of Chemical Engineering, 2020, Vol. 54, No. 1, pp. 145–156. © Pleiades Publishing, Ltd., 2020.
Russian Text © The Author(s), 2020, published in Teoreticheskie Osnovy Khimicheskoi Tekhnologii, 2020, Vol. 54, No. 1, pp. 17–29.

Chemical Process Design Taking into Account Joint Chance Constraints

T. V. Lapteva*, N. N. Ziyatdinov, and I. I. Emel’yanov
Kazan National Research Technological University, Kazan, 420015 Tatarstan, Russia
*e-mail: tanlapteva@yandex.ru
Received August 2, 2019; revised August 6, 2019; accepted September 9, 2019

Abstract—A method is proposed for solving the problems of chemical process design taking into account the incompleteness and inaccuracy of the initial information, presented in the form of one-stage optimization problems with joint chance constraints. Joint chance constraints guarantee the required probability level of performance, unlike individual chance constraints, which provide only an estimate of the level of performance. However, the calculation of joint chance constraints is much more complicated than that of individual chance constraints. This work proposes a method for solving the problems of chemical process design with allowance for joint chance constraints that eliminates the direct calculation of the constraints. The method involves converting a stochastic programming problem into a sequence of deterministic nonlinear programming problems and can significantly reduce the time it takes to solve the problem. The effectiveness of the proposed approach is illustrated by model examples.

Keywords: chemical process design, optimization, joint chance constraints, chemical engineering
DOI: 10.1134/S0040579520010133

INTRODUCTION

It is known that, during their operation, chemical processes (CPs) are affected by external and internal factors that influence their operation. The operation of a CP in a changing environment requires that these changes be considered when designing new CPs. In the modern theory of CP optimization, such problems are solved on the basis of statements of optimization problems under uncertainty in the initial information [1], in which changes in the conditions of CP operation are represented by changes in a number of parameters of its mathematical model. It is generally believed that the change in these parameters occurs regardless of our will, and the random nature of such parameters is assumed. However, on the basis of the available historical data, it is possible to predict the variation range of these parameters over the selected period of operation of the designed CP. In some cases, the available information is sufficient to determine the distribution law of the uncertain parameters and its characteristics.

In general, the problem of the design of an optimal CP with allowance for the variation of uncertain parameters can be written in the form [1, 2]

min_{d, z∈H} f(d, z, θ),   (1)

g_j(d, z, θ) ≤ 0, j = 1,…,m, ∀θ ∈ T,   (2)

where H = {d, z : h_q(d, z) ≤ 0, q = 1,…,p}; constraints (2) are the mathematical formulation of the design requirements to the CP operation; d is the vector of design parameters of dimensionality n_d; z is the n_z-vector of variables characterizing the operating mode of the CP; f(d, z, θ) is the estimation function of the operating efficiency of the CP; θ is the n_θ-vector of uncertain parameters, θ ∈ T; and the domain of uncertainty T = {θ : θ_i^L ≤ θ_i ≤ θ_i^U, i = 1,…,n_θ} is formed from the identified ranges of the uncertain parameters θ_i, i = 1,…,n_θ. The continuous differentiability of the functions f(d, z, θ) and g_j(d, z, θ), j = 1,…,m, is assumed.

Problem (1)–(2) cannot be solved directly, since the exact values of the parameters θ are not known.

The problems of designing an optimal CP with allowance for uncertainty in the initial information are solved, depending on the nature and completeness of information at different stages of the CP life cycle, the possibilities of refining it, and the preferences of the designer, on the basis of statements of one- or two-stage optimization problems [2]. The random nature of exogenous and endogenous uncertainty [3] is taken into account differently in the criterion and constraints of the optimization problem. This leads to the use of problem statements of robust optimization [4], stochastic optimization [5], or optimization with chance constraints [6].

The problems of robust optimization involve fulfilling the requirements for CP operation under any changes in the operating conditions of the CP. Such constraints are called hard [7].


Chance constraints [8] assume that the requirements for the operation of the CP are satisfied on average or with a given probability. When using the latter, a very high probability level of satisfying such constraints, or a low level of risk, is required. Methods for solving two-stage optimization problems with hard constraints and one-stage problems with chance constraints have been proposed in many papers [9–11]. In this work, we consider formulations of optimization problems with soft constraints in the form of chance constraints.

Chance constraints were first introduced in [12], after which significant studies of the properties of such constraints were carried out in [13]. Researchers were mainly attracted to problems with individual chance constraints, which can be presented as

Pr{g_j(d, z, θ) ≤ 0} ≥ α_j, j = 1,…,m,
Pr{g_j(d, z, θ) ≤ 0} = ∫_{Ω_j} ρ(θ)dθ,   (3)

Ω_j = {θ : g_j(d, z, θ) ≤ 0; θ ∈ T},   (4)

where ρ(θ) is the probability density function of the parameters θ.

The calculation of constraints of the form (3) when solving the optimization problem requires a multidimensional integration procedure at each step of the optimization method, and the integration region Ω_j of the form (4) is not convex in most problems [14]. Moreover, already in [15], joint chance constraints of the following form were analyzed:

Pr{g_j(d, z, θ) ≤ 0; j = 1,…,m} ≥ α,
Pr{g_j(d, z, θ) ≤ 0; j = 1,…,m} = ∫_{Ω_α} ρ(θ)dθ ≥ α,   (5)

Ω_α = {θ : g_j(d, z, θ) ≤ 0; j = 1,…,m; θ ∈ T}.   (6)

Constraints of form (5) are even more complicated in the computational sense, since they are often not computable directly and lead to ravine-shaped regions and hard problems [16].

Comparing the two types of constraints (3) and (5), we consider the features of the resulting solution. The requirement to satisfy constraints (3) does not imply the satisfaction of all m constraints in the same region, so the regions Ω_j, j = 1,…,m, may all be different. It is necessary only to provide the required probability level for the satisfaction of each constraint. Since

Ω_j ⊂ T, j = 1,…,m;  Ω_r ≠ Ω_s, r ≠ s, r = 1,…,m, s = 1,…,m,

a situation is possible in which part of the constraints, with numbers from a set J, J ⊂ {1,…,m}, is satisfied at some point θ^t ∈ T,

g_j(d, z, θ^t) ≤ 0, θ^t ∈ T, j ∈ J,

while the remaining constraints are not satisfied at this point. Since such a point of the uncertainty domain characterizes a moment of the CP operation stage, we can get a situation in which, even though the specified probability level of constraint satisfaction has been attained by solving the optimization problem with constraints of the form (3), some constraints will be satisfied at a given moment of the CP operation stage and some will not. Thus we cannot guarantee the required probability level of satisfaction of the constraints and obtain only an estimate of the solution of the problem [17].

On the other hand, joint constraints (5) in their formalized statement already imply ensuring the required probability level of constraint satisfaction with the constraints simultaneously satisfied in the same subdomain of the domain of uncertainty. This ensures that the required level of the probability of constraint satisfaction is provided. The guarantee of the simultaneous fulfillment of joint constraints has attracted active attention to such constraints in the last decade.

The difficulty of solving optimization problems with joint chance constraints is that it is necessary to calculate multidimensional integrals in the left-hand side of (5) at each step of the optimization method in order to obtain the level of the probability of constraint satisfaction; in addition, the constraints are not convex [14]. The methods aimed at solving the arising problems can be divided into three groups: methods that offer modifications of quadrature formulas for multidimensional integration with a given accuracy, methods based on the idea of statistical tests, and methods providing an easy-to-calculate approximation of the left-hand sides of the constraints.

The first group of methods involves the use of the features of the problem to be solved for obtaining special quadrature formulas that make it possible to efficiently obtain the values of multidimensional integrals with a given accuracy [18–20]. However, the use of this group of methods still leads to a sharp increase in the number of nodes in the integration region as its dimension and the demanded accuracy grow. The second group of methods includes statistical test methods, or sample approximation [21]. This group of methods develops the idea of the Monte Carlo method, proposing modifications that make it possible to accelerate the convergence of the method without a loss of accuracy [22, 23]. Here it is necessary to highlight the actively used quasi-Monte Carlo methods that use so-called low-discrepancy sequences, e.g., Sobol sequences. These sequences can be considered pseudorandom for many known distributions. A review of such methods is given in [24]. In this field, there are also methods that use probabilistic metrics to estimate the error of a solution obtained using statistical tests [25, 26]. The third direction in this field consists of methods that fit the moments of the generated set to the characteristics of the original one [27]. A comparison of statistical test methods is given in Section 16 of the book [28]. Since all the listed methods of this group are based on the generation of a discrete distribution according to some requirements and an algorithm, their common name used in the literature is sample approximation or sample average approximation (SAA) methods [29].




The original idea of the method was proposed by A. Nemirovski and A. Shapiro in [23]. In that work, the authors developed a method for solving the two-stage optimization problem with soft constraints for the case of biaffine dependences on the variables d, z and the parameters θ in the left-hand sides of the constraints. The SAA method was adapted to solving two-stage linear optimization problems with a quantile criterion by the authors of [30]. In [31], the method is modified and combined with the Benders decomposition algorithm for solving the two-stage optimization problem with binary parameters at the first stage and continuous ones at the second stage of the system life cycle. However, the SAA method in turn uses random number generation to simulate the initial distribution and calculate the constraints. This can lead to significant deviations from the actual distribution, or it may require large computational costs to achieve acceptable accuracy [14].

Methods of the third group include methods of reducing the existing constraints to well-known and easily computable ones, as well as methods of reducing chance constraints to a deterministic form. Among the first, the use of the Chebyshev inequality [32] and the Hoeffding inequality [33] should be mentioned. A convex approximation of joint chance constraints was proposed in [34], where the authors considered the linear optimization problem. The use of the Bonferroni inequality for reducing joint chance constraints to individual chance constraints is shown in [23, 35]. Approximations of constraints are proposed in [37] for multistage linear optimization problems. The calculation of constraints for different types of continuous and discrete distributions of uncertain parameters is considered in [38]. A convex approximation of joint constraints for special cases of constraint functions is presented in [39]. These approximations make it possible to simplify the calculation of the multidimensional integrals when obtaining the left-hand sides of chance constraints. At the same time, methods are being developed that reduce chance constraints to a deterministic form, which makes it possible to completely eliminate multidimensional integration operations. Initially, such methods were proposed for individual chance constraints; however, as early as 1997, K. Maranas considered a way to convert joint chance constraints to a deterministic form for the linear form of constraints. G.M. Ostrovsky et al. formulated a two-stage optimization problem with chance constraints [40, 41] and proposed a method for solving it involving the transformation of the chance constraints into a deterministic form. However, all of the above works either involve the use of individual chance constraints or are designed for a narrow class of constraint functions or discrete distributions.

In this work, we propose a new method for solving optimization problems with joint constraints for the case when the uncertain parameters are independent random variables. We also assume that the uncertain parameters θ_i have a normal distribution with mean μ_i and variance σ_i^2. We consider chance constraints with uncertainty only in the left-hand sides of the constraints.

FORMULATION OF OPTIMIZATION PROBLEMS WITH JOINT CHANCE CONSTRAINTS

The problem of optimal CP design in the formulation of a one-stage optimization problem with individual chance constraints under the optimality-on-average strategy can be formulated as [41]

min_{d, z∈H} E[f(d, z, θ)],   (7)

Pr{θ ∈ Ω} ≡ ∫_Ω ρ(θ)dθ ≥ α,   (8)

Ω = {θ : g_j(d, z, θ) ≤ 0; θ ∈ T}, j = 1,…,m,   (9)

E[f(d, z, θ)] = ∫_T f(d, z, θ)ρ(θ)dθ.   (10)

In problem (7)–(8), the probability level of constraint satisfaction in (8) is set by the designer under the assumption 0 ≤ α ≤ 1; (10) is the mathematical expectation of the function f(d, z, θ).

If we consider joint chance constraints and account for information from the design and operation stages of the CP life cycle, then, in general, the problem of the design of an optimal CP in the formulation of a two-stage optimization problem with joint chance constraints (TSOPJCC) under the optimality-on-average strategy has the form [42]

min_{d, z(θ)∈H} ∫_T f(d, z(θ), θ)ρ(θ)dθ,   (11)

Pr{g_j(d, z(θ), θ) ≤ 0; j = 1,…,m} ≥ α,   (12)

Pr{g_j(d, z(θ), θ) ≤ 0; j = 1,…,m} = ∫_{Ω_T} ρ(θ)dθ,   (13)

Ω_T = {θ : g_j(d, z(θ), θ) ≤ 0; j = 1,…,m; θ ∈ T}.   (14)

We further assume that the solution of problem (11)–(12) exists and denote it as d*, z*(θ).

Considering that the left-hand side of constraint (12) can be rewritten in the form (13), (14), problem (11)–(12) can be rewritten in the following form:

min_{d, z(θ)∈H} ∫_T f(d, z(θ), θ)ρ(θ)dθ,   (15)

Pr{θ ∈ Ω_T} ≡ ∫_{Ω_T} ρ(θ)dθ ≥ α,   (16)

Ω_T = {θ : g_j(d, z(θ), θ) ≤ 0; j = 1,…,m; θ ∈ T}.   (17)
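The quantity bounded in (12) and (16) is the probability that all m design constraints hold simultaneously. Before turning to the approach developed below, which avoids sampling entirely, it may help to make this quantity concrete with a brute-force Monte Carlo estimate; the sketch below assumes independent normally distributed θ, and the constraint functions g1 and g2 are hypothetical placeholders, not the model of this paper.

```python
# Illustration only: a sampling estimate of Pr{g_j(d, z, theta) <= 0, j = 1..m}
# from (12)/(16); the method of this paper deliberately avoids such sampling.
import numpy as np

rng = np.random.default_rng(0)

def joint_probability(d, z, mu, sigma, constraints, n_samples=200_000):
    """Estimate the joint probability for independent normal theta."""
    theta = rng.normal(mu, sigma, size=(n_samples, len(mu)))
    feasible = np.ones(n_samples, dtype=bool)
    for g in constraints:
        feasible &= g(d, z, theta) <= 0.0
    return feasible.mean()

# Hypothetical constraint functions of the form g_j(d, z, theta) <= 0.
g1 = lambda d, z, th: th[:, 0] * d - z
g2 = lambda d, z, th: th[:, 1] + 0.5 * z - 2.0

alpha = 0.95
p_joint = joint_probability(d=1.0, z=1.2, mu=[1.0, 0.5], sigma=[0.1, 0.2],
                            constraints=[g1, g2])
print(p_joint, p_joint >= alpha)
```

Replacing the accumulation over all constraints with per-constraint averages would instead give the individual probabilities of (3), which is exactly the distinction discussed above.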




When the control variables z(θ) are independent of the parameters θ, i.e., z(θ) = z = const, ∀θ ∈ T, problem (15)–(16) is transformed into a one-stage optimization problem with joint chance constraints (OSOPJCC) of the form

min_{d, z∈H} E[f(d, z, θ)],   (18)

Pr{θ ∈ Ω_O} ≡ ∫_{Ω_O} ρ(θ)dθ ≥ α,   (19)

Ω_O = {θ : g_j(d, z, θ) ≤ 0; j = 1,…,m; θ ∈ T}.   (20)

Assuming that the solution of problem (18)–(19) exists, we denote it as d*, z*.

The complexities of solving the TSOPJCC and the OSOPJCC are as follows:

(i) the calculation of the multidimensional integrals in (15), (18) and (16), (19) for obtaining exact values of the criterion and constraints of the problems at each iteration of the optimization method;

(ii) the search variables z(θ) in problem (15)–(16) are functions of the uncertain parameters having an unknown form.

Further we consider the solution of the OSOPJCC (18)–(19).

The proposed approach is based on the following operations.

(i) The construction of a domain Ω̃ ⊂ T approximating the domain Ω_O. The domain Ω̃ is constructed in such a way as to exclude multidimensional integration operations when calculating the left-hand side of constraint (19). The domain Ω̃ is specified at each iteration of the OSOPJCC solution, for which a procedure is developed to refine the approximation of the domain Ω_O.

(ii) The approximation E_ap[f(d, z, θ)] of the OSOPJCC criterion, which has the form of the mathematical expectation E[f(d, z, θ)]. We use the approximation proposed earlier in [43], which avoids multidimensional integration in (18).

Then at some iteration with number k we solve the following problem:

f_1^(k) = min_{d, z∈H} E_ap^(k)[f(d, z, θ)],
Pr{θ ∈ Ω̃^(k)} ≥ α.

APPROXIMATION OF THE CRITERION OF THE OSOPJCC

To exclude the operation of multidimensional integration when obtaining the criterion of problem (18)–(19), in [45] we proposed an approximation of the criterion of the problem based on a piecewise linear dependence. We assume that at the kth iteration the domain of uncertainty T is partitioned into subdomains T_q^(k), q = 1,…,Q^(k), such that the following conditions are met:

T = T_1^(k) ∪ T_2^(k) ∪ … ∪ T_{Q^(k)}^(k),   (21)

T_{q1}^(k) ∩ T_{q2}^(k) = ∅, q1 = 1,…,Q^(k), q2 = 1,…,Q^(k), q1 ≠ q2,   (22)

T_q^(k) = {θ : θ_i^{(k)L,q} ≤ θ_i ≤ θ_i^{(k)U,q}, i = 1,…,n_θ}, q = 1,…,Q^(k).   (23)

We approximate the function f(d, z, θ) by a dependence f̄(d, z, θ, θ^q), which we construct on the basis of the expansion of the function f(d, z, θ) into a Taylor series in terms of the uncertain parameters θ at points θ^q ∈ T_q^(k), q = 1,…,Q^(k), keeping the linear part of the expansion. It was shown in [45] that it is possible to use the approximation E_ap[f(d, z, θ)] for the criterion of the form (10):

E_ap[f(d, z, θ)] = ∑_{q=1}^{Q^(k)} [ a_q f(d, z, θ^q) + ∑_{i=1}^{n_θ} (∂f(d, z, θ^q)/∂θ_i)(E[θ_i; T_q^(k)] − a_q θ_i^q) ],   (24)

a_q = ∫_{T_q^(k)} ρ(θ)dθ,   (25)

E[θ_i; T_q^(k)] = ∫_{T_q^(k)} θ_i ρ(θ)dθ.   (26)

Using the information on the independence of the uncertain parameters, we can calculate (25) on the basis of the dependence [43]

a_q = ∏_{i=1}^{n_θ} [Φ(θ̂_i^{(k)U,q}) − Φ(θ̂_i^{(k)L,q})],   (27)

θ̂_i^{(k)L,q} = (θ_i^{(k)L,q} − E[θ_i])/σ_i,  θ̂_i^{(k)U,q} = (θ_i^{(k)U,q} − E[θ_i])/σ_i,  q = 1,…,Q^(k), i = 1,…,n_θ,   (28)

where Φ(ξ) is the function of the standard normal distribution, E[θ_i] is the mathematical expectation, and (σ_i)^2 is the variance of the parameter θ_i, i = 1,…,n_θ.

The integral on the right-hand side of expression (26) is calculated as

E[θ_i; T_q^(k)] = ∏_{r=1}^{i−1} [Φ(θ̂_r^{(k)U,q}) − Φ(θ̂_r^{(k)L,q})] × ∫_{θ_i^{(k)L,q}}^{θ_i^{(k)U,q}} θ_i ρ(θ_i)dθ_i × ∏_{r=i+1}^{n_θ} [Φ(θ̂_r^{(k)U,q}) − Φ(θ̂_r^{(k)L,q})].   (29)

Expressions (26) and (29) thus contain only the one-dimensional integral

I_{i,q} = ∫_{θ_i^{(k)L,q}}^{θ_i^{(k)U,q}} θ_i ρ(θ_i)dθ_i,   (30)

which should be calculated at each iteration of the method. By approximating the function of the standard normal distribution Φ(ξ), and also the one-dimensional integral (30), we eliminate integration when calculating (18).
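Under the normality and independence assumptions stated above, the weight (27) and the truncated moment (26), (29), (30) reduce to products and differences of values of the standard normal CDF and PDF, so no multidimensional quadrature is needed. The sketch below uses scipy's Φ in place of whatever approximation of Φ(ξ) and of the one-dimensional integral the authors adopt; the subdomain bounds, means, and standard deviations are illustrative.

```python
# Sketch of Eqs. (27)-(30) for independent normal theta_i on a hyperrectangle.
import numpy as np
from scipy.stats import norm

def subdomain_weight(lo, up, mu, sigma):
    """a_q = prod_i [Phi(upper, standardized) - Phi(lower, standardized)], Eqs. (27)-(28)."""
    zlo = (np.asarray(lo) - mu) / sigma
    zup = (np.asarray(up) - mu) / sigma
    return np.prod(norm.cdf(zup) - norm.cdf(zlo))

def truncated_first_moment(i, lo, up, mu, sigma):
    """E[theta_i; T_q] of Eq. (26), factorized as in Eq. (29)."""
    zlo = (np.asarray(lo) - mu) / sigma
    zup = (np.asarray(up) - mu) / sigma
    probs = norm.cdf(zup) - norm.cdf(zlo)
    # One-dimensional integral (30): closed form for a normal density.
    one_dim = mu[i] * probs[i] - sigma[i] * (norm.pdf(zup[i]) - norm.pdf(zlo[i]))
    return np.prod(np.delete(probs, i)) * one_dim

mu = np.array([45.36, 333.0])     # illustrative means
sigma = np.array([0.1, 0.02])     # illustrative standard deviations
lo, up = mu - 2 * sigma, mu + 2 * sigma
print(subdomain_weight(lo, up, mu, sigma),
      truncated_first_moment(0, lo, up, mu, sigma))
```

These two helpers supply all the distribution-dependent quantities entering the criterion approximation (24).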




CONSTRUCTION OF THE APPROXIMATION OF Ω̃ FOR THE OSOPJCC

We consider some iteration with number k. We assume that we have obtained a solution of problem (18)–(19). Then we know a set of domains Ω_j, Ω_j ≠ ∅, j = 1,…,m, of the form

Ω_j = {θ : g_j(d, z, θ) ≤ 0, θ ∈ T}, j = 1,…,m.   (31)

Obviously, the sought-after domain Ω_O can be obtained as

Ω_O = Ω_1 ∩ Ω_2 ∩ … ∩ Ω_m.   (32)

We consider an individual domain Ω_j. Its boundary can be described by the equation

g_j(d, z, θ_1, θ_2,…,θ_{n_θ}) = 0.   (33)

We express the parameter θ_{n_θ} from (33) as a function of the remaining parameters of Eq. (33). With known values of the variables d, z and the parameters θ_1, θ_2,…,θ_{n_θ−1}, it is always possible to obtain the numerical value of the parameter θ_{n_θ} from the equation

θ_{n_θ} = φ_j(d, z, θ_1, θ_2,…,θ_{n_θ−1}).

We introduce new functions of the form

ĝ_j(d, z, θ_1, θ_2,…,θ_{n_θ}) = { θ_{n_θ} − φ_j(d, z, θ_1,…,θ_{n_θ−1}), if ∂g_j/∂θ_{n_θ} ≥ 0;  φ_j(d, z, θ_1,…,θ_{n_θ−1}) − θ_{n_θ}, if ∂g_j/∂θ_{n_θ} ≤ 0.   (34)

Obviously, the above reasoning can be extended to all domains Ω_j, j = 1,…,m. Then, using the form (34) of the functions, it is possible to present the sought-after domain Ω_O in the form

Ω_O = {θ : ĝ_j(d, z, θ_1, θ_2,…,θ_{n_θ}) ≤ 0, j = 1,…,m, θ_i^L ≤ θ_i ≤ θ_i^U, i = 1,…,n_θ}.   (35)

We introduce a partition of the uncertainty domain into subdomains R_l^(k) = {θ : θ_i^{(k)L,l} ≤ θ_i ≤ θ_i^{(k)U,l}, i = 1,…,n_θ}, l = 1,…,N^(k), at the iteration with number k, such that the following conditions are fulfilled:

T = R_1^(k) ∪ R_2^(k) ∪ … ∪ R_{N^(k)}^(k),   (36)

(R_{q1}^(k))° ∩ (R_{q2}^(k))° = ∅, q1 = 1,…,N^(k), q2 = 1,…,N^(k), q1 ≠ q2,   (37)

where the set (R_{q1}^(k))° consists of the internal points of the region R_{q1}^(k).

We consider a new domain Ω^{l,(k)} = R_l^(k) ∩ Ω_O, l = 1,…,N^(k). Taking into account (32), (36), and (37), it is possible to present the domain Ω_O in the form

Ω_O = Ω^{1,(k)} ∪ Ω^{2,(k)} ∪ … ∪ Ω^{N^(k),(k)}.

Further, we consider a means for the approximation of the subdomains Ω^{l,(k)}. To this end, we introduce the notation θ̄ = (θ_1,…,θ_{n_θ−1}) for the vector of the first n_θ − 1 uncertain parameters and R̄_l^(k) for the region in the space of the variables θ̄:

R̄_l^(k) = {θ̄ : θ_i^{(k)L,l} ≤ θ_i ≤ θ_i^{(k)U,l}, i = 1,…,n_θ−1}, l = 1,…,L^(k),   (38)

where L^(k) is the number of such regions at iteration k.

Obviously, due to the shape of the regions R_l^(k) = {θ : θ_i^{(k)L,l} ≤ θ_i ≤ θ_i^{(k)U,l}, i = 1,…,n_θ}, a region R̄_{S_l}^(k) can be found for which the following relation is true:

R_l^(k) ⊂ R̄_{S_l}^(k), l = 1,…,L^(k).   (39)

To describe the approximation of the subdomains Ω^{l,(k)}, we consider the behavior of the functions g_j(d, z, θ_1, θ_2,…,θ_{n_θ}). We first consider the case when the following condition is fulfilled:

∂g_j(d, z, θ)/∂θ_{n_θ} ≥ 0, j = 1,…,m.   (40)

In this case, the domain Ω^{l,(k)} can be presented in the form

Ω^{l,(k)} = {θ : θ_{n_θ} − φ_j(d, z, θ_1, θ_2,…,θ_{n_θ−1}) ≤ 0, j = 1,…,m, θ ∈ R_l^(k)}.   (41)

We approximate the functions φ_j(d, z, θ_1, θ_2,…,θ_{n_θ−1}), j = 1,…,m, at iteration k by piecewise linear dependences

φ_j^(k)(d, z, θ̄) = {φ_j^{(k),l}, if θ̄ ∈ R̄_l^(k)}, j = 1,…,m,   (42)

where the functions φ_j^{(k),l}, j = 1,…,m, l = 1,…,L^(k), are constructed at the central point θ̄^{(k),l,mid} = (θ_1^{(k),l,mid},…,θ_{n_θ−1}^{(k),l,mid}), θ_i^{(k),l,mid} = 0.5(θ_i^{(k)U,l} + θ_i^{(k)L,l}), i = 1,…,n_θ−1, of the region R̄_l^(k) and have the form

φ_j^{(k),l} = φ_j(d, z, θ̄^{(k),l,mid}).
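In practice the function φ_j of (33) is usually not available in closed form, but its value at the central point of a subregion, which is all that (42) requires, can be obtained by a one-dimensional root search in θ_{n_θ}. The sketch below assumes a constraint that is monotone in θ_{n_θ}, as in (40); the constraint function and the root bracket are hypothetical.

```python
# Sketch of Eqs. (33) and (42): phi_j^{(k),l} is the value of theta_n solving
# g_j(d, z, theta_bar, theta_n) = 0 at the central point of the subregion
# (taken in the space of theta_1..theta_{n-1}).
import numpy as np
from scipy.optimize import brentq

def phi_value(g, d, z, region_lo, region_up, theta_n_bracket):
    theta_mid = 0.5 * (np.asarray(region_lo) + np.asarray(region_up))
    a, b = theta_n_bracket  # assumed to bracket the root, cf. condition (40)
    return brentq(lambda tn: g(d, z, theta_mid, tn), a, b)

# Hypothetical constraint g_j = theta_n - (2*d - z*theta_1), monotone in theta_n.
g_example = lambda d, z, th_bar, tn: tn - (2.0 * d - z * th_bar[0])

phi_kl = phi_value(g_example, d=1.0, z=0.5,
                   region_lo=[0.9], region_up=[1.1],
                   theta_n_bracket=(-10.0, 10.0))
print(phi_kl)  # equals 2.0 - 0.5*1.0 = 1.5 for this placeholder constraint
```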




Then, for the approximation of the domains Ω^{l,(k)}, l = 1,…,L^(k), it is possible to introduce domains Ω̃^{l,(k)}, l = 1,…,L^(k), of the form

Ω̃^{l,(k)} = {θ : θ_{n_θ} − φ_j^{(k),l} ≤ 0, j = 1,…,m, θ ∈ R_l^(k)}, l = 1,…,L^(k).

Since conditions (37) and (39) are satisfied, the domain Ω_O can be approximated by the domain Ω̃ of the form

Ω̃^(k) = Ω̃^{1,(k)} ∪ Ω̃^{2,(k)} ∪ … ∪ Ω̃^{L^(k),(k)},
Ω̃^{q,(k)} ∩ Ω̃^{s,(k)} = ∅, q = 1,…,L^(k), s = 1,…,L^(k), q ≠ s.   (43)

We introduce a variable

y_l^(k) = min_{j=1,…,m} φ_j^{(k),l}.   (44)

Then the condition θ_{n_θ} ≤ y_l^(k) is equivalent to the conditions θ_{n_θ} − φ_j^{(k),l} ≤ 0, j = 1,…,m.

We write the probability measure of the domain Ω̃^{l,(k)}, taking into account (44):

Pr{θ ∈ Ω̃^{l,(k)}} = ( ∏_{i=1}^{n_θ−1} [Φ(θ̂_i^{(k)U,l}) − Φ(θ̂_i^{(k)L,l})] ) × [Φ(ŷ_l^(k)) − Φ(θ̂_{n_θ}^{(k)L,l})],   (45)

θ̂_i^{(k)L,l} = (θ_i^{(k)L,l} − E[θ_i])/σ_i,  θ̂_i^{(k)U,l} = (θ_i^{(k)U,l} − E[θ_i])/σ_i,  ŷ_l^(k) = (y_l^(k) − E[θ_{n_θ}])/σ_{n_θ},  l = 1,…,L^(k), i = 1,…,n_θ.   (46)

With allowance for (45) and (43), it is possible to write

Pr{θ ∈ Ω̃^(k)} = Pr{θ ∈ Ω̃^{1,(k)}} + Pr{θ ∈ Ω̃^{2,(k)}} + … + Pr{θ ∈ Ω̃^{L^(k),(k)}}.   (47)
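For a given partition and given values y_l^(k), the probability (47) is therefore a finite sum of products of standard normal CDF differences and can be evaluated without any integration. A minimal sketch follows, with illustrative bounds and distribution parameters; the last coordinate plays the role of θ_{n_θ}.

```python
# Sketch of Eqs. (45)-(47): Pr{theta in Omega-tilde} as a sum over subregions.
import numpy as np
from scipy.stats import norm

def prob_omega_tilde(regions, y, mu, sigma):
    """regions: list of (lower, upper) bounds of R_l; y: the y_l of Eq. (44)."""
    total = 0.0
    for (lo, up), y_l in zip(regions, y):
        zlo = (np.asarray(lo) - mu) / sigma
        zup = (np.asarray(up) - mu) / sigma
        A_l = np.prod(norm.cdf(zup[:-1]) - norm.cdf(zlo[:-1]))  # first n-1 axes, Eq. (45)
        y_std = (y_l - mu[-1]) / sigma[-1]                      # Eq. (46)
        total += A_l * (norm.cdf(y_std) - norm.cdf(zlo[-1]))
    return total

mu = np.array([0.0, 0.0, 0.0])
sigma = np.array([1.0, 1.0, 1.0])
regions = [([-1.0, -1.0, -2.0], [0.0, 1.0, 2.0]),   # two boxes splitting theta_1
           ([ 0.0, -1.0, -2.0], [1.0, 1.0, 2.0])]
y = [1.0, 0.5]
print(prob_omega_tilde(regions, y, mu, sigma))
```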
We substitute expressions (24), (45), and (47) into problem (18)–(19). Taking into account the additional search variables in the new formulation, we get the problem

f_1^(k) = min_{d, z∈H, y_l^(k)} E_ap^(k)[f(d, z, θ)],   (48)

Pr{θ ∈ Ω̃^(k)} ≥ α,   (49)

y_l^(k) = min_{j=1,…,m} φ_j^{(k),l}, l = 1,…,L^(k).   (50)

Problem (48)–(50) gives an estimate of the sought-after solution to problem (15)–(16), but it contains a nondifferentiable function in the right-hand side of (50). We therefore replace constraint (50) by introducing a constraint of the form

y_l^(k) ≤ min_{j=1,…,m} φ_j^{(k),l}, l = 1,…,L^(k).   (51)

We consider the solution of problem (48), (49), (51). Due to the monotonic increase of the function Φ(ξ), an increase in the value y_l^(k) increases the value Φ(ŷ_l^(k)) as well. Consequently, the allowable region of problem (48), (49), (51) grows with an increase in y_l^(k). Since the objective function of problem (48), (49), (51) is independent of y_l^(k), the value of the objective function does not increase as the allowable domain of the problem grows, but it may decrease. Hence the method for solving problem (48), (49), (51) increases the value y_l^(k) until condition (50) is fulfilled, and we obtain problem (48)–(50).

Obviously, constraint (51) is equivalent to the constraints

y_l^(k) ≤ φ_j^{(k),l}, j = 1,…,m, l = 1,…,L^(k).   (52)

Substituting (52) into problem (48)–(50) instead of (50), we obtain a new kind of problem, giving an estimate of the solution to problem (18)–(19):

f_1^(k) = min_{d, z∈H, y_l^(k)} E_ap^(k)[f(d, z, θ)],
∑_{l=1}^{L^(k)} A_l^(k) [Φ(ŷ_l^(k)) − Φ(θ̂_{n_θ}^{(k)L,l})] ≥ α,   (53)
y_l^(k) ≤ φ_j^{(k),l}, j = 1,…,m, l = 1,…,L^(k),

where A_l^(k) = ∏_{i=1}^{n_θ−1} [Φ(θ̂_i^{(k)U,l}) − Φ(θ̂_i^{(k)L,l})].

Problem (53) contains only continuously differentiable functions and is a common nonlinear programming problem.
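Since (53) contains only smooth algebraic functions, it can be handed to any general-purpose NLP solver. The sketch below shows one possible setup with scipy's SLSQP; the objective, the φ_j^{(k),l}(d, z) functions, the weights A_l, and the standardized lower bounds are all illustrative stand-ins rather than the model of the paper.

```python
# Hedged sketch of problem (53) as an ordinary NLP in (d, z, y_1, ..., y_L).
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

alpha = 0.95
mu_n, sigma_n = 0.0, 1.0                 # distribution of theta_n (assumed)
A = np.array([0.5, 0.5])                 # A_l^{(k)}, precomputed as in (53)
low_n = np.array([-4.0, -4.0])           # standardized lower bounds of theta_n
# Hypothetical phi_j^{(k),l}(d, z): one constraint (m = 1), two subregions.
phi = [lambda d, z: 2.0 * d - 0.2 * z, lambda d, z: 1.5 * d + 0.1 * z]

def objective(x):
    d, z = x[0], x[1]
    return d**2 + 0.5 * z**2             # stand-in for E_ap[f(d, z, theta)]

def prob_constraint(x):                  # probability constraint of (53), >= 0 form
    y = x[2:]
    return np.sum(A * (norm.cdf((y - mu_n) / sigma_n) - norm.cdf(low_n))) - alpha

cons = [{'type': 'ineq', 'fun': prob_constraint}]
for l, phi_l in enumerate(phi):          # y_l <= phi^{(k),l}(d, z)
    cons.append({'type': 'ineq',
                 'fun': lambda x, l=l, p=phi_l: p(x[0], x[1]) - x[2 + l]})

x0 = np.array([1.0, 1.0, 0.0, 0.0])      # [d, z, y_1, y_2]
res = minimize(objective, x0, constraints=cons, method='SLSQP')
print(res.x, prob_constraint(res.x))
```

The same pattern carries over to the variants of the problem derived below for the other signs of the derivative with respect to θ_{n_θ}.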
Now we consider the case when ∂g_j(d, z, θ)/∂θ_{n_θ} ≤ 0, j = 1,…,m. In this case, the domain Ω^{l,(k)} can be presented in the form

Ω^{l,(k)} = {θ : φ_j(d, z, θ_1, θ_2,…,θ_{n_θ−1}) − θ_{n_θ} ≤ 0, j = 1,…,m, θ ∈ R_l^(k)}.

Repeating the above arguments for the new case, we introduce the domains

Ω̃^{l,(k)} = {θ : φ_j^{(k),l} − θ_{n_θ} ≤ 0, j = 1,…,m, θ ∈ R_l^(k)}, l = 1,…,L^(k).




Then, introducing the variables v_l^(k) = max_{j=1,…,m} φ_j^{(k),l}, l = 1,…,L^(k), we obtain the probability measure of the domain Ω̃^{l,(k)} in the form

Pr{θ ∈ Ω̃^{l,(k)}} = A_l^(k) [Φ(θ̂_{n_θ}^{(k)U,l}) − Φ(v̂_l^(k))],

where

v̂_l^(k) = (v_l^(k) − E[θ_{n_θ}])/σ_{n_θ},  A_l^(k) = ∏_{i=1}^{n_θ−1} [Φ(θ̂_i^{(k)U,l}) − Φ(θ̂_i^{(k)L,l})].

Repeating arguments similar to those used when replacing constraint (50) with constraint (51) and then with constraint (52), we obtain the form of the problem estimating the solution of problem (18)–(19):

f_1^(k) = min_{d, z∈H, v_l^(k)} E_ap^(k)[f(d, z, θ)],
∑_{l=1}^{L^(k)} A_l^(k) [Φ(θ̂_{n_θ}^{(k)U,l}) − Φ(v̂_l^(k))] ≥ α,   (54)
v_l^(k) ≥ φ_j^{(k),l}, j = 1,…,m, l = 1,…,L^(k).

Now we consider the case when ∂g_j(d, z, θ)/∂θ_{n_θ} ≥ 0 for j ∈ J_1 and ∂g_j(d, z, θ)/∂θ_{n_θ} ≤ 0 for j ∈ J_2, where J_1 ∪ J_2 = {1,…,m}, J_1 ∩ J_2 = ∅. Then we use the variables

y_l^(k) = min_{j∈J_1} φ_j^{(k),l}, if ∂g_j(d, z, θ)/∂θ_{n_θ} ≥ 0, j ∈ J_1,
v_l^(k) = max_{j∈J_2} φ_j^{(k),l}, if ∂g_j(d, z, θ)/∂θ_{n_θ} ≤ 0, j ∈ J_2.

After considerations similar to those given above, we obtain the form of the problem giving an estimate of the solution to problem (18)–(19):

f_1^(k) = min_{d, z∈H, y_l^(k), v_l^(k)} E_ap^(k)[f(d, z, θ)],
∑_{l=1}^{L^(k)} A_l^(k) [Φ(ŷ_l^(k)) − Φ(v̂_l^(k))] ≥ α,   (55)
v_l^(k) ≥ φ_j^{(k),l}, j ∈ J_2,  y_l^(k) ≤ φ_j^{(k),l}, j ∈ J_1,  l = 1,…,L^(k).

Obviously, problem (55) can be used if the following condition is fulfilled:

min_{j∈J_1} φ_j^{(k),l} ≥ max_{j∈J_2} φ_j^{(k),l}.

Note. In the course of solving the problem, the derivative ∂g_j(d, z, θ)/∂θ_{n_θ} may change sign. This implies the nondifferentiability of the functions Φ(ŷ_l^(k)), Φ(v̂_l^(k)) and leads to difficulties in applying optimization methods that use derivatives. However, it was shown in [46] that, when the criterion and the left-hand sides of the constraints are smooth functions at the solution point, the nonlinear programming method can continue to work from the point at which the sign of the derivative was reversed.

METHOD FOR REFINING APPROXIMATIONS

The solution of problems (53), (54), and (55) is carried out for a given partition of the domain of uncertainty into subdomains T_q^(k), q = 1,…,Q^(k), of form (23) and for given domains Ω̃^{l,(k)}, l = 1,…,L^(k), of form (38). To refine the obtained estimate of the criterion of problem (15)–(16), it is necessary to refine the approximations used in constructing problems (53), (54), and (55). Therefore, it is necessary to refine the approximation of the function f(d, z, θ) by the dependence f̄(d, z, θ, θ^q), q = 1,…,Q^(k), as well as the approximation of the domain Ω_O by the domains Ω̃^{l,(k)}, l = 1,…,L^(k). To this end, we partition the domains T_q^(k), q = 1,…,Q^(k), and R_l^(k), l = 1,…,N^(k), into subdomains. Different means of partition can be proposed; e.g., on each new iteration, all available domains could be partitioned into two. Obviously, this should improve the approximations. However, the quality of the approximation differs between domains, and, partitioning all the domains, we would also partition those domains whose approximation quality is already high. Moreover, a rapid increase in the number of regions R_l^(k), l = 1,…,N^(k), leads to a fast increase in the number of search variables y_l^(k), v_l^(k) and constraints, and thus to an increase in the dimension of the problem to be solved. Therefore, we partition one of the domains T_q^(k), q = 1,…,Q^(k), and one of the domains R_l^(k), l = 1,…,N^(k), on each new iteration.

Further, we consider the principle of choosing the domains to be partitioned. We assume that we have solved one of the problems (53), (54), (55) at the iteration with number k and obtained the solution d^(k), z^(k).

To refine the approximation f̄(d, z, θ, θ^q) on each new iteration, we partition some domain T_s^(k) with number s from the set T_q^(k), q = 1,…,Q^(k). To select such a domain, we estimate the approximation quality of the function f(d, z, θ) by the dependences f̄(d, z, θ, θ^q) in each domain T_q^(k), q = 1,…,Q^(k), and we choose the domain with the worst approximation quality:



s = arg max_{q∈{1,…,Q^(k)}} max_{θ∈T_q^(k)} ( f(d^(k), z^(k), θ) − f̄(d^(k), z^(k), θ, θ^q) )^2.

We partition the domain T_s^(k) at the center of the edge corresponding to the uncertain parameter selected sequentially from the set {θ_1, θ_2,…,θ_{n_θ}}.

To refine the approximation of the domain Ω_O by the domains Ω̃^{l,(k)}, l = 1,…,L^(k), we partition some domain R_p^(k) with number p from the set R_l^(k), l = 1,…,N^(k). To choose such a domain, we estimate the approximation quality of the functions φ_j(d, z, θ_1, θ_2,…,θ_{n_θ−1}) by the dependences φ_j(d, z, θ_1^{(k),l,mid},…,θ_{n_θ−1}^{(k),l,mid}), where θ_i^{(k),l,mid} = 0.5(θ_i^{(k)U,l} + θ_i^{(k)L,l}), i = 1,…,n_θ−1, in each of the domains R_l^(k), l = 1,…,N^(k), and we choose the domain with the worst approximation quality. The number p of the domain R_p^(k) with the worst quality of approximation is found by solving the problem

p = arg max_{l∈{1,…,L^(k)}} max_{θ̄∈R̄_l^(k)} ( φ_j(d^(k), z^(k), θ̄) − φ_j(d^(k), z^(k), θ̄^{(k),l,mid}) )^2,

where θ̄^{(k),l,mid} = (θ_1^{(k),l,mid},…,θ_{n_θ−1}^{(k),l,mid}), θ_i^{(k),l,mid} = 0.5(θ_i^{(k)U,l} + θ_i^{(k)L,l}), i = 1,…,n_θ−1.

We partition the domain R_p^(k) at the center of the edge corresponding to an uncertain parameter selected sequentially from the set {θ_1, θ_2,…,θ_{n_θ−1}}.
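The partition step itself is elementary: the selected hyperrectangle is cut in half at the midpoint of the edge associated with the parameter chosen for this iteration. A small sketch with illustrative bounds:

```python
# Split a hyperrectangular subdomain into two halves along one coordinate.
import numpy as np

def bisect_region(lo, up, axis):
    """Split [lo, up] into two boxes at the midpoint of coordinate `axis`."""
    lo, up = np.asarray(lo, dtype=float), np.asarray(up, dtype=float)
    mid = 0.5 * (lo[axis] + up[axis])
    lo_right = lo.copy(); lo_right[axis] = mid
    up_left = up.copy();  up_left[axis] = mid
    return (lo, up_left), (lo_right, up)

# Example: split a 3-parameter subdomain along theta_2.
left, right = bisect_region([44.9, 331.0, 297.0], [45.8, 335.0, 303.0], axis=1)
print(left, right)
```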
COMPUTATIONAL EXPERIMENT

To demonstrate the effectiveness of the proposed method, the one-stage optimization problem (OSOP) with chance constraints was solved at different probability levels and sizes of the domain of uncertainty for the CP described in [45]. The problem was solved in two formulations: in the form of the OSOP with individual chance constraints (7)–(8) and in the form of the OSOP with joint chance constraints (18)–(19). The problem of designing an optimal CP in statement (7)–(8) was solved by the method proposed in [44] (we call it method 1) and also by the method proposed in [2] (method 2). The problem of designing an optimal CP in statement (18)–(19) was solved by the method proposed in this paper.

Example. We consider a chemical process (Fig. 1) consisting of a reactor and a heat exchanger with recycle [45].

Fig. 1. Designed CP: (1) reactor, (2) heat exchanger, and (3) compressor.

The CP mathematical model includes the material and heat balance equations of the reactor and the heat transfer and heat balance equations of the heat exchanger. In [45], an approximation of the heat transfer equation was proposed, due to which part of the state variables is expressed as a function of the remaining parameters. The mathematical model of the CP took the form

conv = V k_R C_A0 exp(−E/(R T_1)) / (F_0 + V k_R C_A0 exp(−E/(R T_1))),
Q = A U [(T_1 − T_w2) + (T_2 − T_w1)]/2,
T_2 = 2(−ΔH) F_0 conv/(A U) − 2 F_0 ρc_p (T_1 − T_0)/(A U) − (T_1 − T_w2) + T_w1,
F_1 = Q/(ρc_p (T_1 − T_2)),  F_w = Q/(ρ_w c_pw (T_w2 − T_w1)).

The values of the model parameters are given in Table 1.

The vector of uncertain parameters consists of the five quantities F_0, T_0, T_w1, k_R, U. The domain of uncertainty is set in the form of intervals of variation of the uncertain parameters with respect to their nominal values: {θ_i : θ_i^N(1 − γσ_i) ≤ θ_i ≤ θ_i^N(1 + γσ_i)}, i = 1,…,5. The parameter γ sets the size of the domain of uncertainty. The nominal values θ_i^N and the values σ_i of the uncertain parameters are given in Table 2. The statistical mutual independence of the uncertain parameters is assumed.



Table 1. Parameters of the mathematical model of the example

Parameter               Value      Parameter          Value
ρc_p, kJ/(m³ K)         167.4      U, kJ/(m² h K)     1635
ρ_w c_pw, kJ/(m³ K)     4.190      T_0, K             333
F_0, m³/h               45.36      T_w1, K            300
C_A0, kmol/m³           32.04      E/R, K             560
k_R, m³/(kmol h)        9.81       ΔH, kJ/kmol        –23260

Table 2. Characteristics of uncertain parameters

θ_i                     θ_i^N      σ_i
F_0, m³/h               45.36      0.1
T_0, K                  333        0.02
T_w1, K                 300        0.03
k_R, m³/(kmol h)        9.81       0.1
U, kJ/(m² h K)          1635       0.1

The objective function has the form of the overall expenses

f = 691.2 V^0.7 + 873 A^0.6 + 1.76 F_w + 7056 F_1.   (56)

There are four search variables in the problem: the design (constructive) variables d are the volume of the reactor V and the heat exchange surface area of the heat exchanger A; the control variables z are the temperatures T_1 and T_w2. The requirements for the system have the form of the constraints

0.9 − (C_A0 − C_A1)/C_A0 ≤ 0,   (57)
T_2 − T_1 ≤ 0,   (58)
T_w1 − T_2 + 11.1 ≤ 0,   (59)
T_w1 − T_w2 ≤ 0,   (60)
311 ≤ T_2 ≤ 389,   (61)
T_w2 − T_1 + 11.1 ≤ 0,   (62)
311 ≤ T_1 ≤ 389,   (63)
301 ≤ T_w2 ≤ 355.   (64)

Constraints (57)–(61) depend, indirectly, on the uncertain parameters and are treated as probabilistic.
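To make the example concrete, the reduced model and constraints (57)–(61) can be evaluated at a single point θ = (F_0, T_0, T_w1, k_R, U) as sketched below. The fixed data are taken from Table 1; the design and control values passed in at the end are arbitrary illustrative numbers, not the optimal designs of Tables 3 and 4, and reading conv as (C_A0 − C_A1)/C_A0 in (57) is our assumption.

```python
# Sketch: evaluate the reduced CP model and constraints (57)-(61) at one theta.
import numpy as np

RHO_CP, RHOW_CPW = 167.4, 4.190        # rho*c_p and rho_w*c_pw, kJ/(m3 K), Table 1
CA0, E_R, DH = 32.04, 560.0, -23260.0  # kmol/m3, K, kJ/kmol, Table 1

def cp_model(d, z, theta):
    V, A = d                            # design variables
    T1, Tw2 = z                         # control variables
    F0, T0, Tw1, kR, U = theta          # uncertain parameters
    k = kR * CA0 * np.exp(-E_R / T1)
    conv = V * k / (F0 + V * k)
    T2 = (2.0 * (-DH) * F0 * conv / (A * U)
          - 2.0 * F0 * RHO_CP * (T1 - T0) / (A * U)
          - (T1 - Tw2) + Tw1)
    Q = A * U * ((T1 - Tw2) + (T2 - Tw1)) / 2.0
    F1 = Q / (RHO_CP * (T1 - T2))
    Fw = Q / (RHOW_CPW * (Tw2 - Tw1))
    cost = 691.2 * V**0.7 + 873.0 * A**0.6 + 1.76 * Fw + 7056.0 * F1  # Eq. (56)
    g = np.array([0.9 - conv,           # (57): conversion of at least 0.9
                  T2 - T1,              # (58)
                  Tw1 - T2 + 11.1,      # (59)
                  Tw1 - Tw2,            # (60)
                  311.0 - T2,           # (61), lower bound
                  T2 - 389.0])          # (61), upper bound
    return cost, g

cost, g = cp_model(d=(5.5, 7.9), z=(389.0, 355.0),
                   theta=(45.36, 333.0, 300.0, 9.81, 1635.0))
print(round(float(cost), 1), np.all(g <= 0.0))
```

In the joint formulation (18)–(19) it is the probability that this whole vector g is nonpositive that must reach the level α.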

The results of solving the problem of designing an optimal CP disregarding the uncertainty in the source information, i.e., at the nominal point, are shown in Table 3; they correspond to a domain-of-uncertainty size of 0.

The values of the criterion f of the problem of designing the optimal CP in formulation (7)–(8) obtained by methods 1 and 2 for different sizes of the domain of uncertainty and probability levels α_j for constraints (57)–(61) are given in Table 3. In this case, the probability level of satisfaction of constraints (61) was set at 0.95 to provide the convergence of the material–thermal balance of the CP.

The values of the criterion f of the estimate of the optimality criterion of the problem of designing the optimal CP in formulation (18)–(19), obtained by the method proposed in this work for different sizes of the domain of uncertainty and probability levels α of satisfaction of the joint constraints, are given in Table 4.

Table 3. Results of solving the CP design problem in the OSOP form with individual chance constraints

γ      α      f2 (method 2)   V      A      t, s     f1 (method 1)   V      A      t, s
0      –      9374            5.48   7.88   0.01     –               –      –      –
1      0.5    9878            5.63   7.44   10       9379            5.48   7.87   2.9
1      0.75   9957            5.79   7.48   11       9608            5.64   7.70   3.38
1      0.95   10132           6.04   7.62   12.59    9983            5.84   7.68   3.85
1.5    0.5    9941            5.7    7.68   65       9386            5.48   7.86   2.73
1.5    0.75   10019           5.95   7.98   70       9750            5.72   7.66   2.32
1.5    0.95   10155           6.35   8.07   73       10113           6.03   7.64   14
2.5    0.5    –               –      –      –        9409            5.48   7.82   2.19
2.5    0.95   –               –      –      –        10824           6.42   8.61   10

Table 4. Results of solving the CP design problem in the OSOP form with joint chance constraints

γ      α      f        V      A      t, s
1.0    0.5    9937     5.76   7.41   0.3
1.0    0.75   10038    5.85   7.54   0.35
1.0    0.95   10168    5.97   7.84   0.3
1.5    0.5    10014    5.81   7.49   0.3
1.5    0.75   10155    6.15   7.84   0.3
1.5    0.95   10268    6.23   8.02   0.35
2.5    0.5    10093    5.91   7.81   0.3
2.5    0.95   10833    6.89   8.75   0.5

Analyzing the results, it should be noted that the solution of the problem with joint constraints in all cases gives a criterion value larger than that obtained when solving the problem with individual chance constraints, which is determined by the need to satisfy the constraints in one common domain. This feature confirms the operability of the proposed method. It can be seen that the criterion values in Table 4 obtained for problem (18)–(19) slightly exceed the criterion values in Table 3 presented for method 2 of solving problem (7)–(8), but significantly exceed the values obtained by method 1.




This can be explained by the low accuracy achieved by the approximation of the constraints when using method 1; i.e., more iterations should be carried out for method 1.

We note that, in comparison with methods 1 and 2, the proposed method finds a solution in a very short time. In addition, this method works on large domains of uncertainty, unlike method 2.

The undoubted advantage of the method proposed in this work is the guarantee of compliance with the required probability level of satisfaction of the constraints at any time during the operation phase of the designed CP, as well as the short time it takes to obtain a solution.

CONCLUSIONS

This article proposes an approach to solving design problems of flexible CPs presented in the form of one-stage optimization problems with joint chance constraints. The difficulty of solving such problems lies in the impossibility of calculating the resulting multidimensional integrals in the left-hand sides of the constraints by the well-known methods for many of the nonlinear functions that formalize the requirements for CP operation. Even when such a calculation is possible, multidimensional integration operations are required at each iteration of the method in the direct solution of the problems.

The method proposed in this work involves solving a sequence of problems that give an estimate of the criterion of the sought-after problem. For the construction of such problems, the use of approximations of the integrands is proposed, which makes it possible to get rid of multidimensional integration in the criterion of the problem and reduces the joint chance constraints to a set of deterministic ones.

The advantages of the proposed method include the high speed of obtaining the result, as well as the final form, size, and location of the domain of constraint satisfaction in the uncertainty domain. This will make it possible to more fully estimate the performance of the designed CP in different stages of its operation.

NOTATION

Notation of the theoretical section
a     coefficient in the approximation of the integrand
d     design search variables
E     mathematical expectation
f     estimation function of the operating efficiency of the CP
g     constraint function
H     allowable range of search variables
h     constraints independent of uncertain parameters
I     one-dimensional integral
k     iteration number
L     number of subdomains in the domain Ω̃
m     number of chance constraints
N     number of subdomains in the domain Ω
n     vector dimensionality
Pr    probability
p     number of constraints independent of uncertain parameters
Q     number of subdomains in the domain T
R     approximation of domain Ω
S     range of uncertain parameter
T     domain of uncertainty
z     control search variables
α     probability level
θ     uncertain parameters
ξ     random quantity
ρ     probability density
σ     standard deviation
Φ     function of the standard normal distribution
φ     approximable hypersurface obtained for a chance constraint
Ω     domain of constraint satisfaction
Ω̃     approximation of domain Ω

Subscripts and superscripts of the theoretical section
ap    approximation
i     index
j     constraint number
l     subdomain number
L     left domain boundary
O     one-stage optimization problem
q     subdomain number
r     subdomain number
s     subdomain number
T     two-stage optimization problem
U     right domain boundary



Notation of the computational experiment
A     heat transfer surface in the heat exchanger, m²
C     concentration of the reactant, kmol/m³
c     heat capacity, kJ/(kg K)
conv  conversion of reactant A
E     activation energy, J/kmol
ΔH    reaction heat, kJ/kmol
F     flow rate, m³/h
k_R   preexponential factor of the Arrhenius equation, m³/(kmol h)
Q     amount of heat, kJ
R     universal gas constant, J/(kmol K)
T     temperature, K
t     time expended on solving the problem, s
U     overall heat transfer coefficient, kJ/(m² h K)
V     volume of the reactor, m³
γ     parameter that defines the size of the domain of uncertainty
ρ     density, kg/m³

Subscripts and superscripts of the computational experiment
1     reactor
2     heat exchanger
N     nominal value
p     recycle stream
w     water

REFERENCES
1. Henrion, R. and Moller, A., Optimization of a continuous distillation process under random inflow rate, Comput. Math. Appl., 2003, vol. 45, p. 247.
2. Ostrovsky, G.M., Lapteva, T.V., Ziyatdinov, N.N., and Silvestrova, A.S., Design of chemical engineering systems with chance constraints, Theor. Found. Chem. Eng., 2017, vol. 51, no. 6, p. 961. https://doi.org/10.1134/S0040579517060136
3. Goel, V. and Grossmann, I.E., A class of stochastic programs with decision dependent uncertainty, Math. Prog., 2006, vol. 108, nos. 2–3, p. 335.
4. Ben-Tal, A., Ghaoui, L.E., and Nemirovski, A., Robust Optimization, Princeton Series in Applied Mathematics, Princeton, N.J.: Princeton Univ. Press, 2009.
5. Pflug, G.C. and Pichler, A., Multistage Stochastic Optimization, New York: Springer, 2014.
6. Calfa, B.A., Grossmann, I.E., Agarwal, A., Bury, S.J., and Wassick, J.M., Data-driven individual and joint chance-constrained optimization via kernel smoothing, Comput. Chem. Eng., 2015, vol. 78, p. 51.
7. Ostrovskii, G.M., Ziyatdinov, N.N., Lapteva, T.V., and Pervukhin, I.D., Flexibility analysis of chemical technology systems, Theor. Found. Chem. Eng., 2007, vol. 41, no. 3, p. 235. https://doi.org/10.1134/S0040579507030025
8. Finger, M., Le Bras, R., Gomes, C.P., and Selman, B., Solutions for hard and soft constraints using optimized probabilistic satisfiability, Theory and Applications of Satisfiability Testing – SAT 2013, Lecture Notes in Computer Science, Järvisalo, M. and Van Gelder, A., Eds., Berlin: Springer-Verlag, 2013.
9. Powell, W.B., Approximate Dynamic Programming: Solving the Curses of Dimensionality, Wiley Series in Probability and Statistics, Hoboken, N.J.: Wiley, 2011.
10. Grossmann, I.E., Apap, R.M., Calfa, B.A., Garcia-Herreros, P., and Zhang, Q., Recent advances in mathematical programming techniques for the optimization of process systems under uncertainty, Comput. Chem. Eng., 2016, vol. 91, pp. 3–14. https://doi.org/10.1016/j.compchemeng.2016.03.002
11. Schwarm, A.T. and Nikolaou, M., Chance-constrained model predictive control, AIChE J., 1999, vol. 45, no. 8, p. 1743.
12. Charnes, A., Cooper, W.W., and Symonds, G.H., Cost horizons and certainty equivalents: An approach to stochastic programming of heating oil, Manage. Sci., 1958, vol. 4, p. 235.
13. Prékopa, A., Stochastic Programming, New York: Springer, 1995.
14. Zhuangzhi, L. and Zukui, L., Optimal robust optimization approximation for chance constrained optimization problem, Comput. Chem. Eng., 2015, vol. 74, p. 89.
15. Jagannathan, R., Chance-constrained programming with joint constraints, Oper. Res., 1974, vol. 22, no. 2, p. 358.
16. van Ackooij, W. and Sagastizábal, C., Constrained bundle methods for upper inexact oracles with application to joint chance constrained energy problems, SIAM J. Optim., 2014, vol. 2, no. 24, p. 733.
17. Javier, O., Xavier, M., Mohamed, G., and Yongdong, L., Optimal Design and Placement of Piezoelectric Actuators Using Genetic Algorithm: Application to Switched Reluctance Machine Noise Reduction, INTECH, 2011.
18. Bernardo, F.P., Performance of cubature formulae in probabilistic model analysis and optimization, J. Comput. Appl. Math., 2015, vol. 280, p. 110.
19. Klöppel, M., Geletu, A., Hoffmann, A., and Li, P., Using sparse-grid methods to improve computation efficiency in solving dynamic nonlinear chance-constrained optimization problems, Ind. Eng. Chem. Res., 2011, vol. 50, p. 5693.
20. Acevedo, J. and Pistikopoulos, E.N., Stochastic optimization based algorithms for process synthesis under uncertainty, Comput. Chem. Eng., 1998, vol. 22, p. 647.
21. Knopov, P.S. and Norkin, V.I., On convergence conditions for the method of empirical averages in stochastic programming, Kibern. Sist. Anal., 2018, vol. 54, no. 1, p. 51.
22. Calafiore, G.C. and Campi, M.C., The scenario approach to robust control design, IEEE Trans. Autom. Control, 2006, vol. 51, p. 742.



23. Nemirovski, A. and Shapiro, A., Scenario approximations of chance constraints, Probabilistic and Randomized Methods for Design under Uncertainty, Calafiore, G. and Dabbene, F., Eds., London: Springer-Verlag, 2006, p. 3.
24. Robert, C.P. and Casella, G., Monte Carlo integration, Introducing Monte Carlo Methods with R, Springer Series in Use R!, New York: Springer-Verlag, 2010, ch. 3, p. 61.
25. Heitsch, H., A note on scenario reduction for two-stage stochastic programs, Oper. Res. Lett., 2007, vol. 35, no. 6, p. 731.
26. Pennanen, T. and Koivu, M., Epi-convergent discretizations of stochastic programs via integration quadratures, Numer. Math., 2005, vol. 100, no. 1, p. 141. https://doi.org/10.1007/s00211-004-0571-4
27. Mehrotra, S. and Papp, D., Generating moment matching scenarios using optimization techniques, SIAM J. Optim., 2013, vol. 23, no. 2, p. 963.
28. Dempster, M.A., Medova, E.A., and Yong, Y.S., Comparison of sampling methods for dynamic stochastic programming, Stochastic Optimization Methods in Finance and Energy, New York: Springer, 2011.
29. Löhndorf, N., An empirical analysis of scenario generation methods for stochastic optimization, Eur. J. Oper. Res., 2016, vol. 255, p. 121.
30. Ivanov, S.V. and Kibzun, A.I., Sample average approximation in the two-stage stochastic linear programming problem with quantile criterion, Proc. Steklov Inst. Math., 2018, vol. 303, no. 1, p. 115.
31. Bidhandi, H.M. and Patrick, J., Accelerated sample average approximation method for two-stage stochastic programming with binary first-stage variables, Appl. Math. Modell., 2017, vol. 41, p. 582.
32. Xu, H., Caramanis, C., and Mannor, S., Optimization under probabilistic envelope constraints, Oper. Res., 2012, vol. 60, no. 3, p. 682.
33. Bertsimas, D. and Sim, M., The price of robustness, Oper. Res., 2004, vol. 52, p. 35.
34. Hong, L.J., Yang, Y., and Zhang, L., Sequential convex approximations to joint chance constrained programs: A Monte Carlo approach, Oper. Res., 2011, vol. 59, p. 617.
35. Bertsimas, D., Gupta, V., and Kallus, N., Data-driven robust optimization, Cornell University Library. https://arxiv.org/abs/1401.0212. Accessed July 10, 2019.
36. Chen, W., Sim, M., Sun, J., and Teo, C.P., From CVaR to uncertainty set: Implications in joint chance-constrained optimization, Oper. Res., 2010, vol. 58, no. 2, p. 470.
37. Li, Z., Ding, R., and Floudas, C.A., A comparative theoretical and computational study on robust counterpart optimization: I. Robust linear and robust mixed integer linear optimization, Ind. Eng. Chem. Res., 2011, vol. 50, p. 10567.
38. Li, Z. and Floudas, C.A., A comparative theoretical and computational study on robust counterpart optimization: III. Improving the quality of robust solutions, Ind. Eng. Chem. Res., 2014, vol. 53, p. 13112.
39. Hu, Z., Hong, L.J., and So, A.M., Ambiguous probabilistic programs. http://www.optimization-online.org/DB_FILE/2013/09/4039.pdf. Accessed July 10, 2019.
40. Ostrovsky, G.M., Ziyatdinov, N.N., Lapteva, T.V., and Zaitsev, I.V., Two-stage optimization problem with chance constraints, Chem. Eng. Sci., 2011, vol. 66, p. 3815. https://doi.org/10.1016/j.CP.2011.05.001
41. Ostrovsky, G.M., Ziyatdinov, N.N., and Lapteva, T.V., Optimal design of chemical processes with chance constraints, Comput. Chem. Eng., 2013, vol. 59, p. 74. https://doi.org/10.1016/j.compchemeng.2013.05.029
42. Baker, K. and Toomey, B., Efficient relaxations for joint chance constrained AC optimal power flow, Electr. Power Syst. Res., 2017, no. 148, p. 230.
43. Ostrovsky, G.M., Ziyatdinov, N.N., and Lapteva, T.V., One-stage optimization problem with chance constraints, Chem. Eng. Sci., 2010, vol. 65, p. 2373. https://doi.org/10.1016/j.CP.2009.09.072
44. Ostrovsky, G.M., Ziyatdinov, N.N., and Lapteva, T.V., Optimization problem with normally distributed uncertain parameters, AIChE J., 2013, vol. 59, no. 7, p. 2471. https://doi.org/10.1002/aic.14044
45. Ostrovsky, G.M., Lapteva, T.V., and Ziyatdinov, N.N., Optimal design of chemical processes under uncertainty, Theor. Found. Chem. Eng., 2014, vol. 48, no. 5, pp. 583–593. https://doi.org/10.1134/S0040579514050212
46. Gill, P.E., Murray, W., and Wright, M.H., Practical Optimization, London: Academic, 1981.

Translated by L. Mosina

