Chemical Process Design Taking into Account Joint Chance Constraints
Russian Text © The Author(s), 2020, published in Teoreticheskie Osnovy Khimicheskoi Tekhnologii, 2020, Vol. 54, No. 1, pp. 17–29.
Abstract—A method is proposed for solving chemical process design problems that take into account the incompleteness and inaccuracy of the initial information, formulated as one-stage optimization problems with joint chance constraints. Joint chance constraints guarantee the required probability level of performance, unlike individual chance constraints, which provide only an estimate of that level. However, the calculation of joint chance constraints is much more complicated than that of individual chance constraints. This work proposes a method for solving chemical process design problems with allowance for joint chance constraints that eliminates the direct calculation of the constraints. The method converts the stochastic programming problem into a sequence of deterministic nonlinear programming problems and can significantly reduce the time needed to solve the problem. The effectiveness of the proposed approach is illustrated by model examples.
Keywords: chemical process design, optimization, joint chance constraints, chemical engineering
DOI: 10.1134/S0040579520010133
Soft constraints require that the conditions of CP operation be satisfied on average or with a given probability. When using the latter, a very high probability level of satisfying such constraints, or a low level of risk, is required. Methods for solving two-stage optimization problems with hard constraints and one-stage problems with chance constraints have been proposed in many papers [9–11]. In this work, we consider formulations of optimization problems with soft constraints in the form of chance constraints.

Chance constraints were first introduced in [12], after which significant studies of the properties of such constraints were carried out in [13]. Researchers were mainly attracted to problems with individual chance constraints, which can be presented as

Pr{g_j(d, z, θ) ≤ 0} ≥ α_j, j = 1, ..., m,   Pr{g_j(d, z, θ) ≤ 0} = ∫_{Ω_j} ρ(θ) dθ,   (3)

Ω_j = {θ : g_j(d, z, θ) ≤ 0; θ ∈ T},   (4)

where ρ(θ) is the probability density function of the parameters θ.

The calculation of constraints of the form (3) when solving the optimization problem requires a multidimensional integration procedure at each step of the optimization method, and the integration region Ω_j of the form (4) is not convex in most problems [14]. Moreover, as early as [15], joint chance constraints of the following form were analyzed:

Pr{g_j(d, z, θ) ≤ 0; j = 1, ..., m} ≥ α,   Pr{g_j(d, z, θ) ≤ 0; j = 1, ..., m} = ∫_{Ω_α} ρ(θ) dθ ≥ α,   (5)

Ω_α = {θ : g_j(d, z, θ) ≤ 0; j = 1, ..., m; θ ∈ T}.   (6)

Constraints of form (5) are even more complicated in the computational sense, since they are often not computable directly and lead to ravine regions and hard problems [16].

Comparing the two types of constraints (3) and (5), consider the features of the resulting solution. The requirement to satisfy constraints (3) does not imply the satisfaction of all m constraints in the same region, so that the regions Ω_j, j = 1, ..., m, may differ. It is necessary only to provide the required probability level for the satisfaction of each constraint. Since

Ω_j ⊂ T, j = 1, ..., m,   Ω_r ≠ Ω_s, r ≠ s, r = 1, ..., m, s = 1, ..., m,

a situation is possible in which the constraints with numbers from some set J, J ⊂ {1, ..., m}, are satisfied at a point θ^t ∈ T,

g_j(d, z, θ^t) ≤ 0, θ^t ∈ T, j ∈ J,

while the remaining constraints are violated at this point. Since such a point of the uncertainty domain characterizes a moment of the CP operation stage, it can happen that, although the specified probability level of constraint satisfaction is attained after solving the optimization problem with constraints of the form (3), some constraints will be satisfied at the given moment of the CP operation stage and some will not. Thus, we cannot guarantee the required probability level of constraint satisfaction and obtain only an estimate of the solution of the problem [17].

On the other hand, joint constraints (5) in their formalized statement already imply that the required probability level is ensured for the constraints being satisfied simultaneously in the same subdomain of the uncertainty domain. This guarantees that the required level of the probability of constraint satisfaction is provided. The guarantee of simultaneous fulfillment of joint constraints has attracted active attention to such constraints in the last decade.

The difficulty of solving optimization problems with joint chance constraints is that multidimensional integrals in the left-hand side of (5) must be calculated at each step of the optimization method in order to obtain the probability level of constraint satisfaction; in addition, the constraints are not convex [14]. The methods aimed at solving these problems can be divided into three groups: methods that offer modifications of quadrature formulas for multidimensional integration with a given accuracy, methods based on the idea of statistical tests, and methods providing an easy-to-calculate approximation of the left-hand sides of the constraints.

The first group of methods uses the features of the problem being solved to obtain special quadrature formulas that make it possible to compute the values of multidimensional integrals efficiently with a given accuracy [18–20]. However, the use of this group of methods still leads to a sharp increase in the number of nodes in the integration region as its dimension and the required accuracy grow. The second group includes statistical test methods, or sample approximation [21]. This group develops the idea of the Monte Carlo method, proposing modifications that accelerate the convergence of the method without a loss of accuracy [22, 23]. Here it is necessary to highlight the actively used quasi-Monte Carlo methods based on so-called low-discrepancy sequences, e.g., Sobol sequences. These sequences can be considered pseudorandom for many known distributions. A review of such methods is given in [24]. In this field, there are also methods that use probabilistic metrics to estimate the error of a solution obtained using statistical tests [25, 26]. A third direction is methods that fit the moments of the generated set to the characteristics of the original distribution [27]. A comparison of statistical test methods is given in Section 16 of the book [28]. Since all the listed methods of this group are based on the generation of a discrete distribution according to certain requirements and an algorithm, their common name
in the literature is sample approximation or sample average approximation (SAA) methods [29]. The original idea of the method was proposed by A. Nemirovski and A. Shapiro in [23]. In that work, the authors developed a method for solving the two-stage optimization problem with soft constraints for the case of biaffine dependences of the left-hand sides of the constraints on the variables d, z and the parameters θ. The SAA method was adapted to solving two-stage linear optimization problems with a quantile criterion by the authors of [30]. In [31], the method is modified and combined with the Benders decomposition algorithm for solving the two-stage optimization problem with binary parameters at the first stage and continuous ones at the second stage of the system life cycle. However, the SAA method uses random number generation to simulate the initial distribution and to calculate the constraints. This can lead to significant deviations from the actual distribution, or it may require large computational costs to achieve acceptable accuracy [14].
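To make the cost of sampling-based estimation concrete, the sketch below evaluates the left-hand sides of individual constraints of form (3) and of a joint constraint of form (5) by crude Monte Carlo for a two-constraint toy example; the constraint functions, distribution parameters, and sample size are illustrative assumptions, not data from this work. Every change of d or z requires repeating the whole sampling loop, which is the computational burden discussed above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative constraint functions g_j(d, z, theta) <= 0 (assumed, not the paper's CP model).
def g1(d, z, theta):
    return theta[:, 0] * d + theta[:, 1] - z

def g2(d, z, theta):
    return theta[:, 1] * z - 2.0 * d - theta[:, 0]

def chance_constraint_estimates(d, z, mu, sigma, n_samples=200_000):
    """Crude Monte Carlo estimates of the individual probabilities Pr{g_j <= 0}
    (form (3)) and of the joint probability Pr{g_1 <= 0 and g_2 <= 0} (form (5))
    for independent normally distributed uncertain parameters."""
    theta = rng.normal(mu, sigma, size=(n_samples, len(mu)))
    ok1 = g1(d, z, theta) <= 0.0
    ok2 = g2(d, z, theta) <= 0.0
    return ok1.mean(), ok2.mean(), (ok1 & ok2).mean()

p1, p2, p_joint = chance_constraint_estimates(
    d=1.0, z=3.0, mu=np.array([1.0, 0.5]), sigma=np.array([0.2, 0.3]))
print(p1, p2, p_joint)  # the joint probability never exceeds min(p1, p2)
```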
Methods of the third group include methods of reducing the existing constraints to well-known and easily computable ones, as well as methods of reducing chance constraints to a deterministic form. Among the first, the use of the Chebyshev inequality [32] and the Hoeffding inequality [33] should be mentioned. A convex approximation of joint chance constraints was proposed in [34], where the authors considered the linear optimization problem. The use of the Bonferroni inequality for reducing joint chance constraints to individual chance constraints is shown in [23, 35]. Approximations of constraints are proposed in [37] for multistage linear optimization problems. The calculation of constraints for different types of continuous and discrete distributions of uncertain parameters is considered in [38]. A convex approximation of joint constraints for special cases of constraint functions is presented in [39]. These approximations make it possible to simplify the calculation of the multidimensional integrals forming the left-hand sides of chance constraints. At the same time, methods are being developed that reduce chance constraints to a deterministic form, which makes it possible to completely eliminate multidimensional integration operations. Initially, such methods were proposed for individual chance constraints; however, as early as 1997, K. Maranas considered a way to convert joint chance constraints to a deterministic form for the linear form of constraints. G.M. Ostrovsky et al. formulated a two-stage optimization problem with chance constraints [40, 41] and proposed a method for solving it involving the transformation of the chance constraints into a deterministic form. However, all of the above works either involve the use of individual chance constraints or are designed for a narrow class of constraints.

In what follows, we assume that the uncertain parameters θ_i are statistically independent random variables. We also assume that the uncertain parameters θ_i have a normal distribution with mean μ_i and variance σ_i². We consider chance constraints with uncertainty only in the left-hand side of the constraints.

FORMULATION OF OPTIMIZATION PROBLEMS WITH JOINT CHANCE CONSTRAINTS

The problem of optimal CP design in the formulation of a one-stage optimization problem with individual chance constraints in the optimality-on-average strategy can be formulated as [41]

min_{d, z ∈ H} E[f(d, z, θ)],   (7)

Pr{θ ∈ Ω} ≡ ∫_Ω ρ(θ) dθ ≥ α,   (8)

Ω = {θ : g_j(d, z, θ) ≤ 0; θ ∈ T}, j = 1, ..., m,   (9)

E[f(d, z, θ)] = ∫_T f(d, z, θ) ρ(θ) dθ.   (10)

In problem (7)–(8), the probability level α of constraint satisfaction in (8) is set by the designer under the assumption 0 ≤ α ≤ 1; (10) is the mathematical expectation of the function f(d, z, θ).

If we consider joint chance constraints and account for information from the design and operation stages of the CP life cycle, then, in general, the problem of optimal CP design in the formulation of a two-stage optimization problem with joint chance constraints (TSOPJCC) in the optimality-on-average strategy has the form [42]

min_{d, z(θ) ∈ H} ∫_T f(d, z(θ), θ) ρ(θ) dθ,   (11)

Pr{g_j(d, z(θ), θ) ≤ 0; j = 1, ..., m} ≥ α,   (12)

Pr{g_j(d, z(θ), θ) ≤ 0; j = 1, ..., m} = ∫_{Ω_T} ρ(θ) dθ,   (13)

Ω_T = {θ : g_j(d, z(θ), θ) ≤ 0; j = 1, ..., m; θ ∈ T}.   (14)

We further assume that the solution of problem (11)–(12) exists and denote it as d*, z*(θ).

Considering that the left-hand side of constraint (12) can be rewritten in the form (13), (14), problem (11)–(12) can be rewritten in the following form:

min_{d, z(θ) ∈ H} ∫_T f(d, z(θ), θ) ρ(θ) dθ,   (15)

Pr{θ ∈ Ω_T} ≡ ∫_{Ω_T} ρ(θ) dθ ≥ α.   (16)
If a single vector of control variables z is used over the entire uncertainty domain, problem (15)–(16) is transformed into a one-stage optimization problem with joint chance constraints (OSOPJCC) of the form

min_{d, z ∈ H} E[f(d, z, θ)],   (18)

Pr{θ ∈ Ω_O} ≡ ∫_{Ω_O} ρ(θ) dθ ≥ α.   (19)

Further we consider the solution of OSOPJCC (18)–(19).

The proposed approach is based on the following operations:

(i) The construction of a domain Ω̄ ⊂ T approximating the domain Ω_O. The domain Ω̄ is constructed in such a way as to exclude multidimensional integration operations when calculating the left-hand side of constraint (19). The domain Ω̄ is refined at each iteration of the OSOPJCC solution, for which a procedure of improving the approximation of the domain Ω_O is developed.

(ii) The approximation E_ap[f(d, z, θ)] of the OSOPJCC criterion, which has the form of the mathematical expectation E[f(d, z, θ)]. We use the approximation proposed earlier in [43], which avoids multidimensional integration in (18).

Then at some iteration with number k we solve the problem

f_1^(k) = min_{d, z ∈ H} E_ap^(k)[f(d, z, θ)],   Pr{θ ∈ Ω̄^(k)} ≥ α.

APPROXIMATION OF THE CRITERION OF OSOPJCC

At iteration k, the uncertainty domain T is partitioned into subdomains T_q^(k):

T = T_1^(k) ∪ T_2^(k) ∪ ... ∪ T_Q(k)^(k),   (21)

T_q1^(k) ∩ T_q2^(k) = ∅, q1 = 1, ..., Q(k), q2 = 1, ..., Q(k), q1 ≠ q2,   (22)

T_q^(k) = {θ : θ_i^(k)L,q ≤ θ_i ≤ θ_i^(k)U,q, i = 1, ..., n_θ}, q = 1, ..., Q(k).   (23)

Within each subdomain T_q^(k), the integrand is replaced by its linearization at a point θ^q ∈ T_q^(k), which gives the approximation of the criterion

E_ap[f(d, z, θ)] = Σ_{q=1}^{Q(k)} ( f(d, z, θ^q) a_q + Σ_{i=1}^{n_θ} (∂f(d, z, θ^q)/∂θ_i) (E[θ_i; T_q^(k)] − a_q θ_i^q) ),   (24)

a_q = ∫_{T_q^(k)} ρ(θ) dθ,   (25)

E[θ_i; T_q^(k)] = ∫_{T_q^(k)} θ_i ρ(θ) dθ.   (26)

Using the independence of the uncertain parameters, we can calculate (25) on the basis of the dependence [43]

a_q = ∏_{i=1}^{n_θ} [Φ(θ̄_i^(k)U,q) − Φ(θ̄_i^(k)L,q)],   (27)

θ̄_i^(k)L,q = (θ_i^(k)L,q − E[θ_i])/σ_i,   θ̄_i^(k)U,q = (θ_i^(k)U,q − E[θ_i])/σ_i,   q = 1, ..., Q(k), i = 1, ..., n_θ,   (28)

where Φ(ξ) is the function of the standard normal distribution, E[θ_i] is the mathematical expectation, and (σ_i)² is the variance of the parameter θ_i, i = 1, ..., n_θ. Owing to the same independence, the integral on the right-hand side of (26) is also calculated through one-dimensional integrals of the normal density.
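For independent normal parameters, the quantities entering (24)–(27) reduce to one-dimensional normal distribution functions. The sketch below is a minimal illustration of this reduction for a single box subdomain: the weight a_q of (25) via the product (27), the partial means E[θ_i; T_q] of (26), and the corresponding contribution to E_ap[f] in the spirit of (24). The test function f, the numbers, and the use of finite differences for ∂f/∂θ_i are assumptions made for this example only.

```python
import numpy as np
from scipy.stats import norm

def box_weight(lo, hi, mu, sigma):
    """a_q from (27): probability mass of the box [lo, hi] for independent
    normal parameters theta_i ~ N(mu_i, sigma_i**2)."""
    zl = (lo - mu) / sigma
    zu = (hi - mu) / sigma
    return np.prod(norm.cdf(zu) - norm.cdf(zl))

def partial_means(lo, hi, mu, sigma):
    """E[theta_i; T_q] from (26): integral of theta_i * rho(theta) over the box,
    reduced to one-dimensional integrals thanks to independence."""
    zl = (lo - mu) / sigma
    zu = (hi - mu) / sigma
    mass_1d = norm.cdf(zu) - norm.cdf(zl)                      # per-coordinate masses
    first_moment_1d = mu * mass_1d + sigma * (norm.pdf(zl) - norm.pdf(zu))
    total = np.prod(mass_1d)
    # E[theta_i; T_q] = (1-D first moment of coordinate i) * (masses of the other coordinates)
    return np.array([first_moment_1d[i] * total / mass_1d[i] for i in range(len(mu))])

def criterion_term(f, d, z, lo, hi, mu, sigma, eps=1e-5):
    """Contribution of one subdomain T_q to E_ap[f] in the spirit of (24):
    f(d, z, theta_q) * a_q + sum_i df/dtheta_i * (E[theta_i; T_q] - a_q * theta_q_i),
    with theta_q taken as the box centre and a finite-difference gradient."""
    theta_q = 0.5 * (lo + hi)
    a_q = box_weight(lo, hi, mu, sigma)
    e_theta = partial_means(lo, hi, mu, sigma)
    unit = np.eye(len(mu))
    grad = np.array([(f(d, z, theta_q + eps * unit[i]) - f(d, z, theta_q - eps * unit[i]))
                     / (2 * eps) for i in range(len(mu))])
    return f(d, z, theta_q) * a_q + grad @ (e_theta - a_q * theta_q)

# Illustrative use with an assumed cost function and one subdomain.
f = lambda d, z, th: (d - th[0]) ** 2 + z * th[1]
mu, sigma = np.array([1.0, 0.5]), np.array([0.2, 0.3])
lo, hi = mu - sigma, mu + sigma
print(box_weight(lo, hi, mu, sigma), criterion_term(f, 2.0, 1.0, lo, hi, mu, sigma))
```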
We introduce new functions ḡ_j(d, z, θ_1, θ_2, ..., θ_nθ) of the form

ḡ_j(d, z, θ_1, θ_2, ..., θ_nθ) = θ_nθ − φ_j(d, z, θ_1, θ_2, ..., θ_nθ−1),   if ∂g_j/∂θ_nθ ≥ 0,
ḡ_j(d, z, θ_1, θ_2, ..., θ_nθ) = φ_j(d, z, θ_1, θ_2, ..., θ_nθ−1) − θ_nθ,   if ∂g_j/∂θ_nθ ≤ 0,   (34)

where φ_j(d, z, θ_1, ..., θ_nθ−1) is the hypersurface obtained by resolving the equation g_j(d, z, θ) = 0 with respect to θ_nθ. Obviously, the above reasoning can be extended to all domains Ω_j, j = 1, ..., m. Then, using the form (34) of the functions ḡ_j(d, z, θ_1, θ_2, ..., θ_nθ), it is possible to present the sought-after domain Ω_O in the form

Ω_O = {θ : ḡ_j(d, z, θ_1, θ_2, ..., θ_nθ) ≤ 0, j = 1, ..., m, θ_i^L ≤ θ_i ≤ θ_i^U, i = 1, ..., n_θ}.   (35)

We introduce a partition of the uncertainty domain into subdomains R_l^(k) = {θ : θ_i^(k)L,l ≤ θ_i ≤ θ_i^(k)U,l, i = 1, ..., n_θ}, l = 1, ..., N(k), at the iteration with number k, such that the following conditions are fulfilled:

T = R_1^(k) ∪ R_2^(k) ∪ ... ∪ R_N(k)^(k),   R_l1^(k) ∩ R_l2^(k) = ∅, l1 ≠ l2, l1 = 1, ..., N(k), l2 = 1, ..., N(k).   (36)

We consider the case when the following condition is fulfilled:

∂g_j(d, z, θ)/∂θ_nθ ≥ 0, j = 1, ..., m.   (40)

In this case, the domain Ω^l,(k) (the part of the domain Ω_O lying in R_l^(k)) can be presented in the form

Ω^l,(k) = {θ : θ_nθ − φ_j(d, z, θ_1, θ_2, ..., θ_nθ−1) ≤ 0, j = 1, ..., m, θ ∈ R_l^(k)}.   (41)

We approximate the functions φ_j(d, z, θ_1, θ_2, ..., θ_nθ−1), j = 1, ..., m, at the iteration k by piecewise-constant dependences

φ_j^(k)(d, z, θ) = {φ_j^(k,l), if θ ∈ R_l^(k)}, j = 1, ..., m,   (42)

where the values φ_j^(k,l), j = 1, ..., m, l = 1, ..., L(k), are taken at the central point θ^(k),l,mid = (θ_1^(k),l,mid, ..., θ_nθ−1^(k),l,mid), θ_i^(k),l,mid = 0.5(θ_i^(k)U,l + θ_i^(k)L,l), i = 1, ..., n_θ − 1, of the region R_l^(k) and have the form

φ_j^(k,l) = φ_j(d, z, θ^(k),l,mid).   (43)
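A small sketch of construction (42)–(43): over each subdomain R_l the hypersurface φ_j is replaced by its value at the centre point of R_l. The functions φ_j used below are assumed illustrative dependences, not ones derived from the CP model of this work.

```python
import numpy as np

def centre_point(lo, hi):
    """Centre of an axis-aligned subdomain R_l over theta_1 ... theta_{n_theta - 1}."""
    return 0.5 * (np.asarray(lo) + np.asarray(hi))

def piecewise_levels(phi_funcs, d, z, subdomain_bounds):
    """phi_j^{(k,l)} from (42)-(43): value of each phi_j at the centre of each R_l.
    Returns a list indexed by l of arrays indexed by j."""
    return [np.array([phi(d, z, centre_point(lo, hi)) for phi in phi_funcs])
            for lo, hi in subdomain_bounds]

# Two assumed hypersurfaces phi_j(d, z, theta_1) and two subdomains over theta_1.
phi_funcs = [lambda d, z, t: d + 0.5 * z - 0.1 * t[0] ** 2,
             lambda d, z, t: 0.8 * d + 0.2 * z + 0.05 * t[0]]
subdomain_bounds = [([0.0], [1.0]), ([1.0], [2.0])]
print(piecewise_levels(phi_funcs, d=1.0, z=2.0, subdomain_bounds=subdomain_bounds))
```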
Then, for the approximation of the domains Ω^l,(k), l = 1, ..., L(k), it is possible to introduce the domains Ω̄^l,(k), in which the hypersurfaces φ_j are replaced by their piecewise values (43):

Ω̄^l,(k) = {θ : θ_nθ − φ_j^(k,l) ≤ 0, j = 1, ..., m, θ ∈ R_l^(k)}, l = 1, ..., L(k).   (45)

With allowance for (45) and (43), it is possible to write

Pr{θ ∈ Ω̄^(k)} = Pr{θ ∈ Ω̄^1,(k)} + Pr{θ ∈ Ω̄^2,(k)} + ... + Pr{θ ∈ Ω̄^L(k),(k)}.   (47)

We substitute expressions (24), (45), and (47) into problem (18)–(19). Taking into account additional search variables in the new formulation, we get the problem

f_1^(k) = min_{d, z ∈ H, y_l^(k)} E_ap^(k)[f(d, z, θ)],   (48)

Pr{θ ∈ Ω̄^(k)} ≥ α,   (49)

y_l^(k) = min_{j=1,...,m} φ_j^(k,l), l = 1, ..., L(k).   (50)

Problem (48)–(50) gives an estimate of the sought-after solution to problem (15)–(16), but it contains a nondifferentiable function in the right-hand side of (50). We therefore replace constraint (50) by constraints of the form y_l^(k) ≤ φ_j^(k,l), j = 1, ..., m, l = 1, ..., L(k). The resulting problem (53) contains only continuously differentiable functions and is a common nonlinear programming problem.
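As a toy illustration of the resulting smooth formulation (under assumed data only, not the CP model of this work), the sketch below minimizes a design cost subject to an approximate probability requirement of type (49) and to the inequalities y_l ≤ φ_j^(k,l) that replace the nondifferentiable condition (50). The masses A_l, the lower levels v_l, and the dependences φ_j^(k,l)(d) are invented for the example.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize, NonlinearConstraint

# Assumed data for two subdomains R_l: masses A_l over theta_1..theta_{n-1},
# lower normalized levels v_l, and levels phi_{j,l}(d) for two constraints j.
A = np.array([0.49, 0.49])
v = np.array([-3.0, -3.0])
phi = lambda d: np.array([[2.0 + 0.5 * d, 2.4 + 0.3 * d],   # phi_{j=1,2} on R_1
                          [1.8 + 0.6 * d, 2.2 + 0.4 * d]])  # phi_{j=1,2} on R_2
alpha = 0.9

cost = lambda x: (x[0] - 1.0) ** 2                    # illustrative design cost, x = (d, y_1, y_2)
prob = lambda x: A @ (norm.cdf(x[1:]) - norm.cdf(v))  # approximate joint probability
upper = lambda x: (phi(x[0]) - x[1:, None]).ravel()   # phi_{j,l} - y_l >= 0 replaces (50)

res = minimize(cost, x0=np.array([1.0, 1.0, 1.0]), method="trust-constr",
               constraints=[NonlinearConstraint(prob, alpha, np.inf),
                            NonlinearConstraint(upper, 0.0, np.inf)])
print(res.x, prob(res.x))
```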
Now we consider the case when ∂g_j(d, z, θ)/∂θ_nθ ≤ 0, j = 1, ..., m. In this case, the domain Ω^l,(k) can be presented in the form

Ω^l,(k) = {θ : φ_j(d, z, θ_1, θ_2, ..., θ_nθ−1) − θ_nθ ≤ 0, j = 1, ..., m, θ ∈ R_l^(k)}.

Repeating the above arguments for the new case, we introduce the domains

Ω̄^l,(k) = {θ : φ_j^(k,l) − θ_nθ ≤ 0, j = 1, ..., m, θ ∈ R_l^(k)}, l = 1, ..., L(k).
When part of the constraints satisfies ∂g_j/∂θ_nθ ≥ 0 (the index set J_1) and the remaining constraints satisfy ∂g_j/∂θ_nθ ≤ 0 (the index set J_2), the probability constraint takes the form

Σ_{l=1}^{L(k)} A_l [Φ(y_l^(k)) − Φ(v_l^(k))] ≥ α,   (55)

v_l^(k) ≥ κ_j^(k,l), j ∈ J_2,   y_l^(k) ≤ φ_j^(k,l), j ∈ J_1,   l = 1, ..., L(k),

where A_l is the probability mass of the subdomain R_l^(k) over the coordinates θ_1, ..., θ_nθ−1.

Obviously, problem (55) can be used if the following condition is fulfilled:

min_{j ∈ J_1} φ_j^(k,l) ≥ max_{j ∈ J_2} φ_j^(k,l).

Note: it should be noted that, in the course of solving the problem, the derivative ∂g_j(d, z, θ)/∂θ_nθ may change sign. This implies the nondifferentiability of the functions (34).
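The left-hand sides of the approximate probability constraints of type (47), (49), and (55) can be evaluated using only one-dimensional normal distribution functions. A minimal sketch of such an evaluation is given below; the subdomain data (masses A_l and levels v_l, y_l) are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def approximate_joint_probability(A, v, y):
    """Sum over subdomains l of A_l * [Phi(y_l) - Phi(v_l)], cf. (47) and (55).
    A_l is the probability mass of R_l over the first n_theta - 1 coordinates;
    v_l and y_l are the normalized lower and upper levels cutting theta_{n_theta}."""
    A, v, y = map(np.asarray, (A, v, y))
    return float(A @ np.clip(norm.cdf(y) - norm.cdf(v), 0.0, None))

# Illustrative data for three subdomains; compare the result with the required level alpha.
print(approximate_joint_probability(A=[0.3, 0.4, 0.25], v=[-2.5, -2.5, -2.0], y=[1.8, 2.1, 1.5]))
```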
Further we consider the principle of choosing the partition domains. We assume that we have solved one of the problems (53)–(55) at the iteration with number k and obtained the solution d^(k), z^(k).

To refine the approximation f(d, z, θ, θ^q) at each new iteration, we partition some domain T_s^(k) with number s from the set T_q^(k), q = 1, ..., Q(k). To select such a domain, we estimate the quality of the approximation of the function f(d, z, θ) by the dependences f(d, z, θ, θ^q) in each domain T_q^(k), q = 1, ..., Q(k), and choose the domain to which the worst approximation quality corresponds.
Fig. 1. Designed CP: (1) reactor, (2) heat exchanger, and (3) compressor.
The quality of the piecewise approximation of the constraints over the subdomains R_l^(k) is estimated in a similar way, through the deviations (φ_j(d^(k), z^(k), θ) − φ_j(d^(k), z^(k), θ^(k),l,mid))², where θ^(k),l,mid = (θ_1^(k),l,mid, ..., θ_nθ−1^(k),l,mid), θ_i^(k),l,mid = 0.5(θ_i^(k)U,l + θ_i^(k)L,l), i = 1, ..., n_θ − 1. We partition the domain R_p^(k) in the center of the edge corresponding to an uncertain parameter selected sequentially from the set {θ_1, θ_2, ..., θ_nθ−1}.

COMPUTATIONAL EXPERIMENT

To demonstrate the effectiveness of the proposed method, the OSOP was solved with chance constraints at different probability levels and different sizes of the uncertainty domain for the CP described in [45]. The problem was solved in two formulations: in the form of the OSOP with individual constraints (7)–(8) and in the form of the OSOP with joint chance constraints.

The designed CP (Fig. 1) includes a reactor, a heat exchanger, and a compressor. The model relates the heat-exchanger outlet temperature T2 to the temperatures T1, Tw1, and Tw2 through the product AU of the heat transfer coefficient and the heat exchange surface area, and the flow rates are

F1 = Q / (ρ c_p (T1 − T2)),   F_w = Q / (ρ_w c_pw (T_w2 − T_w1)).

The values of the model state parameters are given in Table 1. The vector of uncertain parameters consists of five quantities: F0, T0, Tw1, k_R, and U. The domain of uncertainty is set in the form of intervals of the variation of the uncertain parameters with respect to their nominal values: {θ_i : θ_i^N (1 − γσ_i) ≤ θ_i ≤ θ_i^N (1 + γσ_i)}, i = 1, ..., 5. The parameter γ sets the size of the domain of uncertainty. The nominal values θ_i^N of the uncertain parameters and the values σ_i are given in Table 2. The statistical mutual independence of the uncertain parameters is assumed.
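The uncertainty box used in the experiment can be generated directly from the nominal values and the scaling parameter γ. In the sketch below the nominal values and the σ_i are placeholders; the actual values are those of Table 2, which is not reproduced here.

```python
import numpy as np

def uncertainty_box(theta_nom, sigma, gamma):
    """Bounds theta_i^N * (1 - gamma*sigma_i) <= theta_i <= theta_i^N * (1 + gamma*sigma_i)."""
    theta_nom = np.asarray(theta_nom, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    return theta_nom * (1.0 - gamma * sigma), theta_nom * (1.0 + gamma * sigma)

# Placeholder nominal values and sigma_i for the five parameters F0, T0, Tw1, kR, U.
theta_nom = [40.0, 330.0, 300.0, 10.0, 1600.0]
sigma = [0.02, 0.01, 0.01, 0.02, 0.02]
for gamma in (1.0, 1.5, 2.5):
    lo, hi = uncertainty_box(theta_nom, sigma, gamma)
    print(gamma, lo, hi)
```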
Table 3. Results of solving the CP design problem in the OSOP form with individual chance constraints

  γ     α     |          Method 2           |          Method 1
              |  f2      V     A     t, s   |  f1      V     A     t, s
  –     0     |  9374    5.48  7.88  0.01   |  –       –     –     –
  1     0.5   |  9878    5.63  7.44  10     |  9379    5.48  7.87  2.9
  1     0.75  |  9957    5.79  7.48  11     |  9608    5.64  7.70  3.38
  1     0.95  |  10132   6.04  7.62  12.59  |  9983    5.84  7.68  3.85
  1.5   0.5   |  9941    5.7   7.68  65     |  9386    5.48  7.86  2.73
  1.5   0.75  |  10019   5.95  7.98  70     |  9750    5.72  7.66  2.32
  1.5   0.95  |  10155   6.35  8.07  73     |  10113   6.03  7.64  14
  2.5   0.5   |  –       –     –     –      |  9409    5.48  7.82  2.19
  2.5   0.95  |  –       –     –     –      |  10824   6.42  8.61  10
Table 4. Results of solving the CP design problem in the OSOP form with joint chance constraints

  γ     α     |  f       V     A     t, s
  1.0   0.5   |  9937    5.76  7.41  0.3
  1.0   0.75  |  10038   5.85  7.54  0.35
  1.0   0.95  |  10168   5.97  7.84  0.3
  1.5   0.5   |  10014   5.81  7.49  0.3
  1.5   0.75  |  10155   6.15  7.84  0.3
  1.5   0.95  |  10268   6.23  8.02  0.35
  2.5   0.5   |  10093   5.91  7.81  0.3
  2.5   0.95  |  10833   6.89  8.75  0.5

The criterion values obtained with the proposed method (Table 4) are close to the values obtained by method 2 for the problem with individual constraints (7)–(8), but significantly exceed the values obtained by method 1. This can be explained by the low accuracy achieved by the approximation of the constraints when using method 1; i.e., more iterations should be carried out for method 1.

We note that, in comparison with method 1 and method 2, the proposed method finds a solution in a very short time. In addition, this method works on large domains of uncertainty, unlike method 2.

The undoubted advantage of the method proposed in this work is the guarantee of compliance with the required probability level of constraint satisfaction at any time during the operation phase of the designed CP, as well as the short time it takes to obtain a solution.

CONCLUSIONS

This article proposes an approach to solving design problems of flexible CPs presented in the form of one-stage optimization problems with joint chance constraints. The difficulty of solving such problems lies in the impossibility of calculating the resulting multidimensional integrals in the left-hand sides of the constraints by well-known methods for many of the nonlinear functions that formalize the requirements for CP operation. Even when such a calculation is possible, the direct solution of the problems requires multidimensional integration operations at each iteration of the method.

The method proposed in this work involves solving a sequence of problems that give an estimate of the criterion of the sought-after problem. For the construction of such problems, the use of approximations of the integrands is proposed, which makes it possible to avoid multidimensional integration in the criterion of the problem and reduces the joint chance constraints to a set of deterministic constraints.

The advantages of the proposed method include the high speed of obtaining the result, as well as the fact that the method yields, in final form, the size and location of the domain of constraint satisfaction within the uncertainty domain. This makes it possible to more fully estimate the performance of the designed CP in different stages of its operation.

NOTATION

Notation of the theoretical section:

a   coefficient in the approximation of the integrand
d   design search variables
E   mathematical expectation
f   estimation function of the operating efficiency of the CP
g   constraint function
H   allowable range of search variables
h   constraints independent of uncertain parameters
I   one-dimensional integral
k   iteration number
L   number of subdomains in the domain Ω̄
m   number of chance constraints
N   number of subdomains in the domain Ω
n   vector dimensionality
Pr  probability
p   number of constraints independent of uncertain parameters
Q   number of subdomains in the domain T
R   approximation of domain Ω
S   range of uncertain parameter
T   domain of uncertainty
z   control search variables
α   probability level
θ   uncertain parameters
ξ   random quantity
ρ   probability density
σ   standard deviation
Φ   function of the standard normal distribution
φ   approximable hypersurface obtained for a chance constraint
Ω   domain of constraint satisfaction
Ω̄   approximation of domain Ω

Subscripts and superscripts of the theoretical section:

ap  approximation
i   index
j   constraint number
l   subdomain number
L   left domain boundary
O   one-stage optimization problem
q   subdomain number
r   subdomain number
23. Nemirovski, A. and Shapiro, A., Scenario approximations of chance constraints, Probabilistic and Randomized Methods for Design under Uncertainty, Calafiore, G. and Dabbene, F., Eds., London: Springer-Verlag, 2006, p. 3.
24. Robert, C.P. and Casella, G., Monte Carlo integration, Introducing Monte Carlo Methods with R, Springer Series in Use R!, New York: Springer-Verlag, 2010, ch. 3, p. 61.
25. Heitsch, H., A note on scenario reduction for two-stage stochastic programs, Oper. Res. Lett., 2007, vol. 35, no. 6, p. 731.
26. Pennanen, T. and Koivu, M., Epi-convergent discretizations of stochastic programs via integration quadratures, Numer. Math., 2005, vol. 100, no. 1, p. 141. https://doi.org/10.1007/s00211-004-0571-4
27. Mehrotra, S. and Papp, D., Generating moment matching scenarios using optimization techniques, SIAM J. Optim., 2013, vol. 23, no. 2, p. 963.
28. Dempster, M.A., Medova, E.A., and Yong, Y.S., Comparison of sampling methods for dynamic stochastic programming, Stochastic Optimization Methods in Finance and Energy, New York: Springer, 2011.
29. Löhndorf, N., An empirical analysis of scenario generation methods for stochastic optimization, Eur. J. Oper. Res., 2016, vol. 255, p. 121.
30. Ivanov, S.V. and Kibzun, A.I., Sample average approximation in the two-stage stochastic linear programming problem with quantile criterion, Proc. Steklov Inst. Math., 2018, vol. 303, no. 1, p. 115.
31. Bidhandi, H.M. and Patrick, J., Accelerated sample average approximation method for two-stage stochastic programming with binary first-stage variables, Appl. Math. Modell., 2017, vol. 41, p. 582.
32. Xu, H., Caramanis, C., and Mannor, S., Optimization under probabilistic envelope constraints, Oper. Res., 2012, vol. 60, no. 3, p. 682.
33. Bertsimas, D. and Sim, M., The price of robustness, Oper. Res., 2004, vol. 52, p. 35.
34. Hong, L.J., Yang, Y., and Zhang, L., Sequential convex approximations to joint chance constrained programs: A Monte Carlo approach, Oper. Res., 2011, vol. 59, p. 617.
35. Bertsimas, D., Gupta, V., and Kallus, N., Data-driven robust optimization, Cornell University Library. https://arxiv.org/abs/1401.0212. Accessed July 10, 2019.
36. Chen, W., Sim, M., Sun, J., and Teo, C.P., From CVaR to uncertainty set: Implications in joint chance-constrained optimization, Oper. Res., 2010, vol. 58, no. 2, p. 470.
37. Li, Z., Ding, R., and Floudas, C.A., A comparative theoretical and computational study on robust counterpart optimization: I. Robust linear and robust mixed integer linear optimization, Ind. Eng. Chem. Res., 2011, vol. 50, p. 10567.
38. Li, Z. and Floudas, C.A., A comparative theoretical and computational study on robust counterpart optimization: III. Improving the quality of robust solutions, Ind. Eng. Chem. Res., 2014, vol. 53, p. 13112.
39. Hu, Z., Hong, L.J., and So, A.M., Ambiguous probabilistic programs. http://www.optimization-online.org/DB_FILE/2013/09/4039.pdf. Accessed July 10, 2019.
40. Ostrovsky, G.M., Ziyatdinov, N.N., Lapteva, T.V., and Zaitsev, I.V., Two-stage optimization problem with chance constraints, Chem. Eng. Sci., 2011, vol. 66, p. 3815. https://doi.org/10.1016/j.ces.2011.05.001
41. Ostrovsky, G.M., Ziyatdinov, N.N., and Lapteva, T.V., Optimal design of chemical processes with chance constraints, Comput. Chem. Eng., 2013, vol. 59, p. 74. https://doi.org/10.1016/j.compchemeng.2013.05.029
42. Baker, K. and Toomey, B., Efficient relaxations for joint chance constrained AC optimal power flow, Electr. Power Syst. Res., 2017, vol. 148, p. 230.
43. Ostrovsky, G.M., Ziyatdinov, N.N., and Lapteva, T.V., One-stage optimization problem with chance constraints, Chem. Eng. Sci., 2010, vol. 65, p. 2373. https://doi.org/10.1016/j.ces.2009.09.072
44. Ostrovsky, G.M., Ziyatdinov, N.N., and Lapteva, T.V., Optimization problem with normally distributed uncertain parameters, AIChE J., 2013, vol. 59, no. 7, p. 2471. https://doi.org/10.1002/aic.14044
45. Ostrovsky, G.M., Lapteva, T.V., and Ziyatdinov, N.N., Optimal design of chemical processes under uncertainty, Theor. Found. Chem. Eng., 2014, vol. 48, no. 5, pp. 583–593. https://doi.org/10.1134/S0040579514050212
46. Gill, P.E., Murray, W., and Wright, M.H., Practical Optimization, London: Academic, 1981.

Translated by L. Mosina