

ARTICLE

pubs.acs.org/IECR

A Wiener Neural Network-Based Identification and Adaptive Generalized Predictive Control for Nonlinear SISO Systems
Jinzhu Peng, Rickey Dubay,* and Jose Mauricio Hernandez
Department of Mechanical Engineering, University of New Brunswick, Fredericton, NB, E3B5A3, Canada

Ma’moun Abu-Ayyad
Mechanical Engineering Department, Penn State Harrisburg, Middletown, Pennsylvania 17057, United States

ABSTRACT: In this study, a Wiener-type neural network (WNN) is derived for the identification and control of single-input and single-output (SISO) nonlinear systems. The nonlinear system is identified by the WNN, which consists of a linear dynamic block in cascade with a nonlinear static gain. The Lipschitz criterion for model order determination and back propagation for the adjustment of the weights in the network are presented. Using the parameters of the Wiener model, the analytical expressions used in the controller, generalized predictive control (GPC), are modified every time step to handle the nonlinear dynamics of the controlled variable. Finally, the proposed WNN-based GPC algorithm is tested in simulation on several nonlinear plants with different degrees of nonlinearity. Simulation results show that the WNN identification approach has better accuracy than other neural network identifiers, and that the WNN-based GPC has better control performance than standard GPC.

’ INTRODUCTION

Wiener and Hammerstein models are the most known and most widely used for modeling of various processes, such as chemical processes,1–3 separation processes,4 hydraulic systems,5 and chaotic systems.6,7 In Wiener modeling, a linear dynamic block precedes a nonlinear steady-state one, while Hammerstein models contain the same elements in the reverse order.8 These types of models are called block-oriented nonlinear models.9 Unlike black-box models, the block-oriented models have a clear physical interpretation: the steady-state part describes the gain of the system.10

Artificial neural network (ANN) models have been successfully applied to the identification and control of a variety of nonlinear dynamical systems and processes. Many researchers have integrated neural networks with Wiener and Hammerstein model structures to formulate the system static nonlinearities. Al-Duwaish et al.11 proposed an identification scheme using a hybrid model consisting of a linear autoregressive moving average (ARMA) model in cascade with a multilayer neural network; the ARMA model and the multilayer network represent the dynamic linear block and the static nonlinear element of the Wiener model, respectively. For identifying a chaotic system, Chen et al.6 used a simple linear model to represent the dynamic part and a neural network to represent the nonlinear static part. Also, the dynamic linear part was replaced by Laguerre filters and the nonlinear static part was described as a neural network.4 Tötterman and Toivonen12 used support vector regression to identify nonlinear Wiener systems; the linear block is expanded in terms of Laguerre or Kautz filters, and the static nonlinear block is determined using support vector machine regression. This multiple-input and multiple-output (MIMO) Wiener model has been used for identification of the chromatographic separation process. In addition, some researchers tried to formulate the dynamic linear part and nonlinear static part of the Wiener or Hammerstein model using a multilayer neural network. Janczak13 designed a neural network formulation of the Hammerstein model, composed of one hidden layer with nonlinear nodes and one linear output node. Wu et al.14 proposed a Hammerstein neural network compensator to identify the dynamic calibration process for an infrared thermometer sensor. In many instances, these investigations assume that the orders of the dynamic linear part in the Wiener or Hammerstein model are known a priori.

Model predictive control (MPC) is an optimization-based control methodology that is widely used in industry. The main goal of MPC is to use a system mathematical model to obtain a control signal by minimizing an objective function.15,16 Various MPC algorithms were proposed, such as dynamic matrix control (DMC), generalized predictive control (GPC), nonlinear MPC,1–3 and others. Ławryńczuk10 proposed a computationally efficient nonlinear MPC algorithm based on neural Wiener models, where nonlinear prediction and linearization are adopted. Arefi et al.2 proposed a nonlinear MPC based on classic optimization methods with nonlinear identification, using a Wiener model for a nonlinear chemical process; the nonlinear static term is a neural network, and the design of the nonlinear predictive controller is based on the identified Wiener model. These methods achieved satisfactory performance; however, the fixed parameters in these controllers restricted their ability to control highly nonlinear systems. In addition, from the comparison of several predictive controllers by Abu-Ayyad and Dubay,17

Received: October 29, 2010
Accepted: May 4, 2011
Revised: April 29, 2011
Published: May 04, 2011

r 2011 American Chemical Society 7388 dx.doi.org/10.1021/ie102203s | Ind. Eng. Chem. Res. 2011, 50, 7388–7397
it is clear that these static controllers cannot provide good control of highly nonlinear systems.

To control challenging systems with high nonlinearities, the approach is to have a reasonable model identifier of the system offline, capable of refinement online, where the identifier output is fed into the controller to update its parameters. This study presents such a methodology, applicable to single-input and single-output (SISO) nonlinear systems, itemized as follows. First, a dynamic neural network is designed to formulate a Wiener model of the system; its weights correspond to the parameters of the Wiener model, which constitutes the Wiener-type neural network (WNN). Second, the Lipschitz criterion for minimal model order determination is derived. To determine the parameters of the Wiener model, a back-propagation algorithm is used for the adjustment of the weights in the network. Third, a WNN-based GPC (WGPC) controller for the nonlinear system is formulated. The proposed method will be tested in simulation on several nonlinear plants.

’ WIENER MODEL FORMULATION

Many industrial processes can be described by the Wiener model or the Hammerstein model.3 As shown in Figure 1, the general Wiener model can be expressed as a cascade model, which consists of a linear dynamic block and a nonlinear static block. For a SISO Wiener model, the linear dynamic block can be described as

    x(t) = G(q^-1) u(t) = [B(q^-1)/A(q^-1)] u(t)    (1)

with

    A(q^-1) = 1 + a_1 q^-1 + a_2 q^-2 + ... + a_na q^-na
    B(q^-1) = b_1 q^-1 + b_2 q^-2 + ... + b_nb q^-nb

where q^-1 is the unit delay operator, and na and nb are the orders of the linear dynamics (generally, nb ≤ na).

The nonlinear static block is given by

    y(t) = f(x(t))    (2)

The terms u(t) and y(t) are the input and output, respectively; x(t) is a nonmeasured intermediate variable that does not necessarily have a physical meaning; f(·) represents the nonlinear component of the Wiener model.

Figure 1. The general Wiener model.

In the formulation of the Wiener model, the tasks are to identify the parameters a_1, ..., a_na and b_1, ..., b_nb, as well as the orders na and nb, in addition to identifying the relative parameters of the nonlinear static function. For the nonlinear static block, many methods were investigated; the simplest and most common are polynomial forms,9 neural networks,6,7,10 and the piecewise linear (PWL) method.18 Furthermore, other methods have been derived to identify the total set of Wiener model parameters, such as the least-squares method or its recursive version,19 correlation methods,20 maximum likelihood methods,21 linear optimization methods,22 and nonlinear optimization methods.4,23 Different from these methods, the WNN structure is designed to formulate the Wiener model entirely. In our approach, the identification of the Wiener model is more efficient, since the identified parameters can be obtained directly by training the WNN.

’ WIENER-TYPE NEURAL NETWORK FOR IDENTIFICATION

For general neural networks, the model structures have no relation to the physical nature of the process and the model parameters (weights) have no physical interpretation, similar to black-box models.10 To determine which terms are to be included in the model (i.e., the values of na and nb), the order determination method based on Lipschitz quotients25 is used. Finally, a back-propagation learning algorithm for updating the weights is presented in detail.

’ WIENER-TYPE NEURAL NETWORK

From the general Wiener structure26 in Figure 1, a block-oriented artificial neural network is designed as shown in Figure 2, which consists of a single linear node with two tapped delay lines. These lines form the model of the linear dynamic element; the nonlinear static element comprises one layer with p nodes. A polynomial function is used for the nonlinear static block output ŷ(t), which can be expressed as

    ŷ(t) = f(x̂(t)) = Σ_{k=1}^{p} ĉ_k x̂^k(t)    (3)

where the weights ĉ_k for k = 1, ..., p are the nonlinear static block parameters.

The output of the hidden layer, x̂(t), can be expressed as

    x̂(t) = −â_1 x̂(t−1) − â_2 x̂(t−2) − ... − â_na x̂(t−na) + b̂_1 u(t−1) + b̂_2 u(t−2) + ... + b̂_nb u(t−nb)
         = −Σ_{i=1}^{na} â_i x̂(t−i) + Σ_{j=1}^{nb} b̂_j u(t−j)    (4)

where the weights â_i for i = 1, ..., na and b̂_j for j = 1, ..., nb are the linear dynamic block parameters. In this way, the parameters of the Wiener model are obtained directly as the weights of the block-oriented neural network. Hence, the training algorithm can be executed every time step to update the weights and, therefore, the identified Wiener model. The WNN structure is now obtained from eqs 3 and 4 and is shown in Figure 2.

’ MODEL ORDER DETERMINATION

At this point, the WNN has a general structure. Although the performance of networks with extra orders might be similar to the performance of networks with the estimated order, the computational load to tune the network parameters increases exponentially as the system order increases.29 Before training the neural network, it is necessary to determine how many neurons are in the network (that is, the values of na and nb) by analyzing the original data. Billings et al.27 proposed a forward-regression orthogonal estimator for identifying the structure of MIMO nonlinear systems. Although this method may be used for identifying nonlinear terms, it cannot be used to identify terms in the Wiener model, because they are not just functions of the network inputs but also functions of the intermediate variable.8
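As a concrete illustration of the cascade in eqs 1–4, the sketch below simulates a small Wiener model: an IIR linear block followed by a static polynomial. All coefficient values are illustrative assumptions, not parameters identified in this work.

```python
# Hedged sketch of the Wiener cascade of eqs 1-4: a linear IIR block
# followed by a static polynomial. All coefficients are illustrative
# placeholders, not values from the paper.

def wiener_step(x_hist, u_hist, a, b, c):
    """One step: eq 4 (linear dynamics) then eq 3 (static polynomial).

    x_hist[0] holds x(t-1); u_hist[0] holds u(t-1).
    """
    x = -sum(ai * xi for ai, xi in zip(a, x_hist)) \
        + sum(bj * uj for bj, uj in zip(b, u_hist))
    y = sum(ck * x ** (k + 1) for k, ck in enumerate(c))  # c[0] is c_1
    return x, y

def simulate(u_seq, a, b, c):
    x_hist = [0.0] * len(a)
    u_hist = [0.0] * len(b)
    ys = []
    for u in u_seq:
        x, y = wiener_step(x_hist, u_hist, a, b, c)
        x_hist = [x] + x_hist[:-1]      # shift state tapped delay line
        u_hist = [u] + u_hist[:-1]      # shift input tapped delay line
        ys.append(y)
    return ys

# Unit-step input; second-order linear block, cubic static gain (assumed)
ys = simulate([1.0] * 50, a=[-0.5, 0.1], b=[0.3], c=[1.0, 0.0, 0.05])
```

With a = [−0.5, 0.1] and b = [0.3], the linear block settles at x = 0.3/(1 − 0.5 + 0.1) = 0.5 for a unit step, and the cubic static gain maps this to y = 0.50625.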
Figure 2. Structure of a Wiener-type neural network.

He and Asada25 proposed an order determination scheme that used the Lipschitz criterion. Luh and Rizzoni28 extended this scheme to MIMO systems and also improved its performance to a limited extent, using the concept of orthogonal basis functions. Wang and Chen29 extended the approach to develop the order determination algorithm for multiple-input single-output (MISO) systems. Here, the order determination scheme for SISO systems is provided for completeness of the study.

Consider a general nonlinear SISO dynamic system that can be represented as follows:

    y(t) = g(y(t−1), ..., y(t−ny), u(t−1), ..., u(t−nu))    (5)

where y(t) and u(t) are the output and input variables of the dynamic system, and ny and nu are the true orders of the output and input, respectively; g(·) is a nonlinear function assumed to be continuous and smooth.

Rewriting eq 5 in a compact form gives

    y = g(φ_1, φ_2, ..., φ_n)    (6)

where n is the number of input variables (n = ny + nu). The next task is to reconstruct the nonlinear function g(·) from the input–output data patterns [φ(k), y(k)], k = 1, ..., N_set, where N_set is the number of datasets used for the model order determination.

Define the Lipschitz quotient l_ij as follows:

    l_ij = |y(i) − y(j)| / |φ(i) − φ(j)|    (i ≠ j)    (7)

where |φ(i) − φ(j)| is the distance between two points in the input space and |y(i) − y(j)| is the difference between g(φ(i)) and g(φ(j)). For data points with a small distance |φ(i) − φ(j)| between them, the Lipschitz quotient l_ij^(n) can be rewritten as

    l_ij^(n) = |δy| / sqrt((δφ_1)^2 + (δφ_2)^2 + ... + (δφ_n)^2)    (8)

where δy = y(i) − y(j) and δφ_r = φ_r(i) − φ_r(j) for r = 1, 2, ..., n. The superscript n in l_ij^(n) is an index representing the number of input variables in eq 6. From the investigation in He and Asada,25 the values of l_ij^(n) can be used to indicate when one or more input variables are missing, or when one or more redundant input variables are included. For example, if a variable φ_n is missing from the input set, the Lipschitz quotient l_ij^(n−1) will be considerably larger than l_ij^(n), or even unbounded. In contrast, when an input variable φ_{n+1} is included and the Lipschitz quotients l_ij^(n) and l_ij^(n+1) are calculated, if φ_{n+1} is a redundant input variable, there will be only a slight difference between l_ij^(n) and l_ij^(n+1).

To avoid the effect of measurement noise, the following index25 is used to determine an appropriate order:

    l^(n) = ( Π_{s=1}^{m} sqrt(n) l^(n)(s) )^{1/m}    (9)

where l^(n)(s) is the sth-largest Lipschitz quotient among all l_ij^(n) with the n input variables (φ_1, ..., φ_n). The parameter m is a positive number, usually selected as m ∈ [0.01N_set, 0.02N_set]. For testing purposes, the stop criterion can be defined as29

    |l^(n+1) − l^(n)| / max(1, |l^(n)|) < ε    (10)

where ε > 0 is a prespecified threshold. From the investigation in the work of Wang and Chen,29 ε = 0.1 is suitable for most cases.

To obtain the number of nodes in the nonlinear layer, p is chosen manually. This is because, from empirical experience, the value of p has less influence on the accuracy of the identification than the values of na and nb. For most cases, a value of 3 ≤ p ≤ 6 can be chosen.

’ LEARNING ALGORITHM

In the neural network learning procedure, the weights are updated along the negative gradient of a given error function, defined as

    Ξ(w, t) = (1/2)[y(t) − ŷ(t)]^2 = (1/2)ê(t)^2    (11)

where ê(t) = y(t) − ŷ(t), and y(t) and ŷ(t) are the actual output and the neural network output, respectively. Let w be the adjustable parameter vector, which consists of the weights:

    w = [â_1, ..., â_na, b̂_1, ..., b̂_nb, ĉ_1, ..., ĉ_p]^T

Applying a pattern learning version of the steepest descent optimization method, the partial derivative for the minimization of the error function (eq 11), with respect to an adjustable parameter w of the network, is

    ∂Ξ/∂w = −ê(t) ∂ŷ(t)/∂w    (12)

where w represents the elements of w. To overcome the shortcoming of the traditional BP algorithm, a momentum term is
added to train the weights:

    Δw(t) = −η (∂Ξ/∂w) + γ Δw(t−1)    (13)

where η is the learning rate, which controls the stability and the speed of convergence. The term γ ∈ [0, 1) is the momentum factor, and γΔw(t−1) remembers the direction of the weight change in the former time step. The change in the direction of the weight at t + 1 is the combination of those at t − 1 and t. When the sign of −η(∂Ξ/∂w) at t is the same as at t − 1, Δw(t) increases, resulting in faster convergence. In contrast, when the sign of −η(∂Ξ/∂w) is opposite to that of the former time step, Δw(t) decreases, and the momentum factor damps the oscillation of the search. When γ = 0, the algorithm reduces to the traditional BP algorithm. The general update rule is expressed as

    w(t) = w(t−1) + Δw(t)    (14)

Substituting eqs 12 and 13 into eq 14 yields

    w(t) = w(t−1) + η ê(t) (∂ŷ(t)/∂w) + γ Δw(t−1)    (15)

According to the error function defined in eq 11 and the Wiener neural network functions in eqs 3 and 4, the partial derivatives of the WNN output ŷ(t) with respect to the parameters ĉ_k and the intermediate variable x̂(t) can be calculated as

    ∂ŷ(t)/∂ĉ_k = x̂^k(t)    (k = 1, 2, ..., p)    (16)

    ∂ŷ(t)/∂x̂(t) = Σ_{k=1}^{p} k ĉ_k x̂^{k−1}(t)    (17)

Considering the elements of x̂(t−i) as functions of â_i and b̂_j, the partial derivatives of the intermediate variable x̂(t) with respect to the linear dynamic block parameters are

    ∂x̂(t)/∂â_i = −x̂(t−i) − Σ_{s=1}^{na} â_s (∂x̂(t−s)/∂â_i)    (i = 1, 2, ..., na)    (18)

    ∂x̂(t)/∂b̂_j = u(t−j) − Σ_{s=1}^{nb} â_s (∂x̂(t−s)/∂b̂_j)    (j = 1, 2, ..., nb)    (19)

From eqs 17, 18, and 19, we can calculate the following partial derivatives:

    ∂ŷ(t)/∂â_i = (∂ŷ(t)/∂x̂(t)) · (∂x̂(t)/∂â_i)
               = ( Σ_{k=1}^{p} k ĉ_k x̂^{k−1}(t) ) ( −x̂(t−i) − Σ_{s=1}^{na} â_s (∂x̂(t−s)/∂â_i) )    (i = 1, 2, ..., na)    (20)

    ∂ŷ(t)/∂b̂_j = (∂ŷ(t)/∂x̂(t)) · (∂x̂(t)/∂b̂_j)
               = ( Σ_{k=1}^{p} k ĉ_k x̂^{k−1}(t) ) ( u(t−j) − Σ_{s=1}^{nb} â_s (∂x̂(t−s)/∂b̂_j) )    (j = 1, 2, ..., nb)    (21)

Figure 3. System identification using a Wiener-type neural network.

Also, using eq 14, the update laws of ĉ_k, â_i, and b̂_j are given as follows:

    ĉ_k(t) = ĉ_k(t−1) + η_c ê(t) (∂ŷ(t)/∂ĉ_k) + γ Δĉ_k(t−1)    (k = 1, 2, ..., p)    (22)

    â_i(t) = â_i(t−1) + η_a ê(t) (∂ŷ(t)/∂â_i) + γ Δâ_i(t−1)    (i = 1, 2, ..., na)    (23)

    b̂_j(t) = b̂_j(t−1) + η_b ê(t) (∂ŷ(t)/∂b̂_j) + γ Δb̂_j(t−1)    (j = 1, 2, ..., nb)    (24)

The partial derivatives in eqs 22, 23, and 24 are given in eqs 16, 20, and 21, respectively. From the above analysis, the structure of the nonlinear dynamic system identification using the WNN is shown in Figure 3.

’ WNN-BASED GENERALIZED PREDICTIVE CONTROL (WGPC)

In this section, a predictive controller based on the WNN for a nonlinear dynamic plant is presented, as shown in Figure 4. Generalized predictive control (GPC) is one of the most successful MPC schemes, proposed by Clarke et al.30 The standard GPC method is based on the controlled autoregressive integrated moving average (CARIMA) model. Since the linear dynamic block in the Wiener model has the same structure as the CARIMA model, the nonlinear static part can be eliminated using the inverse of the function f(·) shown in eq 2. In this way, when designing the WNN-based GPC, the measured value y(t) of the process, its prediction ŷ(t), and the set point y_sp are replaced by x(t), x̂(t), and x_sp, respectively. This is shown in Figure 4.

The linear dynamic equation in the WNN is regarded as the CARIMA model for designing the WGPC. Recall eq 1 with disturbances:

    A(q^-1) x(t) = B(q^-1) u(t−1) + H(q^-1) ε(t)/Δ    (25)

where u(t) and x(t) are the control input and output variables, and ε(t) is zero-mean white noise. Δ = 1 − q^-1 denotes the backward-difference operator. A(q^-1) and B(q^-1) are the same as in eq 4. For simplicity, the H(q^-1) polynomial is chosen to be 1.
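The momentum update of eqs 13–15 can be sketched generically; here it is applied to a simple quadratic error, which stands in for the WNN error of eq 11 purely for illustration.

```python
# Gradient descent with momentum (eqs 13-15), demonstrated on a toy
# quadratic error Xi(w) = 0.5*(w - 2)^2. The target value 2.0 is an
# illustrative assumption, not a quantity from the paper.

def train(w0, eta=0.1, gamma=0.3, steps=200):
    w, dw_prev = w0, 0.0
    for _ in range(steps):
        grad = w - 2.0                        # dXi/dw for the toy error
        dw = -eta * grad + gamma * dw_prev    # eq 13: momentum update
        w = w + dw                            # eq 14: general update rule
        dw_prev = dw
    return w

w_final = train(w0=0.0)
```

With γ = 0, the loop reduces to plain steepest descent, mirroring the remark after eq 13 that the traditional BP algorithm is recovered.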
Figure 4. Structure of WNN-based GPC for a nonlinear system.

The prediction of the system output x(t + j) can be evaluated using the following optimum jth-step-ahead predictor:

    x̂(t+j|t) = D_j(q^-1) Δu(t+j−1) + F_j(q^-1) x(t) + E_j(q^-1) ε(t+j)    (26)

where the polynomials E_j, F_j, and D_j can be derived from the following Diophantine equation:

    1 = E_j(q^-1) Δ A(q^-1) + q^-j F_j(q^-1)    (27)

The polynomial E_j is uniquely defined with a degree of j − 1; hence, the noise terms in eq 26 are all in the future. Therefore, the best prediction of x̂(t+j) is

    x̂(t+j|t) = D_j(q^-1) Δu(t+j−1) + F_j(q^-1) x(t)    (28)

The GPC algorithm consists of applying a control sequence that minimizes a cost function of the following form:

    J(t) = Σ_{j=1}^{N} ||x̂(t+j|t) − x_sp(t+j)||^2 + Σ_{j=1}^{Nu} λ ||Δu(t+j−1)||^2    (29)

where x̂(t+j|t) is an optimum jth-step-ahead prediction of the system output on data up to time t, and N and Nu are the prediction and control horizons, respectively (N ≥ Nu). The parameter λ is the move suppression coefficient and x_sp(t+j) is the future reference trajectory. The objective of the controller is to compute the future control moves Δu in such a way that the future plant output x(t+j) is driven close to x_sp(t+j). For continuity, key derivations associated with GPC are presented following the conventional approach reported by Camacho.31

The prediction of x̂(t+j) in eq 28 can be expressed in vector form as

    x̂(t) = D(t) Δu(t) + x_0(t)    (30)

where the first part of the right-hand side of eq 30 depends only on the future control moves, while the second part, x_0(t), is the free response, which depends only on the past moves.

At this point, key enhancements to the GPC method are now provided in the context of WGPC.
• First, the coefficients of the Diophantine equation are recalculated every time step, as shown in Figure 4, using the WNN. This makes x̂ more accurate, as the nonlinearities are taken into account locally.
• Second, the dynamic matrix D(t) is also recalculated, providing a localized open-loop behavior during closed-loop operation, thereby providing more optimal control actions for the nonlinear dynamic system, in comparison to having a fixed D(t).
These major differences are key to having improved control when controlling nonlinear systems.

The step response coefficients at each time step j are no longer static but change every time step; hence, the dynamic matrix D(t), which contains a local linear approximation of the nonlinear model, is

    D(t) = [ d_1(t)    0          ...  0
             d_2(t)    d_1(t)     ...  0
             ...       ...             ...
             d_N(t)    d_{N−1}(t) ...  d_{N−Nu+1}(t) ]

where the step response coefficients can be computed as31

    d_j(t) = −Σ_{i=1}^{min(j−1, na)} â_i(t) d_{j−i}(t) + Σ_{i=1}^{min(j, nb)} b̂_i(t)    (j = 1, ..., N)    (31)

The cost function (eq 29) is now time-varying and can be expressed as

    J(t) = (D(t)Δu(t) + x_0(t) − x_sp(t))^T (D(t)Δu(t) + x_0(t) − x_sp(t)) + λ Δu^T(t) Δu(t)
         = Δu^T(t) (D^T(t)D(t) + λI) Δu(t) + 2 (x_0(t) − x_sp(t))^T D(t) Δu(t) + (x_0(t) − x_sp(t))^T (x_0(t) − x_sp(t))    (32)

where x_sp(t) = [x_sp(t+1), ..., x_sp(t+N)]^T, and x_sp = f^-1(y_sp). By minimizing J(t), the control law of unconstrained WGPC can be given as

    Δu(t) = (D^T(t)D(t) + λI)^-1 D^T(t) (x_sp(t) − x_0(t))    (33)

Note that only the first element of Δu(t) is applied to the process, i.e., u(t) = Δu(t) + u(t−1). The prediction is then shifted one step forward and the procedure is repeated at the next sampling instant.
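The two computations that WGPC repeats at every sampling instant, building D(t) from the current model coefficients (eq 31) and solving the unconstrained control law (eq 33), can be sketched as follows. The model coefficients, set point, and free response used here are illustrative placeholders, not values from the paper.

```python
# Hedged sketch of the per-step WGPC computations: step-response
# coefficients and dynamic matrix of eq 31, and the unconstrained
# control law of eq 33. All numerical values are illustrative.
import numpy as np

def step_response(a_hat, b_hat, N):
    """d_j of eq 31 for the model B(q^-1)/A(q^-1)."""
    d = []
    for j in range(1, N + 1):
        s = sum(b_hat[:min(j, len(b_hat))])
        s -= sum(a_hat[i] * d[j - 2 - i] for i in range(min(j - 1, len(a_hat))))
        d.append(s)
    return d

def dynamic_matrix(d, N, Nu):
    """Lower-triangular dynamic matrix D(t) built from d_1 ... d_N."""
    D = np.zeros((N, Nu))
    for r in range(N):
        for c in range(min(r + 1, Nu)):
            D[r, c] = d[r - c]
    return D

def control_moves(D, x_sp, x0, lam):
    """Eq 33: du = (D^T D + lam*I)^-1 D^T (x_sp - x0)."""
    return np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]),
                           D.T @ (x_sp - x0))

# Assumed first-order model: A = 1 - 0.5 q^-1, B = 0.5 q^-1
a_hat, b_hat = [-0.5], [0.5]
N, Nu, lam = 8, 3, 1.0
d = step_response(a_hat, b_hat, N)        # d_1 = 0.5, d_2 = 0.75, ...
D = dynamic_matrix(d, N, Nu)
du = control_moves(D, np.ones(N), np.zeros(N), lam)
```

Since eq 33 is the exact minimizer of the strictly convex cost in eq 32, the computed move sequence always achieves a lower cost than applying no moves at all.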
Figure 5. Values of the order determination based on Lipschitz quotients.

Figure 6. Comparison of the mean square error (MSE) with different orders.

’ SIMULATION EXAMPLES

In this section, three examples are considered to illustrate the WNN identifier and the WGPC controller described above. The first example demonstrates WNN identification of a nonlinear dynamic system. The second and third examples address the control of nonlinear dynamic systems using the WGPC.

Example 1: Nonlinear System Identification. The following process is a nonlinear dynamic process formulated in a discrete form as24

    y(t) = f(y(t−1), y(t−2), y(t−3), u(t−1), u(t−2))    (34)

where

    f(x_1, x_2, x_3, x_4, x_5) = [x_1 x_2 x_3 x_5 (x_3 − 1) + x_4] / (1 + x_2^2 + x_3^2)    (35)

A total of 1000 data pairs are used to train the network. The first 500 time steps are an independent, identically distributed (i.i.d.) uniform sequence u(t) within the limits [−1.0, 1.0], and the remaining time steps are given by the sinusoidal function u(t) = 1.05 sin(πt/45). Also, the first 500 time steps were used for determination of the system orders.

In the order determination procedure, we use the input φ_1 = y(t−1) only to compute the Lipschitz quotient l(1,0) = +∞. Adding other inputs φ_i gives the Lipschitz quotients shown in Figure 5. For increasing orders, the corresponding quotients l(2,1) = 27.65, l(3,1) = 18.02, and l(2,2) = 4.201 decrease significantly. In addition, l(3,2) = 2.525, l(3,3) = 2.489, l(4,2) = 2.450, and l(4,3) = 2.416 are relatively constant from l(3,2) onward; therefore, the stop criterion (eq 10) is satisfied. From Figure 5, the best order of the system using the WNN is (3,2), and from eq 34, the true order of the system is (3,2). Figure 6 shows the comparison of the mean square error (MSE) of the WNN with different orders.

The number of neurons in the nonlinear static block is chosen as p = 4. To train the neural network, the learning rate and the momentum factor are chosen as η = 0.01 and γ = 0.1, respectively. The initial parameters in eqs 22–24 are y(t) = 0, x(t) = 0, ∂y(t)/∂â_i = 0, ∂y(t)/∂b̂_j = 0, and ∂y(t)/∂ĉ_k = 0 for t ≤ 0. After training, the identified model is obtained as

    x̂(t) = 0.1410 x̂(t−1) + 0.0249 x̂(t−2) − 0.0419 x̂(t−3) − 0.9172 u(t−1) − 0.1511 u(t−2)
    ŷ(t) = 0.8300 x̂(t) − 0.0138 x̂^2(t) + 0.0242 x̂^3(t) − 0.2466 x̂^4(t)    (36)

The testing input signal used to verify the identification performance of the WNN is

    u(t) = sin(πt/25)                                                0 ≤ t < 250
         = 1.0                                                       250 ≤ t < 500
         = −1.0                                                      500 ≤ t < 750
         = 0.3 sin(πt/25) + 0.1 sin(πt/32) + 0.6 sin(πt/10)          750 ≤ t < 1000    (37)

The proposed WNN was compared to several identification procedures: the Wiener-type recurrent neural network (WRNN),24 the controllable-canonical-form-based recurrent neurofuzzy network (CReNN),32 and the dynamic fuzzy neural network (DFNN).33 The results are quantified in Table 1, illustrating that the proposed WNN has the least number of parameters and the lowest MSE values. Figure 7 shows the output of the plant using the true model and the WNN.

Example 2: WGPC for a Nonlinear Plant. The nonlinear plant is given by16,24

    y(t) = α [ y(t−1) y(t−2) (y(t−1) + β) ] / [ 1 + y^2(t−1) + y^2(t−2) ] + u(t−1)    (38)

where the parameters have values of α = 0.35 and β = 2.5. The control procedure contains two phases: an offline training phase and an online control phase. During the offline training phase, the same procedure as that described in Example 1 is used to train the WNN. A total of 1000 data pairs of i.i.d. uniform sequences within the limits u(t) ∈ [−1.0, 1.0] are used to train the network. From Figure 8, the stop criterion ends at l(3,1) = 1.895, showing that the best order of the system is (3,1).
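The order-determination index applied above (eqs 7, 9, and 10) can be sketched in a few lines; the toy first-order data generator below is an illustrative assumption standing in for the plant of eq 34, and m = 4 is an arbitrary small choice.

```python
# Hedged sketch of the Lipschitz-quotient order index (eqs 7 and 9).
# The data-generating system and m = 4 are illustrative assumptions.
import math
import random

def lipschitz_index(Phi, Y, m):
    """Index l^(n) of eq 9: geometric mean of the m largest sqrt(n)*l_ij."""
    n = len(Phi[0])
    quotients = []
    for i in range(len(Y)):
        for j in range(i + 1, len(Y)):
            dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(Phi[i], Phi[j])))
            if dist > 0.0:
                quotients.append(abs(Y[i] - Y[j]) / dist)   # eq 7
    top = sorted(quotients, reverse=True)[:m]
    return math.exp(sum(math.log(math.sqrt(n) * q) for q in top) / m)

# Toy system: y(t) = 0.5*y(t-1) + u(t-1); true regressors (y(t-1), u(t-1))
random.seed(0)
u = [random.uniform(-1, 1) for _ in range(200)]
y = [0.0]
for t in range(1, 200):
    y.append(0.5 * y[t - 1] + u[t - 1])

Phi1 = [[y[t - 1]] for t in range(2, 200)]            # u(t-1) missing
Phi2 = [[y[t - 1], u[t - 1]] for t in range(2, 200)]  # full regressor set
Y = [y[t] for t in range(2, 200)]
l1 = lipschitz_index(Phi1, Y, m=4)
l2 = lipschitz_index(Phi2, Y, m=4)
```

As the discussion after eq 8 predicts, the index computed with the missing input (l1) is much larger than the index with the full regressor set (l2), which stays bounded because the toy map is Lipschitz in its true inputs.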
Table 1. Comparison among Several Neural Networks for Identification of Example 1

network     number of parameters    MSE
proposed    9                       2.974 × 10^-3
WRNN        15                      1.25 × 10^-2
CReNN       51                      5.05 × 10^-2
DFNN        39                      2.5 × 10^-3

Figure 7. Identification results using WNN.

Figure 8. Values of the order determination based on Lipschitz quotients.

The number of neurons in the nonlinear static block is p = 3, with η = 0.05 and γ = 0.1. The initial parameters in eqs 22–24 are the same as those described in Example 1. The training result is shown in Figure 9, and the MSE value is 3.4829 × 10^-4. After training, the identified model is obtained as

    x̂(t) = 0.1825 x̂(t−1) + 0.0043 x̂(t−2) + 0.0375 x̂(t−3) − 1.0385 u(t−1)
    ŷ(t) = 0.8614 x̂(t) + 0.4157 x̂^2(t) + 0.0391 x̂^3(t)    (39)

Figure 9. Training result using WNN.

During the online control phase, the initial parameters of the WNN are those shown in eq 39, so that the initial dynamic matrix D(t) can be obtained according to eq 31. The controller parameters are chosen as N = 8, Nu = 3, λ = 1.05, and the time interval Δt = 0.01 s. Note that the WNN can be updated online; i.e., the parameters â_i(t) and b̂_i(t) in eq 31 and ĉ_i(t) in f(·) can be adjusted at every sampling instant. As a result, D(t) and f^-1(·) are adaptive, as shown in Figure 4. Figure 10 shows the results of WGPC tracking various set points in comparison to GPC having the same controller parameters but with D(t) and x̂(t) fixed. Figure 10 shows that the WGPC has better performance indices, as listed in Table 2.

Figure 10. Control results using GPC and WGPC.

To test the adaptive ability of the WGPC, the algorithms were further tested to track a complex trajectory,16 and uncertainties were introduced by setting α = 0.5 and β = 4.0. Closed-loop results of the WGPC and GPC in Figure 11 show the WGPC rejecting the uncertainties better than GPC, which continues to have oscillatory behavior.

Example 3: WGPC for a Continuous Stirred Tank Reactor (CSTR). The proposed WGPC was also used to identify and control a typical chemical process: a continuous stirred tank reactor (CSTR) in which an irreversible first-order reaction takes place, with dimensionless mass and energy balances.
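Before turning to the CSTR example, the overall receding-horizon cycle of Figure 4, update the model, rebuild D(t) via eq 31, compute the free response, and apply only the first move of eq 33, can be summarized as below. For a self-contained sketch, the plant and model are the same assumed first-order linear system, standing in for the online-updated WNN; every numerical value is illustrative.

```python
# Hedged sketch of the receding-horizon WGPC cycle of Figure 4, with an
# assumed first-order model x(t) = a1*x(t-1) + b1*u(t-1) playing the role
# of both plant and identified WNN. All tuning values are illustrative.
import numpy as np

a1, b1 = 0.8, 0.2
N, Nu, lam = 10, 3, 0.5

def build_D():
    d, dj = [], 0.0
    for _ in range(N):             # step response of b1 q^-1 / (1 - a1 q^-1)
        dj = a1 * dj + b1
        d.append(dj)
    D = np.zeros((N, Nu))
    for r in range(N):
        for c in range(min(r + 1, Nu)):
            D[r, c] = d[r - c]
    return D

x, u_prev, x_sp = 0.0, 0.0, 1.0
for _ in range(120):
    D = build_D()                  # in WGPC this changes as the WNN adapts
    # free response: the model continuing with the input frozen at u_prev
    x0, xf = [], x
    for _ in range(N):
        xf = a1 * xf + b1 * u_prev
        x0.append(xf)
    du = np.linalg.solve(D.T @ D + lam * np.eye(Nu),
                         D.T @ (x_sp * np.ones(N) - np.array(x0)))
    u_prev += du[0]                # receding horizon: apply first move only
    x = a1 * x + b1 * u_prev       # plant step (identical to the model here)
```

Because the control move is expressed in Δu, the loop has built-in integral action: the input keeps changing until the free response matches the set point, so the output settles at x_sp without offset.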
Industrial & Engineering Chemistry Research ARTICLE

Table 2. Settling Time (Ts) and Overshoot (σ%) of GPC and Table 3. Model Parameters of CSTR Process
WGPC
notation description value
controller Ts (s) σ% (%)
Q volumetric flow rate 10 L min1
GPC 0.28 18.92 V reactor volume 10 L
WGPC 0.16 2.102 F density of reaction mixture 1000 g L1
CAf feed concentration 1.0 mol L1
Tf feed temperature 350 K
Cp specific heat capacity 1.0 J g1 K1)
ΔH heat of reaction 1.0  105 J mol1
k0 Arrhenius pre-exponential constant 5.33685  107 min1
E/R activation energy/gas law constant 6000 K
UA heat-transfer term 5000 J min1 K1

Figure 11. Control results using GPC and WGPC.

The system is described by the following equation:34,35


8  
>
> dCA Q E
>
< dt ¼ ðCAf  CA Þ  k0 CA exp 
V RT
 
>
> dT Q ΔH E UA
>
: dt ¼ ðTf  TÞ  k0 CA exp  þ ðTc  TÞ
V FCp RT V FCp
Figure 12. Values of the order determination based on Lipschitz
ð40Þ
quotients.
where the description and value of each notation are given in Table 3. The two state variables of the model are the reactant concentration CA and the reactor temperature T. The control objective is to control the reactant concentration CA through the manipulation of the coolant temperature Tc. Note that the reactor temperature T is not controlled for this simulation. Therefore, the output variable and manipulated variable are given by y(t) = CA and u(t) = Tc. For practicality, the coolant temperature Tc is constrained to the range [273.0, 373.0].

During the offline training phase, the above model is used to generate a series of input–output time-series data. The sampling time of the process measurements is set to 0.1 min. From Figure 12, the stop criterion ends at l(2,1) = 2.333, indicating that the best order of the system is (2,1). The number of neurons in the nonlinear static block is p = 3, with η = 0.01 and γ = 0.1. The initial parameters in eqs 22−24 are the same as given in Example 1. The nonlinear part of the WNN is sensitive to data between −1 and 1; however, the input values (Tc) lie within [273.0, 373.0]. Therefore, it is necessary to normalize the input signals as follows:

    u(t) = (Tc(t) − 273.0)/(373.0 − 273.0) − (373.0 − Tc(t))/(373.0 − 273.0)    (41)

The normalized data is then used to train the WNN; for comparison, a common three-layer neural network with 5 hidden neurons (ref 36) was also developed. The training results are shown in Figure 13, which shows that the WNN gives a good fit to this data. After training, the identified Wiener model is obtained as

    x̂(t) = 0.1056 x̂(t − 1) + 0.7439 x̂(t − 2) + 0.3945 u(t − 1)
                                                                    (42)
    ŷ(t) = 0.1937 x̂(t) − 0.4170 x̂²(t) − 0.1162 x̂³(t)

During the online control phase, the initial values of the reactant concentration CA and the reactor temperature T are set to 0.5 mol/L and 337.2 K, respectively, and the initial control input (the coolant temperature Tc) is set to 306.65 K. The controller parameters are chosen as N = 25, Nu = 3, λ = 1.1, and the time interval Δt = 0.1 min. Figure 14 shows the results of WGPC for tracking a set of setpoints and a comparison with GPC having the same controller parameters for this case and fixed D(t) and x̂(t). As previously stated, the WGPC strategy provides improved control of the CSTR in comparison to GPC.

According to the results shown in Figures 10, 11, and 14, we can conclude that WGPC shows better control performance than conventional GPC. Table 2 shows that GPC has higher overshoot and settling time than WGPC. The GPC scheme has oscillatory closed-loop behavior when tracking a complex trajectory with high uncertainties. Analytically, this is because the Diophantine equation and dynamic matrix D(t) in conventional GPC are fixed, whereas in WGPC, they are

recalculated by the WNN to provide localized model parameters every time step during closed-loop control. This improves the robustness and adaptiveness of WGPC, despite the existence of uncertainties and external disturbances. In addition, the prediction of the plant output x̂ from eqs 28 and 30 is improved, since D(t) is recalculated from the Wiener model. The WNN is designed to formulate the Wiener model entirely every time step during closed-loop control. From Table 1 and Figure 13, we can see that the WNN has better identification performance with a simpler structure (fewer neurons) than other similar neural networks.

Figure 13. Training results using three-layer neural network and WNN.

Figure 14. Control results using GPC and WGPC.

CONCLUSION
In this paper, a Wiener-based adaptive generalized predictive controller was developed for identifying and controlling nonlinear single-input and single-output (SISO) systems. The Wiener neural network (WNN)-based generalized predictive control (WGPC) is comprised of a WNN identifier that has a learning structure, thereby facilitating online identification every time step during closed-loop control. This adaptiveness is introduced in the generalized predictive controller, providing more-accurate localized model parameters at any closed-loop state. This allows the reformulation of the plant system matrix, resulting in more-accurate predictions of the controlled variable. The WGPC was tested on several nonlinear systems, demonstrating better control performance over the generalized predictive control (GPC) algorithm.

AUTHOR INFORMATION
Corresponding Author
*Tel.: +1 506 458-7770. Fax: +1 506 453-5025. E-mail: dubayr@unb.ca.

ACKNOWLEDGMENT
The authors would like to acknowledge the funding received from the Natural Sciences and Engineering Research Council of Canada to conduct this research investigation.

REFERENCES
(1) Jeong, B. G.; Yoo, K. Y.; Rhee, H. K. Nonlinear Model Predictive Control Using a Wiener Model of a Continuous Methyl Methacrylate Polymerization Reactor. Ind. Eng. Chem. Res. 2001, 40, 5968.
(2) Arefi, M. M.; Montazeri, A.; Jahed-Motlagh, M. R.; Poshtan, J. Nonlinear model predictive control of chemical processes with a Wiener identification approach. In Proceedings of the IEEE International Conference on Industrial Technology, Mumbai, India, December 2006; p 1735.
(3) Arefi, M. M.; Montazeri, A.; Poshtan, J.; Jahed-Motlagh, M. R. Wiener-neural identification and predictive control of a more realistic plug-flow tubular reactor. Chem. Eng. J. 2008, 138, 274.
(4) Arto, V.; Hannu, P.; Halme, A. Modeling of chromatographic separation process with Wiener-MLP representation. J. Process Control 2001, 11, 443.
(5) Knohl, T.; Unbehauen, H. Adaptive position control of electrohydraulic servo system using ANN. Mechatronics 2000, 10, 127.
(6) Chen, G.; Chen, Y.; Ogmen, H. Identifying chaotic systems via a Wiener-type cascade model. IEEE Control Syst. Mag. 1997, 17, 29.
(7) Xu, M.; Chen, G.; Tian, Y. T. Identifying chaotic systems using Wiener and Hammerstein cascade models. Math. Comput. Modell. 2001, 33, 483.
(8) Norquay, S. J.; Palazoglu, A.; Romagnoli, J. A. Model predictive control based on Wiener models. Chem. Eng. Sci. 1998, 53, 75.
(9) Janczak, A. Identification of Nonlinear Systems Using Neural Networks and Polynomial Models: A Block-Oriented Approach; Springer-Verlag: New York, 2004.
(10) Ławryńczuk, M. Computationally efficient nonlinear predictive control based on neural Wiener models. Neurocomputing 2010, 74, 401.
(11) Al-Duwaish, H.; Karim, M. N.; Chandrasekar, V. Use of multilayer feedforward neural networks in identification and control of Wiener model. IEE Proc.: Control Theory Appl. 1996, 143, 255.
(12) Tötterman, S.; Toivonen, H. T. Support vector method for identification of Wiener models. J. Process Control 2009, 19, 1174.
(13) Janczak, A. Neural network approach for identification of Hammerstein systems. Int. J. Control 2003, 76, 1749.
(14) Wu, D.; Huang, S.; Zhao, W.; Xin, J. Infrared thermometer sensor dynamic error compensation using Hammerstein neural network. Sens. Actuators A 2009, 149, 152.
(15) Dubay, R.; Abu-Ayyad, M.; Hernandez, J. M. A nonlinear regression model-based predictive control algorithm. ISA Trans. 2009, 48, 180.
(16) Abu-Ayyad, M.; Dubay, R. Improving the Performance of Generalized Predictive Control for Nonlinear Processes. Ind. Eng. Chem. Res. 2010, 49, 4809.
(17) Abu-Ayyad, M.; Dubay, R. Real-time comparison of a number of predictive controllers. ISA Trans. 2007, 46, 411.
(18) Vörös, J. Parameter identification of Wiener systems with multisegment piecewise-linear nonlinearities. Syst. Control Lett. 2007, 56, 99.


(19) Boutayeb, M.; Darouach, M. Recursive identification method for MISO Wiener–Hammerstein model. IEEE Trans. Autom. Control 1995, 40, 287.
(20) Billings, S. A.; Fakhouri, S. Y. Identification of systems containing linear dynamic and static nonlinear elements. Automatica 1982, 18, 15.
(21) Hagenblad, A.; Ljung, L.; Wills, A. Maximum likelihood identification of Wiener models. Automatica 2008, 44, 2697.
(22) Kalafatis, A. D.; Arifin, N.; Wang, L.; Cluett, W. R. A new approach to the identification of pH processes based on the Wiener model. Chem. Eng. Sci. 1995, 50, 3693.
(23) Wigren, T. Recursive prediction error identification algorithm using the nonlinear Wiener model. Automatica 1993, 29, 1011.
(24) Hsu, Y. L.; Wang, J. S. A Wiener-type recurrent neural network and its control strategy for nonlinear dynamic applications. J. Process Control 2009, 19, 942.
(25) He, X.; Asada, H. A new method for identifying orders of input–output models for nonlinear dynamic systems. In Proceedings of the 1993 American Control Conference (ACC), San Francisco, CA, 1993; p 2520. (ISBN: 0-7803-0860-3.)
(26) Nelles, O. Nonlinear System Identification; Springer-Verlag: Berlin, 2001.
(27) Billings, S. A.; Chen, S.; Korenberg, M. J. Identification of MIMO non-linear systems using a forward-regression orthogonal estimator. Int. J. Control 1989, 49, 2157.
(28) Luh, G. C.; Rizzoni, G. Identification of a Nonlinear MIMO Internal Combustion Engine Model. In Transportation Systems, Proceedings of the 1994 ASME Winter Meeting; American Society of Mechanical Engineers (ASME): New York, 1994; p 2520.
(29) Wang, J. S.; Chen, Y. P. A Fully Automated Recurrent Neural Network for Unknown Dynamic System Identification and Control. IEEE Trans. Circuit Syst. 2006, 53, 1363.
(30) Clarke, D. W.; Mohtadi, C.; Tuffs, P. S. Generalized Predictive Control—Part I. The Basic Algorithm. Automatica 1987, 23, 137.
(31) Camacho, E. F.; Bordons, C. Model Predictive Control; Springer-Verlag: London, 1999.
(32) Gonzalez-Olvera, M. A.; Tang, Y. A new recurrent neurofuzzy network for identification of dynamic systems. Fuzzy Sets Syst. 2007, 158, 1023.
(33) Mastorocostas, P. A.; Theocharis, J. B. A recurrent fuzzy-neural model for dynamic system identification. IEEE Trans. Syst. Man Cybern. 2002, 32, 176.
(34) Hussain, M. A.; Ho, P. Y. Adaptive sliding-mode control with neural network based hybrid models. J. Process Control 2004, 14, 157.
(35) Knapp, T. D.; Budman, H. M.; Broderick, G. Adaptive control of a CSTR with a neural network model. J. Process Control 2001, 11, 53.
(36) Peng, J. Z.; Wang, Y. N.; Sun, W.; Liu, Y. A Neural Network Sliding Mode Controller with Application to Robotic Manipulator. In Proceedings of the 6th World Congress on Intelligent Control and Automation, Dalian, China, 2006; p 2101.

