in the optimization procedure. Two different approaches to NMPC are proposed in Section 3; they address the unconstrained and the constrained model predictive control problem, and both use the proposed Takagi-Sugeno fuzzy-neural predictive model.
The proposed fuzzy-neural MPC techniques are studied in Section 4 through simulation experiments in the Matlab environment, where they are used to control the levels in a multi tank system (Inteco, 2009). The case study shows how the proposed NMPC algorithms handle a multivariable process control problem.
The controlled plant is described by a discrete-time nonlinear state-space model

$$x(k+1) = f_x\big(x(k), u(k)\big), \qquad y(k) = f_y\big(x(k), u(k)\big) \tag{1}$$

where $x(k) \in \mathbb{R}^n$, $u(k) \in \mathbb{R}^m$ and $y(k) \in \mathbb{R}^q$ are the state, control and output variables of the system, respectively. The unknown nonlinear functions $f_x$ and $f_y$ can be approximated by Takagi-Sugeno type fuzzy rules of the following form:
$$R^{l}: \;\text{if } z_1(k) \text{ is } M_{l1} \text{ and } \dots \text{ and } z_p(k) \text{ is } M_{lp} \;\text{ then } \begin{cases} x^{l}(k+1) = A_l x(k) + B_l u(k) \\ y^{l}(k) = C_l x(k) + D_l u(k) \end{cases} \tag{2}$$
where $R^l$ is the l-th rule of the rule base. Each rule is represented by an if-then conception. The antecedent part of the rules has the form "$z_i(k)$ is $M_{li}$", where $z_i(k)$ is the i-th linguistic variable (i-th model input) and $M_{li}$ is a membership function defined by a fuzzy set on the universe of discourse of the input $z_i$. Note that in this chapter the input regression vector $z(k) \in \mathbb{R}^p$ contains the system states and inputs, $z(k) = [x(k)\; u(k)]^T$. The consequent part of the rules is a mathematical function of the model inputs and states. A state-space implementation is used in the consequent part of $R^l$, where $A_l \in \mathbb{R}^{n \times n}$, $B_l \in \mathbb{R}^{n \times m}$, $C_l \in \mathbb{R}^{q \times n}$ and $D_l \in \mathbb{R}^{q \times m}$ are the state-space matrices of the local model (Ahmed et al., 2009).
The states in the next sampling time $x(k+1)$ and the system output $\hat{y}(k)$ can be obtained by taking the weighted sum of the activated fuzzy rules, using

$$x(k+1) = \sum_{l=1}^{L} \bar{y}_l(k)\big(A_l x(k) + B_l u(k)\big), \qquad \hat{y}(k) = \sum_{l=1}^{L} \bar{y}_l(k)\big(C_l x(k) + D_l u(k)\big) \tag{3}$$
On the other hand, the state-space matrices A, B, C and D for the global state-space plant model could be calculated as a weighted sum of the local matrices $A_l$, $B_l$, $C_l$ and $D_l$ from the activated fuzzy rules (2):

$$A(k) = \sum_{l=1}^{L} A_l\, \bar{y}_l(k), \quad B(k) = \sum_{l=1}^{L} B_l\, \bar{y}_l(k), \quad C(k) = \sum_{l=1}^{L} C_l\, \bar{y}_l(k), \quad D(k) = \sum_{l=1}^{L} D_l\, \bar{y}_l(k) \tag{4}$$
where $\bar{y}_l = y_l \big/ \sum_{l=1}^{L} y_l$ is the normalized degree of fulfilment upon the l-th activated fuzzy rule and L is the number of the activated rules at the moment k. The degree of fulfilment $y_l$ of a rule is obtained by fuzzy implication (product composition) over its antecedent:

$$y_l = \prod_{i=1}^{p} \mu_{ij} \tag{5}$$

where $\mu_{ij}$ specifies the membership degree (Fig. 2) upon the activated j-th fuzzy set of the corresponding i-th input signal; it is calculated according to the Gaussian membership function (6) chosen here for the l-th activated rule:
$$\mu_{ij}(z_i) = \exp\left(-\frac{(z_i - c_{Gij})^2}{2\sigma_{ij}^2}\right) \tag{6}$$

where $z_i$ is the current value of the i-th model input, $c_{Gij}$ is the centre (position) and $\sigma_{ij}$ is the standard deviation (width) of the j-th membership function (j = 1, 2, ..., s) (Fig. 2).
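As a concrete illustration of (3)-(6), the following Python sketch (a minimal sketch assuming NumPy; the rule structure, function names and dimensions are illustrative, not taken from the chapter) evaluates the Takagi-Sugeno model for one sampling instant: Gaussian membership degrees, product-based rule activations, normalized degrees of fulfilment and the blended state-space matrices.

```python
import numpy as np

def gaussian_mf(z, c, sigma):
    """Membership degree (6) of the inputs z for Gaussian fuzzy sets."""
    return np.exp(-(z - c) ** 2 / (2.0 * sigma ** 2))

def ts_model_step(x, u, rules):
    """One evaluation of the Takagi-Sugeno model (2)-(4).

    Each rule is a dict with centres 'c', widths 'sigma' (one per element of
    the regression vector) and local matrices 'A', 'B', 'C', 'D'.
    All names and the data layout are illustrative.
    """
    z = np.concatenate([x, u])                      # regression vector z(k) = [x(k) u(k)]
    # rule activations y_l: product of membership degrees (5)
    w = np.array([np.prod(gaussian_mf(z, r["c"], r["sigma"])) for r in rules])
    w_bar = w / np.sum(w)                           # normalized degrees of fulfilment
    # global matrices (4) as weighted sums of the local models
    A = sum(wl * r["A"] for wl, r in zip(w_bar, rules))
    B = sum(wl * r["B"] for wl, r in zip(w_bar, rules))
    C = sum(wl * r["C"] for wl, r in zip(w_bar, rules))
    D = sum(wl * r["D"] for wl, r in zip(w_bar, rules))
    x_next = A @ x + B @ u                          # weighted rule consequents (3)
    y_hat = C @ x + D @ u
    return x_next, y_hat, (A, B, C, D), w_bar
```

In the chapter, the centres, widths and local matrices stored in each rule are exactly the adjustable premise and consequent parameters identified in Section 2.1.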
2.1 Identification procedure for the fuzzy-neural model
The proposed identification procedure determines the unknown parameters of the Takagi-Sugeno fuzzy model, i.e. the parameters of the membership functions, according to their shape, and the parameters of the functions $f_x$ and $f_y$ in the consequent part of the rules (2). It is realised by a five-layer fuzzy-neural network (Fig. 3). Each of the layers performs typical fuzzy logic operations:
Layer 2. The fuzzification of the input variables is performed in the second layer. The weights in this layer are the parameters of the chosen membership functions; their number depends on the type and the number of the applied functions. All these parameters $\alpha_{ij}$ are adjustable and take part in the premise term of the Takagi-Sugeno type fuzzy rule base (2). In this section the membership functions for each model input variable are represented by Gaussian functions (Fig. 2). Hence, the adjustable parameters $\alpha_{ij}$ are the centres $c_{Gij}$ and standard deviations $\sigma_{ij}$ of the Gaussian functions (6). The nodes in the second layer of the fuzzy-neural architecture represent the membership degrees $\mu_{ij}(z_i)$ of the activated membership functions for each model input $z_i(k)$ according to (6). The number of neurons depends on the number of model inputs p and the number of membership functions s in the corresponding fuzzy sets; it is calculated as $p \times s$.
Layer 3. The third layer of the network interprets the fuzzy rule base (2). Each neuron in the third layer has as many inputs as the size p of the input regression vector; they are the corresponding membership degrees of the activated membership functions calculated in the previous layer. Therefore, each node in the third layer represents a fuzzy rule $R^l$ defined by the Takagi-Sugeno fuzzy model. The outputs of the neurons are the results of the applied fuzzy rule base.
Layer 4. The fourth layer implements the fuzzy implication (5). Weights in this layer are set to one if the rule $R^l$ from the third layer is activated, otherwise the weights are zero.
Layer 5. The last layer (a single-node layer) represents the defuzzification procedure and forms the output of the fuzzy-neural network (3). This layer also contains a set of adjustable parameters $\beta_l$; these are the parameters in the consequent part of the Takagi-Sugeno fuzzy model (2). The single node in this layer computes the overall model output signal as the summation of all signals coming from the previous layer:
$$I^5 = \sum_{l=1}^{L} f_{yl}\, y_l \;\text{ or }\; I^5 = \sum_{l=1}^{L} f_{xl}\, y_l, \qquad O^5 = \frac{\sum_{l=1}^{L} f_{yl}\, y_l}{\sum_{l=1}^{L} y_l} \;\text{ or }\; O^5 = \frac{\sum_{l=1}^{L} f_{xl}\, y_l}{\sum_{l=1}^{L} y_l} \tag{7}$$

where $f_{xl} = A_l x(k) + B_l u(k)$ and $f_{yl} = C_l x(k) + D_l u(k)$.
2.2 Learning algorithm of the fuzzy-neural model
A two-step gradient learning procedure is used as the learning algorithm of the internal fuzzy-neural model. It is based on minimization of an instant error function $E_{FNN}$. At time k the function is obtained from the following equation:

$$E_{FNN}(k) = \varepsilon^2(k)/2 \tag{8}$$

where the error $\varepsilon(k)$ is calculated as the difference between the controlled process output $y(k)$ and the fuzzy-neural model output $\hat{y}(k)$:

$$\varepsilon(k) = y(k) - \hat{y}(k) \tag{9}$$
During the first step of the procedure, the consequent parameters of the Takagi-Sugeno fuzzy rules are calculated according to the summary expression (10) (Petrov et al., 2002):

$$\beta_l(k+1) = \beta_l(k) - \eta \frac{\partial E_{FNN}}{\partial \beta_l} \tag{10}$$
where $\eta$ is the learning rate and $\beta_l$ represents an adjustable coefficient $a_{ij}$, $b_{ij}$, $c_{ij}$ or $d_{ij}$ (11) of the activated fuzzy rule $R^l$ (2). The coefficients take part in the state matrix $A_l$, control matrix $B_l$ and output matrices $C_l$ and $D_l$ of the l-th activated rule (Ahmed et al., 2009). The matrices approximate the unknown nonlinear functions $f_x$ and $f_y$ according to the defined fuzzy rule model (2). The matrix dimensions are specified by the numbers of inputs m, outputs q and states n of the system.
$$A_l = \begin{bmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{nn} \end{bmatrix}, \quad B_l = \begin{bmatrix} b_{11} & \cdots & b_{1m} \\ \vdots & \ddots & \vdots \\ b_{n1} & \cdots & b_{nm} \end{bmatrix}, \quad C_l = \begin{bmatrix} c_{11} & \cdots & c_{1n} \\ \vdots & \ddots & \vdots \\ c_{q1} & \cdots & c_{qn} \end{bmatrix}, \quad D_l = \begin{bmatrix} d_{11} & \cdots & d_{1m} \\ \vdots & \ddots & \vdots \\ d_{q1} & \cdots & d_{qm} \end{bmatrix} \tag{11}$$
In order to find the weight correction for the parameters in the last layer of the proposed fuzzy-neural network, the derivative $\partial E_{FNN} / \partial \beta_l$ of the instant error should be determined. Following the chain rule, the derivative is calculated considering the expressions (7) and (8):

$$\frac{\partial E_{FNN}}{\partial \beta_l} = \frac{\partial E_{FNN}}{\partial \hat{y}} \, \frac{\partial \hat{y}}{\partial I^5} \, \frac{\partial I^5}{\partial \beta_l} \tag{12}$$
After calculation of the partial derivatives, the elements of each matrix of the state-space equations corresponding to the l-th activated rule (2) are obtained according to the summary expression (12) (Petrov et al., 2002; Ahmed et al., 2010):

$$\begin{aligned}
a_{ij}(k+1) &= a_{ij}(k) + \eta\,\varepsilon(k)\,\bar{y}_l(k)\,x_i(k), & i, j &= 1, \dots, n \\
b_{ij}(k+1) &= b_{ij}(k) + \eta\,\varepsilon(k)\,\bar{y}_l(k)\,u_j(k), & i &= 1, \dots, n,\; j = 1, \dots, m \\
c_{ij}(k+1) &= c_{ij}(k) + \eta\,\varepsilon(k)\,\bar{y}_l(k)\,x_j(k), & i &= 1, \dots, q,\; j = 1, \dots, n \\
d_{ij}(k+1) &= d_{ij}(k) + \eta\,\varepsilon(k)\,\bar{y}_l(k)\,u_j(k), & i &= 1, \dots, q,\; j = 1, \dots, m
\end{aligned} \tag{13}$$
The proposed fuzzy-neural architecture allows the previously calculated output error (8) to be used in the next step of the parameter update procedure. The output error $E_{FNN}$ is propagated back directly to the second layer, where the second group of adjustable parameters is situated (Fig. 3). According to the network architecture, the membership degrees calculated in the fourth and the second network layers are related through the implication $y_l = \prod_i \mu_{ij}$ (5). Therefore, the learning rule for the second group of adjustable parameters can be given by an expression similar to (10):

$$\alpha_{ij}(k+1) = \alpha_{ij}(k) - \eta \frac{\partial E_{FNN}}{\partial \alpha_{ij}} \tag{14}$$
where the derivative of the output error $E_{FNN}$ is calculated through the separate partial derivatives:

$$\frac{\partial E_{FNN}}{\partial \alpha_{ij}} = \frac{\partial E_{FNN}}{\partial \hat{y}} \, \frac{\partial \hat{y}}{\partial \mu_{ij}} \, \frac{\partial \mu_{ij}}{\partial \alpha_{ij}} \tag{15}$$
The adjustable premise parameters of the fuzzy-neural model are the centre $c_{Gij}$ and the deviation $\sigma_{ij}$ of the Gaussian membership function (6). They are combined in the representative parameter $\alpha_{ij}$, which corresponds to the i-th model input and its j-th activated fuzzy set. Following the expressions (14) and (15), the parameters are calculated as follows (Petrov et al., 2002; Ahmed et al., 2010):

$$c_{Gij}(k+1) = c_{Gij}(k) + \eta\,\varepsilon(k)\,\bar{y}_l(k)\,\big[f_{yl} - \hat{y}(k)\big]\,\frac{z_i(k) - c_{Gij}(k)}{\sigma_{ij}^2(k)} \tag{16}$$

$$\sigma_{ij}(k+1) = \sigma_{ij}(k) + \eta\,\varepsilon(k)\,\bar{y}_l(k)\,\big[f_{yl} - \hat{y}(k)\big]\,\frac{\big(z_i(k) - c_{Gij}(k)\big)^2}{\sigma_{ij}^3(k)} \tag{17}$$
The proposed identification procedure for the fuzzy-neural model can be summarized in the following steps (Table 1):

Step 1. Initialize the membership functions (number, shape, parameters);
Step 2. Assign initial values for the network inputs;
Step 3. Start the algorithm at the current moment k;
Step 4. Fuzzify the network inputs and calculate the membership degrees upon the activated fuzzy sets of the membership functions according to (6);
Step 5. Perform fuzzy implication according to (5);
Step 6. Calculate the fuzzy-neural network output, which is represented by the state-space description of the modelled system (3) and (4);
Step 7. Calculate the instant error according to (8) and (9);
Step 8. Start the training procedure for the fuzzy-neural network;
Step 9. Adjust the consequent parameters according to (13);
Step 10. Adjust the premise parameters according to (16) and (17).
Repeat the algorithm from Step 3 for each sampling time.

Table 1. Identification procedure for the fuzzy-neural predictive model
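A rough Python sketch of one pass through Steps 3-10 of Table 1 is given below. It assumes a single model output, gives each rule its own Gaussian centres and widths (a simplification of the shared fuzzy sets in Fig. 3), and updates only the output-equation consequents $C_l$, $D_l$; the $A_l$, $B_l$ updates follow the same pattern (13). All names and the learning rate value are illustrative.

```python
import numpy as np

def identification_step(rules, x, u, y_meas, eta=0.05):
    """One identification pass (Table 1), single-output sketch.

    'rules' is a list of dicts with premise parameters 'c', 'sigma' and local
    matrices 'C' (1 x n) and 'D' (1 x m); eta is the learning rate.
    """
    z = np.concatenate([x, u])
    mu = [np.exp(-(z - r["c"]) ** 2 / (2.0 * r["sigma"] ** 2)) for r in rules]  # (6)
    w = np.array([np.prod(m) for m in mu])                                      # (5)
    w_bar = w / np.sum(w)
    f_y = [float(r["C"] @ x + r["D"] @ u) for r in rules]      # rule consequents
    y_hat = float(np.dot(w_bar, f_y))                          # model output (3)
    eps = float(y_meas - y_hat)                                 # instant error (9)

    for wl, fyl, r in zip(w_bar, f_y, rules):
        # consequent parameters, gradient step following (13)
        r["C"] += eta * eps * wl * x
        r["D"] += eta * eps * wl * u
        # premise parameters, gradient steps (16) and (17)
        common = eta * eps * wl * (fyl - y_hat)
        r["c"] += common * (z - r["c"]) / r["sigma"] ** 2
        r["sigma"] += common * (z - r["c"]) ** 2 / r["sigma"] ** 3
    return y_hat, eps
```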
The predictive controller computes the control actions by minimizing, at each sampling time, the following quadratic cost function (the GPC criterion):

$$J(k) = \sum_{i=H_w}^{H_p + H_w - 1} \big\| \hat{y}(k+i) - r(k+i) \big\|^2_{Q} + \sum_{i=0}^{H_u - 1} \big\| \Delta u(k+i) \big\|^2_{R} \tag{18}$$

where $\hat{y}(k)$, $r(k)$ and $\Delta u(k)$ are the predicted outputs, the reference trajectories and the predicted control increments at time k, respectively. The length of the prediction horizon is $H_p$, and the first sample to be included in the horizon is $H_w$. The control horizon is given by $H_u$. $Q \ge 0$ and $R > 0$ are weighting matrices representing the relative importance of each controlled and manipulated variable; they are assumed to be constant over $H_p$.
The cost function (18) may be rewritten in matrix form as follows:

$$J(k) = \big\| Y(k) - T(k) \big\|^2_{Q} + \big\| \Delta U(k) \big\|^2_{R} \tag{19}$$
where $Y(k)$, $T(k)$ and $\Delta U(k)$ are the predicted output, system reference and control increment vectors, and Q and R are the stacked weighting matrices, respectively:

$$Y(k) = \begin{bmatrix} \hat{y}(k|k) \\ \vdots \\ \hat{y}(k+H_p-1|k) \end{bmatrix}, \quad T(k) = \begin{bmatrix} r(k|k) \\ \vdots \\ r(k+H_p-1|k) \end{bmatrix}, \quad \Delta U(k) = \begin{bmatrix} \Delta u(k|k) \\ \vdots \\ \Delta u(k+H_u-1|k) \end{bmatrix}$$

$$Q = \begin{bmatrix} Q(1) & & 0 \\ & \ddots & \\ 0 & & Q(H_p) \end{bmatrix}, \qquad R = \begin{bmatrix} R(1) & & 0 \\ & \ddots & \\ 0 & & R(H_u) \end{bmatrix}$$
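For completeness, a small Python sketch of evaluating the matrix cost (19) with the stacked block-diagonal weights is shown below (assuming NumPy/SciPy; the function and variable names are illustrative).

```python
import numpy as np
from scipy.linalg import block_diag

def mpc_cost(Y, T, dU, Q_blocks, R_blocks):
    """Evaluate the quadratic cost (19) for stacked predictions (a sketch)."""
    Q = block_diag(*Q_blocks)        # blkdiag(Q(1), ..., Q(Hp))
    R = block_diag(*R_blocks)        # blkdiag(R(1), ..., R(Hu))
    e = Y - T                        # stacked output error
    return float(e @ Q @ e + dU @ R @ dU)
```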
The linear state-space model used in the Takagi-Sugeno fuzzy rules (2) can be represented in the following incremental form:

$$x(k+1) = A(k)x(k) + B(k)u(k), \qquad y(k) = C(k)x(k) + D(k)u(k), \qquad u(k) = u(k-1) + \Delta u(k) \tag{20}$$
Based on the state-space matrices A, B, C and D (4), the future state variables are calculated sequentially using the set of future control increments:

$$\begin{aligned}
x(k+1) &= A x(k) + B u(k-1) + B \Delta u(k) \\
x(k+2) &= A^2 x(k) + (AB + B)u(k-1) + (AB + B)\Delta u(k) + B \Delta u(k+1) \\
x(k+3) &= A^3 x(k) + (A^2B + AB + B)u(k-1) + (A^2B + AB + B)\Delta u(k) + (AB + B)\Delta u(k+1) + B \Delta u(k+2) \\
&\;\;\vdots \\
x(k+j) &= A^j x(k) + \sum_{i=0}^{j-1} A^i B\, u(k-1) + \sum_{m=0}^{j-1} \Big( \sum_{i=0}^{j-1-m} A^i B \Big) \Delta u(k+m) \\
&\;\;\vdots \\
x(k+H_p) &= A^{H_p} x(k) + \sum_{i=0}^{H_p-1} A^i B\, u(k-1) + \sum_{i=0}^{H_p-1} A^i B\, \Delta u(k) + \sum_{i=0}^{H_p-2} A^i B\, \Delta u(k+1) + \dots + \sum_{i=0}^{H_p-H_u} A^i B\, \Delta u(k+H_u-1)
\end{aligned}$$
The predictions of the output $\hat{y}$ for j steps ahead can be calculated as follows:

$$\begin{aligned}
\hat{y}(k+1) &= C x(k+1) + D u(k+1) = CA x(k) + (CB + D)u(k-1) + (CB + D)\Delta u(k) + D \Delta u(k+1) \\
\hat{y}(k+2) &= CA^2 x(k) + (CAB + CB + D)u(k-1) + (CAB + CB + D)\Delta u(k) + (CB + D)\Delta u(k+1) + D \Delta u(k+2) \\
&\;\;\vdots \\
\hat{y}(k+H_p-1) &= CA^{H_p-1} x(k) + \Big(C \sum_{i=0}^{H_p-2} A^i B + D\Big) u(k-1) + \Big(C \sum_{i=0}^{H_p-2} A^i B + D\Big) \Delta u(k) \\
&\quad + \Big(C \sum_{i=0}^{H_p-3} A^i B + D\Big) \Delta u(k+1) + \dots + \Big(C \sum_{i=0}^{H_p-H_u-1} A^i B + D\Big) \Delta u(k+H_u-1)
\end{aligned}$$
The recurrent equation for the output predictions $\hat{y}(k+j_p)$, where $j_p = 1, 2, \dots, H_p-1$, has the following form (empty sums are taken as zero):

$$\hat{y}(k+j_p) = CA^{j_p} x(k) + \Big(C \sum_{i=0}^{j_p-1} A^i B + D\Big) u(k-1) + \begin{cases} \displaystyle \sum_{i=0}^{j_p} \Big(C \sum_{j=0}^{j_p-1-i} A^j B + D\Big) \Delta u(k+i), & j_p < H_u \\[2ex] \displaystyle \sum_{i=0}^{H_u-1} \Big(C \sum_{j=0}^{j_p-1-i} A^j B + D\Big) \Delta u(k+i), & j_p \ge H_u \end{cases} \tag{21}$$
The prediction model defined in (21) can be generalized by the following matrix equality:

$$Y(k) = \Psi x(k) + \Upsilon u(k-1) + \Theta \Delta U(k) \tag{22}$$

where

$$\Psi = \begin{bmatrix} C \\ CA \\ CA^2 \\ \vdots \\ CA^{H_p-1} \end{bmatrix}, \qquad \Upsilon = \begin{bmatrix} D \\ CB + D \\ CAB + CB + D \\ \vdots \\ C \sum_{i=0}^{H_p-2} A^i B + D \end{bmatrix}$$

$$\Theta = \begin{bmatrix} D & 0 & \cdots & 0 \\ CB + D & D & \cdots & 0 \\ CAB + CB + D & CB + D & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ C \sum_{i=0}^{H_p-2} A^i B + D & C \sum_{i=0}^{H_p-3} A^i B + D & \cdots & C \sum_{i=0}^{H_p-H_u-1} A^i B + D \end{bmatrix}$$
All matrices that take part in the equations above are derived from the Takagi-Sugeno fuzzy-neural predictive model (4).
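The construction of the prediction matrices in (22) can be sketched in Python as follows (a sketch built from the global matrices (4); the function name and layout are illustrative, not taken from the chapter).

```python
import numpy as np

def prediction_matrices(A, B, C, D, Hp, Hu):
    """Build Psi, Upsilon and Theta of the prediction equation (22).

    Rows correspond to y_hat(k), ..., y_hat(k+Hp-1); block columns of Theta
    correspond to du(k), ..., du(k+Hu-1).
    """
    n = A.shape[0]
    q, m = C.shape[0], B.shape[1]
    # partial sums S[j] = sum_{i=0}^{j-1} A^i B, with S[0] = 0
    S = [np.zeros((n, m))]
    Apow = np.eye(n)
    for _ in range(Hp):
        S.append(S[-1] + Apow @ B)
        Apow = Apow @ A
    Psi = np.vstack([C @ np.linalg.matrix_power(A, j) for j in range(Hp)])
    Upsilon = np.vstack([C @ S[j] + D for j in range(Hp)])
    Theta = np.zeros((q * Hp, m * Hu))
    for j in range(Hp):                      # prediction step index
        for i in range(min(j + 1, Hu)):      # control-move index
            Theta[j*q:(j+1)*q, i*m:(i+1)*m] = C @ S[j - i] + D
    return Psi, Upsilon, Theta
```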
It is also possible to define the vector

$$E(k) = T(k) - \Psi x(k) - \Upsilon u(k-1) \tag{23}$$

This vector can be thought of as a tracking error, in the sense that it is the difference between the future target trajectory and the free response of the system, namely the response that would occur over the prediction horizon if no input changes were made, i.e. $\Delta U(k) = 0$. Hence, the so-called free response $F(k)$ is defined as follows:

$$F(k) = \Psi x(k) + \Upsilon u(k-1) \tag{24}$$
With the free response (24), the prediction model (22) becomes

$$Y(k) = F(k) + \Theta \Delta U(k) \tag{25}$$

and the cost function (19) can be written as

$$J(k) = \big(Y(k) - T(k)\big)^T Q \big(Y(k) - T(k)\big) + \Delta U^T(k)\, R\, \Delta U(k) \tag{26}$$

Hence, substituting the predictive model (25) into expression (26), the cost function of the model predictive optimization problem can be specified as follows:

$$J(\Delta U) = \Delta U^T (\Theta^T Q \Theta + R) \Delta U + 2(F - T)^T Q \Theta \Delta U + (T - F)^T Q (T - F) \tag{27}$$

The minimum of the function $J(\Delta U)$ can be obtained by calculating the input sequence $\Delta U$ so that $\partial J / \partial \Delta U = 0$:

$$\frac{\partial J(\Delta U)}{\partial \Delta U} = \frac{\partial}{\partial \Delta U}\Big[\Delta U^T (\Theta^T Q \Theta + R) \Delta U + 2(F - T)^T Q \Theta \Delta U + (F - T)^T Q (F - T)\Big] = 0 \tag{28}$$

which yields the optimal sequence of control increments

$$\Delta U^*(k) = (\Theta^T Q \Theta + R)^{-1} \Theta^T Q \big(T(k) - F(k)\big) = (\Theta^T Q \Theta + R)^{-1} \Theta^T Q\, E(k) \tag{29}$$
The input applied to the controlled plant at time k is computed according to the receding horizon principle, i.e. only the first element $\Delta u^*(k)$ of the control sequence $\Delta U^*$ is taken. The control signal is then calculated from:

$$u(k) = u(k-1) + \Delta u^*(k) \tag{30}$$
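Assuming the closed-form solution (29), a minimal Python sketch of one unconstrained control step, including the receding-horizon update (30), could look as follows (illustrative names; Q, R and T are the stacked quantities defined above).

```python
import numpy as np

def unconstrained_mpc_move(Psi, Upsilon, Theta, Q, R, x, u_prev, T):
    """Unconstrained solution (29) plus receding-horizon update (30) (a sketch)."""
    F = Psi @ x + Upsilon @ u_prev          # free response (24)
    E = T - F                               # tracking error (23)
    H = Theta.T @ Q @ Theta + R
    dU = np.linalg.solve(H, Theta.T @ Q @ E)
    m = u_prev.shape[0]
    du0 = dU[:m]                            # first control move only
    return u_prev + du0                     # u(k) = u(k-1) + du*(k)  (30)
```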
It is evident that the expression given by the matrix equation (29) is the same as the expression obtained for generalized predictive control. However, in the GPC formulation the components involved in the calculation of formula (29) are obtained from a linear model. In the present case the components introduced in this expression are generated by the designed nonlinear fuzzy-neural model. A more rigorous formulation of (29) represents the components as time-variant matrices, as shown in expression (22). In this case the matrix $\Theta(k)$ and the vectors $F(k)$ and $T(k)$ are reconstructed at each sampling time. The vector $F(k)$ is obtained by simulating the fuzzy model with the current input $u(k)$; the matrix $\Theta(k)$ is also rebuilt, using the method described below.
$$\Delta U(k) = \big(\Theta^T(k)\, Q\, \Theta(k) + R\big)^{-1} \Theta^T(k)\, Q\, E(k) \tag{31}$$
The proposed method solves the unconstrained MPC problem by solving a system of equations at each sampling time k. This approach decreases the computational burden by avoiding the need to invert the gain matrix in (31) at each sampling time k.
Applying this method, minimization of the GPC criterion (18) is based on calculation of the gradient vector of the cost function J at the moment k with respect to the predicted control actions:

$$\nabla J(k) = \left[ \frac{\partial J(k)}{\partial \Delta u(k)}, \; \frac{\partial J(k)}{\partial \Delta u(k+1)}, \; \dots, \; \frac{\partial J(k)}{\partial \Delta u(k+H_u-1)} \right]^T \tag{32}$$
Each element of this gradient vector (32) can be calculated using the following derivative matrix equation:

$$\frac{\partial J(k)}{\partial \Delta U(k)} = -2\,[T(k) - Y(k)]^T\, Q\, \frac{\partial Y(k)}{\partial \Delta U(k)} + 2\, \Delta U(k)^T R\, \frac{\partial \Delta U(k)}{\partial \Delta U(k)} \tag{33}$$
From the above expression (33) it can be seen that it is necessary to obtain two groups of partial derivatives: $\partial Y(k) / \partial \Delta U(k)$ and $\partial \Delta U(k) / \partial \Delta U(k)$. The first group of partial derivatives in (33) has the following matrix form:
$$\frac{\partial Y(k)}{\partial \Delta U(k)} = \begin{bmatrix} \dfrac{\partial \hat{y}(k+H_w)}{\partial \Delta u(k)} & \cdots & \dfrac{\partial \hat{y}(k+H_w)}{\partial \Delta u(k+H_u-1)} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial \hat{y}(k+H_p+H_w-1)}{\partial \Delta u(k)} & \cdots & \dfrac{\partial \hat{y}(k+H_p+H_w-1)}{\partial \Delta u(k+H_u-1)} \end{bmatrix} \tag{34}$$
For computational simplicity assume that $H_w = 0$ in (18). Then each element of the matrix (34) is calculated by the equations expressed according to the Takagi-Sugeno rule consequents (2). For example, the derivatives from the first column of the matrix (34) have the following form:
$$\frac{\partial \hat{y}(k)}{\partial \Delta u(k)} = \sum_{l=1}^{L} D_l\, \bar{y}_l(k) \tag{35}$$

$$\frac{\partial \hat{y}(k+1)}{\partial \Delta u(k)} = \sum_{l=1}^{L} (C_l B_l + D_l)\, \bar{y}_l(k+1) \tag{36}$$

$$\frac{\partial \hat{y}(k+2)}{\partial \Delta u(k)} = \sum_{l=1}^{L} (C_l A_l B_l + C_l B_l + D_l)\, \bar{y}_l(k+2) \tag{37}$$

$$\vdots$$

$$\frac{\partial \hat{y}(k+H_p-1)}{\partial \Delta u(k)} = \sum_{l=1}^{L} \Big(C_l \sum_{j=0}^{H_p-2} A_l^j B_l + D_l\Big)\, \bar{y}_l(k+H_p-1) \tag{38}$$
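A Python sketch of filling the derivative matrix (34) from the Takagi-Sugeno local models, following the pattern (35)-(38), is given below (a sketch; the structure used to pass the per-step normalized activations is an assumption of the example).

```python
import numpy as np

def dY_dDU(local_models, w_bar_seq, Hp, Hu):
    """Derivative matrix (34) with entries of the form (35)-(38).

    local_models: list of (A_l, B_l, C_l, D_l); w_bar_seq[j][l] is the
    normalized activation of rule l at prediction step k+j (illustrative).
    """
    q = local_models[0][2].shape[0]
    m = local_models[0][1].shape[1]
    dY = np.zeros((q * Hp, m * Hu))
    for j in range(Hp):                       # prediction step k+j
        for i in range(min(j + 1, Hu)):       # control move du(k+i)
            block = np.zeros((q, m))
            for l, (A, B, C, D) in enumerate(local_models):
                if j - i > 0:
                    S = sum(np.linalg.matrix_power(A, p) @ B for p in range(j - i))
                else:
                    S = np.zeros_like(B)
                block += w_bar_seq[j][l] * (C @ S + D)
            dY[j*q:(j+1)*q, i*m:(i+1)*m] = block
    return dY
```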
The second group of partial derivatives in (33) has the following matrix form:

$$\frac{\partial \Delta U(k)}{\partial \Delta U(k)} = \begin{bmatrix} \dfrac{\partial \Delta u(k)}{\partial \Delta u(k)} & \cdots & \dfrac{\partial \Delta u(k)}{\partial \Delta u(k+H_u-1)} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial \Delta u(k+H_u-1)}{\partial \Delta u(k)} & \cdots & \dfrac{\partial \Delta u(k+H_u-1)}{\partial \Delta u(k+H_u-1)} \end{bmatrix} \tag{39}$$

which is simply the identity matrix:

$$\frac{\partial \Delta U(k)}{\partial \Delta U(k)} = \begin{bmatrix} 1 & & 0 \\ & \ddots & \\ 0 & & 1 \end{bmatrix} \tag{40}$$
Following this procedure it is possible to calculate the remaining column elements of the matrix (34), which belong to the next elements of the gradient vector (32). Finally, each element of the gradient vector (32) can be obtained from the following system of equations:

$$\frac{\partial J(k)}{\partial \Delta u(k)} = -2e(k+1)Q(1)\frac{\partial \hat{y}(k+1)}{\partial \Delta u(k)} - \dots - 2e(k+H_p)Q(H_p)\frac{\partial \hat{y}(k+H_p)}{\partial \Delta u(k)} + 2R(1)\Delta u(k) - 2R(2)\Delta u(k+1) - \dots - 2R(H_u)\Delta u(k+H_u-1) = 0 \tag{41}$$

$$\frac{\partial J(k)}{\partial \Delta u(k+1)} = -2e(k+1)Q(1)\frac{\partial \hat{y}(k+1)}{\partial \Delta u(k+1)} - \dots - 2e(k+H_p)Q(H_p)\frac{\partial \hat{y}(k+H_p)}{\partial \Delta u(k+1)} + 2R(2)\Delta u(k+1) - 2R(3)\Delta u(k+2) - \dots - 2R(H_u)\Delta u(k+H_u-1) = 0 \tag{42}$$

$$\vdots$$

$$\frac{\partial J(k)}{\partial \Delta u(k+H_u-2)} = -2e(k+1)Q(1)\frac{\partial \hat{y}(k+1)}{\partial \Delta u(k+H_u-2)} - \dots - 2e(k+H_p)Q(H_p)\frac{\partial \hat{y}(k+H_p)}{\partial \Delta u(k+H_u-2)} + 2R(H_u-1)\Delta u(k+H_u-2) - 2R(H_u)\Delta u(k+H_u-1) = 0 \tag{43}$$

$$\frac{\partial J(k)}{\partial \Delta u(k+H_u-1)} = -2e(k+1)Q(1)\frac{\partial \hat{y}(k+1)}{\partial \Delta u(k+H_u-1)} - \dots - 2e(k+H_p)Q(H_p)\frac{\partial \hat{y}(k+H_p)}{\partial \Delta u(k+H_u-1)} + 2R(H_u)\Delta u(k+H_u-1) = 0 \tag{44}$$

where $e(k+j) = r(k+j) - \hat{y}(k+j)$, $j = 1, \dots, H_p$.
The obtained system of equations (41)-(44) can be solved easily, starting from the last equation (44) and calculating the last control action $\Delta u(k+H_u-1)$ first. Then the procedure continues by finding the previous control action $\Delta u(k+H_u-2)$ from (43). The calculations continue until all control actions over the horizon $H_u$ are obtained. The calculation order of the control actions is important, since each calculation should contain only known quantities. After that, only the first control action $\Delta u(k)$ (30) is applied at the moment k as an input to the controlled process. The software implementation of the proposed algorithm follows directly from the equations:
$$\Delta u(k+H_u-1) = R(H_u)^{-1}\left[e(k+1)Q(1)\frac{\partial \hat{y}(k+1)}{\partial \Delta u(k+H_u-1)} + \dots + e(k+H_p)Q(H_p)\frac{\partial \hat{y}(k+H_p)}{\partial \Delta u(k+H_u-1)}\right] \tag{45}$$

$$\Delta u(k+H_u-2) = \Delta u(k+H_u-1) + R(H_u-1)^{-1}\left[e(k+1)Q(1)\frac{\partial \hat{y}(k+1)}{\partial \Delta u(k+H_u-2)} + \dots + e(k+H_p)Q(H_p)\frac{\partial \hat{y}(k+H_p)}{\partial \Delta u(k+H_u-2)}\right] \tag{46}$$

$$\vdots$$

$$\Delta u(k+1) = \Delta u(k+2) + R(2)^{-1}\left[e(k+1)Q(1)\frac{\partial \hat{y}(k+1)}{\partial \Delta u(k+1)} + \dots + e(k+H_p)Q(H_p)\frac{\partial \hat{y}(k+H_p)}{\partial \Delta u(k+1)}\right] \tag{47}$$

$$\Delta u(k) = \Delta u(k+1) + R(1)^{-1}\left[e(k+1)Q(1)\frac{\partial \hat{y}(k+1)}{\partial \Delta u(k)} + \dots + e(k+H_p)Q(H_p)\frac{\partial \hat{y}(k+H_p)}{\partial \Delta u(k)}\right] \tag{48}$$
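The back-substitution (45)-(48) can be coded directly; the following Python sketch (illustrative names; a single stacked error vector is assumed) starts from the last control move and works backwards, so that no inversion of the full gain matrix is required.

```python
import numpy as np

def backsubstitution_moves(dY, e, Q_blocks, R_blocks, Hu, m):
    """Solve the gradient equations (41)-(44) backwards, as in (45)-(48).

    dY is the derivative matrix (34), e the stacked predicted error over the
    prediction horizon, Q_blocks/R_blocks the per-step weighting matrices.
    """
    Hp = len(Q_blocks)
    q = Q_blocks[0].shape[0]
    dU = np.zeros((Hu, m))
    nxt = np.zeros(m)                       # du beyond the control horizon is zero
    for i in range(Hu - 1, -1, -1):         # last control move first
        g = np.zeros(m)
        for j in range(Hp):                 # sum of weighted error-sensitivity terms
            block = dY[j*q:(j+1)*q, i*m:(i+1)*m]
            g += block.T @ Q_blocks[j] @ e[j*q:(j+1)*q]
        dU[i] = nxt + np.linalg.solve(R_blocks[i], g)   # (45)-(48)
        nxt = dU[i]
    return dU[0]                            # only du(k) is applied, see (30)
```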
The optimization problem is subject to constraints on the control amplitudes, the control increments and the outputs:

$$U_{min}(k) \le U(k) \le U_{max}(k), \qquad \Delta U_{min}(k) \le \Delta U(k) \le \Delta U_{max}(k), \qquad Y_{min}(k) \le Y(k) \le Y_{max}(k) \tag{49}$$

where

$$U_{max}(k) = \begin{bmatrix} u_{max}(k) \\ u_{max}(k+1) \\ \vdots \\ u_{max}(k+N_u-1) \end{bmatrix}, \quad \Delta U_{max}(k) = \begin{bmatrix} \Delta u_{max}(k) \\ \Delta u_{max}(k+1) \\ \vdots \\ \Delta u_{max}(k+N_u-1) \end{bmatrix}, \quad Y_{max}(k) = \begin{bmatrix} y_{max}(k) \\ y_{max}(k+1) \\ \vdots \\ y_{max}(k+N_p-1) \end{bmatrix}$$

and $U_{min}(k)$, $\Delta U_{min}(k)$ and $Y_{min}(k)$ are built analogously from $u_{min}$, $\Delta u_{min}$ and $y_{min}$.
Therefore, for the multi-input case the number of constraints on the control increment $\Delta u(k)$ is $mN_u$. Similarly, the number of constraints on the control amplitude is also $mN_u$, and the number of output constraints is $qN_p$.
3.2.2 Quadratic programming in use of constrained MPC
Since the cost function J(k) (19) is quadratic and the constraints are linear inequalities, the problem of finding an optimal predictive control becomes one of finding an optimal solution to a standard quadratic programming problem with linear inequality constraints:

$$\min J(x) = \frac{1}{2} x^T H x + f^T x \quad \text{subject to} \quad Ax \le b \tag{50}$$

where H and f are the Hessian and the gradient of the Lagrange function and x is the decision variable. The constraints of the QP problem (50) are specified by $Ax \le b$ according to (49).
The Lagrange function is defined as follows:

$$L(x, \lambda) = J(x) + \sum_{i=1}^{N} \lambda_i a_i, \qquad i = 1, 2, \dots, N \tag{51}$$

where $\lambda_i$ are the Lagrange multipliers, $a_i$ are the constraints on the decision variable x, and N is the number of constraints considered in the optimization problem.
Several algorithms for constrained optimization are described in (Fletcher, 2000). In this chapter a primal active set method is used. The idea of the active set method is to define, at each step of the algorithm, a set S of constraints that are regarded as equalities, whilst the rest are temporarily disregarded; the method adjusts this set in order to identify the correct active constraints of the solution to (52):

$$\min J(x) = \frac{1}{2} x^T H x + f^T x \quad \text{subject to} \quad a_i x = b_i,\; i \in S; \qquad a_i x \le b_i,\; i \notin S \tag{52}$$
At iteration k a feasible point $x^{(k)}$ is known which satisfies the active constraints as equalities. Each iteration attempts to locate the solution of an equality problem (EP) in which only the active constraints occur. This is most conveniently performed by shifting the origin to $x^{(k)}$ and looking for a correction $\delta^{(k)}$ which solves

$$\min_{\delta}\; \frac{1}{2}\, \delta^T H \delta + f^{(k)T} \delta \quad \text{subject to} \quad a_i \delta = 0, \quad a_i \in S \tag{53}$$

where $f^{(k)}$ is defined by $f^{(k)} = f + H x^{(k)}$ and is $\nabla J(x^{(k)})$ for the function defined by (52). If $\delta^{(k)}$ is feasible with regard to the constraints not included in S, then the next iterate is taken as $x^{(k+1)} = x^{(k)} + \delta^{(k)}$. If not, a line search is made in the direction of $\delta^{(k)}$ to find the best feasible point. A constraint is active if its Lagrange multiplier $\lambda_i \ge 0$, i.e. the solution lies on the boundary of the feasible region defined by that constraint. On the other hand, if there exists $\lambda_i < 0$, the constraint is not active; in this case it is relaxed from the active constraint set S and the algorithm continues as before by solving the resulting equality-constrained problem (53). If there is more than one constraint with $\lambda_i < 0$, then the one with $\min_{i \in S} \lambda_i^{(k)}$ is selected (Fletcher, 2000).
The QP, described in this way, is used to provide numerical solutions to the constrained MPC problem.
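At the core of each active set iteration is the equality-constrained subproblem (53); a minimal Python sketch of solving it through the KKT system is shown below (illustrative, not the chapter's implementation). The signs of the returned multipliers are then used, as described above, to decide which constraints remain in the active set S.

```python
import numpy as np

def equality_qp_step(H, f_k, A_active):
    """Solve the equality-constrained subproblem (53) of the active set method.

    Minimizes 0.5*d'Hd + f_k'd subject to A_active d = 0 via the KKT system;
    returns the correction d and the Lagrange multipliers (a sketch).
    """
    n = H.shape[0]
    na = A_active.shape[0]
    KKT = np.block([[H, A_active.T],
                    [A_active, np.zeros((na, na))]])
    rhs = np.concatenate([-f_k, np.zeros(na)])
    sol = np.linalg.solve(KKT, rhs)
    return sol[:n], sol[n:]                 # correction delta, multipliers lambda
```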
Substituting the prediction model (22) into the cost function (19) gives

$$J(k) = \big[\Psi x(k) + \Upsilon u(k-1) + \Theta \Delta U(k) - T(k)\big]^T Q \big[\Psi x(k) + \Upsilon u(k-1) + \Theta \Delta U(k) - T(k)\big] + \Delta U^T(k)\, R\, \Delta U(k)$$

Assuming that

$$H = \Theta^T Q \Theta + R \quad \text{and} \quad \phi = 2\, \Theta^T Q\, E(k) \tag{54}$$

the cost function of the model predictive optimization problem can be specified as follows:

$$J(k) = \Delta U^T(k)\, H\, \Delta U(k) - \Delta U^T(k)\, \phi + E^T(k)\, Q\, E(k) \tag{55}$$

The problem of minimizing the cost function (55) is a quadratic programming problem. If the Hessian matrix H is positive definite, the problem is convex (Fletcher, 2000). Then the solution is given by the closed form

$$\Delta U = \frac{1}{2} H^{-1} \phi \tag{56}$$
The constraints (49) on the cost function may be rewritten in terms of $\Delta U(k)$:

$$U_{min}(k) \le I_u\, u(k-1) + \tilde{I}_u\, \Delta U(k) \le U_{max}(k)$$
$$\Delta U_{min}(k) \le \Delta U(k) \le \Delta U_{max}(k) \tag{57}$$
$$Y_{min}(k) \le \Psi x(k) + \Upsilon u(k-1) + \Theta \Delta U(k) \le Y_{max}(k)$$

where $I_m \in \mathbb{R}^{m \times m}$ is an identity matrix,

$$I_u = \begin{bmatrix} I_m \\ I_m \\ \vdots \\ I_m \end{bmatrix} \in \mathbb{R}^{mN_u \times m}, \qquad \tilde{I}_u = \begin{bmatrix} I_m & 0 & \cdots & 0 \\ I_m & I_m & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ I_m & I_m & \cdots & I_m \end{bmatrix} \in \mathbb{R}^{mN_u \times mN_u}$$
Expression (57) can be arranged as a single system of linear inequalities in $\Delta U(k)$:

$$\begin{bmatrix} \tilde{I}_u \\ -\tilde{I}_u \\ I \\ -I \\ \Theta \\ -\Theta \end{bmatrix} \Delta U(k) \le \begin{bmatrix} U_{max}(k) - I_u u(k-1) \\ -U_{min}(k) + I_u u(k-1) \\ \Delta U_{max}(k) \\ -\Delta U_{min}(k) \\ Y_{max}(k) - \big(\Psi x(k) + \Upsilon u(k-1)\big) \\ -Y_{min}(k) + \big(\Psi x(k) + \Upsilon u(k-1)\big) \end{bmatrix} \tag{58}$$

so that the constrained MPC problem becomes the quadratic program

$$\min_{\Delta U(k)}\; J(k) = \Delta U^T(k)\, H\, \Delta U(k) - \Delta U^T(k)\, \phi \quad \text{subject to} \quad \Gamma\, \Delta U(k) \le \Omega \tag{59}$$
In (59) the constraint expression (58) has been denoted by $\Gamma \Delta U(k) \le \Omega$, where $\Gamma$ is a matrix with a number of rows equal to the dimension of $\Omega$ and a number of columns equal to the dimension of $\Delta U(k)$. In case the constraints are fully imposed, the dimension of $\Omega$ is equal to $4mN_u + 2qN_p$, where m is the number of system inputs and q is the number of outputs. In general, the total number of constraints is greater than the dimension of $\Delta U(k)$; the dimension of $\Omega$ represents the number of constraints.
The proposed model predictive control algorithm can be summarized in the following steps (Table 3).
At each sampling time:
Step 1. Read the current states, inputs and outputs of the system;
Step 2. Start identification of the fuzzy-neural predictive model following Algorithm 1 (Table 1);
Step 3. With A(k), B(k), C(k), D(k) from Step 2 calculate the predicted output Y(k) according to (22);
Step 4. Obtain the prediction error E(k) according to (23);
Step 5. Construct the cost function (55) and the constraints (58) of the QP problem;
Step 6. Solve the QP problem according to (59);
Step 7. Apply only the first control action u(k).
Table 3. State-space implementation of the fuzzy-neural model predictive control strategy
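A compact Python sketch of one constrained control step in the spirit of Table 3 is given below; it builds H and phi (54), the amplitude, increment and output constraints (57)-(58), and hands the QP (59) to a general-purpose solver (SLSQP here, instead of the primal active set method described above; all names and the bound format are illustrative).

```python
import numpy as np
from scipy.optimize import minimize

def constrained_mpc_move(Psi, Ups, Theta, Q, R, x, u_prev, T,
                         dU_lim, u_lim, y_lim, Hu, m):
    """One constrained MPC step (Table 3), solved as the QP (59) (a sketch).

    dU_lim, u_lim, y_lim are (min, max) bound vectors stacked over the horizons.
    """
    E = T - Psi @ x - Ups @ u_prev                     # tracking error (23)
    H = Theta.T @ Q @ Theta + R                        # Hessian (54)
    phi = 2.0 * Theta.T @ Q @ E
    cost = lambda dU: float(dU @ H @ dU - dU @ phi)    # cost (55)

    Iu = np.tile(np.eye(m), (Hu, 1))                   # stacked identities
    Il = np.kron(np.tril(np.ones((Hu, Hu))), np.eye(m))  # accumulates du into u
    cons = [
        {"type": "ineq", "fun": lambda dU: dU_lim[1] - dU},
        {"type": "ineq", "fun": lambda dU: dU - dU_lim[0]},
        {"type": "ineq", "fun": lambda dU: u_lim[1] - (Iu @ u_prev + Il @ dU)},
        {"type": "ineq", "fun": lambda dU: Iu @ u_prev + Il @ dU - u_lim[0]},
        {"type": "ineq", "fun": lambda dU: y_lim[1] - (Psi @ x + Ups @ u_prev + Theta @ dU)},
        {"type": "ineq", "fun": lambda dU: Psi @ x + Ups @ u_prev + Theta @ dU - y_lim[0]},
    ]
    res = minimize(cost, np.zeros(m * Hu), constraints=cons, method="SLSQP")
    du0 = res.x[:m]                                    # receding horizon: first move only
    return u_prev + du0
```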
At each sampling time, the LIQP (59) is solved with new parameters. The Hessian H and the vector $\phi$ are constructed from the state-space matrices A(k), B(k), C(k) and D(k) (4) obtained during the identification procedure (Table 1). The problem of nonlinear constrained predictive control is thus formulated as a nonlinear quadratic optimization problem. By means of local linearization a relaxation can be obtained and the problem can be solved using quadratic programming; this is the solution of the linear constrained predictive control problem (Espinosa et al., 2005).
Linearization of the tank level dynamics around an operating point yields the following state-space model of the multi tank system:

$$A = \begin{bmatrix} -\dfrac{\alpha_1 H_1^{\alpha_1-1}}{aw} & 0 & 0 \\ \dfrac{\alpha_1 H_1^{\alpha_1-1}}{w\,(c + b\,H_2/H_{2max})} & -\dfrac{\alpha_2 H_2^{\alpha_2-1}}{w\,(c + b\,H_2/H_{2max})} & 0 \\ 0 & \dfrac{\alpha_2 H_2^{\alpha_2-1}}{w\sqrt{R^2 - (H_{3max} - H_3)^2}} & -\dfrac{\alpha_3 H_3^{\alpha_3-1}}{w\sqrt{R^2 - (H_{3max} - H_3)^2}} \end{bmatrix}$$

$$B = \begin{bmatrix} \dfrac{1}{aw} & -\dfrac{H_1^{\alpha_1}}{aw} & 0 & 0 \\ 0 & \dfrac{H_1^{\alpha_1}}{w\,(c + b\,H_2/H_{2max})} & -\dfrac{H_2^{\alpha_2}}{w\,(c + b\,H_2/H_{2max})} & 0 \\ 0 & 0 & \dfrac{H_2^{\alpha_2}}{w\sqrt{R^2 - (H_{3max} - H_3)^2}} & -\dfrac{H_3^{\alpha_3}}{w\sqrt{R^2 - (H_{3max} - H_3)^2}} \end{bmatrix}$$

$$C = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad D = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \tag{60}$$
The parameters $\alpha_1$, $\alpha_2$ and $\alpha_3$ are flow coefficients for each tank of the model. The described linearized state-space model is used as an initial model for the training of the fuzzy-neural model during the experiments.
Fig. 5. Model of the Multi Tank system as a pump- and valve-controlled system
In this case study a multi-input multi-output (MIMO) configuration of the Inteco Multi Tank system is used (Fig. 5); it corresponds to the linearized state-space model (60). Several issues have been recognized as causes of additional nonlinearities in the plant dynamics.
The figures below show typical results for the level control problem. The reference value for each tank is changed consecutively at different times. The proposed fuzzy-neural identification procedure provides the matrices for the model predictive control optimization problem at each sampling time $T_s$. The plant modelling process during the unconstrained and constrained MPC experiments is shown in Fig. 6 and Fig. 9, respectively.
4.2 Experimental results with unconstrained model predictive control
The proposed unconstrained model predictive control algorithm (Table 2) with the Takagi-Sugeno fuzzy-neural model as a predictor has been applied to the level control problem. The experiments have been carried out with the parameters in Table 4. The weighting matrices are specified as follows: $Q = 0.01 \cdot \mathrm{diag}(1, 1, 1)$ and $R = 10^{-4} \cdot \mathrm{diag}(1, 1, 1, 1)$. Note that the weighting matrix R is constant over the whole prediction horizon, which makes it possible to avoid matrix inversion at each sampling time: $R^{-1}$ is calculated once at time k = 0.
Fig. 6. Identification of the levels H1, H2 and H3 (plant output and fuzzy-neural model output over time) during the unconstrained NMPC experiment
Fig. 7. Transient responses of the multi tank system outputs (levels H1, H2, H3 and their references), unconstrained NMPC
Fig. 8. Transient responses of the multi tank system inputs (control signals C1, C2, C3), unconstrained NMPC
Fig. 9. Identification of the levels H1, H2 and H3 (plant output and fuzzy-neural model output) during the constrained NMPC experiment
Fig. 10. Transient responses of the multi tank system outputs, constrained NMPC
Fig. 11. Transient responses of the multi tank system inputs, constrained NMPC
5. Conclusions
This chapter has presented an effective approach to fuzzy model-based control. Effective modelling and identification techniques based on fuzzy structures, combined with a model predictive control strategy, result in effective control of nonlinear MIMO plants. The goal was to design a new control strategy that is simple to realize for the designer and simple to implement for the end user of the control system.
The idea of using fuzzy-neural models for nonlinear system identification is not new, although more applications are needed to demonstrate its capabilities in nonlinear identification and prediction. By applying this idea to the state-space representation of control systems, it is possible to obtain a powerful model of nonlinear plants or processes. Such models can be embedded into a predictive control scheme. A state-space model of the system allows the optimization problem to be constructed as a quadratic programming problem. It is important to note that the model predictive control approach has one major advantage: the ability to solve the control problem while taking into consideration the operational constraints on the system.
This chapter includes two simple control algorithms with their respective derivations. They represent control strategies based on the estimated fuzzy-neural predictive model. The two-stage gradient learning procedure is the main advantage of the proposed identification procedure: it is capable of modelling nonlinearities in real time and provides an accurate model for the MPC optimization procedure at each sampling time.
The proposed consecutive solution of the unconstrained MPC problem is the main contribution to the predictive optimization task. On the other hand, extraction of a local linear model obtained from the inference process of a Takagi-Sugeno fuzzy model allows treating the nonlinear optimization problem in the presence of constraints as a linear-inequality QP.
The model predictive control scheme is employed to control the response of the laboratory multi tank system. The inherent instability of the system makes it difficult to model and control. Model predictive control is successfully applied to the studied multi tank system, which represents a multivariable controlled process. Adaptation of the applied fuzzy-neural internal model is the most common way of dealing with plant nonlinearities. The results show that the controlled levels have good performance, following the references closely and compensating for the disturbances.
The contribution of the proposed approach using the Takagi-Sugeno fuzzy model is the capacity to exploit the information given directly by the Takagi-Sugeno fuzzy model. This approach is very attractive for high-order systems, since no simulation is needed to obtain the parameters for solving the optimization task. The model's state-space matrices can be generated directly from the inference of the fuzzy system. The use of this approach is very attractive to industry for practical reasons related to the capacity of this model structure to combine local models identified in experiments around different operating points.
6. Acknowledgment
The authors would like to acknowledge the Ministry of Education and Science of Bulgaria,
Research Fund project BY-TH-108/2005.
7. References
Ahmed S., M. Petrov, A. Ichtev (July 2010). Fuzzy Model-Based Predictive Control Applied to Multivariable Level Control of Multi Tank System. Proceedings of the 2010 IEEE International Conference on Intelligent Systems (IS 2010), London, UK, pp. 456-461.
Ahmed S., M. Petrov, A. Ichtev (2009). Model predictive control of a laboratory model of coupled water tanks. In Proceedings of the International Conference Automatics and Informatics'09, October 1-4, 2009, Sofia, Bulgaria, pp. VI-33 - VI-35.
Åkesson Johan (2006). MPCtools 1.0: Reference Manual. Technical report ISRN LUTFD2/TFRT-7613--SE, Department of Automatic Control, Lund Institute of Technology, Sweden, January 2006.
Camacho E. F., C. Bordons (2004). Model Predictive Control (Advanced Textbooks in Control and Signal Processing). Springer-Verlag, London, 2004.
Espinosa J., J. Vandewalle and V. Wertz (2005). Fuzzy Logic, Identification and Predictive Control (Advances in Industrial Control). Springer-Verlag London Limited, 2005.
Fletcher R. (2000). Practical Methods of Optimization. 2nd ed., Wiley, 2000.
Inteco Ltd. (2009). Multitank System User's Manual. Inteco Ltd., http://www.inteco.com.pl.
Lee, J.H.; Morari, M. & Garcia, C.E. (1994). State-space interpretation of model predictive control. Automatica, 30(4), pp. 707-717.
Maciejowski J. M. (2002). Predictive Control with Constraints. Prentice Hall Inc., NY, USA, 2002.
Martinsen F., Lorenz T. Biegler, Bjarne A. Foss (2004). A new optimization algorithm with application to nonlinear MPC. Journal of Process Control, vol. 14, pp. 853-865, 2004.
Mendonça L.F., J.M. Sousa, J.M.G. Sá da Costa (2004). Optimization Problems in Multivariable Fuzzy Predictive Control. International Journal of Approximate Reasoning, vol. 36, pp. 199-221, 2004.
Mollov S., R. Babuska, J. Abonyi, and H. Verbruggen (October 2004). Effective Optimization for Fuzzy Model Predictive Control. IEEE Transactions on Fuzzy Systems, Vol. 12, No. 5, pp. 661-675.
Petrov M., A. Taneva, T. Puleva, S. Ahmed (September 2008). Parallel Distributed Neuro-Fuzzy Model Predictive Controller Applied to a Hydro Turbine Generator. Proceedings of the Fourth International IEEE Conference on Intelligent Systems, Golden Sands resort, Varna, Bulgaria. ISBN 978-1-4244-1740-7, Vol. I, pp. 9-20 - 9-25.
Petrov M., I. Ganchev, A. Taneva (November 2002). Fuzzy model predictive control of nonlinear processes. Preprints of the International Conference on Automation and Informatics 2002, Sofia, Bulgaria, 2002. ISBN 954-9641-30-9, pp. 77-80.
Rossiter J.A. (2003). Model-Based Predictive Control: A Practical Approach. CRC Press, 2003.