Kronecker Product
1. Introduction
The Kronecker product of two matrices, named after Leopold Kronecker
(1823–1891), is a very important tool in linear algebra, numerical linear algebra, nu-
merical analysis, data mining, signal processing, computer vision and other fields. Many
properties of its trace, determinant, eigenvalues, and related decompositions have
been discovered over time, and are now part of the classical linear algebra litera-
ture (see for instance [1, 2, 3, 4]).
As noted above, the Kronecker product is a particularly important tool in linear
algebra and in matrix equations. Under appropriate hypotheses, it is possible
to transform a matrix equation into a linear system [5] and vice versa. The same
applies to the Kronecker sum. This will be a major part of our paper.
Kronecker sum decomposition (KSD) means that a matrix T can be written in
the Kronecker sum form of two matrices A, B, i.e. T = A ⊗ I + I ⊗ B. Kronecker
sum gemel decomposition (KSGD) means the Kronecker sum in the special case
A = B. Kronecker sum isomer decomposition (KSID) corresponds to the case
T = A ⊗ I + I ⊗ A^T, where A^T is the transpose of A.
Liu [6] presented interesting work on Kronecker product decomposition (KPD),
and Zhang and Ding [7] gave new results on the singular values of the Kronecker prod-
uct and their applications. Moreover, Jarlebring [8] studied numerical methods for Lyapunov
equations and Datta [9] dealt with iterative methods for solving matrix equations.
In this paper, from the point of view of inverse problem theory [6], we mainly
explore necessary and sufficient conditions and algorithms for KSD, KSGD and
KSID, together with some of their applications. In particular, we use them to build
a new preconditioner for solving linear systems with the Conjugate Gradient (CG)
method, and to convert CG into a Conjugate Gradient method in matrix format
(CG-M) for solving matrix equations, such as the Lyapunov and Sylvester equations
that result from transforming linear systems via the Kronecker sum decomposition.
We also note the difference in execution speed between the CG and CG-M methods.
Our paper is organized as follows. We present the preliminaries on the definitions
and properties of the Kronecker product and sum in Section 2. Section 3 is devoted
to the study of the Kronecker sum decomposition, with results on the necessary and
sufficient conditions of KSD in Section 3.1, and of KSGD and KSID in
Section 3.2. In Section 4 we present some applications of the Kronecker algebra
studied before. Finally, Section 5 concludes our work.
2. Preliminaries
2.1. Definitions.
Definition 2.1 (Vec-operation [10]). For any F ∈ R^{r×s}, we define the vector
vec(F) = [f_{11}, ..., f_{r1}, f_{12}, ..., f_{r2}, ..., f_{1s}, ..., f_{rs}]^T ∈ R^{rs}.
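For instance, the vec-operation simply stacks the columns of F on top of each other. A quick numerical check (ours, in Python with NumPy; the paper's own experiments use Matlab [19]):

import numpy as np

# vec stacks the columns of F on top of each other (Definition 2.1).
F = np.array([[1, 4],
              [2, 5],
              [3, 6]])           # F in R^{3x2}
v = F.reshape(-1, order="F")     # column-major flattening gives vec(F)
print(v)                         # [1 2 3 4 5 6]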
Definition 2.2 (Kronecker product [11]). Let A ∈ Rr×s and B ∈ Rm×n be two
matrices. Then the rm × sn matrix
(2.1) A ⊗ B = \begin{pmatrix} a_{11}B & a_{12}B & \cdots & a_{1s}B \\ a_{21}B & a_{22}B & \cdots & a_{2s}B \\ \vdots & \vdots & & \vdots \\ a_{r1}B & a_{r2}B & \cdots & a_{rs}B \end{pmatrix}
is called the Kronecker product of A and B.
For example,
A ⊗ B = \begin{pmatrix} 1 & 2 \\ 3 & 7 \end{pmatrix} ⊗ \begin{pmatrix} 4 & 6 \\ 5 & 8 \end{pmatrix} = \begin{pmatrix} 1×4 & 1×6 & 2×4 & 2×6 \\ 1×5 & 1×8 & 2×5 & 2×8 \\ 3×4 & 3×6 & 7×4 & 7×6 \\ 3×5 & 3×8 & 7×5 & 7×8 \end{pmatrix} = \begin{pmatrix} 4 & 6 & 8 & 12 \\ 5 & 8 & 10 & 16 \\ 12 & 18 & 28 & 42 \\ 15 & 24 & 35 & 56 \end{pmatrix}.
Some immediate consequences of Definition 2.2 are listed below [11].
• The Kronecker product of two matrices differs from ordinary matrix multi-
plication in that it is defined for any two matrices. It does not require that
the number of columns in the first matrix be equal to the number of rows in
the second matrix.
Example 2.5. Here is an example of a Kronecker product of a 2 × 3 matrix and a
3 × 3 matrix:
\begin{pmatrix} 2 & 3 & 1 \\ 0 & 5 & 4 \end{pmatrix} ⊗ \begin{pmatrix} 2 & 4 & 1 \\ 5 & 0 & 7 \\ 3 & 2 & 2 \end{pmatrix} = \begin{pmatrix} 4 & 8 & 2 & 6 & 12 & 3 & 2 & 4 & 1 \\ 10 & 0 & 14 & 15 & 0 & 21 & 5 & 0 & 7 \\ 6 & 4 & 4 & 9 & 6 & 6 & 3 & 2 & 2 \\ 0 & 0 & 0 & 10 & 20 & 5 & 8 & 16 & 4 \\ 0 & 0 & 0 & 25 & 0 & 35 & 20 & 0 & 28 \\ 0 & 0 & 0 & 15 & 10 & 10 & 12 & 8 & 8 \end{pmatrix}.
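This example is easy to verify numerically; a minimal sketch (ours, in Python with NumPy):

import numpy as np

A = np.array([[2, 3, 1],
              [0, 5, 4]])
B = np.array([[2, 4, 1],
              [5, 0, 7],
              [3, 2, 2]])
K = np.kron(A, B)        # 6x9 block matrix with blocks a_ij * B
print(K.shape)           # (6, 9)
print(K[0])              # first row: [4 8 2 6 12 3 2 4 1]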
Then, for suitable 2 × 2 matrices A and B,
A ⊕ B = A ⊗ I_2 + I_2 ⊗ B = \begin{pmatrix} 3 & 3 & 0 & 0 \\ 1 & 6 & 0 & 0 \\ 0 & 0 & 3 & 3 \\ 0 & 0 & 1 & 6 \end{pmatrix},
while
B ⊕ A = B ⊗ I_2 + I_2 ⊗ A = \begin{pmatrix} 3 & 0 & 3 & 0 \\ 0 & 3 & 0 & 3 \\ 1 & 0 & 6 & 0 \\ 0 & 1 & 0 & 6 \end{pmatrix},
so that, in general, A ⊕ B ≠ B ⊕ A.
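A small numerical sketch of the Kronecker sum (ours, in Python; the 2 × 2 matrices are illustrative choices of ours that reproduce the displays above):

import numpy as np

def kron_sum(A, B):
    # A ⊕ B = A kron I_n + I_m kron B (Kronecker sum)
    m, n = A.shape[0], B.shape[0]
    return np.kron(A, np.eye(n)) + np.kron(np.eye(m), B)

A = np.array([[1., 0.], [0., 1.]])   # illustrative 2x2 matrices (ours)
B = np.array([[2., 3.], [1., 5.]])
print(kron_sum(A, B))
print(kron_sum(B, A))                # differs: the Kronecker sum is not commutative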
2.2. Properties.
2.2.1. Basic properties.
We state some basic facts about the Kronecker product [11, 17], which are easily derived
from Definition 2.2.
(1) µ ⊗ A = A ⊗ µ = µA, for any scalar µ.
(2) (λA) ⊗ (µB) = λµ(A ⊗ B), for any scalars λ and µ.
(3) (A + B) ⊗ C = (A ⊗ C) + (B ⊗ C), if A and B are the same size.
(4) A ⊗ (B + C) = (A ⊗ B) + (A ⊗ C), if B and C are the same size.
(5) A ⊗ (B ⊗ C) = (A ⊗ B) ⊗ C.
(6) (A ⊗ B)^T = A^T ⊗ B^T.
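These identities are easy to spot-check numerically; a small sketch (ours, in Python, with random matrices):

import numpy as np

rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))

# (5) associativity and (6) transpose rule of the Kronecker product
assert np.allclose(np.kron(A, np.kron(B, C)), np.kron(np.kron(A, B), C))
assert np.allclose(np.kron(A, B).T, np.kron(A.T, B.T))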
Theorem 2.11 ([7]). Let A and B be any two square matrices.
(1) If A and B are invertible matrices, then A ⊗ B is invertible and (A ⊗ B)^{-1} = A^{-1} ⊗ B^{-1}.
Theorem 2.13 (5), namely vec^{-1}((A ⊗ B) vec(X)) = BXA^T, is extremely important in solving matrix equations [7].
For more properties of Kronecker products and sums see [16] and [10].
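With the column-stacking vec of Definition 2.1, the identity of Theorem 2.13 (5) can be checked numerically; a minimal sketch (ours, in Python):

import numpy as np

rng = np.random.default_rng(1)
A, B, X = (rng.standard_normal((4, 4)) for _ in range(3))

vec = lambda M: M.reshape(-1, order="F")   # column-stacking vec
lhs = np.kron(A, B) @ vec(X)
rhs = vec(B @ X @ A.T)                     # (A kron B) vec(X) = vec(B X A^T)
assert np.allclose(lhs, rhs)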
A ⊗ I = \begin{pmatrix} T^A_{11} & \cdots & T^A_{1n} \\ \vdots & \ddots & \vdots \\ T^A_{n1} & \cdots & T^A_{nn} \end{pmatrix}, \quad I ⊗ B = \begin{pmatrix} T^B_{11} & \cdots & T^B_{1n} \\ \vdots & \ddots & \vdots \\ T^B_{n1} & \cdots & T^B_{nn} \end{pmatrix}, \quad A ⊕ B = \begin{pmatrix} T_{11} & \cdots & T_{1n} \\ \vdots & \ddots & \vdots \\ T_{n1} & \cdots & T_{nn} \end{pmatrix},
with T_{ij}, T^A_{ij}, T^B_{ij} ∈ R^{m×m}, and T_{ij} = T^A_{ij} + T^B_{ij} (i = 1, ..., n, j = 1, ..., n), where
rank{vec(T_{11}), ..., vec(T_{nn})}
= rank{vec(T^A_{11} + T^B_{11}), ..., vec(T^A_{nn} + T^B_{nn})}
= rank{vec(T^A_{11}) + vec(T^B_{11}), ..., vec(T^A_{nn}) + vec(T^B_{nn})}
= rank{{vec(T^A_{11}), ..., vec(T^A_{nn})} + {vec(T^B_{11}), ..., vec(T^B_{nn})}}
≤ rank{vec(T^A_{11}), ..., vec(T^A_{nn})} + rank{vec(T^B_{11}), ..., vec(T^B_{nn})}
≤ 1 + 1 = 2.
We adapt Theorem 3.6, valid for the Kronecker product, to the Kronecker sum by
setting T_{ii} − (t^{ii}_{i,i}/2) I = A in place of T_{ij}/\sqrt{t^{ij}_{i,j}} = A.
Theorem 3.7 (KSGD). An arbitrary square matrix T ∈ R^{n²×n²},
T = \begin{pmatrix} T_{11} & \cdots & T_{1n} \\ \vdots & \ddots & \vdots \\ T_{n1} & \cdots & T_{nn} \end{pmatrix}, with T_{ij} ∈ R^{n×n} (i = 1, ..., n, j = 1, ..., n),
can be decomposed in the form T = A ⊗ I + I ⊗ A if and only if the off-diagonal
sub-blocks satisfy T_{ij} = t^{ii}_{i,j} I for i ≠ j and, setting A = T_{ii} − (t^{ii}_{i,i}/2) I,
every diagonal sub-block T_{ii} = (t^{ii}_{s,t})_{s=1,...,n, t=1,...,n} satisfies
T_{ii} = (t^{ii}_{i,i}/2) I + A for all i, with A ∈ R^{n×n}.
Theorem 3.8 (KSID). An arbitrary square matrix T ∈ R^{n²×n²},
T = \begin{pmatrix} T_{11} & \cdots & T_{1n} \\ \vdots & \ddots & \vdots \\ T_{n1} & \cdots & T_{nn} \end{pmatrix}, with T_{ij} ∈ R^{n×n} (i = 1, ..., n, j = 1, ..., n),
can be decomposed in the form T = A ⊗ I + I ⊗ A^T if and only if the off-diagonal
sub-blocks satisfy T_{ij} = t^{ii}_{j,i} I for i ≠ j and, setting A' = T_{ii} − (t^{ii}_{i,i}/2) I,
every diagonal sub-block T_{ii} = (t^{ii}_{r,s})_{r=1,...,n, s=1,...,n} satisfies
T_{ii} = (t^{ii}_{i,i}/2) I + A' for all i, with A ∈ R^{n×n} (here A' = A^T).
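The conditions of Theorems 3.7 and 3.8 translate directly into a test-and-extract procedure. The sketch below (ours, in Python) checks the KSGD conditions of Theorem 3.7 block by block and recovers A when they hold; names and tolerances are ours:

import numpy as np

def ksgd_extract(T, tol=1e-10):
    """Try to write T (n^2 x n^2) as A kron I + I kron A, per Theorem 3.7.
    Returns A if the KSGD conditions hold, otherwise None. (A sketch, ours.)"""
    N = T.shape[0]
    n = int(round(np.sqrt(N)))
    I = np.eye(n)
    blk = lambda i, j: T[i*n:(i+1)*n, j*n:(j+1)*n]
    # Candidate A from the first diagonal block: A = T_11 - (t^{11}_{1,1}/2) I.
    A = blk(0, 0) - 0.5 * blk(0, 0)[0, 0] * I
    for i in range(n):
        for j in range(n):
            # Diagonal blocks must equal a_ii I + A; off-diagonal blocks a_ij I.
            want = A[i, i] * I + A if i == j else A[i, j] * I
            if not np.allclose(blk(i, j), want, atol=tol):
                return None      # the KSGD conditions of Theorem 3.7 fail
    return A

# Round-trip check on a randomly generated KSGD matrix.
n = 4
A0 = np.random.default_rng(2).standard_normal((n, n))
T = np.kron(A0, np.eye(n)) + np.kron(np.eye(n), A0)
assert np.allclose(ksgd_extract(T), A0)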
4. Applications
4.1. Preconditioning the Conjugate Gradient method.
To solve large sparse linear systems,
(4.1) T x = f,
α_k = (r_k^T r_k)/(p_k^T d_k);
x_{k+1} = x_k + α_k p_k;
r_{k+1} = r_k − α_k d_k;
β_k = (r_{k+1}^T r_{k+1})/(r_k^T r_k);
p_{k+1} = r_{k+1} + β_k p_k,
where d_k = T p_k.
A preconditioner is a matrix M that we use to speed up the convergence of the
CG method to get the Preconditioned Conjugate Gradient method (PCG) [10].
α_k = (s_k^T r_k)/(p_k^T d_k);
x_{k+1} = x_k + α_k p_k;
r_{k+1} = r_k − α_k d_k;
s_{k+1} = s_k − α_k M d_k;
β_k = (s_{k+1}^T r_{k+1})/(s_k^T r_k);
p_{k+1} = s_{k+1} + β_k p_k.
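For concreteness, here is a minimal sketch of this PCG iteration in Python (ours; apply_M is our assumed interface for applying the preconditioner, where apply_M(v) returns M v and M approximates T^{-1}):

import numpy as np

def pcg(T, f, apply_M, tol=1e-8, maxiter=1000):
    """PCG matching the iteration above. (A minimal sketch, ours.)"""
    x = np.zeros_like(f)
    r = f - T @ x                  # residual r_0
    s = apply_M(r)                 # preconditioned residual s_0 = M r_0
    p = s.copy()
    sr = s @ r                     # s_k^T r_k
    for _ in range(maxiter):
        d = T @ p                  # d_k = T p_k
        alpha = sr / (p @ d)
        x += alpha * p
        r -= alpha * d
        if np.linalg.norm(r) < tol:
            break
        s = apply_M(r)             # same as s - alpha * M d, since M is linear
        sr_new = s @ r
        beta = sr_new / sr         # beta_k = (s_{k+1}^T r_{k+1})/(s_k^T r_k)
        p = s + beta * p
        sr = sr_new
    return x

Taking apply_M(v) = v recovers the plain CG iteration above.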
(4.2) \frac{∂²φ}{∂x²} + \frac{∂²φ}{∂y²} = f(x, y) on Ω,
A = \begin{pmatrix} 2 & −1 & 0 & 0 & 0 & \cdots & 0 \\ −1 & 2 & −1 & 0 & 0 & \cdots & 0 \\ 0 & −1 & 2 & −1 & 0 & \cdots & 0 \\ \vdots & \ddots & \ddots & \ddots & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & −1 & 2 & −1 & 0 \\ 0 & \cdots & \cdots & 0 & −1 & 2 & −1 \\ 0 & \cdots & \cdots & \cdots & 0 & −1 & 2 \end{pmatrix}.
It is clear that the matrix T is symmetric positive definite [10], so the linear system
(4.4) can be solved by the Conjugate Gradient method.
The following table summarizes the results of solving the Poisson problem with
the CG method and the PCG method using different preconditioners: the incomplete
Cholesky preconditioner IC(0), the incomplete Cholesky with threshold dropping
preconditioner ICT(1e-3), and finally our preconditioner IC-K defined by M =
L ⊗ I + I ⊗ L, where A = LL′ is the incomplete Cholesky factorization of the matrix
A. This is done using Matlab [19].
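As an illustration, the following sketch (ours, in Python with NumPy/SciPy rather than the Matlab setup of [19]) builds T = A ⊗ I + I ⊗ A for the discrete Poisson problem together with the IC-K factor M = L ⊗ I + I ⊗ L, and preconditions SciPy's CG by solving M M^T z = r with two triangular solves. How M is applied inside PCG is our assumption here; note that for tridiagonal A the exact Cholesky factor is already bidiagonal, so it coincides with IC(0):

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

m = 50
# 1D Poisson matrix A = tridiag(-1, 2, -1) and T = A kron I + I kron A
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(m, m), format="csc")
I = sp.identity(m, format="csc")
T = (sp.kron(A, I) + sp.kron(I, A)).tocsr()

# Exact Cholesky of the tridiagonal A is bidiagonal, hence equals IC(0).
L = sp.csc_matrix(np.linalg.cholesky(A.toarray()))
M = (sp.kron(L, I) + sp.kron(I, L)).tocsr()   # IC-K factor, lower triangular
MT = sp.csr_matrix(M.T)

def apply_precond(r):
    # Approximate T^{-1} r by solving M M^T z = r (forward, then backward solve).
    y = spla.spsolve_triangular(M, r, lower=True)
    return spla.spsolve_triangular(MT, y, lower=False)

P = spla.LinearOperator(T.shape, matvec=apply_precond)
f = np.ones(m * m)
x, info = spla.cg(T, f, M=P)
print("CG with IC-K preconditioner:", "converged" if info == 0 else info)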
4.2.2. Lyapunov equation.
Definition 4.3 ([8]). The Lyapunov matrix equation is the particular case of the Sylvester
matrix equation in which B equals A^T. For square matrices A, F ∈ R^{n×n}, this equation is
written in the following form:
(4.8) AX + XA^T = F.
Property 4.4. If T ∈ R^{n²×n²} can be decomposed as KSGD (T = A ⊗ I + I ⊗ A)
such that A ∈ R^{n×n}, then
T x = f ⇔ AX + XA^T = F,
where X, F ∈ R^{n×n} and x = vec(X), f = vec(F).
4.3. The Conjugate Gradient algorithm in matrix format.
According to Property 4.4 the linear system (4.4) becomes the following Lyapunov
equation:
(4.9) AX + XAT = F,
where x = vec(X) and f = vec(F).
X_{k+1} = X_k + α_k P_k;
R_{k+1} = R_k − α_k D_k;
β_k = \left( \sum_{i=1}^m \sum_{j=1}^m (R_{ij})_{k+1} (R_{ij})_{k+1} \right) \Big/ \left( \sum_{i=1}^m \sum_{j=1}^m (R_{ij})_k (R_{ij})_k \right);
P_{k+1} = R_{k+1} + β_k P_k.
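The sketch below (ours, in Python) implements this CG-M iteration for the Lyapunov equation (4.9), using the Frobenius inner product that appears in β_k; the step length α_k = ⟨R_k, R_k⟩/⟨P_k, D_k⟩ with D_k = A P_k + P_k A^T is our reconstruction of the elided lines:

import numpy as np

def cg_matrix(A, F, tol=1e-8, maxiter=1000):
    """CG in matrix format (CG-M) for A X + X A^T = F, assuming that
    T = A kron I + I kron A is SPD. (A minimal sketch, ours.)"""
    n = A.shape[0]
    X = np.zeros((n, n))
    R = F - (A @ X + X @ A.T)      # residual in matrix form
    P = R.copy()
    rs = np.sum(R * R)             # Frobenius inner product <R_k, R_k>
    for _ in range(maxiter):
        D = A @ P + P @ A.T        # matrix-format counterpart of T p_k
        alpha = rs / np.sum(P * D)
        X += alpha * P
        R -= alpha * D
        rs_new = np.sum(R * R)
        if np.sqrt(rs_new) < tol:
            break
        P = R + (rs_new / rs) * P  # beta_k = <R_{k+1}, R_{k+1}> / <R_k, R_k>
        rs = rs_new
    return X

# Example with the 1D Poisson matrix A and F = ones.
m = 10
A = 2*np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
X = cg_matrix(A, np.ones((m, m)))
print(np.linalg.norm(A @ X + X @ A.T - np.ones((m, m))))  # small residual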
Example 4.5. We solve the linear system (4.4) with the two algorithms CG
and CG-M for various m.
The following table shows the difference in speed between CG and CG-M. We use
tol = 1e−8, f = [1, 1, ..., 1]^T and x_0 = [0, 0, ..., 0]^T.
5. Conclusion
Defining the Kronecker sum decomposition of matrices can be very useful for
speeding up the solution of linear systems T x = f, either by creating preconditioners
or by converting such systems into matrix equations.
6. Acknowledgements.
The authors wish to warmly thank the referees for all their useful suggestions.
References
[1] C. F. Van Loan, The ubiquitous Kronecker product, Journal of Computational and Applied
Mathematics 123 (2000) 85–100.
[2] S. Liu and G. Trenkler, Hadamard, Khatri-Rao, Kronecker and other matrix products, Inter-
national Journal of Information and Systems Sciences 4 (1) (2008) 160–177.
[3] P. A. Regalia and S. K. Mitra, Kronecker products, unitary matrices and signal processing
applications, SIAM Review 31 (4) (1989) 586–613.
[4] D. S. Bernstein, Matrix Mathematics, Theory, Facts and Formulas, 2nd edition, Princeton
University Press 2009.
[5] E. M. Sadek, Méthodes itératives pour la résolution d’équations matricielles, Modélisation et
simulation, Université du Littoral Côte d’Opale 2015.
[6] F. Liu, New Kronecker product decompositions and its applications, International Journal of
Engineering and Science 1 (11) (2012) 25–30.
[7] H. Zhang and F. Ding, On the Kronecker products and their applications, Journal of Applied
Mathematics 2013 (2013) Article ID 296185, 8 pages.
[8] E. Jarlebring, Numerical Methods for Lyapunov Equations, KTH Royal Institute of Technology
2017.
[9] B. N. Datta, Numerical Methods For Linear Control Systems Design And Analysis, Elsevier
Academic Press 2003.
[10] T. Lyche, Numerical Linear Algebra and Matrix Factorizations, Texts in Computational Sci-
ence and Engineering, Springer 2020.
[11] S. Banerjee and A. Roy, Linear algebra and matrix analysis for statistics, Texts in Statistical
Science 2014.
[12] W. H. Steeb, Matrix Calculus and Kronecker Product with Applications and C++ Programs,
International School of Scientific Computing 1997.
[13] A. J. Laub, Matrix Analysis for Scientists and Engineers, SIAM, Philadelphia 2005.
[14] V. Amoia, G. De Micheli and M. Santomauro, Computer-oriented formulation of transition-
rate matrices via Kronecker algebra, IEEE Transactions on Reliability 30 (1981) 123–132.
[15] A. Graham, Kronecker Products and Matrix Calculus With Applications, Ellis Horwood Lim-
ited, Chichester 1981.
[16] R. A. Horn and C. R. Johnson, Topics in Matrix Analysis, Cambridge University Press, Cam-
bridge 1991.
[17] Å. Björck, Numerical Methods in Matrix Computations, Texts in Applied Mathematics, Vol-
ume 59, Springer 2015.
[18] E. Kreyszig, Advanced Engineering Mathematics, John Wiley & Sons, Inc. 2011.
[19] MATLAB User’s Guide, Version 8.1.0.604, The Math Works (R2013a).