
Annals of Fuzzy Mathematics and Informatics

Volume x, No. x, (Month 201y), pp. 1–xx

ISSN: 2093–9310 (print version)
ISSN: 2287–6235 (electronic version)
http://www.afmi.or.kr

© Research Institute for Basic Science, Wonkwang University
http://ribs.wonkwang.ac.kr

Kronecker sum decomposition and its applications

Youssouf Mezzar, Kacem Belghaba

Received 24 June 2021; Revised 24 September 2021; Accepted 4 October 2021

Abstract. We develop here two types of Kronecker sum decompositions, KSGD and KSID. We study the Kronecker sum decompositions KSD, KSGD and KSID and their algorithms. We conclude with some applications of the Kronecker sum decomposition.

2010 AMS Classification: 15A69, 65F08, 65F10, 65Y04


Keywords: Kronecker product decomposition, Kronecker sum decomposition, Isomer decomposition, Gemel decomposition, Matrix equation, Rank, Conjugate Gradient, Incomplete Cholesky factorisation preconditioner.

Corresponding Author: Kacem Belghaba (belghaba@yahoo.fr)

1. Introduction
The Kronecker product of two matrices, named after Leopold Kronecker (1823–1891), is a very important tool in linear algebra, numerical linear algebra, numerical analysis, data mining, signal processing, computer vision and other fields. Many properties about its trace, determinant, eigenvalues, and other decompositions have been discovered over time, and are now part of the classical linear algebra literature (see for instance [1, 2, 3, 4]).
As noted above, the Kronecker product is a particularly important tool in linear algebra and in matrix equations. By using appropriate hypotheses, it is possible to transform a matrix equation into a linear system [5] and vice-versa. The same applies to the Kronecker sum. This will be a major part of our paper.
Kronecker sum decomposition (KSD) means that a matrix T can be written in the Kronecker sum form of other matrices A, B, i.e., T = A ⊗ I + I ⊗ B. Kronecker sum gemel decomposition (KSGD) means the Kronecker sum in the special case A = B. Kronecker sum isomer decomposition (KSID) corresponds to the case T = A ⊗ I + I ⊗ A^T, where A^T is the transpose matrix of A.

Liu [6] presented interesting work on Kronecker product decomposition (KPD), and Zhang and Ding [7] gave new results on the singular values of the Kronecker product and their applications. Moreover, Jarlebring [8] studied methods for Lyapunov equations, and Datta [9] dealt with iterative methods for solving matrix equations.
In this paper, from the point of view of inverse problem theory [6], we mainly explore the sufficient and necessary conditions and algorithms for KSD, KSGD and KSID, together with some of their applications: building a new preconditioner for solving linear systems with the Conjugate Gradient (CG) method, and converting CG into a Conjugate Gradient method in matrix format (CG-M) for solving matrix equations, such as Lyapunov and Sylvester equations, obtained by transforming linear systems via the Kronecker sum decomposition. We note the difference in execution speed between the CG and CG-M methods.
Our paper is organized as follows. We present the preliminaries on the definitions and properties of the Kronecker product and sum in Section 2. We devote Section 3 to the study of the Kronecker sum decomposition, with results on the sufficient and necessary conditions of KSD in Section 3.1 and of KSGD and KSID in Section 3.2. We present in Section 4 some applications of the Kronecker algebra previously studied. Finally, Section 5 concludes our work.

2. Preliminaries
2.1. Definitions.

Definition 2.1 (Vec-operation [10]). For any F ∈ R^{r×s}, we define the vector

vec(F) = [f_{11}, ..., f_{r1}, f_{12}, ..., f_{r2}, ..., f_{1s}, ..., f_{rs}]^T ∈ R^{rs},

where the columns of F are stacked on top of each other.

Definition 2.2 (Kronecker product [11]). Let A ∈ R^{r×s} and B ∈ R^{m×n} be two matrices. Then the rm × sn matrix

(2.1) \quad A ⊗ B = \begin{bmatrix} a_{11}B & a_{12}B & \cdots & a_{1s}B \\ a_{21}B & a_{22}B & \cdots & a_{2s}B \\ \vdots & \vdots & & \vdots \\ a_{r1}B & a_{r2}B & \cdots & a_{rs}B \end{bmatrix}

is the Kronecker product of A and B.

Remark 2.3. The Kronecker product is sometimes called a direct product or a tensor product [12].

Example 2.4. The Kronecker product of two 2 × 2 matrices gives a 4 × 4 matrix. If

A = \begin{bmatrix} 1 & 2 \\ 3 & 7 \end{bmatrix} \quad \text{and} \quad B = \begin{bmatrix} 4 & 6 \\ 5 & 8 \end{bmatrix},

then

A ⊗ B = \begin{bmatrix} 1 \times 4 & 1 \times 6 & 2 \times 4 & 2 \times 6 \\ 1 \times 5 & 1 \times 8 & 2 \times 5 & 2 \times 8 \\ 3 \times 4 & 3 \times 6 & 7 \times 4 & 7 \times 6 \\ 3 \times 5 & 3 \times 8 & 7 \times 5 & 7 \times 8 \end{bmatrix} = \begin{bmatrix} 4 & 6 & 8 & 12 \\ 5 & 8 & 10 & 16 \\ 12 & 18 & 28 & 42 \\ 15 & 24 & 35 & 56 \end{bmatrix}.
Some immediate consequences of Definition 2.2 are listed below [11].
• The Kronecker product of two matrices differs from ordinary matrix multiplication in that it is defined for any two matrices. It does not require that the number of columns in the first matrix be equal to the number of rows in the second matrix.
Example 2.5. Here is an example of a Kronecker product of a 2 × 3 matrix and a 3 × 3 matrix:

\begin{bmatrix} 2 & 3 & 1 \\ 0 & 5 & 4 \end{bmatrix} ⊗ \begin{bmatrix} 2 & 4 & 1 \\ 5 & 0 & 7 \\ 3 & 2 & 2 \end{bmatrix} = \begin{bmatrix} 4 & 8 & 2 & 6 & 12 & 3 & 2 & 4 & 1 \\ 10 & 0 & 14 & 15 & 0 & 21 & 5 & 0 & 7 \\ 6 & 4 & 4 & 9 & 6 & 6 & 3 & 2 & 2 \\ 0 & 0 & 0 & 10 & 20 & 5 & 8 & 16 & 4 \\ 0 & 0 & 0 & 25 & 0 & 35 & 20 & 0 & 28 \\ 0 & 0 & 0 & 15 & 10 & 10 & 12 & 8 & 8 \end{bmatrix}.
• The dimensions of A ⊗ B and B ⊗ A are equal. If A is an r × s matrix and B is an m × n matrix, then both A ⊗ B and B ⊗ A have rm rows and sn columns. However, the Kronecker product is not commutative.
Example 2.6. Let A and B be the following 2 × 2 matrices:

A = \begin{bmatrix} 2 & 1 \\ 5 & 4 \end{bmatrix} \quad \text{and} \quad B = \begin{bmatrix} 2 & 0 \\ 1 & 5 \end{bmatrix}.

Then

A ⊗ B = \begin{bmatrix} 4 & 0 & 2 & 0 \\ 2 & 10 & 1 & 5 \\ 10 & 0 & 8 & 0 \\ 5 & 25 & 4 & 20 \end{bmatrix}, \quad \text{while} \quad B ⊗ A = \begin{bmatrix} 4 & 2 & 0 & 0 \\ 10 & 8 & 0 & 0 \\ 2 & 1 & 10 & 5 \\ 5 & 4 & 25 & 20 \end{bmatrix}.
Remark 2.7. In MATLAB, the vec-operation can be computed with b=B(:) and the inverse operation with B=reshape(b,n,length(b)/n), where n is the number of rows of the matrix B. The Kronecker product is implemented as kron(A,B).
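For concreteness, here is a minimal MATLAB sketch of these commands, using the matrices of Example 2.4:

    B = [4 6; 5 8];                  % 2 x 2 example matrix
    b = B(:);                        % vec(B): stack the columns of B
    n = size(B,1);                   % row number of B
    B2 = reshape(b, n, length(b)/n); % inverse vec-operation; B2 equals B
    A = [1 2; 3 7];
    K = kron(A, B);                  % Kronecker product A (x) B, 4 x 4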
Definition 2.8 (Kronecker sum [13]). Let A ∈ R^{r×r} and B ∈ R^{s×s}. We denote by A ⊕ B the Kronecker sum of A and B, defined by:

(2.2) \quad A ⊕ B = (A ⊗ I_s) + (I_r ⊗ B),

where A ⊕ B is an rs × rs matrix, and I_r, I_s represent the identity matrices of order r, s respectively.
Note that the Kronecker sum definition may have other formulations in the literature. Notably, Amoia et al. [14] as well as Graham [15] use the above definition, while Horn and Johnson [16] use A ⊕ B = (I_m ⊗ A) + (B ⊗ I_n). In our case, we will work with the Amoia–De Micheli version of the Kronecker sum.
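In MATLAB, a minimal sketch of this convention (it reproduces Example 2.9 (1) below):

    A = [1 2 3; 3 2 1; 1 1 4];                % 3 x 3
    B = [2 1; 2 3];                           % 2 x 2
    r = size(A,1);  s = size(B,1);
    KS = kron(A, eye(s)) + kron(eye(r), B);   % A (+) B, a 6 x 6 matrix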
Example 2.9.
(1) Let

A = \begin{bmatrix} 1 & 2 & 3 \\ 3 & 2 & 1 \\ 1 & 1 & 4 \end{bmatrix}, \quad I_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \quad \text{and} \quad B = \begin{bmatrix} 2 & 1 \\ 2 & 3 \end{bmatrix}, \quad I_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.

Then we have

A ⊕ B = (A ⊗ I_2) + (I_3 ⊗ B) = \begin{bmatrix} 1 & 0 & 2 & 0 & 3 & 0 \\ 0 & 1 & 0 & 2 & 0 & 3 \\ 3 & 0 & 2 & 0 & 1 & 0 \\ 0 & 3 & 0 & 2 & 0 & 1 \\ 1 & 0 & 1 & 0 & 4 & 0 \\ 0 & 1 & 0 & 1 & 0 & 4 \end{bmatrix} + \begin{bmatrix} 2 & 1 & 0 & 0 & 0 & 0 \\ 2 & 3 & 0 & 0 & 0 & 0 \\ 0 & 0 & 2 & 1 & 0 & 0 \\ 0 & 0 & 2 & 3 & 0 & 0 \\ 0 & 0 & 0 & 0 & 2 & 1 \\ 0 & 0 & 0 & 0 & 2 & 3 \end{bmatrix}

= \begin{bmatrix} 3 & 1 & 2 & 0 & 3 & 0 \\ 2 & 4 & 0 & 2 & 0 & 3 \\ 3 & 0 & 4 & 1 & 1 & 0 \\ 0 & 3 & 2 & 5 & 0 & 1 \\ 1 & 0 & 1 & 0 & 6 & 1 \\ 0 & 1 & 0 & 1 & 2 & 7 \end{bmatrix}.

(2) Let

A = \begin{bmatrix} 2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 2 \end{bmatrix} \quad \text{and} \quad I = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.

Then we get

A ⊕ A = A ⊗ I + I ⊗ A = \begin{bmatrix} 2I & -I & 0 \\ -I & 2I & -I \\ 0 & -I & 2I \end{bmatrix} + \begin{bmatrix} A & 0 & 0 \\ 0 & A & 0 \\ 0 & 0 & A \end{bmatrix}

= \begin{bmatrix} 4 & -1 & 0 & -1 & 0 & 0 & 0 & 0 & 0 \\ -1 & 4 & -1 & 0 & -1 & 0 & 0 & 0 & 0 \\ 0 & -1 & 4 & 0 & 0 & -1 & 0 & 0 & 0 \\ -1 & 0 & 0 & 4 & -1 & 0 & -1 & 0 & 0 \\ 0 & -1 & 0 & -1 & 4 & -1 & 0 & -1 & 0 \\ 0 & 0 & -1 & 0 & -1 & 4 & 0 & 0 & -1 \\ 0 & 0 & 0 & -1 & 0 & 0 & 4 & -1 & 0 \\ 0 & 0 & 0 & 0 & -1 & 0 & -1 & 4 & -1 \\ 0 & 0 & 0 & 0 & 0 & -1 & 0 & -1 & 4 \end{bmatrix}.

In general, A ⊕ B ≠ B ⊕ A.
Example 2.10. Let A and B be the following 2 × 2 matrices:

A = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \quad \text{and} \quad B = \begin{bmatrix} 2 & 3 \\ 1 & 5 \end{bmatrix}.

Then

A ⊕ B = A ⊗ I_2 + I_2 ⊗ B = \begin{bmatrix} 3 & 3 & 0 & 0 \\ 1 & 6 & 0 & 0 \\ 0 & 0 & 3 & 3 \\ 0 & 0 & 1 & 6 \end{bmatrix},

while

B ⊕ A = B ⊗ I_2 + I_2 ⊗ A = \begin{bmatrix} 3 & 0 & 3 & 0 \\ 0 & 3 & 0 & 3 \\ 1 & 0 & 6 & 0 \\ 0 & 1 & 0 & 6 \end{bmatrix}.
2.2. Properties.
2.2.1. Basic properties.
We state some basic facts about Kronecker product [11, 17], which are easily derived
from Definition 2.2.
(1) µ ⊗ A = A ⊗ µ = µA, for any scalar µ.
(2) (λA) ⊗ (µB) = λµ(A ⊗ B), for any scalars λ and µ.
(3) (A + B) ⊗ C = (A ⊗ C) + (B ⊗ C), if A and B are the same size.
(4) A ⊗ (B + C) = (A ⊗ B) + (A ⊗ C), if B and C are the same size.
(5) A ⊗ (B ⊗ C) = (A ⊗ B) ⊗ C.
(6) (A ⊗ B)T = AT ⊗ B T .
Theorem 2.11 ([7]). Let A and B be any two square matrices.
(1) If A and B are invertible matrices, then A ⊗ B is invertible and (A ⊗ B)^{-1} = A^{-1} ⊗ B^{-1}.
(2) If A and B are normal matrices, then A ⊗ B is a normal matrix.
(3) If A and B are unitary (orthogonal) matrices, then A ⊗ B is a unitary (orthogonal) matrix.
Theorem 2.12 (The mixed product rule [7]). Suppose that A, B, C, D are rectangular matrices whose dimensions make it possible to define the products AC and BD. Then

(2.3) \quad (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD).
Theorem 2.13 ([11, 10]). For m, n ∈ N, consider the square matrices A ∈ R^{m×m} with eigenpairs (α_i, x_i), i = 1, ..., m, and B ∈ R^{n×n} with eigenpairs (β_j, y_j), j = 1, ..., n.
(1) If A^T = A and B^T = B, then (A ⊗ B)^T = A ⊗ B and (A ⊕ B)^T = A ⊕ B.
(2) (A ⊗ B)(x_i ⊗ y_j) = (α_i β_j)(x_i ⊗ y_j), i = 1, ..., m, j = 1, ..., n.
(3) (A ⊕ B)(x_i ⊗ y_j) = (α_i + β_j)(x_i ⊗ y_j), i = 1, ..., m, j = 1, ..., n.
(4) If one of A, B is positive definite and the other positive semi-definite, then A ⊕ B is positive definite.
(5) If A, B and X are matrices in R^{n×n}, then vec(BXA^T) = (A ⊗ B)vec(X).
Proof. (1) By Basic properties 2.2.1 (6), (A ⊗ B)^T = A^T ⊗ B^T = A ⊗ B. Moreover,
(A ⊕ B)^T = (A ⊗ I + I ⊗ B)^T = (A ⊗ I)^T + (I ⊗ B)^T = (A ⊗ I) + (I ⊗ B) = A ⊕ B.
(2) For all i, j, we have
(A ⊗ B)(x_i ⊗ y_j) = (Ax_i) ⊗ (By_j) = (α_i x_i) ⊗ (β_j y_j) = (α_i β_j)(x_i ⊗ y_j).
(3) We have (A ⊗ I)(x_i ⊗ y_j) = α_i (x_i ⊗ y_j) and (I ⊗ B)(x_i ⊗ y_j) = β_j (x_i ⊗ y_j). Moreover, for i = 1, ..., m, j = 1, ..., n, we get
(A ⊕ B)(x_i ⊗ y_j) = (A ⊗ I + I ⊗ B)(x_i ⊗ y_j)
= (A ⊗ I)(x_i ⊗ y_j) + (I ⊗ B)(x_i ⊗ y_j)
= α_i (x_i ⊗ y_j) + β_j (x_i ⊗ y_j)
= (α_i + β_j)(x_i ⊗ y_j).
(4) By (3), the eigenvalues of A ⊕ B are the sums α_i + β_j, which are positive because for all i, j both α_i and β_j are non-negative and one of them is positive. Then A ⊕ B is positive definite.
(5) Partition vec(X) into the columns x_1, ..., x_n of X. The i-th block of (A ⊗ B)vec(X) is

\sum_{j=1}^{n} a_{ij} B x_j = B X \tilde{a}_i,

where \tilde{a}_i = (a_{i1}, ..., a_{in})^T is the i-th row of A written as a column, i.e., the i-th column of A^T. Since B X \tilde{a}_i is precisely the i-th column of BXA^T, stacking the blocks gives (A ⊗ B)vec(X) = vec(BXA^T). □
Theorem 2.13 (5) is extremely important in solving matrix equations [7]. For more properties on Kronecker products and sums, see [16] and [10].
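A quick numerical check of Theorem 2.13 (5) in MATLAB, with random matrices (the residual should be at machine precision):

    n = 4;
    A = randn(n); B = randn(n); X = randn(n);
    lhs = kron(A, B) * X(:);       % (A (x) B) vec(X)
    rhs = B * X * A';              % B X A^T
    norm(lhs - rhs(:))             % approximately 0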

3. Kronecker sum decomposition


Initially, we assume that all matrices T, A, B used below have the appropriate
size.

3.1. Sufficient and necessary conditions of KSD.


First, we give the sufficient and necessary conditions of the Kronecker sum decomposition.
Theorem 3.1 (KSD). An arbitrary square matrix T ∈ R^{nm×nm},

T = \begin{bmatrix} T_{11} & \cdots & T_{1n} \\ \vdots & \ddots & \vdots \\ T_{n1} & \cdots & T_{nn} \end{bmatrix} \quad \text{with } T_{ij} ∈ R^{m×m} \ (i = 1, ..., n, \ j = 1, ..., n),

may be decomposed to the form T = A ⊗ I + I ⊗ B if and only if, taking B = T_{11}, T̃ = T − (I_n ⊗ B) and

C = {vec(T̃_{11}), vec(T̃_{12}), ..., vec(T̃_{1n}), ..., vec(T̃_{nn})},

we have rank{C, vec(I_m)} = 1; in that case vec(A)^T is the first row of the matrix C, with A ∈ R^{n×n}, B ∈ R^{m×m}, and m, n certain integers.


Corollary 3.2. An arbitrary square matrix T ∈ R^{nm×nm},

T = \begin{bmatrix} T_{11} & \cdots & T_{1n} \\ \vdots & \ddots & \vdots \\ T_{n1} & \cdots & T_{nn} \end{bmatrix} \quad \text{with } T_{ij} ∈ R^{m×m} \ (i = 1, ..., n, \ j = 1, ..., n),

may be decomposed to the form:

T = A ⊗ I + I ⊗ B ⇒ rank{vec(T_{11}), vec(T_{12}), ..., vec(T_{1n}), ..., vec(T_{nn})} ≤ 2,

with A ∈ R^{n×n}, B ∈ R^{m×m}, m, n integers.

Proof. Suppose A ∈ R^{n×n}, B ∈ R^{m×m} and let I_n, I_m be the identity matrices of size n, m respectively. Then A ⊗ I_m, I_n ⊗ B and A ⊕ B are block matrices of size nm × nm,

A ⊗ I = \begin{bmatrix} T^A_{11} & \cdots & T^A_{1n} \\ \vdots & \ddots & \vdots \\ T^A_{n1} & \cdots & T^A_{nn} \end{bmatrix}, \quad I ⊗ B = \begin{bmatrix} T^B_{11} & \cdots & T^B_{1n} \\ \vdots & \ddots & \vdots \\ T^B_{n1} & \cdots & T^B_{nn} \end{bmatrix}, \quad A ⊕ B = \begin{bmatrix} T_{11} & \cdots & T_{1n} \\ \vdots & \ddots & \vdots \\ T_{n1} & \cdots & T_{nn} \end{bmatrix},

with T_{ij}, T^A_{ij}, T^B_{ij} ∈ R^{m×m} and T_{ij} = T^A_{ij} + T^B_{ij} (i = 1, ..., n, j = 1, ..., n). Hence

rank{vec(T_{11}), ..., vec(T_{nn})}
= rank{vec(T^A_{11} + T^B_{11}), ..., vec(T^A_{nn} + T^B_{nn})}
= rank{vec(T^A_{11}) + vec(T^B_{11}), ..., vec(T^A_{nn}) + vec(T^B_{nn})}
≤ rank{vec(T^A_{11}), ..., vec(T^A_{nn})} + rank{vec(T^B_{11}), ..., vec(T^B_{nn})}
≤ 1 + 1 = 2. □

This can be demonstrated by using the following two theorems.

Theorem 3.3 (KPD [6]). An arbitrary matrix T ∈ R^{nm×nm},

T = \begin{bmatrix} T_{11} & \cdots & T_{1n} \\ \vdots & \ddots & \vdots \\ T_{n1} & \cdots & T_{nn} \end{bmatrix} \quad \text{with } T_{ij} ∈ R^{m×m} \ (i = 1, ..., n, \ j = 1, ..., n),

can be decomposed to the form:

T = A ⊗ B ⇔ rank{vec(T_{11}), vec(T_{12}), ..., vec(T_{1n}), ..., vec(T_{nn})} = 1,

with A ∈ R^{n×n}, B ∈ R^{m×m}, m, n integers.

Theorem 3.4 ([13]). Let C, D ∈ Rn×n . Then

0 ≤ rank(C + D) ≤ rank(C) + rank(D).

Remark 3.5. In general, the KSD of a matrix is not unique, because

(A + λI) ⊕ (B + µI) = (A + λI) ⊗ I + I ⊗ (B + µI)
= (A ⊗ I) + λ(I ⊗ I) + (I ⊗ B) + µ(I ⊗ I)
= (A ⊗ I) + (I ⊗ B) + (λ + µ)(I ⊗ I)
= (A ⊗ I) + (I ⊗ B)
= A ⊕ B

whenever λ + µ = 0, for arbitrary matrices A, B and constants λ, µ.

Algorithm 1: Kronecker sum decomposition (KSD)

step 1: input T, n, m; verify the size of T is nm × nm
step 2: define B = T_{11}
step 3: calculate T̃ = T − (I_n ⊗ B)
step 4: define T̃_{ij}, i = 1, ..., n, j = 1, ..., n
step 5: calculate C = {vec(T̃_{11}), vec(T̃_{12}), ..., vec(T̃_{1n}), ..., vec(T̃_{nn})}
step 6: if rank{C, vec(I_m)} = 1 go to step 7;
        else output "can not decompose"; end
step 7: define A such that vec(A)^T = the first row of the matrix C; output A, B; end
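A possible MATLAB realization of Algorithm 1 is sketched below; the function name ksd, the tolerance argument tol and the row-major packing of C are our own choices, not prescribed above.

    function [A, B, ok] = ksd(T, n, m, tol)
        % step 1: verify the size of T
        assert(all(size(T) == [n*m, n*m]), 'size of T must be nm x nm');
        B = T(1:m, 1:m);                    % step 2: B = T11
        Tt = T - kron(eye(n), B);           % step 3: Ttilde = T - (In (x) B)
        C = zeros(n*n, m*m);                % steps 4-5: rows are vec(Ttilde_ij)'
        k = 0;
        for i = 1:n
            for j = 1:n
                k = k + 1;
                blk = Tt((i-1)*m+1:i*m, (j-1)*m+1:j*m);
                C(k,:) = blk(:)';
            end
        end
        e = eye(m);
        ok = rank([C; e(:)'], tol) == 1;    % step 6: every block is a multiple of Im
        A = zeros(n);
        if ok
            A = reshape(C(:,1), n, n)';     % step 7: coefficients a_ij read off C
        end
    end

For instance, calling ksd on kron(A0, eye(m)) + kron(eye(n), B0) for any A0 ∈ R^{n×n}, B0 ∈ R^{m×m} returns ok = 1 and a pair A, B with kron(A, eye(m)) + kron(eye(n), B) equal to T up to rounding.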

3.2. Sufficient and necessary conditions of KSGD and KSID.


As we pointed out in the introduction, Fuxiang Liu [6] developed two new kinds of Kronecker decompositions, the Kronecker product gemel decomposition (KPGD) and the Kronecker product isomer decomposition (KPID). We will adapt these to the Kronecker sum gemel decomposition (KSGD) and the Kronecker sum isomer decomposition (KSID).
Theorem 3.6 (KPGD [6]). An arbitrary square matrix T ∈ R^{n²×n²}, (T ≠ 0),

T = \begin{bmatrix} T_{11} & \cdots & T_{1n} \\ \vdots & \ddots & \vdots \\ T_{n1} & \cdots & T_{nn} \end{bmatrix} \quad \text{with } T_{ij} ∈ R^{n×n} \ (i = 1, ..., n, \ j = 1, ..., n),

may be decomposed to the form T = A ⊗ A if and only if there is a sub-block T_{ij} ≠ 0 with t^{ij}_{i,j} > 0, where T_{ij} = (t^{ij}_{s,t})_{s=1,...,n, t=1,...,n}; if we denote T_{ij} / √(t^{ij}_{i,j}) = A or −A, then ∀ i, j, T_{ij} = t^{ij}_{i,j} A, with A ∈ R^{n×n}.

We adapt Theorem 3.6, valid for the Kronecker product, to the Kronecker sum by posing

T_{ii} − ((t^{ii}_{i,i})/2) I = A \quad \text{in place of} \quad T_{ij} / √(t^{ij}_{i,j}) = A.

Theorem 3.7 (KSGD). An arbitrary square matrix T ∈ R^{n²×n²},

T = \begin{bmatrix} T_{11} & \cdots & T_{1n} \\ \vdots & \ddots & \vdots \\ T_{n1} & \cdots & T_{nn} \end{bmatrix}, \quad \text{with } T_{ij} ∈ R^{n×n} \ (i = 1, ..., n, \ j = 1, ..., n),

can be decomposed to the form T = A ⊗ I + I ⊗ A if and only if there exists a diagonal sub-block T_{ii} = (t^{ii}_{s,t})_{s=1,...,n, t=1,...,n} such that T_{ij} = (t^{ii}_{i,j}) I if i ≠ j, and, denoting A = T_{ii} − ((t^{ii}_{i,i})/2) I, for every i, T_{ii} = ((t^{ii}_{i,i})/2) I + A, with A ∈ R^{n×n}.

Algorithm 2: Kronecker sum gemel decomposition (KSGD)

step 1: input T, n; verify the size of T is n² × n²
step 2: define flag = 0, T_{ij}, i = 1, ..., n, j = 1, ..., n
step 3: for (i, j), i = 1, ..., n, j = 1, ..., n
        define B = T_{ij} − I·(t^{ij}_{i,j})/2;
        if T = (B ⊗ I) + (I ⊗ B); flag = 1; break;
step 4: if flag = 1; A = B; output A; end
        else output "can not gemel decompose"; end
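A possible MATLAB sketch of Algorithm 2; the function name ksgd and the tolerance tol are our own choices.

    function [A, ok] = ksgd(T, n, tol)
        % step 1: verify the size of T
        assert(all(size(T) == [n*n, n*n]), 'size of T must be n^2 x n^2');
        ok = false;  A = zeros(n);
        for i = 1:n                                        % step 3
            for j = 1:n
                Tij = T((i-1)*n+1:i*n, (j-1)*n+1:j*n);     % sub-block T_ij
                B = Tij - eye(n) * Tij(i,j)/2;             % candidate matrix
                if norm(T - (kron(B, eye(n)) + kron(eye(n), B)), 'fro') <= tol
                    A = B;  ok = true;                     % step 4: T = A (+) A
                    return;
                end
            end
        end
    end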

Theorem 3.8 (KSID). An arbitrary square matrix T ∈ R^{n²×n²},

T = \begin{bmatrix} T_{11} & \cdots & T_{1n} \\ \vdots & \ddots & \vdots \\ T_{n1} & \cdots & T_{nn} \end{bmatrix}, \quad \text{with } T_{ij} ∈ R^{n×n} \ (i = 1, ..., n, \ j = 1, ..., n),

can be decomposed to the form T = A ⊗ I + I ⊗ A^T if and only if there exists a diagonal sub-block T_{ii} = (t^{ii}_{r,s})_{r=1,...,n, s=1,...,n} such that T_{ij} = (t^{ii}_{j,i}) I if i ≠ j, and, denoting A^T = T_{ii} − ((t^{ii}_{i,i})/2) I, for every i, T_{ii} = ((t^{ii}_{i,i})/2) I + A^T, with A ∈ R^{n×n}.

4. Applications
4.1. Preconditioning the Conjugate Gradient method.
To solve large sparse linear systems,
(4.1) T x = f,
where T is a sparse symmetric positive definite matrix, we use an iterative method such as the Conjugate Gradient method (CG) [10], which we summarize in the following algorithm. We start with x_0, p_0 = r_0 = f − T x_0; the algorithm is stopped when ||r_k||_2 / ||f||_2 ≤ tol.

Algorithm 3: Conjugate Gradient (CG)

d_k = T p_k;
α_k = (r_k^T r_k)/(p_k^T d_k);
x_{k+1} = x_k + α_k p_k;
r_{k+1} = r_k − α_k d_k;
β_k = (r_{k+1}^T r_{k+1})/(r_k^T r_k);
p_{k+1} = r_{k+1} + β_k p_k.
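A minimal MATLAB sketch of Algorithm 3; the function name cg_solve and the iteration cap maxit are our own choices.

    function x = cg_solve(T, f, x0, tol, maxit)
        x = x0;
        r = f - T*x;  p = r;                 % p0 = r0 = f - T x0
        rr = r'*r;
        for k = 1:maxit
            if norm(r)/norm(f) <= tol, break; end
            d = T*p;
            alpha = rr/(p'*d);
            x = x + alpha*p;
            r = r - alpha*d;
            rr_new = r'*r;
            beta = rr_new/rr;
            p = r + beta*p;
            rr = rr_new;
        end
    end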
A preconditioner is a matrix M that we use to speed up the convergence of the
CG method to get the Preconditioned Conjugate Gradient method (PCG) [10].

We start with x_0, r_0 = f − T x_0, p_0 = s_0 = M r_0; the algorithm is stopped when ||r_k||_2 / ||f||_2 ≤ tol.

Algorithm 4: Preconditioned Conjugate Gradient (PCG)

d_k = T p_k;
α_k = (s_k^T r_k)/(p_k^T d_k);
x_{k+1} = x_k + α_k p_k;
r_{k+1} = r_k − α_k d_k;
s_{k+1} = s_k − α_k M d_k;
β_k = (s_{k+1}^T r_{k+1})/(s_k^T r_k);
p_{k+1} = s_{k+1} + β_k p_k.
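MATLAB's built-in pcg implements this scheme; a sketch of its use with an incomplete Cholesky preconditioner, assuming a sparse symmetric positive definite T and a right-hand side f are already defined:

    L = ichol(T);                                    % IC(0) factor, T ~ L*L'
    [x, flag, relres, iter] = pcg(T, f, 1e-8, 2000, L, L');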

4.1.1. Numerical example.


Consider an open set Ω and a function f continuous on Ω. The Poisson problem is to find a function of two variables φ(x, y) defined on Ω which satisfies the following two relations:

(4.2) \quad \frac{\partial^2 \varphi}{\partial x^2} + \frac{\partial^2 \varphi}{\partial y^2} = f(x, y) \quad \text{on } Ω,

and the boundary condition

φ(x, y) = 0 on ∂Ω.

The finite difference approximation of the Poisson problem (4.2) can be written as follows:

(4.3) \quad 4w_{i,j} − w_{i+1,j} − w_{i−1,j} − w_{i,j+1} − w_{i,j−1} = −h² f(x_i, y_j),

where w_{i,j} is an approximation of φ(x_i, y_j), x_i = ih and y_j = jh, 0 ≤ i, j ≤ m + 1. The system (4.3) can be written as the following linear system [18]:

(4.4) \quad T x = f, \quad T ∈ R^{n×n}, \ f ∈ R^n, \ n = m²,
where

(4.5) \quad T = \begin{bmatrix} D & -I & 0 & \cdots & 0 \\ -I & D & -I & & \vdots \\ 0 & \ddots & \ddots & \ddots & 0 \\ \vdots & & -I & D & -I \\ 0 & \cdots & 0 & -I & D \end{bmatrix},

where I is the identity matrix of size m × m, and D is also an m × m matrix, with

D = \begin{bmatrix} 4 & -1 & 0 & \cdots & 0 \\ -1 & 4 & -1 & & \vdots \\ 0 & \ddots & \ddots & \ddots & 0 \\ \vdots & & -1 & 4 & -1 \\ 0 & \cdots & 0 & -1 & 4 \end{bmatrix},

and

x = [φ_1, φ_2, φ_3, ..., φ_n]^T, \quad f = [f_1, f_2, f_3, ..., f_n]^T.
For m = 3, the system (4.4) becomes

(4.6) \quad \begin{bmatrix} 4 & -1 & 0 & -1 & 0 & 0 & 0 & 0 & 0 \\ -1 & 4 & -1 & 0 & -1 & 0 & 0 & 0 & 0 \\ 0 & -1 & 4 & 0 & 0 & -1 & 0 & 0 & 0 \\ -1 & 0 & 0 & 4 & -1 & 0 & -1 & 0 & 0 \\ 0 & -1 & 0 & -1 & 4 & -1 & 0 & -1 & 0 \\ 0 & 0 & -1 & 0 & -1 & 4 & 0 & 0 & -1 \\ 0 & 0 & 0 & -1 & 0 & 0 & 4 & -1 & 0 \\ 0 & 0 & 0 & 0 & -1 & 0 & -1 & 4 & -1 \\ 0 & 0 & 0 & 0 & 0 & -1 & 0 & -1 & 4 \end{bmatrix} \begin{bmatrix} φ_1 \\ φ_2 \\ φ_3 \\ φ_4 \\ φ_5 \\ φ_6 \\ φ_7 \\ φ_8 \\ φ_9 \end{bmatrix} = \begin{bmatrix} f_1 \\ f_2 \\ f_3 \\ f_4 \\ f_5 \\ f_6 \\ f_7 \\ f_8 \\ f_9 \end{bmatrix}.

The matrix T may be decomposed as KSGD (T = A ⊗ I + I ⊗ A), where I is the m × m identity matrix and A ∈ R^{m×m}, with
A = \begin{bmatrix} 2 & -1 & 0 & \cdots & 0 \\ -1 & 2 & -1 & & \vdots \\ 0 & \ddots & \ddots & \ddots & 0 \\ \vdots & & -1 & 2 & -1 \\ 0 & \cdots & 0 & -1 & 2 \end{bmatrix}.
It is clear that the matrix T is symmetric positive definite [10], so the linear system (4.4) can be solved by the Conjugate Gradient method.

The following table summarizes the results of solving the Poisson problem with the CG method and the PCG method using different preconditioners: the incomplete Cholesky preconditioner IC(0), the incomplete Cholesky with threshold dropping preconditioner ICT(1e-3), and finally our preconditioner IC-K defined by M = L ⊗ I + I ⊗ L, where A ≈ LL^T is the incomplete Cholesky factorisation of the matrix A. This is done using MATLAB [19].
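The following MATLAB sketch shows our reading of the IC-K construction: the 2D Poisson matrix is built as the Kronecker sum of the 1D operator A, A is factored once with ichol, and the factor M = L ⊗ I + I ⊗ L is passed to pcg; applying it as a split preconditioner M, M' is our assumption.

    m = 200;  e = ones(m,1);
    A = spdiags([-e 2*e -e], -1:1, m, m);      % 1D operator, m x m, sparse
    I = speye(m);
    T = kron(A, I) + kron(I, A);               % T = A (+) A, size m^2 x m^2
    f = ones(m^2, 1);
    L = ichol(A);                              % A ~ L*L', computed once
    M = kron(L, I) + kron(I, L);               % IC-K factor, lower triangular
    x = pcg(T, f, 1e-8, 2000, M, M');          % preconditioned CG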

PCG with tol = 1e-8

Matrix size (n)   Prec        # Iterations   P-time   It-time    Tot-time
40 000            No-P        369            0        0.9822     0.9822
                  IC-K        63             0.0203   0.3244     0.3447
                  IC(0)       139            0.0051   0.7240     0.7291
                  ICT(1e-3)   35             0.0341   0.2832     0.3174
160 000           No-P        734            0        9.1999     9.1999
                  IC-K        95             0.0926   2.2571     2.3497
                  IC(0)       274            0.0263   6.3183     6.3446
                  ICT(1e-3)   66             0.1206   2.3659     2.4865
360 000           No-P        1105           0        31.4799    31.4799
                  IC-K        119            0.2008   6.5616     6.7624
                  IC(0)       401            0.0520   22.8266    22.8786
                  ICT(1e-3)   98             0.3789   9.5338     9.9127
640 000           No-P        1479           0        80.2185    80.2185
                  IC-K        140            0.3864   15.9863    16.3727
                  IC(0)       533            0.0889   64.2007    64.2896
                  ICT(1e-3)   125            1.3558   23.5370    24.8927
1 000 000         No-P        /              /        /          /
                  IC-K        159            0.5161   22.9225    23.4386
                  IC(0)       666            0.1277   115.8678   115.9954
                  ICT(1e-3)   143            1.0350   39.8834    40.9184

Table 1. Comparison of the preconditioners IC-K, IC(0) and ICT(1e-3) for solving the Poisson problem with the Preconditioned Conjugate Gradient method (PCG).

Figure 1. Relative residual versus iteration number and execution time for PCG with variant incomplete Cholesky factorization preconditioners.

4.2. Transforming a linear system into a matrix equation.

4.2.1. Sylvester equation.

Definition 4.1. Let A, B, F ∈ R^{n×n} be square matrices. The Sylvester matrix equation is a matrix equation of the form:

(4.7) \quad BX + XA^T = F,

in which the matrix X ∈ R^{n×n} is the unknown.

Property 4.2. If the matrix T ∈ R^{n²×n²} may be decomposed as KSD (T = A ⊗ I + I ⊗ B), such that A, B ∈ R^{n×n}, then

T x = f ⇔ BX + XA^T = F,

where X, F ∈ R^{n×n} and x = vec(X), f = vec(F).

Proof. This follows immediately from Theorem 2.13 (5):

T x = f ⇔ (A ⊗ I + I ⊗ B)x = f
⇔ (A ⊗ I + I ⊗ B)vec(X) = vec(F)
⇔ (A ⊗ I)vec(X) + (I ⊗ B)vec(X) = vec(F)
⇔ vec(XA^T) + vec(BX) = vec(F)
⇔ BX + XA^T = F. □
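A numerical illustration of Property 4.2 in MATLAB, with random data (sylvester requires R2014a or later; the diagonal shift of B is our choice to keep the problem well posed):

    n = 5;
    A = randn(n);  B = randn(n) + n*eye(n);
    T = kron(A, eye(n)) + kron(eye(n), B);     % T = A (x) I + I (x) B
    F = randn(n);  f = F(:);
    x = T \ f;                                 % solve the linear system T x = f
    X = sylvester(B, A', F);                   % solve B X + X A^T = F
    norm(x - X(:))                             % approximately 0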


4.2.2. Lyapunov equation.

Definition 4.3 ([8]). The Lyapunov matrix equation is a particular case of the Sylvester matrix equation when B equals A. For square matrices A, F ∈ R^{n×n}, this equation is written in the following form:

(4.8) \quad AX + XA^T = F.

Property 4.4. If T ∈ R^{n²×n²} can be decomposed as KSGD (T = A ⊗ I + I ⊗ A) such that A ∈ R^{n×n}, then

T x = f ⇔ AX + XA^T = F,

where X, F ∈ R^{n×n} and x = vec(X), f = vec(F).
4.3. The Conjugate Gradient algorithm in matrix format.
According to Property 4.4, the linear system (4.4) becomes the following Lyapunov equation:

(4.9) \quad AX + XA^T = F,

where x = vec(X) and f = vec(F). Since the Poisson matrix A is symmetric, A^T = A and (4.9) reads AX + XA = F.

It is advantageous to use a matrix equation formulation [10] for the conjugate gradient algorithm when n is large. We define the matrices X, R, P, D, F ∈ R^{m×m} by x = vec(X), r = vec(R), p = vec(P), d = vec(D) and f = vec(F). Then

T x = f ⇔ AX + XA = F

and d = T p ⇔ D = AP + P A. This leads to the following algorithm. We start with X_0, P_0 = R_0 = F − (AX_0 + X_0 A); the iteration is stopped when
\sqrt{\left( \sum_{i=1}^{m} \sum_{j=1}^{m} (R_{ij})_k^2 \right) \Big/ \left( \sum_{i=1}^{m} \sum_{j=1}^{m} F_{ij}^2 \right)} \leq tol,

i.e., ||R_k||_F / ||F||_F ≤ tol.

Algorithm 5: Conjugate Gradient in matrix format (CG-M)

D_k = A P_k + P_k A;
α_k = \left( \sum_{i,j} (R_{ij})_k^2 \right) / \left( \sum_{i,j} (P_{ij})_k (D_{ij})_k \right);
X_{k+1} = X_k + α_k P_k;
R_{k+1} = R_k − α_k D_k;
β_k = \left( \sum_{i,j} (R_{ij})_{k+1}^2 \right) / \left( \sum_{i,j} (R_{ij})_k^2 \right);
P_{k+1} = R_{k+1} + β_k P_k.
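A minimal MATLAB sketch of Algorithm 5 for AX + XA = F; the function name cgm_solve and the iteration cap maxit are our own choices.

    function X = cgm_solve(A, F, X0, tol, maxit)
        X = X0;
        R = F - (A*X + X*A);  P = R;           % P0 = R0 = F - (A X0 + X0 A)
        rr = sum(R(:).^2);  ff = sum(F(:).^2);
        for k = 1:maxit
            if sqrt(rr/ff) <= tol, break; end
            D = A*P + P*A;                     % matrix-format product T p
            alpha = rr/sum(P(:).*D(:));
            X = X + alpha*P;
            R = R - alpha*D;
            rr_new = sum(R(:).^2);
            beta = rr_new/rr;
            P = R + beta*P;
            rr = rr_new;
        end
    end

The advantage over CG is that each iteration works with m × m matrices instead of the full n × n matrix T with n = m².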

Example 4.5. We solve the linear system (4.4) with the two algorithms CG and CG-M for various m. The following table shows the difference in speed between CG and CG-M. We use tol = 1e−8, f = [1, 1, ..., 1]^T and x_0 = [0, 0, ..., 0]^T.

CG and CG-M with tol = 1e-8

                   # Iterations        Norm ||T x − f||             Time
Matrix size (n)    CG       CG-M       CG           CG-M            CG        CG-M
40 000             369      369        1.9968e-06   6.6725e-07      0.4446    0.3459
160 000            734      734        3.8562e-06   1.1138e-06      4.7613    4.8170
360 000            1105     1105       5.9286e-06   1.6579e-06      15.9729   17.3569
640 000            1479     1479       7.7670e-06   2.1074e-06      44.2815   43.1813
1 000 000          1853     1853       9.8535e-06   2.6705e-06      94.3168   87.0503

Table 2. The number of iterations and the execution time for solving the Poisson problem on an m × m grid for various m by the CG method and the CG-M method.

Figure 2. Comparison of CG and CG-M methods in terms of execution time.

5. Conclusion
Defining the Kronecker sum decomposition of matrices can be very useful for speeding up the solution of linear systems T x = f, either by creating preconditioners or by converting the systems into matrix equations.

6. Acknowledgements.
The authors wish to warmly thank the referees for all their useful suggestions.

References
[1] C. F. Van Loan, The ubiquitous Kronecker product, Journal of Computational and Applied
Mathematics 123 (2000) 85–100.
[2] S. Liu and G. Trenkler, Hadamard, Khatri-Rao, Kronecker And Other Matrix Products, Inter-
national Journal of Information and Systems Sciences 4 (1) (2008) 160–177.
[3] P. A. Regalia and S. K. Mitra, Kronecker products, unitary matrices and signal processing applications, SIAM Review 31 (4) (1989) 586–613.
[4] D. S. Bernstein, Matrix Mathematics, Theory, Facts and Formulas, 2nd edition, Princeton
University Press 2009.
[5] E. M. Sadek, Méthodes itératives pour la résolution d’équations matricielles, Modélisation et
simulation, Université du Littoral Côte d’Opale 2015.
[6] F. Liu, New Kronecker product decompositions and its applications, International Journal of
Engineering and Science 1 (11) (2012) 25–30.
[7] H. Zhang and F. Ding, On the Kronecker products and their applications, Journal of Applied
Mathematics 2013 (2013) Article ID 296185, 8 pages.
[8] E. Jarlebring, Numerical Methods for Lyapunov Equations, KTH Royal Institute of Technology
2017.
[9] B. N. Datta, Numerical Methods For Linear Control Systems Design And Analysis, Elsevier
Academic Press 2003.
[10] T. Lyche, Numerical Linear Algebra and Matrix Factorizations, Texts in Computational Sci-
ence and Engineering, Springer 2020.
[11] S. Banerjee and A. Roy, Linear algebra and matrix analysis for statistics, Texts in Statistical
Science 2014.
[12] W. H. Steeb, Matrix Calculus and Kronecker Product with Applications and C++ Programs,
International School of Scientific Computing 1997.
[13] A. J. Laub, Matrix Analysis for Scientists and Engineers, Industrial and App Math, SIAM
Philadelphia 2005.
[14] V. Amoia, G. De Micheli and M. Santomauro, Computer-oriented formulation of transition-
rate matrices via Kronecker algebra, IEEE Transactions on Reliability 30 (1981) 123–132.
[15] A. Graham, Kronecker Products and Matrix Calculus With Applications, Ellis Horwood Lim-
ited, Chichester 1981.
[16] R. A. Horn and C. R. Johnson, Topics in Matrix Analysis, Cambridge University Press, Cam-
bridge 1991.
[17] Å. Björck, Numerical Methods in Matrix Computations, Texts in Applied Mathematics, Volume 59, Springer 2015.
[18] E. Kreyszig, Advanced Engineering Mathematics, John Wiley & Sons, Inc. 2011.
[19] MATLAB User’s Guide, Version 8.1.0.604, The Math Works (R2013a).

Youssouf Mezzar (youcefmezzar@gmail.com)


Université Oran1, Laboratory of mathematics and its applications (LAMAP), P.O.
Box 1524, Oran 31000, Algeria
Kacem Belghaba (belghaba@yahoo.fr)
Université Oran1, Laboratory of mathematics and its applications (LAMAP), P.O.
Box 1524, Oran 31000, Algeria
