97 Matysiak Przewozniak Rulinska
Lukasz Matysiak∗
Kazimierz Wielki University
ul. Powstańców Wielkopolskich 2
85-090 Bydgoszcz
Poland
lukmat@ukw.edu.pl
Weronika Przewoźniak
Kazimierz Wielki University
ul. Powstańców Wielkopolskich 2
85-090 Bydgoszcz
Poland
weronika.przewozniak@student.ukw.edu.pl
Natalia Rulińska
ul. Powstańców Wielkopolskich 2
85-090 Bydgoszcz
Poland
natalia.rulinska@student.ukw.edu.pl
Abstract. Matrices are popular and widely used in mathematics and other fields of science. Every mathematician learns the properties of finite matrices during their studies. In this paper, we consider the basic theory of infinite matrices. So far, only scattered references and a few results exist in certain scientific fields; infinite matrices have not been thoroughly researched.
Keywords: matrix, determinant, inverse matrix, rank.
1. Introduction
*. Corresponding author
1236 LUKASZ MATYSIAK, WERONIKA PRZEWOŹNIAK and NATALIA RULIŃSKA
Matrices with an infinite number of rows and/or columns are also considered: formally, it suffices that for every pair of elements indexing rows and columns there is a well-defined matrix element (the index sets need not even be subsets of the natural numbers). As in the finite case, we can define addition, subtraction, multiplication by a scalar, and transposition, although matrix multiplication requires additional assumptions.
Applications of matrices are found in most scientific fields ([5]). In every
branch of physics, including classical mechanics, optics, electromagnetism, quan-
tum mechanics, and quantum electrodynamics, they are used to study physical
phenomena, such as the motion of rigid bodies. In computer graphics, they are
used to manipulate 3D models and project them onto a 2-dimensional screen. In
probability theory and statistics, stochastic matrices are used to describe sets of
probabilities. For example, they are used within the PageRank algorithm that ranks the pages in a Google search ([4]). Matrix calculus generalizes classical analytical notions such as derivatives and exponentials to higher dimensions.
Matrices are used in economics to describe systems of economic relationships.
A major branch of numerical analysis is devoted to the development of effi-
cient algorithms for matrix computations, a subject that is centuries old and is
today an expanding area of research. Matrix decomposition methods simplify
computations, both theoretically and practically. Algorithms that are tailored
to particular matrix structures, such as sparse matrices and near-diagonal matrices, expedite computations in the finite element method and other computations.
Infinite matrices occur in planetary theory and in atomic theory. A simple ex-
ample of an infinite matrix is the matrix representing the derivative operator,
which acts on the Taylor series of a function.
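As an illustration, a finite truncation of this infinite derivative matrix can be sketched in a few lines. This is a minimal numerical sketch; the truncation size and the test function $e^x$ are our own choices, not part of the original text:

```python
import math
import numpy as np

def derivative_matrix(n):
    """Truncated n x n block of the infinite matrix of d/dx acting on
    Taylor coefficients (c_0, c_1, c_2, ...): entry D[k, k+1] = k + 1,
    since d/dx of x^{k+1} is (k+1) x^k."""
    D = np.zeros((n, n))
    for k in range(n - 1):
        D[k, k + 1] = k + 1
    return D

# Taylor coefficients of e^x are 1/k!; its derivative is e^x again,
# so applying D reproduces the same coefficients (up to truncation).
n = 8
c = np.array([1.0 / math.factorial(k) for k in range(n)])
Dc = derivative_matrix(n) @ c
print(np.allclose(Dc[:n - 1], c[:n - 1]))  # → True
```

The last truncated coefficient is lost at each application, which is exactly the price of cutting an infinite matrix down to a finite block.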
In this paper, we formalize and develop the basic theory of infinite matrices.
So far, they have not been thoroughly researched, despite their significant use
in some fields of science.
2. Results
A matrix of infinite dimension is a matrix for which the number of rows or the number of columns is infinite.
We define zero, triangular, diagonal, identity, and transposed matrices of infinite dimension analogously to the finite case.
A square matrix of an infinite dimension is a matrix in which the number of
rows is equinumerous to the number of columns.
Matrix addition and multiplication by a scalar are also defined analogously.
Corollary 2.1. If we try to multiply a matrix $A_{m\times n}$ by a matrix $B_{n\times k}$, we get the following conclusions:

(a) If $m = \infty$, $k = \infty$, then $AB = C_{\infty\times\infty}$.

(b) If $n = \infty$, then $AB = C_{m\times k} = [c_{ij}]$, where $c_{ij} = \sum_{l=1}^{\infty} a_{il} b_{lj}$ ($1 \leqslant i \leqslant m$, $1 \leqslant j \leqslant k$) must be a convergent series.
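A minimal numerical sketch of case (b). The entries $a_{1l} = 2^{-l}$ and $b_{l1} = 3^{-l}$ are our own illustrative choice; the resulting series for $c_{11}$ is geometric with sum $1/5$, so truncations converge to it:

```python
import numpy as np

# Illustrative entries of infinite matrices A (1 x ∞) and B (∞ x 1):
# a_{1l} = 2^{-l}, b_{l1} = 3^{-l}, so c_{11} = sum_{l>=1} 6^{-l} = 1/5.
def truncated_product_entry(N):
    l = np.arange(1, N + 1)
    a = 2.0 ** -l
    b = 3.0 ** -l
    return float(a @ b)  # partial sum of the defining series

print(abs(truncated_product_entry(60) - 0.2) < 1e-12)  # → True
```

If the series diverged, the truncations would not stabilize, which is why convergence must be assumed in (b).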
MATRICES OF INFINITE DIMENSIONS AND THEIR APPLICATIONS 1237
Let $M_1(\infty, R) = M_1(R)$ denote the set of all square matrices of infinite dimension with coefficients from an integral domain $R$, where all rows and columns form convergent series. Then $M_1(\infty, R)$ is a ring. It is easy to check that $\{A \in M_1(\infty, \mathbb{Z}) : \det A \in \{-1, 1\}\}$ and $\{A \in M_1(\infty, \mathbb{Z}) : \det A = 1\}$ are multiplicative groups.
The determinant of a square matrix $A$ of finite dimension can be easily determined by the formula
$$\det A = e^{\operatorname{tr}(\log A)},$$
where $\log A = \sum_{k=1}^{\infty} (-1)^{k+1} \frac{(A-I)^k}{k}$. For an infinite dimension we must add the assumption that $\operatorname{tr}(\log A)$ is a convergent series.
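A sketch of this identity on a finite matrix close to the identity. The test matrix and the number of series terms are our own assumptions; the series for $\log A$ converges when $\|A - I\| < 1$:

```python
import numpy as np

def log_series(A, terms=60):
    """log A = sum_{k>=1} (-1)^{k+1} (A - I)^k / k, valid for ||A - I|| < 1."""
    n = A.shape[0]
    X = A - np.eye(n)
    P = np.eye(n)                       # running power (A - I)^k
    L = np.zeros_like(A, dtype=float)
    for k in range(1, terms + 1):
        P = P @ X
        L += ((-1) ** (k + 1)) * P / k
    return L

rng = np.random.default_rng(0)
A = np.eye(4) + 0.1 * rng.standard_normal((4, 4))  # close to I
print(np.isclose(np.exp(np.trace(log_series(A))), np.linalg.det(A)))  # → True
```

For infinite matrices the same recipe requires, in addition, that the trace itself is a convergent series, as stated above.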
$$\det(AB) = \sum_{1 \leqslant j_1 < j_2 < \cdots < j_m \leqslant n} \det(A_{j_1 j_2 \ldots j_m}) \det(B_{j_1 j_2 \ldots j_m}).$$
Hence:
$$\begin{aligned}
\det(AB) &= \sum_{1 \leqslant l_1, \ldots, l_m \leqslant m} \eta(l_1, \ldots, l_m) \Big(\sum_{k=1}^{n} a_{1k} b_{k l_1}\Big) \cdots \Big(\sum_{k=1}^{n} a_{mk} b_{k l_m}\Big)\\
&= \sum_{1 \leqslant k_1, \ldots, k_m \leqslant n} a_{1 k_1} \cdots a_{m k_m} \sum_{1 \leqslant l_1, \ldots, l_m \leqslant m} \eta(l_1, \ldots, l_m)\, b_{k_1 l_1} \cdots b_{k_m l_m}\\
&= \sum_{1 \leqslant k_1, \ldots, k_m \leqslant n} a_{1 k_1} \cdots a_{m k_m} \det(B_{k_1 \ldots k_m})\\
&= \sum_{1 \leqslant k_1, \ldots, k_m \leqslant n} a_{1 k_1} \eta(k_1, \ldots, k_m) \cdots a_{m k_m} \det(B_{j_1 \ldots j_m})\\
&= \sum_{1 \leqslant j_1 < j_2 < \cdots < j_m \leqslant n} \det(A_{j_1 \ldots j_m}) \det(B_{j_1 \ldots j_m}).
\end{aligned}$$
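This identity (the Cauchy–Binet formula) can be checked numerically for small finite matrices; the sizes and random entries below are our own choice:

```python
import numpy as np
from itertools import combinations

def cauchy_binet_rhs(A, B):
    """Sum of det(A_J) * det(B_J) over all m-element index subsets J,
    where A is m x n, B is n x m, A_J picks columns J and B_J picks rows J."""
    m, n = A.shape
    total = 0.0
    for J in combinations(range(n), m):
        total += np.linalg.det(A[:, J]) * np.linalg.det(B[J, :])
    return total

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 4))
B = rng.standard_normal((4, 2))
print(np.isclose(np.linalg.det(A @ B), cauchy_binet_rhs(A, B)))  # → True
```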
The following two Propositions give us a way to compute the inverse matrix.
Proposition 2.2. Let $A$ be a matrix in which every row and every column forms a convergent series, such that $\|I - A\| < 1$, where $\|\cdot\|$ is a submultiplicative norm. Then
$$A^{-1} = I + (I - A) + (I - A)^2 + \ldots$$
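A minimal sketch of this Neumann-type series on a finite truncation; the test matrix is our own assumption, chosen so that $\|I - A\| < 1$:

```python
import numpy as np

def neumann_inverse(A, terms=100):
    """A^{-1} = I + (I - A) + (I - A)^2 + ..., valid when ||I - A|| < 1."""
    n = A.shape[0]
    X = np.eye(n) - A
    S = np.eye(n)       # partial sum of the series
    P = np.eye(n)       # running power (I - A)^k
    for _ in range(terms):
        P = P @ X
        S += P
    return S

A = np.eye(3) - 0.2 * np.array([[1.0, 0.5, 0.0],
                                [0.0, 1.0, 0.5],
                                [0.5, 0.0, 1.0]])
print(np.allclose(neumann_inverse(A), np.linalg.inv(A)))  # → True
```

The number of terms needed grows as $\|I - A\|$ approaches 1, so in practice the series is only useful for matrices reasonably close to the identity.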
3. Applications
Let $A$ be an $m \times n$ matrix over an arbitrary field $F$ ($m, n \in \mathbb{N} \cup \{\infty\}$). There is an associated linear mapping $f : F^n \to F^m$ defined by $f(x) = Ax$. The rank of $A$ is the dimension of the image of $f$. This definition has the advantage that it can be applied to any linear map without the need for a specific matrix.
Let
$$\begin{aligned}
a_{11} x_1 + a_{12} x_2 + a_{13} x_3 + \ldots &= b_1\\
a_{21} x_1 + a_{22} x_2 + a_{23} x_3 + \ldots &= b_2\\
a_{31} x_1 + a_{32} x_2 + a_{33} x_3 + \ldots &= b_3\\
&\;\;\vdots
\end{aligned}$$
Thus, the fact that $b \in \operatorname{Im}(f)$ is equivalent to the fact that $b$ belongs to the span of the column vectors $I_1, I_2, I_3, \ldots$ of the matrix $A$:
$$b = x_1 I_1 + x_2 I_2 + x_3 I_3 + \ldots$$
Hence
$$A = (I_1\; I_2\; I_3\; \ldots) \quad \text{and} \quad [A \mid b] = (I_1\; I_2\; I_3\; \ldots\; b)$$
have the same rank. Thus, the system is consistent if $\operatorname{rank} A = \operatorname{rank}[A \mid b]$.
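This rank criterion (the Kronecker–Capelli theorem) can be sketched on finite truncations with numpy; the example matrices below are our own:

```python
import numpy as np

def is_consistent(A, b):
    """Kronecker-Capelli: the system Ax = b is consistent
    iff rank A == rank [A|b]."""
    Ab = np.column_stack([A, b])
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(Ab)

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])                    # rank 1
print(is_consistent(A, np.array([1.0, 2.0])))  # b in the column span → True
print(is_consistent(A, np.array([1.0, 3.0])))  # b outside the span → False
```

For a genuinely infinite system one would apply this test to increasing truncations, which is itself only a heuristic.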
We will now try to find the eigenvalues and eigenvectors of an infinite matrix. We solve the characteristic equation:
$$\det(A - \lambda I) = 0.$$
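In practice the characteristic equation can only be evaluated on finite truncations of an infinite matrix; a sketch with numpy, where the small $2 \times 2$ example is our own:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigvals = np.linalg.eigvals(A)      # roots of det(A - lambda*I) = 0
print(sorted(np.round(eigvals, 6)))  # → [1.0, 3.0]

# Each eigenvalue makes the characteristic determinant vanish:
print(all(abs(np.linalg.det(A - lam * np.eye(2))) < 1e-9 for lam in eigvals))  # → True
```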
Let $A$ be a matrix of any dimension whose rows are given linearly independent vectors. We build the block matrix $[AA^T \mid A]$. Applying elementary row operations, we bring it to a block matrix of the form $[G \mid A']$, where $G$ is an upper triangular matrix. The rows of $A'$ form orthogonal vectors.
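A sketch of this orthogonalization procedure on a finite matrix. The elimination below uses no pivoting, which is safe here because $AA^T$ is positive definite when the rows are independent; the example $A$ is our own choice:

```python
import numpy as np

def orthogonalize_rows(A):
    """Row-reduce the block matrix [A A^T | A] until the left block is
    upper triangular; the right block then has mutually orthogonal rows."""
    A = np.asarray(A, dtype=float)
    m = A.shape[0]
    M = np.hstack([A @ A.T, A])
    for i in range(m):
        for j in range(i + 1, m):
            factor = M[j, i] / M[i, i]   # pivot M[i, i] > 0 for a Gram matrix
            M[j] -= factor * M[i]
    return M[:, m:]                       # the transformed right block A'

A = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
Q = orthogonalize_rows(A)
G = Q @ Q.T
print(np.allclose(G, np.diag(np.diag(G))))  # rows of Q are orthogonal → True
```

The method works because the row operations act as a unit lower triangular matrix $E$, so $Q Q^T = E\,(AA^T)\,E^T$ turns out diagonal; it is essentially Gram-Schmidt phrased through the Gram matrix.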
References
[1] A. Cayley, A memoir on the theory of matrices, 1855.