Complex-Valued Matrix Differentiation: Techniques and Key Results
Abstract
A systematic theory is introduced for finding the derivatives of complex-valued matrix functions with respect
to a complex-valued matrix variable and the complex conjugate of this variable. In the framework introduced,
the differential of the complex-valued matrix function is used to identify the derivatives of this function. Matrix
differentiation results are derived and summarized in tables which can be exploited in a wide range of signal
processing related situations. The Hessian matrix of a scalar complex function is also defined, and it is shown how
it can be obtained from the second-order differential of the function. The methods given are general such that many
other results can be derived using the framework introduced, although only important cases are treated.
Keywords: Complex differentials, non-analytical complex functions, complex matrix derivatives, complex Hessian.
I. INTRODUCTION
In many engineering problems, the unknown parameters are complex-valued vectors and matrices and, often, the
task of the system designer is to find the values of these complex parameters which optimize a chosen criterion
function. Examples of areas where such optimization problems appear include telecommunications,
where digital filters can contain complex-valued coefficients [1], electric circuits [2], control theory [3], adaptive
filters [4], acoustics, optics, mechanical vibrating systems, heat conduction, fluid flow, and electrostatics [5]. For
solving this kind of optimization problem, one approach is to find necessary conditions for optimality. When a
scalar real-valued function depends on a complex-valued matrix parameter, the necessary conditions for optimality
can be found by either setting the derivative of the function with respect to the complex-valued matrix parameter or
its complex conjugate to zero. Differentiation results are well-known for certain classes of functions, e.g., quadratic
functions, but can be tricky for others. This paper provides the tools for finding derivatives in a systematic way.
In an effort to build adaptive optimization algorithms, it will also be shown that the direction of maximum rate
of change of a real-valued scalar function, with respect to the complex-valued matrix parameter, is given by the
derivative of the function with respect to the complex conjugate of the complex-valued input matrix parameter. Of
course, this is a generalization of a well-known result for scalar functions of vector variables. A general framework
is introduced here showing how to find the derivative of complex-valued scalar-, vector-, or matrix functions with
respect to the complex-valued input parameter matrix and its complex conjugate. The main contribution of this
paper lies in the proposed approach for how to obtain derivatives in a way that is both simple and systematic,
based on the so-called differential of the function. In this paper, it is assumed that the functions are differentiable
with respect to the complex-valued parameter matrix and its complex conjugate, and it will be seen that these two
parameter matrices should be treated as independent when finding the derivatives, as is classical for scalar variables.
It is also shown how the Hessian matrix, i.e., the second-order derivative, of a complex scalar function can be calculated.
The proposed theory is useful when solving numerous problems which involve optimization when the unknown parameter is a complex-valued matrix.
The problem at hand has been treated for real-valued matrix variables in [6], [7], [8], [9], [10]. Four additional
references that give a brief treatment of the case of real-valued scalar functions which depend on complex-valued vectors
are Appendix B of [11], Appendix 2.B in [12], Subsection 2.3.10 of [13], and the article [14]. The article [15]
serves as an introduction to this area for complex-valued scalar functions with complex-valued argument vectors.
Results on complex differentiation theory are given in [16], [17] for differentiation with respect to complex-valued
scalars and vectors; however, the more general matrix case is not considered there. Examples of problems where the
unknown matrix is complex-valued are wide-ranging, including precoding of MIMO systems [18], linear
equalization design [19], and array signal processing [20], to cite only a few.
Some of the most relevant applications to signal processing and communication problems are presented here, with key results
highlighted and other illustrative examples listed in tables.
TABLE I
CLASSIFICATION OF FUNCTIONS.

| Function type | Scalar variables z, z* ∈ C | Vector variables z, z* ∈ C^{N×1} | Matrix variables Z, Z* ∈ C^{N×Q} |
| Scalar function f ∈ C | f(z, z*), f : C × C → C | f(z, z*), f : C^{N×1} × C^{N×1} → C | f(Z, Z*), f : C^{N×Q} × C^{N×Q} → C |
| Vector function f ∈ C^{M×1} | f(z, z*), f : C × C → C^{M×1} | f(z, z*), f : C^{N×1} × C^{N×1} → C^{M×1} | f(Z, Z*), f : C^{N×Q} × C^{N×Q} → C^{M×1} |
| Matrix function F ∈ C^{M×P} | F(z, z*), F : C × C → C^{M×P} | F(z, z*), F : C^{N×1} × C^{N×1} → C^{M×P} | F(Z, Z*), F : C^{N×Q} × C^{N×Q} → C^{M×P} |
The complex Hessian matrix is defined for complex scalar functions, and it is shown how it can be obtained from the second-order differential of the function.
Hessian matrices can be used to check whether a stationary point is a minimum, a maximum, or a saddle point.
The rest of this paper is organized as follows: Section II contains a discussion on the differences between analytical
functions, which are usually studied in mathematical courses for engineers, and non-analytical functions, which
are often encountered when dealing with engineering problems of complex variables. In Section III, the complex
differential is introduced, and based on this differential, the definition of the derivatives of complex-valued matrix
functions with respect to the complex-valued matrix argument and its complex conjugate is given in Section IV. The
key procedure showing how the derivatives can be found from the differential of a function is also presented in
Section IV. Section V contains important results such as the chain rule, equivalent conditions for finding stationary
points, and in which direction the function has the maximum rate of change. In Section VI, several key results
are placed in tables and some results are derived for various cases with high relevance for signal processing and
communication problems. The Hessian matrix of scalar complex-valued functions dependent on complex matrices
is defined in Section VII, and it is shown how it can be obtained. Section VIII contains some conclusions. Some of the proofs are placed in the appendices.
Notation: The following conventions are always used in this article: Scalar quantities (variables z or functions f )
are denoted by lowercase symbols, vector quantities (variables z or functions f ) are denoted by lowercase boldface
symbols, and matrix quantities (variables Z or functions F ) are denoted by capital boldface symbols. The types
of functions used throughout this paper are classified in Table I. From the table, it is seen that all the functions
depend on a complex variable and the complex conjugate of the same variable. Let j = √(−1) be the imaginary
unit. Let the symbol z = x + jy denote a complex variable, where the real and imaginary parts are denoted by
x and y , respectively. The real and imaginary operators return the real and imaginary parts of the input matrix,
respectively. These operators are denoted by Re{·} and Im{·}. If Z ∈ C^{N×Q} is a complex-valued matrix, then
Z = Re {Z} + j Im {Z}, and Z ∗ = Re {Z} − j Im {Z}, where Re {Z} ∈ RN ×Q , Im {Z} ∈ RN ×Q , and the
operator (·)∗ denotes complex conjugate of the matrix it is applied to. The real and imaginary operators can be
expressed as Re{Z} = (1/2)(Z + Z*) and Im{Z} = (1/(2j))(Z − Z*).
Definition 1: Let D ⊆ C be the domain of definition of the function f : D → C. The function f, which depends
on a complex scalar variable z, is an analytical function in the domain D [5] if the limit

lim_{Δz→0} [f(z + Δz) − f(z)] / Δz

exists for all z ∈ D.
If f (z) satisfies the Cauchy-Riemann equations [5], then it is analytical. A function that is complex differentiable
is also named analytic, holomorphic, or regular. The Cauchy-Riemann condition can be formulated as one equation
as ∂f/∂z* = 0, where the derivative of f with respect to z* and z, respectively, is found by treating the variable z
or z* as a constant; these formal derivatives will be defined in more detail in Definition 2. From ∂f/∂z* = 0, it is seen that any analytical function f is not dependent on the
variable z ∗ . This can also be seen from Theorem 1, page 804 in [5], which states that any analytical function f (z)
can be written as an infinite power series with non-negative exponents of the complex variable z , and this power
series is called the Taylor series. The series does not contain any terms that depend on z ∗ . The derivative of
a complex-valued function in mathematical courses of complex analysis for engineers is often defined only for
analytical functions. However, in many engineering problems, the functions of interest are often not analytic, since
they are often real-valued functions. Conversely, if a function is only dependent on z , like an analytical function,
and not implicitly or explicitly dependent on z ∗ , then this function cannot in general be real-valued, since the
function can only be real if the imaginary parts of z can be eliminated, and this is handled by the terms that depend
on z*. An alternative to the treatment used for analytical functions is therefore needed for finding the derivative of real functions
that depend on complex variables. In this article, a systematic theory is provided for finding the derivatives of complex-valued functions with respect to the complex-valued parameter matrix and its complex conjugate.
A. Euclidean Distance
In engineering problems, the squared Euclidean distance is often used. Let f (z) = |z| 2 = zz ∗ . If the traditional
definition of the derivative given in Definition 1 is used, then the function f is not differentiable because:
lim_{Δz→0} [f(z0 + Δz) − f(z0)]/Δz = lim_{Δz→0} [|z0 + Δz|² − |z0|²]/Δz = lim_{Δz→0} [(Δz) z0* + z0 (Δz)* + Δz(Δz)*]/Δz.   (1)
This limit does not exist because different values are found depending on how Δz approaches 0. Let Δz =
Δx + jΔy. Firstly, let Δz approach zero such that Δx = 0; then the last fraction in (1) is [j(Δy)z0* − jz0 Δy + (Δy)²]/(jΔy) =
z0* − z0 − jΔy, which approaches z0* − z0 = −2j Im{z0} when Δy → 0. Secondly, let Δz approach zero such that
Δy = 0; then the last fraction in (1) is [(Δx)z0* + z0 Δx + (Δx)²]/Δx = z0 + z0* + Δx, which approaches z0 + z0* = 2 Re{z0}
when Δx → 0. For any non-zero complex number z0, 2 Re{z0} ≠ −2j Im{z0}, and therefore, the
function f(z) = |z|² is not differentiable when using the commonly encountered definition given in Definition 1.
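The directional dependence of the difference quotient above can also be checked numerically. The following minimal sketch (an assumed example, not part of the original text) evaluates the difference quotient of f(z) = |z|² along the real and imaginary axes and shows that the two limits disagree.

```python
import numpy as np

# Minimal numerical sketch (assumed example): the difference quotient of f(z) = |z|^2
# depends on the direction in which dz approaches zero, so f is not complex
# differentiable in the sense of Definition 1.
def diff_quotient(f, z0, dz):
    return (f(z0 + dz) - f(z0)) / dz

f = lambda z: abs(z) ** 2
z0 = 1.0 + 2.0j

for h in [1e-2, 1e-4, 1e-6]:
    along_real = diff_quotient(f, z0, h)        # dz real: tends to z0 + z0* = 2 Re{z0}
    along_imag = diff_quotient(f, z0, 1j * h)   # dz imaginary: tends to z0* - z0 = -2j Im{z0}
    print(h, along_real, along_imag)
# The two limits (2.0 and -4j here) disagree, confirming non-differentiability.
```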
There are two alternative ways [13] to find the derivative of a scalar real-valued function f ∈ R with respect to
the complex-valued matrix parameter Z ∈ C N ×Q . The standard way is to rewrite f as a function of the real X and
imaginary parts Y of the complex variable Z , and then to find the derivatives of the rewritten function with respect
to these two independent real variables, X and Y , separately. Notice that N Q independent complex parameters
in Z correspond to 2N Q independent real parameters in X and Y . In this paper, we bring the reader’s attention
onto an alternative approach that can lead to a simpler derivation. In this approach, one treats the differentials of
the variables Z and Z* as independent, in the way that will be shown by Lemma 1; see also [15]. It will
be shown later that both the derivative of f with respect to Z and with respect to Z* can be identified from the differential of f.
Definition 2: The derivatives with respect to z and z ∗ of f (z0 ) are called the formal partial derivatives of f at
z0 ∈ C. Let z = x + jy , where x, y ∈ R, then the formal derivatives, or Wirtinger derivatives [21], are defined as:
∂f(z0)/∂z ≜ (1/2)( ∂f(z0)/∂x − j ∂f(z0)/∂y )   and   ∂f(z0)/∂z* ≜ (1/2)( ∂f(z0)/∂x + j ∂f(z0)/∂y ).   (2)

From Definition 2, it follows that ∂f(z0)/∂x and ∂f(z0)/∂y can be found as:

∂f(z0)/∂x = ∂f(z0)/∂z + ∂f(z0)/∂z*   and   ∂f(z0)/∂y = j ( ∂f(z0)/∂z − ∂f(z0)/∂z* ).   (3)
If f is dependent on several variables, Definition 2 can be extended. Later in this article, it will be shown how
the derivatives, with respect to the complex-valued matrix parameter and its complex conjugate, of all the function
types given in Table I can be identified from the differentials of these functions.
Example 1: Let f(z, z*) = |z|² = zz*. Using the formal derivatives in Definition 2, it is found that
∂f(z, z*)/∂z = z* and ∂f(z, z*)/∂z* = z. When the complex variable z and its complex conjugate twin z*
are treated as independent variables [15] when finding the derivatives of f, then f is differentiable in both of
these variables. However, as shown earlier in this section, the same function is not differentiable in the sense of Definition 1.
that is, (dZ)_{k,l} = d(Z)_{k,l}. A procedure that can often be used for finding the differentials of a complex matrix
function F(Z_0, Z_1) is to evaluate the difference

F(Z_0 + dZ_0, Z_1 + dZ_1) − F(Z_0, Z_1) = First-order(dZ_0, dZ_1) + Higher-order(dZ_0, dZ_1),   (4)

where First-order(·, ·) returns the terms that depend on either dZ_0 or dZ_1 of the first order, and Higher-order(·, ·)
returns the terms that depend on higher-order terms of dZ_0 and dZ_1. The differential is then given by
First-order(·, ·), i.e., the first-order term of F(Z_0 + dZ_0, Z_1 + dZ_1) − F(Z_0, Z_1). As an example, let F(Z_0, Z_1) =
Z 0 Z 1 . Then the difference in (4) can be developed and readily expressed as: F (Z 0 +dZ 0 , Z 1 +dZ 1 )−F (Z 0 , Z 1 ) =
Z_0 dZ_1 + (dZ_0)Z_1 + (dZ_0)(dZ_1). The differential of Z_0 Z_1 can then be identified as all the terms of first order in
either dZ_0 or dZ_1, i.e., d(Z_0 Z_1) = Z_0 dZ_1 + (dZ_0)Z_1.
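As an illustrative check (not part of the original text), the identification of the first-order term can be verified numerically for F(Z_0, Z_1) = Z_0 Z_1 with randomly chosen matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Quick numerical sketch (illustrative only): for F(Z0, Z1) = Z0 Z1, the first-order
# part of F(Z0+dZ0, Z1+dZ1) - F(Z0, Z1) is Z0 dZ1 + (dZ0) Z1, and the remainder
# (dZ0)(dZ1) is of second order in the perturbations.
def cmat(n, m, scale=1.0):
    return scale * (rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m)))

Z0, Z1 = cmat(3, 4), cmat(4, 2)
dZ0, dZ1 = cmat(3, 4, 1e-5), cmat(4, 2, 1e-5)

diff = (Z0 + dZ0) @ (Z1 + dZ1) - Z0 @ Z1   # exact difference
dF   = Z0 @ dZ1 + dZ0 @ Z1                 # identified differential (first-order terms)
print(np.linalg.norm(diff - dF))           # second order in the perturbations
```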
(Footnote: Here, Definition 2 is used for finding the formal partial derivatives.)
(Footnote: The indexes are chosen to start with 0 everywhere in this article.)

Definition 3: The Moore-Penrose inverse of Z ∈ C^{N×Q} is denoted Z+ ∈ C^{Q×N} and is defined through the following four relations:

(ZZ+)^H = ZZ+,   (5)
(Z+Z)^H = Z+Z,   (6)
ZZ+Z = Z,   (7)
Z+ZZ+ = Z+,   (8)

where the operator (·)^H is the Hermitian operator, or the complex conjugate transpose.
Let ⊗ and ⊙ denote the Kronecker and Hadamard product [23], respectively. Some of the most important rules on
complex differentials are listed in Table II, assuming A, B , and a to be constants, and Z , Z 0 , and Z 1 to be complex-
valued matrix variables. The vectorization operator vec(·) stacks the columns of the argument matrix into
a long column vector [23]. The differentiation rule for the reshaping operator reshape(·) in
Table II is valid for any linear reshaping operator reshape(·) of the matrix, and examples of such operators are
the transpose (·)T or vec(·). Some of the basic differential results in Table II can be derived by means of (4), and
others can be derived by generalizing some of the results found in [7], [9] to the complex differential case.
Differential of the Moore-Penrose Inverse: The differential of the real-valued Moore-Penrose inverse can be
found in [7]. The complex-valued case is given by the following proposition, which is proved in Appendix I.

Proposition 1: Let Z ∈ C^{N×Q}. Then

dZ+ = −Z+(dZ)Z+ + Z+(Z+)^H(dZ^H)(I_N − ZZ+) + (I_Q − Z+Z)(dZ^H)(Z+)^H Z+.   (9)
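The following numerical sketch (an assumed example with a random full-rank matrix) compares the exact change of the Moore-Penrose inverse under a small perturbation with the first-order expression in (9):

```python
import numpy as np

rng = np.random.default_rng(1)

# Numerical sanity check (illustrative, not from the paper) of the differential of the
# Moore-Penrose inverse in (9): compare pinv(Z + dZ) - pinv(Z) with the first-order
# expression for a small perturbation dZ.
N, Q = 4, 3
Z  = rng.standard_normal((N, Q)) + 1j * rng.standard_normal((N, Q))
dZ = 1e-6 * (rng.standard_normal((N, Q)) + 1j * rng.standard_normal((N, Q)))

Zp = np.linalg.pinv(Z)
dZp_exact = np.linalg.pinv(Z + dZ) - Zp
dZH = dZ.conj().T
dZp_formula = (-Zp @ dZ @ Zp
               + Zp @ Zp.conj().T @ dZH @ (np.eye(N) - Z @ Zp)
               + (np.eye(Q) - Zp @ Z) @ dZH @ Zp.conj().T @ Zp)
print(np.linalg.norm(dZp_exact - dZp_formula))  # small, of second order in dZ
```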
From Table II, the following four equalities follow: dZ = d Re{Z} + j d Im{Z}, dZ* = d Re{Z} − j d Im{Z},
d Re{Z} = (1/2)(dZ + dZ*), and d Im{Z} = (1/(2j))(dZ − dZ*).
The following two lemmas are used to identify the first- and second-order derivatives later in the article. The
real variables Re{Z} and Im{Z} are independent of each other, and hence so are their differentials. Although the
complex variables Z and Z* are related, their differentials are linearly independent in the following way:

Lemma 1: Let Z ∈ C^{N×Q}, and let A_0, A_1 ∈ C^{M×NQ} possibly depend on Z and Z* but not on dZ. If A_0 d vec(Z) + A_1 d vec(Z*) = 0_{M×1} for all dZ ∈ C^{N×Q}, then A_0 = A_1 = 0_{M×NQ}.
TABLE II
IMPORTANT RESULTS FOR COMPLEX DIFFERENTIALS.

| Function | Differential |
| A | 0 |
| aZ | a dZ |
| AZB | A(dZ)B |
| Z_0 + Z_1 | dZ_0 + dZ_1 |
| Tr{Z} | Tr{dZ} |
| Z_0 Z_1 | (dZ_0)Z_1 + Z_0(dZ_1) |
| Z_0 ⊗ Z_1 | (dZ_0) ⊗ Z_1 + Z_0 ⊗ (dZ_1) |
| Z_0 ⊙ Z_1 | (dZ_0) ⊙ Z_1 + Z_0 ⊙ (dZ_1) |
| Z^{-1} | −Z^{-1}(dZ)Z^{-1} |
| det(Z) | det(Z) Tr{Z^{-1} dZ} |
| ln(det(Z)) | Tr{Z^{-1} dZ} |
| reshape(Z) | reshape(dZ) |
| Z* | (dZ)* |
| Z^H | (dZ)^H |
| Z+ | −Z+(dZ)Z+ + Z+(Z+)^H(dZ^H)(I_N − ZZ+) + (I_Q − Z+Z)(dZ^H)(Z+)^H Z+ |
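Two of the rows of Table II can be checked numerically; the sketch below (illustrative, with an assumed random matrix) verifies the differentials of det(Z) and Z^{-1} to first order.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative numerical check (not from the paper) of two rows of Table II:
# d det(Z) = det(Z) Tr{Z^{-1} dZ}   and   d(Z^{-1}) = -Z^{-1}(dZ)Z^{-1}.
N = 4
Z  = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
dZ = 1e-7 * (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)))

Zi = np.linalg.inv(Z)

d_det_exact   = np.linalg.det(Z + dZ) - np.linalg.det(Z)
d_det_formula = np.linalg.det(Z) * np.trace(Zi @ dZ)
print(abs(d_det_exact - d_det_formula))             # second order in dZ

d_inv_exact   = np.linalg.inv(Z + dZ) - Zi
d_inv_formula = -Zi @ dZ @ Zi
print(np.linalg.norm(d_inv_exact - d_inv_formula))  # second order in dZ
```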
The derivatives to be defined next will later be given in an identification table which shows how they can be obtained from the differentials of the functions.

Definition 4: Let F : C^{N×Q} × C^{N×Q} → C^{M×P}. The derivative of the matrix function F(Z, Z*) ∈ C^{M×P}
with respect to Z ∈ C^{N×Q} is denoted D_Z F, and the derivative of F(Z, Z*) with
respect to Z* ∈ C^{N×Q} is denoted D_{Z*} F; the size of both these derivatives is MP × NQ. The derivatives are defined through the relation

d vec(F) = (D_Z F) d vec(Z) + (D_{Z*} F) d vec(Z*).   (10)

The derivatives D_Z F(Z, Z*) and D_{Z*} F(Z, Z*) are called the Jacobian matrices of F with respect to Z and Z*, respectively.
Remark 1: Definition 4 is a generalization of Definition 1, page 173 in [7] to include complex matrices. In [7],
several alternative definitions of the derivative of real-valued functions with respect to a matrix are discussed, and
TABLE III
IDENTIFICATION TABLE.

| Function type | Differential | Derivative with respect to z, z, or Z | Derivative with respect to z*, z*, or Z* | Size of derivatives |
| f(z, z*) | df = a_0 dz + a_1 dz* | D_z f(z, z*) = a_0 | D_{z*} f(z, z*) = a_1 | 1 × 1 |
| f(z, z*) | df = a_0 dz + a_1 dz* | D_z f(z, z*) = a_0 | D_{z*} f(z, z*) = a_1 | 1 × N |
| f(Z, Z*) | df = vec^T(A_0) d vec(Z) + vec^T(A_1) d vec(Z*) | D_Z f(Z, Z*) = vec^T(A_0) | D_{Z*} f(Z, Z*) = vec^T(A_1) | 1 × NQ |
| f(Z, Z*) | df = Tr{A_0^T dZ + A_1^T dZ*} | ∂f(Z, Z*)/∂Z = A_0 | ∂f(Z, Z*)/∂Z* = A_1 | N × Q |
| f(z, z*) | df = b_0 dz + b_1 dz* | D_z f(z, z*) = b_0 | D_{z*} f(z, z*) = b_1 | M × 1 |
| f(z, z*) | df = B_0 dz + B_1 dz* | D_z f(z, z*) = B_0 | D_{z*} f(z, z*) = B_1 | M × N |
| f(Z, Z*) | df = β_0 d vec(Z) + β_1 d vec(Z*) | D_Z f(Z, Z*) = β_0 | D_{Z*} f(Z, Z*) = β_1 | M × NQ |
| F(z, z*) | d vec(F) = c_0 dz + c_1 dz* | D_z F(z, z*) = c_0 | D_{z*} F(z, z*) = c_1 | MP × 1 |
| F(z, z*) | d vec(F) = C_0 dz + C_1 dz* | D_z F(z, z*) = C_0 | D_{z*} F(z, z*) = C_1 | MP × N |
| F(Z, Z*) | d vec(F) = ζ_0 d vec(Z) + ζ_1 d vec(Z*) | D_Z F(Z, Z*) = ζ_0 | D_{Z*} F(Z, Z*) = ζ_1 | MP × NQ |
it is concluded that the definition that matches Definition 4 is the only reasonable definition. Definition 4 is also a
generalization of the definition used in [15] for complex-valued vectors to the case of complex-valued matrices.
Table III shows how the derivatives of the different function types in Table I can be identified from the differentials
of these functions. To show the uniqueness of the representation in (10), we subtract the differential in (10) from the
corresponding differential in Table III to get (ζ 0 − DZ F (Z, Z ∗ )) d vec(Z) + (ζ 1 − DZ∗ F (Z, Z ∗ )) d vec(Z ∗ ) =
0M P ×1 . The derivatives in the last line of Table III then follow by using Lemma 1 on this equation. Table III is an
extension of the corresponding table given in [7], which is valid in the real-variable case. In Table III, z ∈ C, z ∈ C^{N×1},
and Z ∈ C^{N×Q}, and the quantities a_i, a_i, A_i, b_i, B_i, β_i, c_i, C_i, and ζ_i may in general be functions
of z, z, Z, z*, z*, or Z*.
Definition 5: Let f : C^{N×1} × C^{N×1} → C^{M×1}. The partial derivatives ∂f(z, z*)/∂z^T and ∂f(z, z*)/∂z^H, both of size M × N, are defined as

∂f(z, z*)/∂z^T = [matrix whose (m, n)-th element is ∂f_m/∂z_n],   ∂f(z, z*)/∂z^H = [matrix whose (m, n)-th element is ∂f_m/∂z_n*],   (11)

where m ∈ {0, 1, ..., M − 1} and n ∈ {0, 1, ..., N − 1}.
Notice that ∂f/∂z^T = D_z f and ∂f/∂z^H = D_{z*} f. Using the partial derivative notation in Definition 5, the derivatives in Definition 4 can be written as

D_Z F(Z, Z*) = ∂ vec(F(Z, Z*)) / ∂ vec^T(Z),   (12)

D_{Z*} F(Z, Z*) = ∂ vec(F(Z, Z*)) / ∂ vec^T(Z*).   (13)
This is a generalization of the real-valued matrix variable case treated in [7] to the complex-valued matrix variable
case. (12) and (13) show how all the MPNQ partial derivatives of all the components of F with respect to
all the components of Z and Z ∗ are arranged when using the notation introduced in Definition 5.
Key result: Finding the derivative of the complex-valued matrix function F with respect to the complex-valued matrix
variable Z and its conjugate Z* can be achieved by computing the differential d vec(F), bringing it into the form in Table III, and
identifying the derivatives as the matrices multiplying d vec(Z) and d vec(Z*).
For the less general function types in Table I, a similar procedure can be used.
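As a small worked instance of this procedure (an assumed example, not taken from the text), consider f(Z, Z*) = Tr{Z Z^H}; its differential is df = Tr{Z^H dZ} + Tr{Z^T dZ*}, so Table III identifies D_Z f = vec^T(Z*) and D_{Z*} f = vec^T(Z). The sketch below verifies this numerically.

```python
import numpy as np

rng = np.random.default_rng(9)

# Minimal sketch (assumed example) of the identification procedure for the scalar
# function f(Z, Z*) = Tr{Z Z^H}: df = Tr{Z^H dZ} + Tr{Z^T dZ*}, so by Table III the
# derivatives are D_Z f = vec^T(Z*) and D_{Z*} f = vec^T(Z).
def vec(M):
    return M.reshape(-1, order="F")

N, Q = 3, 2
Z  = rng.standard_normal((N, Q)) + 1j * rng.standard_normal((N, Q))
dZ = 1e-7 * (rng.standard_normal((N, Q)) + 1j * rng.standard_normal((N, Q)))

f = lambda M: np.trace(M @ M.conj().T).real
df_exact   = f(Z + dZ) - f(Z)
df_formula = vec(Z.conj()) @ vec(dZ) + vec(Z) @ vec(dZ.conj())  # D_Z f d vec(Z) + D_{Z*} f d vec(Z*)
print(abs(df_exact - df_formula))   # second order in dZ
```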
V. F UNDAMENTAL R ESULTS
In this section, some useful theorems are presented that exploit the theory introduced earlier. All of these results
are important when solving practical optimization problems involving differentiation with respect to a complex-
valued matrix. These results include the chain rule, conditions for finding stationary points for a real scalar function
dependent on complex-valued matrices, and in which direction the same type of function has the minimum or
maximum rate of change. A comment will be made on how this result should be used in the steepest descent
method [24]. For certain types of functions, the same procedure as used for the real-valued matrix case [7] can be applied, as shown later in this section.
1) Chain Rule: One big advantage of the way the derivative is defined in Definition 4 compared to other
definitions of the derivative of F (Z, Z ∗ ) is that the chain rule is valid. The chain rule is now formulated.
Theorem 1 (Chain Rule): Let S_0 × S_1 ⊆ C^{N×Q} × C^{N×Q}, and let F : S_0 × S_1 → C^{M×P} be differentiable with respect to
its first and second argument at an interior point (Z, Z*) in the set S_0 × S_1. Let T_0 × T_1 ⊆ C^{M×P} × C^{M×P} be such
that (F(Z, Z*), F*(Z, Z*)) ∈ T_0 × T_1 for all (Z, Z*) ∈ S_0 × S_1. Assume that G : T_0 × T_1 → C^{R×S} is differentiable
at an interior point (F(Z, Z*), F*(Z, Z*)) ∈ T_0 × T_1, and define the composite function H : S_0 × S_1 → C^{R×S} by
H(Z, Z*) ≜ G(F(Z, Z*), F*(Z, Z*)). The derivatives D_Z H and D_{Z*} H are obtained through:

D_Z H = (D_F G)(D_Z F) + (D_{F*} G)(D_Z F*),
D_{Z*} H = (D_F G)(D_{Z*} F) + (D_{F*} G)(D_{Z*} F*),

where D_F G and D_{F*} G denote the derivatives of G with respect to its first and second argument, respectively.
By substituting the results from (17) and (18) into (16), and then using the definition of the derivatives with respect to Z and Z*, the expressions above for D_Z H and D_{Z*} H follow.
2) Stationary Points: The next theorem presents conditions for finding stationary points of f(Z, Z*) ∈ R.

Theorem 2: Let f : C^{N×Q} × C^{N×Q} → R, and let g : R^{N×Q} × R^{N×Q} → R be defined by g(X, Y) ≜ f(Z, Z*), where Z = X + jY. A stationary point of the function f(Z, Z*) = g(X, Y) is found where

D_X g(X, Y) = 0_{1×NQ}  ∧  D_Y g(X, Y) = 0_{1×NQ},   (19)
or
D_Z f(Z, Z*) = 0_{1×NQ},   (20)
or
D_{Z*} f(Z, Z*) = 0_{1×NQ}.   (21)

Notice that a stationary point can be a local minimum, a local maximum, or a saddle point. In (19), the symbol ∧ means that both of the equations stated in (19) must be satisfied at the same time.
Proof: In optimization theory [7], a stationary point is defined as a point where the derivatives with respect to
all the independent variables vanish. Since Re{Z} = X and Im{Z} = Y contain only independent variables, (19)
gives a stationary point by definition. By using the chain rule in Theorem 1 on both sides of f(Z, Z*) = g(X, Y)
and taking the derivative with respect to X and Y, the following two equations are obtained:

D_X g = (D_Z f)(D_X Z) + (D_{Z*} f)(D_X Z*),   (22)
D_Y g = (D_Z f)(D_Y Z) + (D_{Z*} f)(D_Y Z*).   (23)

From Table II, it follows that D_X Z = D_X Z* = I_{NQ} and D_Y Z = −D_Y Z* = j I_{NQ}. If these results are inserted
into (22) and (23), these two equations can be formulated in block matrix form in the following way:

[ D_X g ; D_Y g ] = [ 1 , 1 ; j , −j ] [ D_Z f ; D_{Z*} f ].   (24)

Solving (24) for D_Z f and D_{Z*} f gives

[ D_Z f ; D_{Z*} f ] = (1/2) [ 1 , −j ; 1 , j ] [ D_X g ; D_Y g ].   (25)

Since D_X g ∈ R^{1×NQ} and D_Y g ∈ R^{1×NQ}, it is seen from (25) that (19), (20), and (21) are equivalent.
3) Direction of Extremal Rate of Change: The next theorem states how to find the directions of maximum and minimum rate of change of a real-valued scalar function.
Theorem 3: Let f : C^{N×Q} × C^{N×Q} → R. The directions in which the function f has the maximum and minimum
rate of change with respect to vec(Z) are given by [D_{Z*} f(Z, Z*)]^T and −[D_{Z*} f(Z, Z*)]^T, respectively.
Proof: Since f ∈ R, df can be written in the following two ways df = (D Z f ) d vec(Z) + (DZ∗ f ) d vec(Z ∗ ),
and df = df ∗ = (DZ f )∗ d vec(Z ∗ ) + (DZ∗ f )∗ d vec(Z), where df = df ∗ since f ∈ R. Subtracting the two different
expressions of df from each other and then applying Lemma 1, gives that D Z∗ f = (DZ f )∗ and DZ f = (DZ∗ f )∗ .
From these results, it follows that df = (D_Z f) d vec(Z) + ((D_Z f) d vec(Z))* = 2 Re{(D_Z f) d vec(Z)} = 2 Re{(D_{Z*} f)* d vec(Z)},
which can be written by means of ⟨·, ·⟩, the ordinary Euclidean inner product [25] between real vectors in R^{2K×1}, with K = NQ. Using this inner product,

df = 2 ⟨ [ Re{(D_{Z*} f)^T} ; Im{(D_{Z*} f)^T} ] , [ Re{d vec(Z)} ; Im{d vec(Z)} ] ⟩.   (27)
The Cauchy-Schwarz inequality gives that the maximum value of df occurs when d vec(Z) = α (D_{Z*} f)^T for α > 0,
and from this it follows that the minimum rate of change occurs when d vec(Z) = −β (D_{Z*} f)^T, for β > 0.
It is now explained why [D_Z f(Z, Z*)]^T is not the direction of maximum rate of change of the function f. Let
g : C^{K×1} × C^{K×1} → R be given by g(a_0, a_1) = 2 Re{a_0^T a_1}. If K = 2 and a_0 = [1, j]^T, then g(a_0, a_0) = 0 despite the
fact that [1, j]^T ≠ 0_{2×1}. Therefore, the function g is not an inner product, and the Cauchy-Schwarz inequality is not
valid for this function. By examining the proof of Theorem 3, it is seen that the reason why [D_Z f(Z, Z*)]^T is not
the direction of maximum rate of change is precisely that the function g is not an inner product.
4) Steepest Descent Method: If a real-valued function f is being optimized with respect to the parameter Z
by means of the steepest descent method, it follows from Theorem 3 that the updating term must be proportional to
D_{Z*} f(Z, Z*) and not D_Z f(Z, Z*). The update equation for optimizing the real-valued function in Theorem 3 by
means of the steepest descent method can be expressed as vec^T(Z_{k+1}) = vec^T(Z_k) + μ D_{Z*} f(Z_k, Z_k*), where μ is
a real positive constant if it is a maximization problem or a real negative constant if it is a minimization problem,
and Z k ∈ CN ×Q is the value of the matrix after k iterations. The size of vec T (Z k ) and DZ∗ f (Z k , Z ∗k ) is 1 × N Q.
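A minimal sketch of this update rule is given below for an assumed least-squares objective f(z, z*) = ||Az − b||², for which [D_{z*} f]^T = A^H(Az − b); the problem data and step size are illustrative choices, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Minimal sketch (assumed example) of the steepest descent update for minimizing
# f(z, z*) = ||Az - b||^2, whose conjugate gradient direction is [D_{z*} f]^T = A^H (Az - b).
M, N = 8, 3
A = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
b = rng.standard_normal(M) + 1j * rng.standard_normal(M)

z = np.zeros(N, dtype=complex)
mu = -1.0 / np.linalg.norm(A, 2) ** 2        # negative step size, since f is minimized
for k in range(1000):
    z = z + mu * (A.conj().T @ (A @ z - b))  # z_{k+1} = z_k + mu [D_{z*} f]^T
print(np.linalg.norm(z - np.linalg.lstsq(A, b, rcond=None)[0]))  # close to zero
```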
Example 2: Let f : C × C → R be f (z, z ∗ ) = |z|2 = zz ∗ . The partial derivatives are D z ∗ f (z, z ∗ ) = z and
Dz f (z, z ∗ ) = z ∗ . This result is in accordance with the intuitive picture for optimizing f , i.e., it is more efficient to
maximize f in the direction of Dz ∗ f (z, z ∗ ) = z than in the direction of Dz f (z, z ∗ ) = z ∗ when standing at z .
5) Complex Case versus Real Case: In order to show the consistency with the real-valued case given
in [7], the following theorem shows how the complex-valued case is an extension of [7].

Theorem 4: If the function F(Z, Z*) = G(Z) is independent of Z*, then D_{Z*} F(Z, Z*) = 0_{MP×NQ}, and
D_Z F(Z, Z*) = D_Z G(Z) can be obtained by the procedure given in [7] for finding the derivative of the function
G(Z). Correspondingly, if F(Z, Z*) = G(Z*) is independent of Z, then D_Z F(Z, Z*) = 0_{MP×NQ}, and D_{Z*} F(Z, Z*) can be obtained by the procedure given in [7].
TABLE IV
DERIVATIVES OF CERTAIN SCALAR FUNCTIONS OF THE TYPE f(z, z*).

| f(z, z*) | Differential df | D_z f | D_{z*} f |
| a^T z = z^T a | a^T dz | a^T | 0_{1×N} |
| a^T z* = z^H a | a^T dz* | 0_{1×N} | a^T |
| z^T A z | z^T (A + A^T) dz | z^T (A + A^T) | 0_{1×N} |
| z^H A z | z^H A dz + z^T A^T dz* | z^H A | z^T A^T |
| z^H A z* | z^H (A + A^T) dz* | 0_{1×N} | z^H (A + A^T) |
Proof: Assume that F(Z_0, Z_1) = G(Z_0), where Z_0 and Z_1 have independent differentials. Applying the vec(·)
and the differential operator to this equation leads to d vec(F) = (D_{Z_0} F) d vec(Z_0) + (D_{Z_1} F) d vec(Z_1) = (D_{Z_0} G) d vec(Z_0). Setting Z_0 = Z and Z_1 = Z* and using Lemma 1, it follows that D_{Z*} F(Z, Z*) = 0_{MP×NQ} and
D_Z F(Z, Z*) = D_Z G(Z). Since the last equation only involves the single matrix variable Z, the same techniques as given
in [7] can be used. The first part of the theorem is then proved, and the second part can be shown similarly.
The case of a scalar function of a scalar variable is treated in the literature for one input variable; see, for example, [5],
[26]. If the variables z and z ∗ are treated as independent variables, the derivatives D z f (z, z ∗ ) and Dz ∗ f (z, z ∗ ) can
be found as for scalar functions having two independent variables, see Example 1.
B. Derivative of f (z, z ∗ )
Scalar functions of the type f(z, z*) appear frequently in filter optimization and in array signal processing [20]. For instance, a frequently occurring example is that of output
power optimization of the form z^H A z, where z contains the filter coefficients and A is the input covariance matrix. The differential
of f is df = (dz H )Az + z H Adz = z H Adz + z T AT dz ∗ , where Table II was used. From df , the derivatives of
z H Az with respect to z and z ∗ follow from Table III, and are Dz f (z, z ∗ ) = z H A and Dz∗ f (z, z ∗ ) = z T AT ,
see Table IV. The other selected examples are given in Table IV, and they can be derived similarly.
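The derivatives of z^H A z can be checked numerically; the sketch below (with assumed random data) compares the exact change of f with the first-order expression df = z^H A dz + z^T A^T dz*.

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative check (not from the paper) of the identification for f(z, z*) = z^H A z:
# df = z^H A dz + z^T A^T dz*, so D_z f = z^H A and D_{z*} f = z^T A^T (Table IV).
N = 4
A  = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
z  = rng.standard_normal(N) + 1j * rng.standard_normal(N)
dz = 1e-7 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

f = lambda v: np.conj(v) @ A @ v
df_exact   = f(z + dz) - f(z)
df_formula = (np.conj(z) @ A) @ dz + (z @ A.T) @ np.conj(dz)
print(abs(df_exact - df_formula))   # second order in dz
```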
C. Derivative of f (Z, Z ∗ )
For functions of the type f(Z, Z*), it is common to arrange the partial derivatives ∂f/∂z_{k,l} and ∂f/∂z*_{k,l} in an
alternative way [7] than in the expressions D_Z f(Z, Z*) and D_{Z*} f(Z, Z*). The notation for the alternative way
of organizing all the partial derivatives is ∂f/∂Z and ∂f/∂Z*; these are the N × Q matrices whose (k, l)-th elements are
∂f/∂z_{k,l} and ∂f/∂z*_{k,l}, respectively (28). ∂f/∂Z and ∂f/∂Z* are called the gradient of f with respect to Z and Z*, respectively. (28) generalizes to the
complex case one of the ways to define the derivative of real scalar functions with respect to real matrices
in [7]. The way of arranging the partial derivatives in (28) is different from the way given in (12) and (13).
If df = vec^T(A_0) d vec(Z) + vec^T(A_1) d vec(Z*) = Tr{A_0^T dZ + A_1^T dZ*}, where A_i, Z ∈ C^{N×Q}, then it can
be shown that ∂f/∂Z = A_0 and ∂f/∂Z* = A_1, where the matrices A_0 and A_1 in general depend on Z and Z*.
The size of ∂f/∂Z and ∂f/∂Z* is N × Q, while the size of D_Z f(Z, Z*) and D_{Z*} f(Z, Z*) is 1 × NQ, so these two ways of organizing the partial derivatives are different. It can be shown that
D_Z f(Z, Z*) = vec^T(∂f(Z, Z*)/∂Z) and D_{Z*} f(Z, Z*) = vec^T(∂f(Z, Z*)/∂Z*). The steepest descent method can then
be formulated as Z_{k+1} = Z_k + μ ∂f(Z_k, Z_k*)/∂Z*. Key examples of derivatives are developed in the text below,
and further results are collected in Table V.
1) Determinant Related Problems: Objective functions that depend on the determinant appear in several parts
of signal processing related problems, e.g., in the capacity of wireless multiple-input multiple-output (MIMO)
communication systems [19], in an upper bound for the pair-wise error probability [18], [27], and for the exact
symbol error rate (SER) for precoded orthogonal space-time block-code (OSTBC) systems [28].
TABLE V
DERIVATIVES OF FUNCTIONS OF THE TYPE f(Z, Z*).

| Function f(Z, Z*) | Differential df | ∂f/∂Z | ∂f/∂Z* |
| Tr{Z} | Tr{I_N dZ} | I_N | 0_{N×N} |
| Tr{Z*} | Tr{I_N dZ*} | 0_{N×N} | I_N |
| Tr{AZ} | Tr{A dZ} | A^T | 0_{N×Q} |
| Tr{Z A_0 Z^T A_1} | Tr{(A_0 Z^T A_1 + A_0^T Z^T A_1^T) dZ} | A_1^T Z A_0^T + A_1 Z A_0 | 0_{N×Q} |
| Tr{Z A_0 Z A_1} | Tr{(A_0 Z A_1 + A_1 Z A_0) dZ} | A_1^T Z^T A_0^T + A_0^T Z^T A_1^T | 0_{N×Q} |
| Tr{Z A_0 Z^H A_1} | Tr{A_0 Z^H A_1 dZ + A_0^T Z^T A_1^T dZ*} | A_1^T Z* A_0^T | A_1 Z A_0 |
| Tr{Z A_0 Z* A_1} | Tr{A_0 Z* A_1 dZ + A_1 Z A_0 dZ*} | A_1^T Z^H A_0^T | A_0^T Z^T A_1^T |
| Tr{A Z^{-1}} | −Tr{Z^{-1} A Z^{-1} dZ} | −(Z^{-1} A Z^{-1})^T | 0_{N×N} |
| Tr{Z^p} | p Tr{Z^{p−1} dZ} | p (Z^{p−1})^T | 0_{N×N} |
| det(Z) | det(Z) Tr{Z^{-1} dZ} | det(Z) (Z^{-1})^T | 0_{N×N} |
| det(A_0 Z A_1) | det(A_0 Z A_1) Tr{A_1 (A_0 Z A_1)^{-1} A_0 dZ} | det(A_0 Z A_1) A_0^T (A_1^T Z^T A_0^T)^{-1} A_1^T | 0_{N×Q} |
| det(Z^2) | 2 det^2(Z) Tr{Z^{-1} dZ} | 2 det^2(Z) (Z^{-1})^T | 0_{N×N} |
| det(Z Z^T) | 2 det(Z Z^T) Tr{Z^T (Z Z^T)^{-1} dZ} | 2 det(Z Z^T) (Z Z^T)^{-1} Z | 0_{N×Q} |
| det(Z Z*) | det(Z Z*) Tr{Z* (Z Z*)^{-1} dZ + (Z Z*)^{-1} Z dZ*} | det(Z Z*) (Z^H Z^T)^{-1} Z^H | det(Z Z*) Z^T (Z^H Z^T)^{-1} |
| det(Z Z^H) | det(Z Z^H) Tr{Z^H (Z Z^H)^{-1} dZ + Z^T (Z* Z^T)^{-1} dZ*} | det(Z Z^H) (Z* Z^T)^{-1} Z* | det(Z Z^H) (Z Z^H)^{-1} Z |
| det(Z^p) | p det^p(Z) Tr{Z^{-1} dZ} | p det^p(Z) (Z^{-1})^T | 0_{N×N} |
| λ(Z) | v_0^H (dZ) u_0 / (v_0^H u_0) = Tr{(u_0 v_0^H / (v_0^H u_0)) dZ} | v_0* u_0^T / (v_0^H u_0) | 0_{N×N} |
| λ*(Z) | v_0^T (dZ*) u_0* / (v_0^T u_0*) = Tr{(u_0* v_0^T / (v_0^T u_0*)) dZ*} | 0_{N×N} | v_0 u_0^H / (v_0^T u_0*) |
Let λ_0 be a simple eigenvalue of Z_0 ∈ C^{N×N} with associated eigenvector u_0, and let the eigenvalue function λ(Z) and the eigenvector function u(Z) be defined in a neighborhood of Z_0 by

Z u(Z) = λ(Z) u(Z),   u_0^H u(Z) = 1,   λ(Z_0) = λ_0,   u(Z_0) = u_0.   (29)
Let v_0 denote the left eigenvector of Z_0 associated with λ_0, such that v_0^H Z_0 = λ_0 v_0^H, or equivalently Z_0^H v_0 = λ_0* v_0. In order to find the differential of λ(Z) at Z = Z_0, take the differential of both sides
of the first equation in (29):

(dZ) u_0 + Z_0 du = (dλ) u_0 + λ_0 du.   (30)

Pre-multiplying (30) by v_0^H gives v_0^H (dZ) u_0 = (dλ) v_0^H u_0. From Lemma 6.3.10 in [22], v_0^H u_0 ≠ 0, and hence

dλ = v_0^H (dZ) u_0 / (v_0^H u_0) = Tr{ (u_0 v_0^H / (v_0^H u_0)) dZ }.   (31)
(Footnote: The matrix Z_0 ∈ C^{N×N} has in general N complex eigenvalues. However, the roots of the characteristic equation, i.e., the
eigenvalues, need not be distinct. The number of times an eigenvalue appears is equal to its multiplicity. If an eigenvalue appears only once, it is called simple.)
This result is included in Table V, and it will be used later when the derivatives of the eigenvector u and the
Hessian of λ are found. The differential of λ ∗ at Z 0 is also included in Table V. These results are derived in [7].
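The first-order behavior in (31) can be illustrated numerically. The sketch below (an assumed example; eigenvalue pairing is done by nearest match, which suffices for a generic matrix and a small perturbation) compares the exact eigenvalue change with v_0^H (dZ) u_0 / (v_0^H u_0).

```python
import numpy as np

rng = np.random.default_rng(5)

# Numerical sketch (illustrative assumption, not from the paper) of (31): for a simple
# eigenvalue lambda_0 of Z0 with right eigenvector u0 and left eigenvector v0, a small
# perturbation dZ changes lambda by approximately v0^H (dZ) u0 / (v0^H u0).
N = 5
Z0 = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
dZ = 1e-6 * (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)))

w, U = np.linalg.eig(Z0)
k = 0                                        # pick one (generically simple) eigenvalue
lam0, u0 = w[k], U[:, k]

wl, V = np.linalg.eig(Z0.conj().T)           # right eigenvectors of Z0^H = left eigenvectors of Z0
v0 = V[:, np.argmin(np.abs(wl - lam0.conj()))]

w_new = np.linalg.eig(Z0 + dZ)[0]
dlam_exact   = w_new[np.argmin(np.abs(w_new - lam0))] - lam0
dlam_formula = (v0.conj() @ dZ @ u0) / (v0.conj() @ u0)
print(abs(dlam_exact - dlam_formula))        # of second order in dZ
```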
D. Derivative of f (z, z ∗ )
In engineering, the objective function is often a real scalar, and this might be a very complicated function of the
parameter matrix. To be able to find the derivative in a simple way, the chain rule in Theorem 1 might be very useful.
Let f(z, z*) = a f(z, z*), where a ∈ C^{M×1} and f is a scalar function; then df = a df = a (D_z f(z, z*)) dz + a (D_{z*} f(z, z*)) dz*, where df follows from
Table III. From this equation, it follows that D_z f = a D_z f(z, z*) and D_{z*} f = a D_{z*} f(z, z*). Other derivatives of this function type can be found in a similar manner.
E. Derivative of f (z, z ∗ )
First, a useful result is introduced. In order to extract the vec(·) of an inner matrix from the vec(·) of a matrix product, the following identity can be used [23]:

vec(ABC) = (C^T ⊗ A) vec(B).   (32)

In the examples of this type, (32) and Table III are utilized; from df, the derivatives of f with respect to z and z* then follow.
F. Derivative of f(Z, Z*)

The following result (Lemma 3, proved in Appendix IV) will be needed:

R(A) = R(A+A),   C(A) = C(AA+),   rank(A) = rank(A+A) = rank(AA+),   (33)

where C(A), R(A), and N(A) are the symbols used for the column, row, and null space of A ∈ C^{N×Q}, respectively.
The differential of the eigenvector u(Z) is now found at Z = Z_0. The derivation here is similar to the one in [7],
where the same result for du at Z = Z_0 was derived; however, more details are included here. Let Y_0 ≜ λ_0 I_N − Z_0;
then it follows from (30) that

Y_0 du = (dZ) u_0 − (dλ) u_0 = (dZ) u_0 − (v_0^H (dZ) u_0 / (v_0^H u_0)) u_0 = (I_N − u_0 v_0^H / (v_0^H u_0)) (dZ) u_0,

where (31) was utilized. Pre-multiplying Y_0 du with Y_0^+ results in:

Y_0^+ Y_0 du = Y_0^+ (I_N − u_0 v_0^H / (v_0^H u_0)) (dZ) u_0.   (34)

The operator dim(·) returns the dimension of a vector space. Since λ_0 is a simple eigenvalue, dim(N(Y_0)) =
1 [5]. Hence, it follows from 0.4.4 (g) on page 13 in [7] that rank(Y_0) = N − dim(N(Y_0)) = N − 1.
Set C_0 ≜ Y_0^+ Y_0 + u_0 u_0^H; then it can be shown from the facts Y_0 u_0 = 0_{N×1} (and hence u_0^H Y_0^+ Y_0 = 0_{1×N}, since Y_0^+ Y_0 is Hermitian) and u_0^H u_0 = 1 that
C_0^2 = C_0, i.e., C_0 is idempotent. It can be shown by the direct use of Definition 3 that the matrix Y_0^+ Y_0 is also
idempotent. By the use of Lemma 10.1.1 and Corollary 10.2.2 in [8], it is found that rank(C_0) = Tr{C_0} =
Tr{Y_0^+ Y_0} + Tr{u_0 u_0^H} = rank(Y_0^+ Y_0) + 1 = rank(Y_0) + 1 = N, where the third equation of (33) was
used. From Lemma 10.1.1 and Corollary 10.2.2 in [8] and rank(C_0) = N, it follows that C_0 = I_N. Using this, together with u_0^H du = 0 (which follows from the constraint u_0^H u(Z) = 1 in (29)), gives that:

Y_0^+ Y_0 du = (I_N − u_0 u_0^H) du = du − u_0 u_0^H du = du.   (35)

Inserting (35) into (34) yields

du = (λ_0 I_N − Z_0)^+ (I_N − u_0 v_0^H / (v_0^H u_0)) (dZ) u_0.   (36)
From (36), it is possible to find the differential of u(Z) evaluated at Z_0 with respect to the matrix Z:

d vec(u) = vec(du) = vec( (λ_0 I_N − Z_0)^+ (I_N − u_0 v_0^H / (v_0^H u_0)) (dZ) u_0 )
         = ( u_0^T ⊗ (λ_0 I_N − Z_0)^+ (I_N − u_0 v_0^H / (v_0^H u_0)) ) d vec(Z),   (37)

where (32) was used. From (37), it follows that D_Z u = u_0^T ⊗ (λ_0 I_N − Z_0)^+ (I_N − u_0 v_0^H / (v_0^H u_0)). The differential
and the derivative of u* follow by the use of Table II and the results found here.
The left eigenvector function v : C^{N×N} → C^{N×1} with the argument Z ∈ C^{N×N}, denoted v(Z), is defined in a manner analogous to (29). The
differential of v(Z) at Z = Z_0 can be found using a procedure similar to the one used earlier in this subsection
for finding du at Z = Z_0, giving

dv^H = v_0^H (dZ) (I_N − u_0 v_0^H / (v_0^H u_0)) (λ_0 I_N − Z_0)^+.
G. Derivative of F (z, z ∗ )
Examples of functions of the type F (z, z ∗ ) are Az , Azz ∗ , and Af (z, z ∗ ), where A ∈ CM ×P . These functions
can be differentiated by finding the differentials of the scalar functions z , zz ∗ , and f (z, z ∗ ).
H. Derivative of F (z, z ∗ )
I. Derivative of F (Z, Z ∗ )
1) Kronecker Product Related Problems: Objective functions which depend on the Kronecker product of matrix variables are considered next.
Definition 6: Let A ∈ CN ×Q , then there exists a permutation matrix that connects the vectors vec (A) and
vec AT . The commutation matrix K N,Q has size N Q × N Q, and it defines the connection between vec (A) and
vec AT in the following way: K N,Q vec(A) = vec(AT ).
The matrix K_{Q,N} is a permutation matrix, and it can be shown [7] that K_{Q,N}^T = K_{Q,N}^{-1} = K_{N,Q}. The following
result is Theorem 3.9 in [7]: Let Ai ∈ CNi ×Qi where i ∈ {0, 1}, then the reason for why the commutation matrix
received its name can be seen from the following identity: K N1 ,N0 (A0 ⊗ A1 ) = (A1 ⊗ A0 ) K Q1 ,Q0 .
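The commutation matrix of Definition 6 is easy to construct explicitly. The sketch below (illustrative; the construction and dimensions are assumptions chosen for the example) builds K_{N,Q} and checks both K_{N,Q} vec(A) = vec(A^T) and the commuting identity above.

```python
import numpy as np

# Illustrative construction (not from the paper) of the commutation matrix K_{N,Q} of
# Definition 6, followed by checks of K_{N,Q} vec(A) = vec(A^T) and of the commuting
# identity K_{N1,N0}(A0 ⊗ A1) = (A1 ⊗ A0) K_{Q1,Q0}.
def vec(A):                       # column-stacking operator (column-major order)
    return A.reshape(-1, order="F")

def commutation(N, Q):
    K = np.zeros((N * Q, N * Q))
    for q in range(Q):
        for n in range(N):
            K[n * Q + q, q * N + n] = 1.0   # maps vec(A) to vec(A^T) for A of size N x Q
    return K

rng = np.random.default_rng(6)
N, Q = 3, 4
A = rng.standard_normal((N, Q)) + 1j * rng.standard_normal((N, Q))
K = commutation(N, Q)
print(np.linalg.norm(K @ vec(A) - vec(A.T)))   # 0

N0, Q0, N1, Q1 = 2, 3, 4, 2
A0 = rng.standard_normal((N0, Q0)); A1 = rng.standard_normal((N1, Q1))
lhs = commutation(N1, N0) @ np.kron(A0, A1)
rhs = np.kron(A1, A0) @ commutation(Q1, Q0)
print(np.linalg.norm(lhs - rhs))               # 0
```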
Let F : C^{N0×Q0} × C^{N1×Q1} → C^{N0N1×Q0Q1} be given by F(Z_0, Z_1) = Z_0 ⊗ Z_1, where Z_i ∈ C^{Ni×Qi}. The
differential of this function follows from Table II: dF = (dZ_0) ⊗ Z_1 + Z_0 ⊗ dZ_1. Applying the vec(·) operator
to dF and using the identity vec(Z_0 ⊗ Z_1) = (I_{Q0} ⊗ K_{Q1,N0} ⊗ I_{N1})(vec(Z_0) ⊗ vec(Z_1)) [7] on each of the two terms gives

vec((dZ_0) ⊗ Z_1) = (I_{Q0} ⊗ K_{Q1,N0} ⊗ I_{N1}) [I_{N0 Q0} ⊗ vec(Z_1)] d vec(Z_0),   (38)
vec(Z_0 ⊗ (dZ_1)) = (I_{Q0} ⊗ K_{Q1,N0} ⊗ I_{N1}) [vec(Z_0) ⊗ I_{N1 Q1}] d vec(Z_1).   (39)

Inserting the results from (38) and (39) into d vec(F) gives:

d vec(F) = (I_{Q0} ⊗ K_{Q1,N0} ⊗ I_{N1}) { [I_{N0 Q0} ⊗ vec(Z_1)] d vec(Z_0) + [vec(Z_0) ⊗ I_{N1 Q1}] d vec(Z_1) }.   (40)

Define the matrices A(Z_1) and B(Z_0) by A(Z_1) ≜ (I_{Q0} ⊗ K_{Q1,N0} ⊗ I_{N1}) [I_{N0 Q0} ⊗ vec(Z_1)] and B(Z_0) ≜ (I_{Q0} ⊗ K_{Q1,N0} ⊗ I_{N1}) [vec(Z_0) ⊗ I_{N1 Q1}]; then d vec(F) can be written compactly
as d vec(F) = A(Z_1) d vec(Z_0) + B(Z_0) d vec(Z_1). From d vec(F), the differentials and derivatives of Z ⊗ Z,
Z ⊗ Z ∗ , and Z ∗ ⊗ Z ∗ can be derived and these results are included in Table VI. In the table, diag(·) returns the
square diagonal matrix with the input column vector elements on the main diagonal [22] and zeros elsewhere.
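The expression d vec(F) = A(Z_1) d vec(Z_0) + B(Z_0) d vec(Z_1) can be verified numerically; the following sketch (assumed random matrices, small perturbations) builds A(Z_1) and B(Z_0) explicitly and compares against the exact change of Z_0 ⊗ Z_1.

```python
import numpy as np

rng = np.random.default_rng(7)

# Numerical check (illustrative) of d vec(Z0 ⊗ Z1) = A(Z1) d vec(Z0) + B(Z0) d vec(Z1)
# with A(Z1) = (I_{Q0} ⊗ K_{Q1,N0} ⊗ I_{N1})[I_{N0Q0} ⊗ vec(Z1)] and
#      B(Z0) = (I_{Q0} ⊗ K_{Q1,N0} ⊗ I_{N1})[vec(Z0) ⊗ I_{N1Q1}].
def vec(A):
    return A.reshape(-1, 1, order="F")

def commutation(N, Q):
    K = np.zeros((N * Q, N * Q))
    for q in range(Q):
        for n in range(N):
            K[n * Q + q, q * N + n] = 1.0
    return K

def cmat(n, m, s=1.0):
    return s * (rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m)))

N0, Q0, N1, Q1 = 2, 3, 3, 2
Z0, Z1 = cmat(N0, Q0), cmat(N1, Q1)
dZ0, dZ1 = cmat(N0, Q0, 1e-6), cmat(N1, Q1, 1e-6)

P = np.kron(np.kron(np.eye(Q0), commutation(Q1, N0)), np.eye(N1))
A = P @ np.kron(np.eye(N0 * Q0), vec(Z1))
B = P @ np.kron(vec(Z0), np.eye(N1 * Q1))

dF_exact   = vec(np.kron(Z0 + dZ0, Z1 + dZ1) - np.kron(Z0, Z1))
dF_formula = A @ vec(dZ0) + B @ vec(dZ1)
print(np.linalg.norm(dF_exact - dF_formula))   # second order in the perturbations
```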
2) Moore-Penrose Inverse Related Problems: In pseudo-inverse matrix based receiver design, the Moore-
Penrose inverse might appear [19]. This is applicable for MIMO, CDMA, and OFDM systems.
Let F : C^{N×Q} × C^{N×Q} → C^{Q×N} be given by F(Z, Z*) = Z+. The reason for including both variables Z and Z* in this function definition is that the differential of Z+, see Table II, depends on both
dZ and dZ*. Using the vec(·) operator on the differential of the Moore-Penrose inverse in Table II, in addition to (32), leads to

d vec(Z+) = −[(Z+)^T ⊗ Z+] d vec(Z) + { [(I_N − ZZ+)^T ⊗ Z+(Z+)^H] + [(Z+)^T(Z+)* ⊗ (I_Q − Z+Z)] } K_{N,Q} d vec(Z*).   (41)

This leads to D_{Z*} F = [ (I_N − ZZ+)^T ⊗ Z+(Z+)^H + (Z+)^T(Z+)* ⊗ (I_Q − Z+Z) ] K_{N,Q} and
D_Z F = −(Z+)^T ⊗ Z+. If Z is invertible, then the derivative of Z+ = Z^{-1} with respect to Z* is equal to the
zero matrix, and the derivative of Z+ = Z^{-1} with respect to Z can be found from Table VI. Since the expressions
associated with the differential of the Moore-Penrose inverse are so long, they are not included in Table VI.
The first-order derivatives introduced earlier can be used to find stationary points in optimization problems. To identify the nature of these stationary points (minimum, maximum,
or saddle point) or to study the stability of iterative algorithms, the second-order derivatives are useful. This can be done by means of the Hessian matrix.
The Hessian matrix of a function is a matrix containing the second-order derivatives of the function. In this
section, the Hessian matrix is defined and it is shown how it can be obtained. Only the case of scalar functions
f ∈ C is treated, since this is the case of interest in most practical situations. However, the results can be extended
to the vector- and matrix-functions as well. The way the Hessian is defined here follows in the same lines as given
in [7], treating the real case. A complex version of Newton’s recursion formula is derived in [29], [30], and there
the topic of Hessian matrices is briefly treated for real scalar functions. Therefore, the complex Hessian developed here might also be useful in such Newton-type algorithms.
When dealing with the Hessian matrix, it is the second-order differential that has to be calculated in order to
identify the Hessian matrix. Neither of the matrices dZ nor dZ ∗ is a function of Z or Z ∗ and, hence, their
differentials are the zero matrix. Mathematically, this can be formulated as:

d(dZ) = d²Z = 0_{N×Q}   and   d(dZ*) = d²Z* = 0_{N×Q}.   (42)

If f ∈ C, then d²f = d(df)^T = (d²f)^T, and if f ∈ R, then d²f = d(df)^H = (d²f)^H.
The Hessian depends on two variables such that the notation must include which variable the Hessian is calculated
with respect to. If the Hessian is calculated with respect to Z 0 and Z 1 , the Hessian is denoted H Z0 ,Z1 f . The
following definition is used for HZ0 ,Z1 f and it is an extension of the definition in [7], to complex scalar functions.
Definition 7: Let Z_i ∈ C^{Ni×Qi}, i ∈ {0, 1}. Then H_{Z0,Z1} f ∈ C^{N1Q1×N0Q0} and H_{Z0,Z1} f ≜ D_{Z0} (D_{Z1} f)^T.
Later, the following proposition is important for showing the symmetry of the complex Hessian matrix.
Proposition 2: Let f : C^{N×Q} × C^{N×Q} → C. Assume that the function f(Z, Z*) is twice differentiable with
respect to all of the variables inside Z and Z*, when these variables are treated as independent variables. Then [7]

∂²f/(∂z_{k,l} ∂z_{m,n}) = ∂²f/(∂z_{m,n} ∂z_{k,l}),   ∂²f/(∂z*_{k,l} ∂z*_{m,n}) = ∂²f/(∂z*_{m,n} ∂z*_{k,l}),   ∂²f/(∂z*_{k,l} ∂z_{m,n}) = ∂²f/(∂z_{m,n} ∂z*_{k,l}),

where m, k ∈ {0, 1, ..., N − 1} and n, l ∈ {0, 1, ..., Q − 1}. From Definition 7 and (12), it follows that element number (p_0, p_1) of H_{Z0,Z1} f is given by:
(H_{Z0,Z1} f)_{p0,p1} = ( D_{Z0} (D_{Z1} f)^T )_{p0,p1} = ∂/∂(vec(Z_0))_{p1} [ ∂f/∂(vec(Z_1))_{p0} ] = ∂²f / ( ∂(Z_0)_{l1,k1} ∂(Z_1)_{l0,k0} ),   (43)

where p_1 = N_0 k_1 + l_1 and p_0 = N_1 k_0 + l_0.
As an immediate consequence of (43) and Proposition 2, it follows that for twice differentiable functions f:

H_{Z,Z} f = (H_{Z,Z} f)^T,   H_{Z*,Z*} f = (H_{Z*,Z*} f)^T,   H_{Z,Z*} f = (H_{Z*,Z} f)^T.   (44)
In order to find an identification equation for the Hessians of the function f with respect to all the possible
combinations of the variables Z and Z ∗ , an appropriate form of the expression d 2 f is required. This expression
is now derived. From (10), the first-order differential of the function f can be written as df = (D Z f )d vec(Z) +
(D_{Z*} f) d vec(Z*), where D_Z f ∈ C^{1×NQ} and D_{Z*} f ∈ C^{1×NQ}. The differentials of these derivatives can, by the use of (10), be expressed as:
(d D_Z f)^T = [D_Z (D_Z f)^T] d vec(Z) + [D_{Z*} (D_Z f)^T] d vec(Z*),   (45)
(d D_{Z*} f)^T = [D_Z (D_{Z*} f)^T] d vec(Z) + [D_{Z*} (D_{Z*} f)^T] d vec(Z*).   (46)

Transposing (45) and (46) gives

d D_Z f = d vec^T(Z) [D_Z (D_Z f)^T]^T + d vec^T(Z*) [D_{Z*} (D_Z f)^T]^T,   (47)
d D_{Z*} f = d vec^T(Z) [D_Z (D_{Z*} f)^T]^T + d vec^T(Z*) [D_{Z*} (D_{Z*} f)^T]^T.   (48)
d²f is found by applying the differential operator to df and then utilizing the results from (42), (47), and (48):

d²f = d vec^T(Z) [D_Z (D_Z f)^T] d vec(Z) + d vec^T(Z) [D_{Z*} (D_Z f)^T] d vec(Z*)
    + d vec^T(Z*) [D_Z (D_{Z*} f)^T] d vec(Z) + d vec^T(Z*) [D_{Z*} (D_{Z*} f)^T] d vec(Z*)
    = [d vec^T(Z*), d vec^T(Z)] [ D_Z (D_{Z*} f)^T , D_{Z*} (D_{Z*} f)^T ; D_Z (D_Z f)^T , D_{Z*} (D_Z f)^T ] [ d vec(Z) ; d vec(Z*) ].   (49)

By Definition 7, the four blocks of the middle matrix in (49) are the four complex Hessian matrices, so that

d²f = [d vec^T(Z*), d vec^T(Z)] [ H_{Z,Z*} f , H_{Z*,Z*} f ; H_{Z,Z} f , H_{Z*,Z} f ] [ d vec(Z) ; d vec(Z*) ].   (50)

Assume now that d²f has been arranged in the form
d²f = d vec^T(Z*) A_{0,0} d vec(Z) + d vec^T(Z*) A_{0,1} d vec(Z*)
    + d vec^T(Z) A_{1,0} d vec(Z) + d vec^T(Z) A_{1,1} d vec(Z*)
    = [d vec^T(Z*), d vec^T(Z)] [ A_{0,0} , A_{0,1} ; A_{1,0} , A_{1,1} ] [ d vec(Z) ; d vec(Z*) ],   (51)
where Ak,l ∈ CN Q×N Q , k, l ∈ {0, 1}, can possibly be dependent on Z and Z ∗ but not on d vec(Z) and d vec(Z ∗ ).
The four complex Hessian matrices in (50) can now be identified from A k,l in (51) in the following way: Subtract
the second-order differentials in (50) from (51), and then use (44) together with Lemma 2 to get

H_{Z,Z} f = (1/2)(A_{1,0} + A_{1,0}^T),   H_{Z*,Z*} f = (1/2)(A_{0,1} + A_{0,1}^T),
H_{Z,Z*} f = (1/2)(A_{0,0} + A_{1,1}^T),   H_{Z*,Z} f = (1/2)(A_{0,0}^T + A_{1,1}).   (52)
The complex Hessian matrices of the scalar function f can be computed using a three-step procedure:
1) Compute the second-order differential d²f.
2) Arrange d²f in the form given in (51).
3) Identify the four complex Hessian matrices from the matrices A_{k,l} by means of (52).
In order to check convexity of f, the middle matrix on the right-hand side of (50) must be positive definite.
0_{N×N}. Then (52) gives the four complex Hessian matrices H_{Z,Z} f = 0_{N×N}, H_{Z*,Z*} f = 0_{N×N}, H_{Z*,Z} f = Φ^T, and
H_{Z,Z*} f = Φ.
2) Complex Hessian of f (Z, Z ∗ ): See the discussion around (29) for an introduction to the eigenvalue problem.
d²λ(Z) is now found at Z = Z_0. The derivation here is similar to the one in [7], where the same result for
d²λ was found. Applying the differential operator to both sides of (30) results in: 2(dZ)(du) + Z_0 d²u =
(d²λ) u_0 + 2(dλ) du + λ_0 d²u. Left-multiplying this equation by v_0^H and solving for d²λ gives

d²λ = 2 v_0^H (dZ − I_N dλ) du / (v_0^H u_0)
    = 2 v_0^H ( dZ − (v_0^H (dZ) u_0 / (v_0^H u_0)) I_N ) du / (v_0^H u_0)
    = (2 / (v_0^H u_0)) v_0^H (dZ) (I_N − u_0 v_0^H / (v_0^H u_0)) (λ_0 I_N − Z_0)^+ (I_N − u_0 v_0^H / (v_0^H u_0)) (dZ) u_0,   (53)
where (31) and (36) were utilized. d²λ can be reformulated by means of Theorem 2.3 in [7] in the following way:

d²λ = (2 / (v_0^H u_0)) Tr{ u_0 v_0^H (dZ) (I_N − u_0 v_0^H / (v_0^H u_0)) (λ_0 I_N − Z_0)^+ (I_N − u_0 v_0^H / (v_0^H u_0)) (dZ) }
    = (2 / (v_0^H u_0)) d vec^T(Z) [ (I_N − u_0 v_0^H / (v_0^H u_0)) (λ_0 I_N − Z_0)^+ (I_N − u_0 v_0^H / (v_0^H u_0)) ⊗ v_0* u_0^T ] K_{N,N} d vec(Z).   (54)
From this expression, it is possible to identify the four complex Hessian matrices by means of (51) and (52).
Let f : C^{N×Q} × C^{N×Q} → C be given by f(Z, Z*) = Tr{Z A Z^H}, where Z ∈ C^{N×Q} and A ∈ C^{Q×Q}. From
Theorem 2.3 in [7], f can be written as f = vec^T(Z*) (A^T ⊗ I_N) vec(Z). By applying the differential operator twice
to the latter expression for f, it follows that d²f = 2 d vec^T(Z*) (A^T ⊗ I_N) d vec(Z). From this expression, it
is possible to identify the four complex Hessian matrices by means of (51) and (52).
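Since f(Z, Z*) = Tr{Z A Z^H} is quadratic in (Z, Z*), the expansion f(Z + dZ) = f(Z) + df + (1/2) d²f holds exactly, which gives a simple numerical check of the second-order differential above. The sketch below uses assumed random matrices.

```python
import numpy as np

rng = np.random.default_rng(8)

# Small numerical sketch (illustrative) for f(Z, Z*) = Tr{Z A Z^H}: since f is quadratic
# in (Z, Z*), f(Z+dZ) = f(Z) + df + (1/2) d^2 f exactly, with
#   df   = vec^T(dZ*)(A^T ⊗ I) vec(Z) + vec^T(Z*)(A^T ⊗ I) vec(dZ)  and
#   d^2f = 2 vec^T(dZ*)(A^T ⊗ I) vec(dZ).
def vec(M):
    return M.reshape(-1, order="F")

N, Q = 3, 4
Z  = rng.standard_normal((N, Q)) + 1j * rng.standard_normal((N, Q))
dZ = rng.standard_normal((N, Q)) + 1j * rng.standard_normal((N, Q))
A  = rng.standard_normal((Q, Q)) + 1j * rng.standard_normal((Q, Q))

f = lambda M: np.trace(M @ A @ M.conj().T)
G = np.kron(A.T, np.eye(N))                      # A^T ⊗ I_N

df  = vec(dZ.conj()) @ G @ vec(Z) + vec(Z.conj()) @ G @ vec(dZ)
d2f = 2 * vec(dZ.conj()) @ G @ vec(dZ)
print(abs(f(Z + dZ) - (f(Z) + df + 0.5 * d2f)))  # ~ 0 (exact up to roundoff)
```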
VIII. CONCLUSIONS
An introduction is given to a set of very powerful tools that can be used to systematically find the derivative of
complex-valued matrix functions that are dependent on complex-valued matrices. The key idea is to go through the
complex differential of the function and to treat the differential of the complex variable and its complex conjugate
as independent. This general framework can be used in many optimization problems that depend on complex-valued matrices.
APPENDIX I
PROOF OF PROPOSITION 1
Proof: (8) leads to dZ+ = d(Z+ZZ+) = (d(Z+Z))Z+ + Z+Z dZ+. If Z dZ+ is found from d(ZZ+) = (dZ)Z+ + Z dZ+ and inserted into the previous expression, it follows that

dZ+ = (d(Z+Z))Z+ + Z+ d(ZZ+) − Z+(dZ)Z+.   (55)

It is seen from (55) that it remains to express d(Z+Z) and d(ZZ+) in terms of dZ and dZ*. Firstly, d(Z+Z) is
handled. Since Z+Z is Hermitian and idempotent, d(Z+Z) = (d(Z+Z))(Z+Z) + (Z+Z) d(Z+Z), and since d(Z+Z) is Hermitian, this can be written as

d(Z+Z) = (Z+ Z d(Z+Z))^H + Z+ Z d(Z+Z).   (56)
The expression Z d(Z+Z) can be found from dZ = d(ZZ+Z) = (dZ)Z+Z + Z d(Z+Z), and it is given by
Z d(Z+Z) = dZ − (dZ)Z+Z = (dZ)(I_Q − Z+Z). If this expression is inserted into (56), it is found that:

d(Z+Z) = (Z+(dZ)(I_Q − Z+Z))^H + Z+(dZ)(I_Q − Z+Z) = (I_Q − Z+Z)(dZ^H)(Z+)^H + Z+(dZ)(I_Q − Z+Z).

Secondly, it can be shown in a similar manner that d(ZZ+) = (I_N − ZZ+)(dZ)Z+ + (Z+)^H(dZ^H)(I_N − ZZ+).
If the expressions for d(Z+Z) and d(ZZ+) are inserted into (55), (9) is obtained.
APPENDIX II
PROOF OF LEMMA 1
Proof: Let Ai ∈ CM ×N Q be an arbitrary complex-valued function of Z ∈ C N ×Q and Z ∗ ∈ CN ×Q .
From Table II, it follows that d vec(Z) = d vec(Re {Z}) + jd vec(Im {Z}) and d vec(Z ∗ ) = d vec(Re {Z}) −
jd vec(Im {Z}). If these two expressions are substituted into A 0 d vec(Z) + A1 d vec(Z ∗ ) = 0M ×1 , then it follows
that A0 (d vec(Re {Z}) + jd vec(Im {Z})) + A1 (d vec(Re {Z}) − jd vec(Im {Z})) = 0M ×1 . The last expression
is equivalent to: (A0 + A1 )d vec(Re {Z}) + j(A0 − A1 )d vec(Im {Z}) = 0M ×1 . Since the differentials d Re {Z}
and d Im{Z} are independent, so are d vec(Re{Z}) and d vec(Im{Z}). Therefore, A_0 + A_1 = 0_{M×NQ} and A_0 − A_1 = 0_{M×NQ}, and hence A_0 = A_1 = 0_{M×NQ}.
APPENDIX III
PROOF OF LEMMA 2
First, three other results are stated and proven.

Lemma 4: Let A, B ∈ C^{N×N}. Then z^T A z = z^T B z for all z ∈ C^{N×1} if and only if A + A^T = B + B^T.

Proof: Let (A)_{k,l} = a_{k,l} and (B)_{k,l} = b_{k,l}. Assume first that z^T A z = z^T B z ∀ z ∈ C^{N×1}, and set z = e_k,
where k ∈ {0, 1, . . . , N − 1}. Then e Tk Aek = eTk Bek , gives that ak,k = bk,k for all k ∈ {0, 1, . . . , N − 1}. Setting
z = ek + el leads to (eTk + eTl )A(ek + el ) = (eTk + eTl )B(ek + el ), which results in ak,k + al,l + ak,l + al,k =
bk,k +bl,l +bk,l +bl,k . Eliminating equal terms gives a k,l +al,k = bk,l +bl,k which can be written A+A T = B +B T .
Assuming instead that A + A^T = B + B^T, it follows that z^T A z = (1/2)(z^T A z + z^T A^T z) = (1/2) z^T (A + A^T) z =
(1/2) z^T (B + B^T) z = (1/2)(z^T B z + z^T B^T z) = z^T B z, for all z ∈ C^{N×1}.
Lemma 5: Let A, B ∈ C^{N×N}. Then z^H A z = z^H B z for all z ∈ C^{N×1} if and only if A = B.

Proof: Let (A)_{k,l} = a_{k,l} and (B)_{k,l} = b_{k,l}. Assume first that z^H A z = z^H B z ∀ z ∈ C^{N×1}, and set z = e_k,
where k ∈ {0, 1, . . . , N − 1}. This gives that a_{k,k} = b_{k,k} for all k ∈ {0, 1, . . . , N − 1}. Also, in the same way as in
the proof of Lemma 4, setting z = e_k + e_l leads to A + A^T = B + B^T. Next, set z = e_k + j e_l; then similar manipulations lead to A − A^T = B − B^T, and combining the two results gives A = B. The converse direction is trivial.

Corollary 1: Let B ∈ C^{N×N}. Then z^T B z = 0 for all z ∈ C^{N×1} if and only if B = −B^T. This follows from Lemma 4 by setting one of the two matrices there equal to 0_{N×N}.

Lemma 2: Let B_i ∈ C^{NQ×NQ}, i ∈ {0, 1, 2}, possibly depend on Z and Z* but not on dZ. If d vec^T(Z) B_0 d vec(Z) + d vec^T(Z*) B_1 d vec(Z) + d vec^T(Z*) B_2 d vec(Z*) = 0 for all dZ ∈ C^{N×Q}, then B_1 = 0_{NQ×NQ}, B_0 = −B_0^T, and B_2 = −B_2^T.
Proof: Substitution of d vec(Z) = d vec(Re {Z}) + jd vec(Im {Z}) and d vec(Z ∗ ) = d vec(Re {Z}) −
j d vec(Im{Z}) into the second-order differential expression given in the lemma leads to:

d vec^T(Re{Z}) [B_0 + B_1 + B_2] d vec(Re{Z}) + d vec^T(Im{Z}) [−B_0 + B_1 − B_2] d vec(Im{Z})
+ d vec^T(Re{Z}) [ j(B_0 + B_0^T) + j(B_1 − B_1^T) − j(B_2 + B_2^T) ] d vec(Im{Z}) = 0.   (57)
(57) is valid for all dZ and the differentials of d vec(Re {Z}) and d vec(Im {Z}) are independent. If d vec(Im {Z})
is set to the zero vector, then it follows from (57) and Corollary 1 that:
B 0 + B 1 + B 2 = −B T0 − B T1 − B T2 . (58)
In the same way, setting d vec(Re {Z}) to the zero vector, then it follows from (57) and Corollary 1 that:
−B 0 + B 1 − B 2 = B T0 − B T1 + B T2 . (59)
Because of the skew symmetry in (58) and (59), the first two terms of (57) vanish identically; since d vec(Re{Z}) and d vec(Im{Z}) are linearly independent, the remaining cross term must vanish, i.e.,

B_0 + B_0^T + B_1 − B_1^T − (B_2 + B_2^T) = 0_{NQ×NQ}.   (60)
(58), (59), and (60) lead to B 0 = −B T0 , B 1 = −B T1 , and B 2 = −B T2 . Since the matrices B 0 and B 2 are skew,
Corollary 1 reduces d vecT (Z) B 0 d vec(Z) + d vecT (Z ∗ ) B 1 d vec(Z) + d vecT (Z ∗ ) B 2 d vec(Z ∗ ) = 0 to
d vecT (Z ∗ ) B 1 d vec(Z) = 0. Then Lemma 5 results in B 1 = 0N Q×N Q .
APPENDIX IV
PROOF OF LEMMA 3
Proof: Let the symbol ⊆ denote subset. From (7) and the definition of R(A), it follows that R(A) =
{w ∈ C^{1×Q} | w = zA = zAA+A, for some z ∈ C^{1×N}} ⊆ R(A+A). From the definition of R(A+A), it follows
that R(A+A) = {w ∈ C^{1×Q} | w = zA+A, for some z ∈ C^{1×Q}} ⊆ R(A). From R(A) ⊆ R(A+A) and
R(A+A) ⊆ R(A), the first equality of (33) follows. From (7) and the definition of C(A), it follows that:
C(A) = {w ∈ C^{N×1} | w = Az = AA+Az, for some z ∈ C^{Q×1}} ⊆ C(AA+). From the definition of C(AA+), it
follows that: C(AA+) = {w ∈ C^{N×1} | w = AA+z, for some z ∈ C^{N×1}} ⊆ C(A). From C(A) ⊆ C(AA+)
and C(AA+) ⊆ C(A), the second equality of (33) follows. The third equation of (33) is a direct consequence of
the first two equalities, since the rank of a matrix equals the dimension of its row (or column) space.
APPENDIX V

The following results are valid for the Moore-Penrose inverse:

(A+)+ = A,   (62)
(A^H)+ = (A+)^H,   (63)
A^H = A^H A A+ = A+ A A^H,   (64)
A+ = A^H (A+)^H A+ = A+ (A+)^H A^H,   (65)
(A^H A)+ = A+ (A+)^H,   (66)
(A A^H)+ = (A+)^H A+,   (67)
A+ = (A^H A)+ A^H = A^H (A A^H)+,   (68)
A+ = (A^H A)^{-1} A^H if A has full column rank,   (69)
A+ = A^H (A A^H)^{-1} if A has full row rank,   (70)
AB = 0_{N×R} ⇔ B+ A+ = 0_{R×N}.   (71)
Proof: (61), (62), and (63) can be proved by direct insertion into Definition 3 of the Moore-Penrose inverse.
The first part of (64) can be proved as A^H = A^H (A^H)+ A^H = A^H (A A+)^H = A^H A A+, where the result from (63) was used. The second part of (64) can be proved in a similar way: A^H = A^H (A^H)+ A^H = (A+ A)^H A^H = A+ A A^H. The first part of (65) can be shown by A+ = A+ A A+ = (A+ A)^H A+ = A^H (A+)^H A+. The second part of (65) follows from A+ = A+ A A+ = A+ (A A+)^H = A+ (A+)^H A^H.
(66) and (67) can be proved by using the results from (64) and (65) in the definition of the Moore-Penrose inverse.
Now, (71) will be shown. Firstly, it is shown that AB = 0_{N×R} implies B+ A+ = 0_{R×N}. Assume that AB = 0_{N×R}. From (68), it follows that

B+ A+ = (B^H B)+ B^H A^H (A A^H)+.   (72)

AB = 0_{N×R} leads to B^H A^H = 0_{R×N}, and then (72) yields B+ A+ = 0_{R×N}. Secondly, it will be shown that
B+ A+ = 0_{R×N} implies AB = 0_{N×R}. Assume that B+ A+ = 0_{R×N}. Using the implication just proved, i.e., if
CD = 0_{M×P}, then D+ C+ = 0_{P×M}, where M and P are integers given by the sizes of the matrices C and D,
gives (A+)+ (B+)+ = 0_{N×R}, and then the desired result AB = 0_{N×R} follows from (62).
REFERENCES
[1] A. Paulraj, R. Nabar, and D. Gore, Introduction to Space-Time Wireless Communications. Cambridge, United Kingdom: Cambridge University Press, 2003.
[2] F. J. González-Vázquez, “The differentiation of functions of conjugate complex variables: Application to power network analysis,”
IEEE Trans. Education, vol. 31, no. 4, pp. 286–291, Nov. 1988.
[3] S. Alexander, “A derivation of the complex fast Kalman algorithm,” IEEE Trans. Acoust., Speech, Signal Process., vol. 32, no. 6, pp.
[4] A. I. Hanna and D. P. Mandic, “A fully adaptive normalized nonlinear gradient descent algorithm for complex-valued nonlinear adaptive
filters,” IEEE Trans. Signal Process., vol. 51, no. 10, pp. 2540–2549, Oct. 2003.
[5] E. Kreyszig, Advanced Engineering Mathematics, 7th ed. New York, USA: John Wiley & Sons, Inc., 1993.
[6] J. W. Brewer, “Kronecker products and matrix calculus in system theory,” IEEE Trans. Circuits, Syst., vol. CAS-25, no. 9, pp. 772–781,
Sept. 1978.
[7] J. R. Magnus and H. Neudecker, Matrix Differential Calculus with Application in Statistics and Econometrics. Essex, England: John
[8] D. A. Harville, Matrix Algebra from a Statistician’s Perspective. Springer-Verlag, 1997, corrected second printing, 1999.
[9] T. P. Minka. (December 28, 2000) Old and new matrix algebra useful for statistics. [Online]. Available: http://research.microsoft.com/
~minka/papers/matrix/
[10] K. Kreutz-Delgado, “Real vector derivatives and gradients,” Dept. of Electrical and Computer Engineering, UC San Diego, Tech. Rep.
[11] S. Haykin, Adaptive Filter Theory, 4th ed. Englewood Cliffs, New Jersey, USA: Prentice Hall, 1991.
[12] A. H. Sayed, Fundamentals of Adaptive Filtering. IEEE Computer Society Press, 2003.
[13] M. H. Hayes, Statistical Digital Signal Processing and Modeling. John Wiley & Sons, Inc., 1996.
[14] A. van den Bos, “Complex gradient and Hessian,” Proc. IEE Vision, Image and Signal Process., vol. 141, no. 6, pp. 380–383, Dec.
1994.
[15] D. H. Brandwood, “A complex gradient operator and its application in adaptive array theory,” IEE Proc., Parts F and H, vol. 130,
[17] K. Kreutz-Delgado, “The complex gradient operator and the CR-calculus,” Dept. of Electrical and Computer Engineering, UC San Diego,
Tech. Rep. Course Lecture Supplement No. ECE275A, Sept.-Dec. 2005, http://dsp.ucsd.edu/~kreutz/PEI05.html.
[18] G. Jöngren, M. Skoglund, and B. Ottersten, “Combining beamforming and orthogonal space-time block coding,” IEEE Trans. Inform.
[19] D. Gesbert, M. Shafi, D. Shiu, P. J. Smith, and A. Naguib, “From theory to practice: An overview of MIMO space-time coded wireless
systems,” IEEE J. Select. Areas Commun., vol. 21, no. 3, pp. 281–302, Apr. 2003.
[20] D. H. Johnson and D. A. Dudgeon, Array Signal Processing: Concepts and Techniques. Prentice-Hall, Inc., 1993.
[21] W. Wirtinger, “Zur formalen theorie der funktionen von mehr komplexen veränderlichen,” Mathematische Annalen, vol. 97, pp. 357–375,
1927.
[22] R. A. Horn and C. R. Johnson, Matrix Analysis. Cambridge University Press Cambridge, UK, 1985, reprinted 1999.
[23] ——, Topics in Matrix Analysis. Cambridge University Press Cambridge, UK, 1991, reprinted 1999.
[24] D. G. Luenberger, Introduction to Linear and Nonlinear Programming. Reading, Massachusetts: Addison–Wesley, 1973.
[25] N. Young, An Introduction to Hilbert Space. The Pitt Building, Trumpington Street, Cambridge: Press Syndicate of the University of
[26] C. H. Edwards and D. E. Penney, Calculus and Analytic Geometry, 2nd ed. Englewood Cliffs, New Jersey: Prentice-Hall, Inc, 1986.
[27] H. Sampath and A. Paulraj, “Linear precoding for space-time coded systems with known fading correlations,” IEEE Commun. Lett.,
[28] A. Hjørungnes and D. Gesbert, “Precoding of orthogonal space-time block codes in arbitrarily correlated MIMO channels: Iterative
and closed-form solutions,” IEEE Trans. Wireless Commun., 2005, accepted 11.10.2005.
[29] T. J. Abatzoglou, J. M. Mendel, and G. A. Harada, “The constrained total least squares technique and its applications to harmonic
superresolution,” IEEE Trans. Signal Process., vol. 39, no. 5, pp. 1070–1087, May 1991.
[30] G. Yan and H. Fan, “A Newton-like algorithm for complex variables with applications in blind equalization,” IEEE Trans. Signal