A Some Basic Rules of Tensor Calculus
The tensor calculus is a powerful tool for the description of the fundamentals in con-
tinuum mechanics and the derivation of the governing equations for applied prob-
lems. In general, there are two possibilities for the representation of the tensors and
the tensorial equations:
the direct (symbolic) notation and
the index (component) notation
The direct notation operates with scalars, vectors and tensors as physical objects
defined in the three dimensional space. A vector (first rank tensor) a is considered
as a directed line segment rather than a triple of numbers (coordinates). A second
rank tensor A is any finite sum of ordered vector pairs A = a b + . . . + c d. The
scalars, vectors and tensors are handled as invariant (independent from the choice of
the coordinate system) objects. This is the reason for the use of the direct notation
in the modern literature on mechanics and rheology, e.g. [29, 32, 49, 123, 131, 199,
246, 313, 334] among others.
The index notation deals with components or coordinates of vectors and tensors.
For a selected basis, e.g. g_i, i = 1, 2, 3, one can write
   a = a^i g_i,   A = (a^i b^j + . . . + c^i d^j) g_i g_j
Here Einstein's summation convention is used: an index repeated twice within one expression implies summation from 1 to 3, e.g.
   a^k g_k ≡ ∑_{k=1}^{3} a^k g_k,   A^{ik} b_k ≡ ∑_{k=1}^{3} A^{ik} b_k
In the above examples k is a so-called dummy index. Within the index notation the
basic operations with tensors are defined with respect to their coordinates, e.g. the
sum of two vectors is computed as a sum of their coordinates ci = ai + bi . The
introduced basis remains in the background. It must be remembered that a change
of the coordinate system leads to the change of the components of tensors.
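For readers who want to experiment with the component form, the following short Python/NumPy sketch (with an assumed orthonormal basis and made-up coordinates, chosen for illustration only) mimics the summation convention with einsum:

```python
import numpy as np

# A minimal numerical sketch of the summation convention (assumed
# orthonormal basis, so co- and contravariant components coincide).
g = np.eye(3)                      # basis vectors g_1, g_2, g_3 as rows
a_coords = np.array([1.0, 2.0, 3.0])

# a = a^k g_k  -- the repeated index k is summed from 1 to 3
a = np.einsum('k,kj->j', a_coords, g)

A = np.arange(9.0).reshape(3, 3)   # coordinates A^{ik}
b = np.array([1.0, 0.0, -1.0])     # coordinates b_k
c = np.einsum('ik,k->i', A, b)     # c^i = A^{ik} b_k
print(a, c)
```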
In this work we prefer the direct tensor notation over the index one. When solv-
ing applied problems the tensor equations can be translated into the language
of matrices for a specified coordinate system. The purpose of this Appendix is to
give a brief guide to notations and rules of the tensor calculus applied through-
out this work. For more comprehensive overviews on tensor calculus we recom-
mend [54, 96, 123, 191, 199, 311, 334]. The calculus of matrices is presented in
[40, 111, 340], for example. Section A.1 provides a brief overview of basic alge-
braic operations with vectors and second rank tensors. Several rules from tensor
analysis are summarized in Sect. A.2. Basic sets of invariants for different groups
of symmetry transformations are presented in Sect. A.3, where a novel approach to
find the functional basis is discussed.
Figure A.1 Axial vectors. a Spin vector, b axial vector in the right-screw oriented reference frame, c axial vector in the left-screw oriented reference frame
A.1 Basic Operations of Tensor Algebra
Multiplication by a Scalar. For any vector a and for any scalar α a vector b = αa is defined in such a way that
– |b| = |α| |a|,
– for α > 0 the direction of b coincides with that of a,
– for α < 0 the direction of b is opposite to that of a.
For α = 0 the product yields the zero vector, i.e. 0a = 0. It is easy to verify that
   α(a + b) = αa + αb,   (α + β)a = αa + βa
Figure A.2 Addition of two vectors. a Parallelogram rule, b triangle rule
Figure A.3 Scalar product of two vectors. a Angles between two vectors, b unit vector and projection
Scalar (Dot) Product of Two Vectors. For any pair of vectors a and b a scalar α is defined by
   α = a · b = |a| |b| cos φ,
where φ is the angle between the vectors a and b. As φ one can use any of the two angles between the vectors, Fig. A.3a. The properties of the scalar product are
   a · b = b · a (commutativity),
   a · (b + c) = a · b + a · c (distributivity)
Two nonzero vectors are said to be orthogonal if their scalar product is zero. The unit vector directed along the vector a is defined by (see Fig. A.3b)
   n_a = a / |a|
The projection of the vector b onto the vector a is the vector (b · n_a) n_a, Fig. A.3b. The length of the projection is |b| |cos φ|.
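A minimal numerical sketch of the scalar product, the unit vector and the projection (Python/NumPy, illustrative values only):

```python
import numpy as np

a = np.array([3.0, 0.0, 4.0])
b = np.array([1.0, 2.0, 2.0])

alpha = a @ b                              # alpha = a . b = |a||b| cos(phi)
cos_phi = alpha / (np.linalg.norm(a) * np.linalg.norm(b))

n_a = a / np.linalg.norm(a)                # unit vector along a
proj_b_on_a = (b @ n_a) * n_a              # projection (b . n_a) n_a
length_of_projection = abs(b @ n_a)        # equals |b||cos(phi)|

print(alpha, cos_phi, proj_b_on_a, length_of_projection)
```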
Vector (Cross) Product of Two Vectors. For the ordered pair of vectors a and b the vector c = a × b is defined in the following two steps [334]:
– the spin vector is defined in such a way that
  – the axis is orthogonal to the plane spanned on a and b, Fig. A.4a,
  – the circular arrow shows the direction of the shortest rotation from a to b, Fig. A.4b,
  – the length is |a| |b| sin φ, where φ is the angle of the shortest rotation from a to b,
– from the resulting spin vector the directed line segment c is constructed according to one of the rules listed in Subsect. A.1.1.
The properties of the vector product are
   a × b = −b × a,   a × (b + c) = a × b + a × c
The type of the vector c = a × b can be established for known types of the vectors a and b, [334]. If a and b are polar vectors the result of the cross product
Figure A.4 Vector product of two vectors. a Plane spanned on two vectors, b spin vector, c axial vector in the right-screw oriented reference frame
will be an axial vector. An example is the moment of momentum for a mass point m defined by r × (mv), where r is the position of the mass point and v is its velocity. The next example is the formula for the distribution of velocities in a rigid body, v = ω × r. Here the cross product of the axial vector ω (the angular velocity) with the polar vector r (the position vector) results in the polar vector v.
The mixed product of three vectors a, b and c is defined by (a × b) · c. The result is a scalar. For the mixed product the following identities are valid
   a · (b × c) = b · (c × a) = c · (a × b)   (A.1.1)
If the cross product is applied twice, the first operation must be set in parentheses, e.g. a × (b × c). The result of this operation is a vector. The following relation can be applied
   a × (b × c) = b (a · c) − c (a · b)   (A.1.2)
By use of (A.1.1) and (A.1.2) one can calculate
   (a × b) · (c × d) = a · [b × (c × d)] = a · [c (b · d) − d (b · c)]
                     = (a · c)(b · d) − (a · d)(b · c)   (A.1.3)
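The identities (A.1.1)–(A.1.3) are easy to check numerically; the following Python/NumPy sketch does so for arbitrary (random) vectors:

```python
import numpy as np

# Numerical check of (A.1.1)-(A.1.3) for arbitrary vectors.
rng = np.random.default_rng(0)
a, b, c, d = rng.standard_normal((4, 3))

# mixed product (A.1.1): a . (b x c) = b . (c x a) = c . (a x b)
m1 = a @ np.cross(b, c)
m2 = b @ np.cross(c, a)
m3 = c @ np.cross(a, b)

# double cross product (A.1.2): a x (b x c) = b (a . c) - c (a . b)
lhs = np.cross(a, np.cross(b, c))
rhs = b * (a @ c) - c * (a @ b)

# (A.1.3): (a x b) . (c x d) = (a . c)(b . d) - (a . d)(b . c)
lhs2 = np.cross(a, b) @ np.cross(c, d)
rhs2 = (a @ c) * (b @ d) - (a @ d) * (b @ c)

print(np.isclose(m1, m2), np.isclose(m2, m3),
      np.allclose(lhs, rhs), np.isclose(lhs2, rhs2))
```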
A.1.3 Bases
Any triple of linearly independent vectors e_1, e_2, e_3 is called a basis. A triple of vectors e_i is linearly independent if and only if e_1 · (e_2 × e_3) ≠ 0.
For a given basis e_i any vector a can be represented as follows
   a = a^1 e_1 + a^2 e_2 + a^3 e_3 ≡ a^i e_i
The numbers a^i are called the coordinates of the vector a for the basis e_i. In order to compute the coordinates the dual (reciprocal) basis e^k is introduced in such a way that
   e^k · e_i = δ^k_i = 1 for k = i,   0 for k ≠ i
Therefore the coordinates can be computed by
   e^i · a = a · e^i = a^m e_m · e^i = a^m δ^i_m = a^i
For the selected basis e_i the dual basis can be found from
   e^1 = (e_2 × e_3) / [(e_1 × e_2) · e_3],   e^2 = (e_3 × e_1) / [(e_1 × e_2) · e_3],   e^3 = (e_1 × e_2) / [(e_1 × e_2) · e_3]   (A.1.4)
By use of the dual basis a vector a can be represented as follows
   a = a_1 e^1 + a_2 e^2 + a_3 e^3 ≡ a_m e^m,   a_m = a · e_m,   a^m ≠ a_m
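A short Python/NumPy sketch of the dual basis (A.1.4) and of the co- and contravariant coordinates, for an assumed non-orthogonal basis (made-up numbers):

```python
import numpy as np

e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([1.0, 1.0, 0.0])
e3 = np.array([0.0, 1.0, 2.0])
vol = np.cross(e1, e2) @ e3                     # (e1 x e2) . e3 != 0

d1 = np.cross(e2, e3) / vol                     # dual basis (A.1.4)
d2 = np.cross(e3, e1) / vol
d3 = np.cross(e1, e2) / vol

# e^k . e_i = delta^k_i
print(np.round([d1 @ e1, d1 @ e2, d2 @ e2, d3 @ e1, d3 @ e3], 12))

a = np.array([2.0, -1.0, 3.0])
a_contra = np.array([a @ d1, a @ d2, a @ d3])   # a^i = a . e^i
a_co = np.array([a @ e1, a @ e2, a @ e3])       # a_i = a . e_i

# a = a^i e_i reproduces the vector; in general a^i != a_i
print(np.allclose(a_contra[0]*e1 + a_contra[1]*e2 + a_contra[2]*e3, a))
print(a_contra, a_co)
```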
Let A = ∑_{i=1}^{n} a^{(i)} b^{(i)} be a second rank tensor. Introducing a basis e_k the vectors a^{(i)} can be represented by a^{(i)} = a^k_{(i)} e_k, where a^k_{(i)} are the coordinates of the vectors a^{(i)}. Now we may write
   A = ∑_{i=1}^{n} a^k_{(i)} e_k b^{(i)} = e_k ∑_{i=1}^{n} a^k_{(i)} b^{(i)} = e_k d^k,   d^k ≡ ∑_{i=1}^{n} a^k_{(i)} b^{(i)}
Addition. The sum of two tensors is defined as the sum of the corresponding
dyads. The sum has the properties of associativity and commutativity. In addition,
for a dyad a b the following operation is introduced
a (b + c ) = a b + a c , (a + b ) c = a c + b c
Multiplication by a Scalar. This operation is introduced first for one dyad. For any scalar α and any dyad a b
   α (a b) = (αa) b = a (αb),
   (α + β) a b = α a b + β a b   (A.1.5)
By setting α = 0 in the first equation of (A.1.5) the zero dyad can be defined, i.e. 0 (a b) = 0 b = a 0. The above operations can be generalized for any finite
sum of dyads, i.e. for second rank tensors.
Inner Dot Product. For any two second rank tensors A and B the inner dot product is specified by A · B. The rule and the result of this operation can be explained in the special case of two dyads, i.e. by setting A = a b and B = c d,
   A · B = a b · c d = (b · c) a d = α a d,   α ≡ b · c
The result of this operation is a second rank tensor. Note that A · B ≠ B · A in general. This can again be verified for two dyads. The operation can be generalized for two second rank tensors as follows
   A · B = (∑_{i=1}^{3} a^{(i)} b^{(i)}) · (∑_{k=1}^{3} c^{(k)} d^{(k)}) = ∑_{i=1}^{3} ∑_{k=1}^{3} (b^{(i)} · c^{(k)}) a^{(i)} d^{(k)}
Double Inner Dot Product. For any two second rank tensors A and B the double inner dot product is specified by A ·· B. The result of this operation is a scalar. This operation can be explained for two dyads as follows
   A ·· B = a b ·· c d = (b · c)(a · d)
By analogy to the inner dot product one can generalize this operation for two second rank tensors. It can be verified that A ·· B = B ·· A for any second rank tensors A and B. For a second rank tensor A and for a dyad a b
   A ·· a b = b · A · a   (A.1.6)
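In Cartesian components the inner dot product corresponds to the matrix product and the double inner dot product to a trace; the following Python/NumPy sketch checks this for dyads (illustrative values only):

```python
import numpy as np

# Convention of this appendix: a b . c d = (b . c) a d,  a b .. c d = (b . c)(a . d)
rng = np.random.default_rng(1)
a, b, c, d = rng.standard_normal((4, 3))

A = np.outer(a, b)                 # dyad a b
B = np.outer(c, d)                 # dyad c d

inner = A @ B                      # A . B = (b . c) a d
print(np.allclose(inner, (b @ c) * np.outer(a, d)))

double = np.trace(A @ B)           # A .. B = (b . c)(a . d)
print(np.isclose(double, (b @ c) * (a @ d)))

# A .. B = B .. A also for general second rank tensors
A2, B2 = rng.standard_normal((2, 3, 3))
print(np.isclose(np.trace(A2 @ B2), np.trace(B2 @ A2)))
```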
Dot Products of a Second Rank Tensor and a Vector. The right and the left dot products of a second rank tensor and a vector are introduced first for a dyad,
   a b · c = a (b · c),   c · a b = (c · a) b
The results of these operations are vectors. One can verify that
   A · c ≠ c · A,   A · c = c · A^T
Cross Products of a Second Rank Tensor and a Vector. The right cross
product of a second rank tensor A and a vector c is defined by
   A × c = (∑_{i=1}^{3} a^{(i)} b^{(i)}) × c = ∑_{i=1}^{3} a^{(i)} (b^{(i)} × c)
The results of these operations are second rank tensors. It can be shown that
   A × c = −[c × A^T]^T
Trace. The trace of a dyad is defined by tr (a b) ≡ a · b, i.e. by taking the trace of a second rank tensor the dyadic product is replaced by the dot product. It can be shown that
   tr A = tr A^T,   tr (A · B) = tr (B · A) = tr (A^T · B^T) = A ·· B
Second Rank Unit Tensor. The second rank unit tensor I satisfies
   c · I = I · c = c
for any vector c.
With respect to a selected basis e_i and the dual basis e^k the unit tensor has the representations
   I = e^k e_k = e_k e^k,   I = n n + m m + p p,
where m, n and p are orthonormal vectors. Examples of projectors are the tensors
   m m,   n n + p p = I − m m
The result of the dot product of the tensor m m with any vector a is the projection of the vector a onto the line spanned on the vector m, i.e. m m · a = (a · m) m. The result of (n n + p p) · a is the projection of the vector a onto the plane spanned on the vectors n and p.
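A small Python/NumPy sketch of the projectors m m and I − m m (with an assumed direction m):

```python
import numpy as np

m = np.array([0.0, 0.0, 1.0])
I = np.eye(3)

P_line = np.outer(m, m)            # projector onto the line spanned by m
P_plane = I - P_line               # projector onto the plane orthogonal to m

a = np.array([1.0, 2.0, 3.0])
print(P_line @ a)                  # (a . m) m = [0, 0, 3]
print(P_plane @ a)                 # in-plane part [1, 2, 0]

# projectors are idempotent and singular (det = 0)
print(np.allclose(P_line @ P_line, P_line), np.linalg.det(P_plane))
```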
Skew-symmetric Tensors. A second rank tensor is said to be skew-symmetric
if it satisfies the following equality
   A = −A^T
Any skew-symmetric tensor can be represented by
   A = a × I = I × a
The vector a is called the associated vector. Any second rank tensor can be uniquely decomposed into a symmetric and a skew-symmetric part
   A = 1/2 (A + A^T) + 1/2 (A − A^T) = A_1 + A_2,
   A_1 = 1/2 (A + A^T),   A_1 = A_1^T,
   A_2 = 1/2 (A − A^T),   A_2 = −A_2^T
Vector Invariant. The vector invariant (or "Gibbsian cross") of a second rank tensor A is defined by
   A_× = (∑_{i=1}^{3} a^{(i)} b^{(i)})_× = ∑_{i=1}^{3} a^{(i)} × b^{(i)}
The result of this operation is a vector. The vector invariant of a symmetric tensor is the zero vector. The following identities can be verified
   (a × I)_× = −2a,   a × I × b = b a − (a · b) I
The inverse of a second rank tensor, A^{−1}, is introduced as the solution of the following equation
   A^{−1} · A = A · A^{−1} = I
A is invertible if and only if det A ≠ 0. A tensor A with det A = 0 is called singular. Examples of singular tensors are projectors.
Cayley-Hamilton Theorem. Any second rank tensor satisfies the following equation
   A³ − J_1(A) A² + J_2(A) A − J_3(A) I = 0,   (A.1.7)
where A² = A · A, A³ = A · A · A and
   J_1(A) = tr A,   J_2(A) = 1/2 [(tr A)² − tr A²],
   J_3(A) = det A = 1/6 (tr A)³ − 1/2 tr A tr A² + 1/3 tr A³   (A.1.8)
The scalar-valued functions Ji ( A ) are called principal invariants of the tensor A .
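The invariants (A.1.8) and the Cayley–Hamilton theorem (A.1.7) can be verified numerically, e.g. with the following Python/NumPy sketch (random test tensor):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))

J1 = np.trace(A)
J2 = 0.5 * (np.trace(A)**2 - np.trace(A @ A))
J3 = (np.trace(A)**3 / 6.0
      - 0.5 * np.trace(A) * np.trace(A @ A)
      + np.trace(A @ A @ A) / 3.0)

print(np.isclose(J3, np.linalg.det(A)))        # J3(A) = det A

# A^3 - J1 A^2 + J2 A - J3 I = 0
residual = A @ A @ A - J1 * (A @ A) + J2 * A - J3 * np.eye(3)
print(np.allclose(residual, 0.0))
```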
Coordinates of Second Rank Tensors. Let e i be a basis and e k the dual basis.
Any two vectors a and b can be represented as follows
   a = a^i e_i = a_j e^j,   b = b^l e_l = b_m e^m
The dyad a b has the representations
   a b = a^i b^j e_i e_j = a^i b_j e_i e^j = a_i b^j e^i e_j = a_i b_j e^i e^j
For the representation of a second rank tensor A one of the following four bases can be used
   e_i e_j,   e^i e^j,   e_i e^j,   e^i e_j
With these bases one can write
   A = A^{ij} e_i e_j = A_{ij} e^i e^j = A^{i}_{j} e_i e^j = A_{i}^{j} e^i e_j
For a selected basis the coordinates of a second rank tensor can be computed as follows
   A^{ij} = e^i · A · e^j,   A_{ij} = e_i · A · e_j,
   A^{i}_{j} = e^i · A · e_j,   A_{i}^{j} = e_i · A · e^j
Principal Values and Directions of Symmetric Second Rank Tensors.
Consider the dot product of a second rank tensor A and a unit vector n. The resulting vector a = A · n differs in general from n both in length and in direction. However, one can find those unit vectors n for which A · n is collinear with n, i.e. only the length of n is changed. Such vectors can be found from the equation
   A · n = λ n   or   (A − λ I) · n = 0   (A.1.9)
The unit vector n is called a principal vector and the scalar λ a principal value of the tensor A. Let A be a symmetric tensor. In this case the principal values are real numbers and there exist at least three mutually orthogonal principal vectors. The principal values can be found as the roots of the characteristic polynomial
   det (A − λ I) = −λ³ + J_1(A) λ² − J_2(A) λ + J_3(A) = 0
The principal values are denoted by λ_I, λ_II, λ_III. For known principal values and principal directions the second rank tensor can be represented as follows (spectral representation)
   A = λ_I n_I n_I + λ_II n_II n_II + λ_III n_III n_III
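The spectral representation can be illustrated with a numerical eigendecomposition; the following Python/NumPy sketch uses a random symmetric tensor:

```python
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((3, 3))
A = 0.5 * (B + B.T)                           # symmetric tensor

lam, N = np.linalg.eigh(A)                    # columns of N are n_I, n_II, n_III

# eigenvectors are mutually orthogonal unit vectors
print(np.allclose(N.T @ N, np.eye(3)))

# spectral representation reproduces A
A_spec = sum(lam[i] * np.outer(N[:, i], N[:, i]) for i in range(3))
print(np.allclose(A_spec, A))

# the principal values are roots of det(A - lambda I) = 0
print(np.allclose(np.linalg.det(A - lam[0] * np.eye(3)), 0.0))
```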
Orthogonal Tensors. A second rank tensor Q is called orthogonal if it does not change the length of a vector to which it is applied, i.e. for b = Q · a
   |b|² = b · b = a · Q^T · Q · a = a · a = |a|²
Furthermore, an orthogonal tensor does not change the scalar product of two arbitrary vectors. For two vectors a and b as well as a′ = Q · a and b′ = Q · b one can calculate
   a′ · b′ = a · Q^T · Q · b = a · b
From the definition of the orthogonal tensor it follows that
   Q^T = Q^{−1},   Q^T · Q = Q · Q^T = I,
   det (Q · Q^T) = (det Q)² = det I = 1   ⇒   det Q = ±1
Orthogonal tensors with det Q = 1 are called proper orthogonal or rotation tensors.
The rotation tensors are widely used in the rigid body dynamics, e.g. [333], and in
the theories of rods, plates and shells, e.g. [25, 32]. Any orthogonal tensor is either
the rotation tensor or the composition of the rotation (proper orthogonal tensor) and the tensor −I. Let P be a rotation tensor, det P = 1; then an orthogonal tensor Q with det Q = −1 can be composed by
   Q = (−I) · P = P · (−I),   det Q = det(−I) det P = −1
For any two orthogonal tensors Q_1 and Q_2 the composition Q_3 = Q_1 · Q_2 is an orthogonal tensor, too. This property is used in the theory of symmetry and symmetry groups, e.g. [232, 331]. Two important examples of orthogonal tensors are
– the rotation tensor about a fixed axis
   Q(φm) = m m + cos φ (I − m m) + sin φ m × I,   det Q = 1,
  where the unit vector m represents the axis and φ is the angle of rotation,
– the reflection tensor
   Q = I − 2 n n,   det Q = −1,
  where the unit vector n represents a normal to the mirror plane.
One can prove the following identities [334]
   (Q · a) × (Q · b) = (det Q) Q · (a × b),
   Q · (a × Q^T) = Q · (a × I) · Q^T = (det Q) [(Q · a) × I]   (A.1.10)
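A Python/NumPy sketch of the two examples above (rotation about an assumed axis m and reflection with respect to an assumed normal), checking orthogonality and the determinant:

```python
import numpy as np

def axial_cross(m):
    """Tensor m x I, i.e. the skew tensor with (m x I) . a = m x a."""
    return np.array([[0.0, -m[2], m[1]],
                     [m[2], 0.0, -m[0]],
                     [-m[1], m[0], 0.0]])

I = np.eye(3)
m = np.array([0.0, 0.0, 1.0])
n = np.array([1.0, 0.0, 0.0])
phi = 0.7

Q_rot = np.outer(m, m) + np.cos(phi) * (I - np.outer(m, m)) + np.sin(phi) * axial_cross(m)
Q_ref = I - 2.0 * np.outer(n, n)

for Q in (Q_rot, Q_ref):
    print(np.allclose(Q.T @ Q, I), round(np.linalg.det(Q), 12))
# rotation: orthogonal with det = +1, reflection: orthogonal with det = -1
```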
A.2 Elements of Tensor Analysis

Let x_1, x_2, x_3 be Cartesian coordinates with the fixed orthonormal basis e_i. The position vector of a point is
   r(x_1, x_2, x_3) = x_1 e_1 + x_2 e_2 + x_3 e_3 = x_i e_i
Now let q^1, q^2, q^3 be curvilinear coordinates related to the Cartesian coordinates by
   x_k = x_k(q^1, q^2, q^3)   ⇔   q^k = q^k(x_1, x_2, x_3)
It is assumed that the above transformations are continuous and continuously differentiable as many times as necessary and that for the Jacobians
   det (∂x^k/∂q^i) ≠ 0,   det (∂q^i/∂x^k) ≠ 0
must be valid. With these assumptions the position vector can be considered as a function of the curvilinear coordinates q^i, i.e. r = r(q^1, q^2, q^3). Surfaces q^1 = const, q^2 = const and q^3 = const, Fig. A.5, are called coordinate surfaces. For fixed values of q^2 and q^3 a curve can be obtained along which only q^1 varies. This curve is called the q^1-coordinate line, Fig. A.5. Analogously, one can obtain the q^2- and q^3-coordinate lines.

Figure A.5 Cartesian and curvilinear coordinates

The partial derivatives of the position vector with respect to the selected coordinates
   r_1 = ∂r/∂q^1,   r_2 = ∂r/∂q^2,   r_3 = ∂r/∂q^3,   r_1 · (r_2 × r_3) ≠ 0
define the tangential vectors to the coordinate lines in a point P, Fig. A.5. The vec-
tors r i are used as the local basis in the point P. By use of (A.1.4) the dual basis
r^k can be introduced. The vector dr connecting the point P with a point P′ in the differential neighborhood of P is defined by
   dr = ∂r/∂q^1 dq^1 + ∂r/∂q^2 dq^2 + ∂r/∂q^3 dq^3 = r_k dq^k
The square of the arc length of the line element in the differential neighborhood of P is calculated by
   ds² = dr · dr = (r_i dq^i) · (r_k dq^k) = g_{ik} dq^i dq^k,
where g_{ik} ≡ r_i · r_k are the so-called covariant components of the metric tensor. With g_{ik} one can represent the basis vectors r_i by the dual basis vectors r^k as follows
   r_i = (r_i · r_k) r^k = g_{ik} r^k
Similarly,
   r^i = (r^i · r^k) r_k = g^{ik} r_k,   g^{ik} ≡ r^i · r^k,
where g^{ik} are termed the contravariant components of the metric tensor. For the selected bases r_i and r^k the second rank unit tensor has the representations
   I = r^k r_k = r_k r^k
For a scalar field φ = φ(q^1, q^2, q^3) the total differential can be written as
   dφ = ∂φ/∂q^k dq^k = (dr · r^k) ∂φ/∂q^k = dr · ∇φ
The vector ∇φ is called the gradient of the scalar field φ, and the invariant operator ∇ (the Hamilton or nabla operator) is defined by
   ∇ = r^k ∂/∂q^k
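As an illustration of the local basis, the metric coefficients and the dual basis, the following Python/SymPy sketch uses cylindrical coordinates as an assumed example (not taken from the text):

```python
import sympy as sp

# Tangent vectors r_i, covariant metric g_ik and dual basis r^i for
# cylindrical coordinates q1 = r, q2 = phi, q3 = z.
q1, q2, q3 = sp.symbols('q1 q2 q3', positive=True)
r = sp.Matrix([q1 * sp.cos(q2), q1 * sp.sin(q2), q3])   # position vector

basis = [r.diff(q) for q in (q1, q2, q3)]               # r_1, r_2, r_3
g = sp.simplify(sp.Matrix(3, 3, lambda i, k: basis[i].dot(basis[k])))
print(g)                                                # diag(1, q1**2, 1)

# dual basis via r^i = g^{ik} r_k with g^{ik} the inverse of g_ik
g_inv = g.inv()
dual = [sp.simplify(sum((g_inv[i, k] * basis[k] for k in range(3)), sp.zeros(3, 1)))
        for i in range(3)]

# check r^i . r_k = delta^i_k
print(sp.simplify(sp.Matrix(3, 3, lambda i, k: dual[i].dot(basis[k]))))
```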
For a vector field a = a(q^1, q^2, q^3) one obtains analogously
   da = (dr · r^k) ∂a/∂q^k = dr · ∇a = (∇a)^T · dr,   ∇a = r^k ∂a/∂q^k
The gradient of a vector field is a second rank tensor. The operation can be applied
to tensors of any rank. For vectors and tensors the following additional operations
are defined
   div a ≡ ∇ · a = r^k · ∂a/∂q^k,   rot a ≡ ∇ × a = r^k × ∂a/∂q^k
The following identities can be verified
   ∇r = r^k ∂r/∂q^k = r^k r_k = I,   ∇ · r = 3
For a scalar α, a vector a and for a second rank tensor A the following identities are valid
   ∇(αa) = r^k ∂(αa)/∂q^k = r^k ∂α/∂q^k a + α r^k ∂a/∂q^k = (∇α) a + α ∇a,   (A.2.1)
   ∇ · (A · a) = r^k · ∂(A · a)/∂q^k = r^k · ∂A/∂q^k · a + r^k · A · ∂a/∂q^k
              = (∇ · A) · a + A ·· (∇a)^T   (A.2.2)
Here the identity (A.1.6) is used. For a second rank tensor A and a position vector r
one can prove the following identity
   ∇ · (A × r) = r^k · ∂(A × r)/∂q^k = r^k · ∂A/∂q^k × r + r^k · A × ∂r/∂q^k
              = (∇ · A) × r + r^k · A × r_k = (∇ · A) × r − A_×   (A.2.3)
Integral Theorems. Let V be a volume bounded by the surface A(V) with the outer unit normal n. The following integral theorems can be formulated.
Gradient Theorems
   ∫_V ∇a dV = ∫_{A(V)} n a dA,   ∫_V ∇A dV = ∫_{A(V)} n A dA
Divergence Theorems
   ∫_V ∇ · a dV = ∫_{A(V)} n · a dA,   ∫_V ∇ · A dV = ∫_{A(V)} n · A dA
Curl Theorems
   ∫_V ∇ × a dV = ∫_{A(V)} n × a dA,   ∫_V ∇ × A dV = ∫_{A(V)} n × A dA
Scalar-Valued Functions of Vectors and Second Rank Tensors. Consider a scalar-valued function of a vector and a second rank tensor, ψ = ψ(a, A). With respect to a selected basis e_i one can write
   ψ(a, A) = ψ(a^i e_i, A^{ij} e_i e_j) = ψ(a^i, A^{ij})
The partial derivatives of ψ with respect to a and A are defined according to the following rule
   dψ = ∂ψ/∂a^i da^i + ∂ψ/∂A^{ij} dA^{ij}
      = da · e^i ∂ψ/∂a^i + dA ·· e^j e^i ∂ψ/∂A^{ij}   (A.2.4)
In the coordinate-free form the above rule can be rewritten as follows
   dψ = da · ∂ψ/∂a + dA ·· (∂ψ/∂A)^T = da · ψ,_a + dA ·· (ψ,_A)^T   (A.2.5)
with
   ψ,_a ≡ ∂ψ/∂a = ∂ψ/∂a^i e^i,   ψ,_A ≡ ∂ψ/∂A = ∂ψ/∂A^{ij} e^i e^j
One can verify that ψ,_a and ψ,_A are independent of the choice of the basis. One may prove the following formulae for the derivatives of the principal invariants of a second rank tensor A
   J_1(A),_A = I,   J_1(A²),_A = 2 A^T,   J_1(A³),_A = 3 (A^T)²,
   J_2(A),_A = J_1(A) I − A^T,   (A.2.6)
   J_3(A),_A = (A^T)² − J_1(A) A^T + J_2(A) I = J_3(A) (A^T)^{−1}
A.3 Orthogonal Transformations and Orthogonal Invariants

For a given group of symmetry transformations one looks for a basic set of invariants such that any invariant relative to the same group is expressible as a single-valued function of the basic set. The basic set of invariants is called functional basis. To obtain a compact representation of invariants, it is required that the functional basis is irreducible in the sense that removing any one invariant from the basis would imply that a complete representation for all the invariants is no longer possible.
Such a problem arises in the formulation of constitutive equations for a given
group of material symmetries. For example, the strain energy density of an elastic
non-polar material is a scalar valued function of the second rank symmetric strain
tensor. In the theory of the Cosserat continuum two strain measures are introduced,
where the first strain measure is the polar tensor while the second one is the axial
tensor, e.g. [108]. The strain energy density of a thin elastic shell is a function of
two second rank tensors and one vector, e.g. [25]. In all cases the problem is to find
a minimum set of functionally independent invariants for the considered tensorial
arguments.
For the theory of tensor functions we refer to [71]. Representations of tensor
functions are reviewed in [280, 330]. An orthogonal transformation of a scalar α, a vector a and a second rank tensor A is defined by [25, 332]
   α′ ≡ α,   a′ ≡ Q · a,   A′ ≡ Q · A · Q^T,   (A.3.1)
where Q is an orthogonal tensor. A scalar-valued function f of a second rank tensor A is termed an orthogonal invariant under a symmetry group S if
   ∀ Q ∈ S :   f(A′) = (det Q) f(A)   (A.3.2)
For a scalar-valued function of several symmetric second rank tensors and vectors the condition of invariance reads
   ∀ Q ∈ S :   f(A′_1, A′_2, . . ., A′_n, a′_1, a′_2, . . ., a′_k)
             = (det Q) f(A_1, A_2, . . ., A_n, a_1, a_2, . . ., a_k),   A_i = A_i^T   (A.3.3)
As an example, for a symmetric second rank tensor A and a vector a the following set of invariants can be written down
   I_k = tr A^k, k = 1, 2, 3,   I_4 = a · a,   I_5 = a · A · a,
   I_6 = a · A² · a,   I_7 = a · A² · (a × A · a)   (A.3.4)
In the above set of invariants only six are functionally independent. The relation between the invariants (the so-called syzygy, [71]) can be formulated as follows
   I_7² = det | I_4   I_5          I_6        |
              | I_5   I_6          a · A³ · a |   (A.3.5)
              | I_6   a · A³ · a   a · A⁴ · a |
Consider the rotation tensor about a fixed axis m (cf. Sect. A.1)
   Q(φm) = m m + cos φ (I − m m) + sin φ m × I,   det Q(φm) = 1,   (A.3.6)
where m is assumed to be a constant unit vector (axis of rotation) and φ denotes
the angle of rotation about m . The symmetry transformation defined by this tensor
corresponds to the transverse isotropy, whereby five different cases are possible, e.g.
[299, 331]. Let us find scalar-valued functions of a second rank symmetric tensor A
satisfying the condition
   f(A′(φ)) = f(Q(φm) · A · Q^T(φm)) = f(A),   A′(φ) ≡ Q(φm) · A · Q^T(φm)   (A.3.7)
Equation (A.3.7) must be valid for any angle of rotation φ. In (A.3.7) only the left-hand side depends on φ. Therefore its derivative with respect to φ can be set to zero, i.e.
   df/dφ = dA′/dφ ·· (∂f/∂A′)^T = 0   (A.3.8)
The derivative of A′ with respect to φ can be calculated by the following rules
   dA′(φ) = dQ(φm) · A · Q^T(φm) + Q(φm) · A · dQ^T(φm),
   dQ(φm) = m × Q(φm) dφ,   dQ^T(φm) = −Q^T(φm) × m dφ   (A.3.9)
By inserting the above equations into (A.3.8) we obtain
   (m × A − A × m) ·· (∂f/∂A)^T = 0   (A.3.10)
Equation (A.3.10) is classified in [92] as a linear homogeneous first order partial differential equation. The characteristic system of (A.3.10) is
   dA/ds = m × A − A × m   (A.3.11)
Any system of n linear ordinary differential equations has no more than n − 1 functionally independent integrals [92]. By introducing a basis e_i the tensor A can be written in the form A = A^{ij} e_i e_j and (A.3.11) is a system of six ordinary differential equations with respect to the coordinates A^{ij}. The five integrals of (A.3.11) may be written down as follows
   g_i(A) = c_i,   i = 1, 2, . . ., 5,
where ci are integration constants. Any function of the five integrals gi is the so-
lution of the partial differential equation (A.3.10). Therefore the five integrals gi
represent the invariants of the symmetric tensor A with respect to the symmetry
transformation (A.3.6). The solutions of (A.3.11) are
   A^k(s) = Q(sm) · A_0^k · Q^T(sm),   k = 1, 2, 3,   (A.3.12)
where A 0 is the initial condition. In order to find the integrals, the variable s must
be eliminated from (A.3.12). Taking into account the following identities
   tr (Q · A^k · Q^T) = tr (Q^T · Q · A^k) = tr A^k,   m · Q(sm) = m,
   (Q · a) × (Q · b) = (det Q) Q · (a × b)   (A.3.13)
and using the notation Q_m ≡ Q(sm) the integrals can be found as follows
   tr A^k = tr A_0^k,   k = 1, 2, 3,
   m · A^l · m = m · Q_m · A_0^l · Q_m^T · m = m · A_0^l · m,   l = 1, 2,
   m · A² · (m × A · m) = m · Q_m · A_0² · Q_m^T · [m × (Q_m · A_0 · Q_m^T · m)]
                        = m · A_0² · [(Q_m^T · m) × (A_0 · Q_m^T · m)]
                        = m · A_0² · (m × A_0 · m)   (A.3.14)
As a result we can formulate the six invariants of the tensor A with respect to the
symmetry transformation (A.3.6) as follows
   I_k = tr A^k,   k = 1, 2, 3,   I_4 = m · A · m,
   I_5 = m · A² · m,   I_6 = m · A² · (m × A · m)   (A.3.15)
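A numerical check of (A.3.15): the following Python/NumPy sketch computes the six invariants for a random symmetric tensor and an assumed axis m and verifies that they are unchanged by a rotation Q(φm):

```python
import numpy as np

def rotation(m, phi):
    mx = np.array([[0.0, -m[2], m[1]], [m[2], 0.0, -m[0]], [-m[1], m[0], 0.0]])
    return np.outer(m, m) + np.cos(phi) * (np.eye(3) - np.outer(m, m)) + np.sin(phi) * mx

def invariants(A, m):
    Am = A @ m
    return np.array([np.trace(A), np.trace(A @ A), np.trace(A @ A @ A),
                     m @ Am, m @ A @ Am, m @ A @ A @ np.cross(m, Am)])

rng = np.random.default_rng(5)
B = rng.standard_normal((3, 3))
A = 0.5 * (B + B.T)
m = np.array([0.0, 0.0, 1.0])

Q = rotation(m, 1.2)
A_rot = Q @ A @ Q.T
print(np.allclose(invariants(A, m), invariants(A_rot, m)))   # all six unchanged
```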
The invariants with respect to various symmetry transformations are discussed in
[79]. For the case of the transverse isotropy six invariants are derived in [79] by the
use of another approach. In this sense our result coincides with the result given
in [79]. However, from our derivations it follows that only five invariants listed
in (A.3.15) are functionally independent. Taking into account that I_6 is the mixed product of the vectors m, A · m and A² · m, the relation between the invariants can be written down as follows
   I_6² = det | 1     I_4          I_5        |
              | I_4   I_5          m · A³ · m |   (A.3.16)
              | I_5   m · A³ · m   m · A⁴ · m |
Let us also discuss the sign of I_6. Consider the two tensors A and B = Q_n · A · Q_n^T, where
   Q_n ≡ Q(πn) = 2 n n − I,   n · n = 1,   n · m = 0,   det Q_n = 1
One can prove that the tensor A and the tensor B have the same invariants I_1, I_2, . . ., I_5. Taking into account that m · Q_n = −m and applying the last identity in (A.3.13) we may write
   I_6(B) = m · B² · (m × B · m) = m · A² · Q_n^T · [m × (Q_n · A · m)]
          = −m · A² · (m × A · m) = −I_6(A)
We observe that the only difference between the two considered tensors is the sign of I_6. Therefore, the triples of vectors m, A · m, A² · m and m, B · m, B² · m have different orientations and cannot be related by a rotation. It should be noted that the functional relation (A.3.16) does not imply that the invariant I_6 is dependent and hence redundant, i.e. that it should be removed from the basis (A.3.15). In fact, the relation (A.3.16) determines the magnitude but not the sign of I_6.
To describe yielding and failure of oriented solids a dyad M = v v has been
used in [53, 75], where the vector v specifies a privileged direction. A plastic po-
tential is assumed to be an isotropic function of the symmetric Cauchy stress tensor
and the tensor generator M . Applying the representation of isotropic functions the
integrity basis including ten invariants was found. In the special case v = m the
number of invariants reduces to the five I1 , I2 , . . . I5 defined by (A.3.15). Further de-
tails of this approach and applications in continuum mechanics are given in [59, 71].
However, the problem statement to find an integrity basis of a symmetric tensor A
and a dyad M , i.e. to find scalar valued functions f ( A , M ) satisfying the condition
   f(Q · A · Q^T, Q · M · Q^T) = (det Q) f(A, M),
   ∀ Q :   Q · Q^T = I,   det Q = ±1   (A.3.17)
essentially differs from the problem statement (A.3.7). In order to show this we
take into account that the symmetry group of a dyad M, i.e. the set of orthogonal solutions of the equation Q · M · Q^T = M, includes the following elements
   Q_{1,2} = ±I,
   Q_3 = Q(φm),   m = v / |v|,   (A.3.18)
   Q_4 = Q(πn) = 2 n n − I,   n · n = 1,   n · v = 0,
where Q(φm) is defined by (A.3.6). The solutions of the problem (A.3.17) are automatically solutions of the following problem
   f(Q_i · A · Q_i^T, M) = (det Q_i) f(A, M),   i = 1, 2, 3, 4,
i.e. of the problem to find the invariants of A relative to the symmetry group (A.3.18). However, (A.3.18) includes many more symmetry elements compared to the problem statement (A.3.7).
An alternative set of transversely isotropic invariants can be formulated by the use of the following decomposition
   A = α m m + β (I − m m) + A_pD + t m + m t,   (A.3.19)
where
   α = m · A · m = tr (A · P_1),
   β = 1/2 (tr A − m · A · m) = 1/2 tr (A · P_2),   (A.3.20)
   A_pD = P_2 · A · P_2 − β P_2,
   t = m · A · P_2
with the projectors P_1 ≡ m m and P_2 ≡ I − m m.
The decomposition (A.3.19) is the analogue to the following representation of a vector a
   a = I · a = m m · a + (I − m m) · a = α m + τ,   α = a · m,   τ = P_2 · a   (A.3.21)
Decompositions of the type (A.3.19) are applied in [68, 79]. The projections intro-
duced in (A.3.20) have the following properties
   tr A_pD = 0,   A_pD · m = m · A_pD = 0,   t · m = 0   (A.3.22)
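The projections (A.3.20) and the properties (A.3.22) can be checked numerically; the following Python/NumPy sketch also verifies that (A.3.19) reassembles the tensor (assumed axis m, random symmetric tensor):

```python
import numpy as np

rng = np.random.default_rng(6)
B = rng.standard_normal((3, 3))
A = 0.5 * (B + B.T)

m = np.array([0.0, 0.0, 1.0])
I = np.eye(3)
P1 = np.outer(m, m)
P2 = I - P1

alpha = m @ A @ m                              # alpha = tr(A . P1)
beta = 0.5 * (np.trace(A) - alpha)             # beta  = 1/2 tr(A . P2)
A_pD = P2 @ A @ P2 - beta * P2                 # in-plane deviatoric part
t = m @ A @ P2                                 # vector in the plane of isotropy

# properties (A.3.22) and reassembly of (A.3.19)
print(np.isclose(np.trace(A_pD), 0.0), np.allclose(A_pD @ m, 0.0), np.isclose(t @ m, 0.0))
A_rebuilt = alpha * P1 + beta * P2 + A_pD + np.outer(t, m) + np.outer(m, t)
print(np.allclose(A_rebuilt, A))
```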
With (A.3.19) and (A.3.22) the tensor equation (A.3.11) can be transformed to the
following system of equations
   dα/ds = 0,
   dβ/ds = 0,
   dA_pD/ds = m × A_pD − A_pD × m,   (A.3.23)
   dt/ds = m × t
From the first two equations we observe that α and β are transversely isotropic invariants. The third equation can be transformed to one scalar and one vector equation as follows
   A_pD ·· dA_pD/ds = 0   ⟹   d(A_pD ·· A_pD)/ds = 0,   db/ds = m × b
with b ≡ A_pD · t. We observe that tr A_pD² = A_pD ·· A_pD is a transversely isotropic invariant, too. Finally, we have to find the integrals of the following system
   dt/ds = m × t,   db/ds = m × b   (A.3.24)
The solutions of (A.3.24) are
   t(s) = Q(sm) · t_0,   b(s) = Q(sm) · b_0,
where t_0 and b_0 are the initial conditions. The vectors t and b belong to the plane of isotropy, i.e. t · m = 0 and b · m = 0. Therefore, one can verify the following integrals
   t · t = t_0 · t_0,   b · b = b_0 · b_0,   t · b = t_0 · b_0,
   (t × b) · m = (t_0 × b_0) · m   (A.3.25)
We found seven integrals, but only five of them are functionally independent. In
order to formulate the relation between the integrals we compute
   b · b = t · A_pD² · t,   t · b = t · A_pD · t,
   A_pD² = 1/2 tr (A_pD²) (I − m m)   ⟹   2 t · A_pD² · t = tr (A_pD²) (t · t)
Because tr A_pD² and t · t are already defined, the invariant b · b can be omitted. The vector t × b is directed along the axis m. Therefore
   t × b = γ m,   γ = (t × b) · m,
   γ² = (t × b) · (t × b) = (t · t)(b · b) − (t · b)²
Now we can summarize the six invariants and one relation between them as follows
   I_1 = α,   I_2 = β,   I_3 = tr A_pD²,   I_4 = t · t = t · A · m,
   I_5 = t · A_pD · t,   I_6 = (t × A_pD · t) · m,   (A.3.26)
   I_6² = 1/2 I_4² I_3 − I_5²
Let us check how these invariants behave under the transformation B = Q_n · A · Q_n^T introduced above. The projections (A.3.20) transform as
   α′ = α,   β′ = β,   A′_pD = Q_n · A_pD · Q_n^T,   t′ = −Q_n · t
Hence
   I_6(B) = (t′ × A′_pD · t′) · m = [(Q_n · t) × (Q_n · A_pD · t)] · m
          = (t × A_pD · t) · (Q_n · m) = −(t × A_pD · t) · m = −I_6(A)
Consequently,
   f(A) = f(I_1, I_2, . . ., I_5, I_6) = f(I_1, I_2, . . ., I_5, −I_6),
i.e. the function f may depend on I_6 only through its square,
   f(A) = f(I_1, I_2, . . ., I_5, I_6²)
For a scalar-valued function of a symmetric second rank tensor and a vector, f = f(A, a), the same reasoning leads to the condition
   (m × A − A × m) ·· (∂f/∂A)^T + (m × a) · ∂f/∂a = 0   (A.3.29)
The above set of invariants includes nine scalars. The number of independent scalars is seven due to the obvious relations
   tr A^k = n_1 · A^k · n_1 + n_2 · A^k · n_2 + n_3 · A^k · n_3,   k = 1, 2, 3