When $M$ is large, the required computational load for finding the optimal solution to (6) can be high. In order to find simplified approximate solutions, we propose two algorithms that lower the computational complexity.
A. Decremental Selection
The decremental selection begins with the full set of available antennas, indexed as $r = \{1, 2, \ldots, M\}$, and then removes one antenna per step. This process is repeated until the required number of antennas ($M'$) remains. In each step, the antenna with the lowest contribution to the system capacity is removed. After $n$ steps, the $(M-n) \times N$ channel matrix corresponding to the $(M-n)$ selected receive antennas is denoted by $\mathbf{H}_n$; it contains the rows of $\mathbf{H}$ that correspond to the selected receive antennas. Let $r$ be the index set of the $(M-n)$ antennas selected at the $n$th step and define $\mathbf{h}_j$ as one row of $\mathbf{H}_n$. By removing $\mathbf{h}_j$ from $\mathbf{H}_n$ at the $(n+1)$th step, the $(M-n-1) \times N$ channel matrix $\mathbf{H}_{n+1}$ is formed, and the corresponding capacity is given by

$$ C(\mathbf{H}_{n+1}) = \log_2 \det\!\left[\mathbf{I}_N + (E_s/N_0)\,\mathbf{R}_{ss}\,\mathbf{H}_{n+1}^H \mathbf{H}_{n+1}\right] = \log_2 \det\!\left[\mathbf{I}_N + (E_s/N_0)\,\mathbf{R}_{ss}\left(\mathbf{H}_n^H \mathbf{H}_n - \mathbf{h}_j^H \mathbf{h}_j\right)\right] \quad (7) $$
For the sake of simplicity, we assume uncorrelated transmitted signals, i.e., $\mathbf{R}_{ss} = \mathbf{I}_N$. Applying the Sherman-Morrison formula for determinants of a general elementary matrix (GEM), we obtain

$$ C(\mathbf{H}_{n+1}) = C(\mathbf{H}_n) + \log_2\!\left(1 - (E_s/N_0)\,\mathbf{h}_j\left(\mathbf{I}_N + (E_s/N_0)\,\mathbf{H}_n^H \mathbf{H}_n\right)^{-1}\mathbf{h}_j^H\right) \quad (8) $$
According to the last equation, maximizing $C(\mathbf{H}_{n+1})$ with respect to $j$ for a given $\mathbf{H}_n$ yields

$$ p = \arg\min_{j \in r}\; \mathbf{h}_j\left((E_s/N_0)^{-1}\mathbf{I}_N + \mathbf{H}_n^H \mathbf{H}_n\right)^{-1}\mathbf{h}_j^H \quad (9) $$

and the subset of available antennas $r$ is updated at the end of each step by

$$ r = r \setminus \{p\} \quad (10) $$
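The equivalence of (7) and the rank-one decomposition (8), and hence the selection rule (9), can be checked numerically. Below is a minimal NumPy sketch of that check; the matrix sizes, the SNR value $E_s/N_0$, and the random channel are illustrative assumptions, not values from the paper.

import numpy as np

# Minimal check of the capacity recursion (8) and selection rule (9) for one
# decremental step, assuming R_ss = I_N; sizes and SNR are illustrative.
rng = np.random.default_rng(0)
M, N, snr = 8, 4, 2.0                      # M receive antennas, N transmit, snr = E_s/N_0
H = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)

def capacity(Hs, snr):
    """log2 det(I_N + (E_s/N_0) Hs^H Hs) for the selected rows Hs."""
    return np.log2(np.linalg.det(np.eye(N) + snr * Hs.conj().T @ Hs).real)

A = np.linalg.inv(np.eye(N) / snr + H.conj().T @ H)   # ((E_s/N_0)^{-1} I_N + H^H H)^{-1}

for j in range(M):
    direct = capacity(np.delete(H, j, axis=0), snr)    # capacity without row j, as in (7)
    quad = (H[j] @ A @ H[j].conj()).real               # quadratic form of (9); equals the (8) term
    recursive = capacity(H, snr) + np.log2(1.0 - quad) # capacity recursion (8)
    assert np.isclose(direct, recursive)

p = min(range(M), key=lambda j: (H[j] @ A @ H[j].conj()).real)  # selection rule (9)
print("antenna whose removal costs the least capacity:", p)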
To reduce the computational complexity of (9), let us define

$$ \mathbf{G}_0 = \mathbf{H}\left((E_s/N_0)^{-1}\mathbf{I}_N + \mathbf{H}^H \mathbf{H}\right)^{-1}\mathbf{H}^H \quad (11) $$

$$ \mathbf{G}_n = \mathbf{H}\left((E_s/N_0)^{-1}\mathbf{I}_N + \mathbf{H}_n^H \mathbf{H}_n\right)^{-1}\mathbf{H}^H \quad (12) $$

so that (9) can be rewritten as

$$ p = \arg\min_{j \in r}\; \mathbf{G}_n(j,j) \quad (13) $$
where $\mathbf{G}_n(j,j)$ is the $j$th diagonal element of $\mathbf{G}_n$ and $p$ denotes the index of the smallest diagonal element of $\mathbf{G}_n$. After finding the solution of (13), we remove $\mathbf{h}_p$ from $\mathbf{H}_n$ at the $(n+1)$th step to obtain the $(M-n-1) \times N$ channel matrix $\mathbf{H}_{n+1}$, since the $p$th receive antenna is the one with the lowest contribution to the system capacity.
Using the matrix inversion lemma, the $M \times M$ matrix $\mathbf{G}_n$ can be updated at the $(n+1)$th step by

$$ \mathbf{G}_{n+1} = \mathbf{G}_n + \frac{\mathbf{g}_p^H \mathbf{g}_p}{1 - \mathbf{G}_n(p,p)} \quad (14) $$

where $\mathbf{g}_p$ is the $p$th row of $\mathbf{G}_n$.
We denote the proposed algorithm for decremental selection as Algorithm I; it consists of the selection rule (13) and the recursive updating formula (14). The details of Algorithm I are summarized in Table I, where the right column shows the complexity of each part of the algorithm. Note that the subscript $n$ is dropped for simplicity.
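To make the procedure concrete, the following is a compact NumPy sketch of Algorithm I, i.e., rule (13) combined with the rank-one update (14). It assumes $\mathbf{R}_{ss} = \mathbf{I}_N$; the function name and test sizes are our own illustrative choices.

import numpy as np

def decremental_selection(H, snr, M_sel):
    """Sketch of Algorithm I: keep M_sel of the M rows (receive antennas) of H."""
    M, N = H.shape
    A = np.linalg.inv(np.eye(N) / snr + H.conj().T @ H)
    G = H @ A @ H.conj().T                      # M x M matrix of eq. (11)
    r = list(range(M))                          # currently selected antennas
    for _ in range(M - M_sel):
        p = min(r, key=lambda j: G[j, j].real)  # eq. (13): weakest contributor
        r.remove(p)
        g_p = G[p, :]                           # p-th row of G
        G = G + np.outer(g_p.conj(), g_p) / (1.0 - G[p, p].real)   # eq. (14)
    return r

# Illustrative use: keep 4 of 8 receive antennas.
rng = np.random.default_rng(1)
H = (rng.standard_normal((8, 4)) + 1j * rng.standard_normal((8, 4))) / np.sqrt(2)
print(sorted(decremental_selection(H, snr=2.0, M_sel=4)))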
Figure 1. Receive antenna selection in MIMO.

Next, let us compare the computational complexity of Algorithm I with that of Gorokhov's approach. The latter is
summarized in Table II. It can be seen that the cost of the selection step in the recursions has been lowered from $O(N^2 M(M-M'))$ to $O(M(M-M'))$. Considering that decremental selection is typically used when $M'$ is relatively large, the proposed Algorithm I has a significantly lower computational complexity than Gorokhov's approach.
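As a rough illustration of this reduction, the per-selection-step operation counts can be compared directly; the sizes below are arbitrary example values, not taken from the paper.

# Order-of-magnitude operation counts for the selection step of the
# decremental recursions: Gorokhov's rule vs. rule (13). Illustrative sizes.
M, N, M_sel = 16, 4, 12
gorokhov = N**2 * M * (M - M_sel)   # O(N^2 M (M - M'))
proposed = M * (M - M_sel)          # O(M (M - M'))
print(gorokhov, proposed)           # 1024 vs. 64 for these sizes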
B. Incremental Selection
As shown in [5], if $M'$ is small, i.e., $(M-M')$ is large, incremental selection may be more suitable than decremental selection. In incremental selection, we start with an empty set of selected antennas and then add one antenna to this set per step. At each step, the objective is to select the one additional antenna that yields the highest increase in capacity. We extend Algorithm I to incremental selection and name the result Algorithm II.
The corresponding capacity at the $(n+1)$th step is given by

$$ C(\mathbf{H}_{n+1}) = \log_2 \det\!\left[\mathbf{I}_N + (E_s/N_0)\,\mathbf{R}_{ss}\,\mathbf{H}_{n+1}^H \mathbf{H}_{n+1}\right] = \log_2 \det\!\left[\mathbf{I}_N + (E_s/N_0)\,\mathbf{R}_{ss}\left(\mathbf{H}_n^H \mathbf{H}_n + \mathbf{h}_j^H \mathbf{h}_j\right)\right] \quad (15) $$
The updating formulas of Algorithm II are given by

$$ \mathbf{G}_0 = (E_s/N_0)\,\mathbf{H}\mathbf{H}^H \quad (16) $$

$$ \mathbf{G}_{n+1} = \mathbf{G}_n - \frac{\mathbf{g}_p^H \mathbf{g}_p}{1 + \mathbf{G}_n(p,p)} \quad (17) $$

and the selection rule is

$$ p = \arg\max_{j \in r}\; \mathbf{G}_n(j,j). \quad (18) $$
By this selection, $\mathbf{h}_p$ is inserted at the proper position in $\mathbf{H}_n$ at the $(n+1)$th step to obtain the $(n+1) \times N$ channel matrix $\mathbf{H}_{n+1}$.
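For completeness, the analogous NumPy sketch of Algorithm II is given below, combining the initialization (16), the selection rule (18), and the update (17); as before, $\mathbf{R}_{ss} = \mathbf{I}_N$ is assumed and the function name is our own.

import numpy as np

def incremental_selection(H, snr, M_sel):
    """Sketch of Algorithm II: pick M_sel of the M rows (receive antennas) of H."""
    M, N = H.shape
    G = snr * (H @ H.conj().T)                  # eq. (16): G_0 = (E_s/N_0) H H^H
    r = list(range(M))                          # antennas still available
    selected = []
    for _ in range(M_sel):
        p = max(r, key=lambda j: G[j, j].real)  # eq. (18): largest capacity gain
        r.remove(p)
        selected.append(p)
        g_p = G[p, :]                           # p-th row of G
        G = G - np.outer(g_p.conj(), g_p) / (1.0 + G[p, p].real)   # eq. (17)
    return selected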
Algorithm II is summarized in Table I; the cost of the selection step in its recursions is $O(M'M)$. The corresponding cost in the incremental selection algorithm of [6] is $O(MM'N^2)$; the details are described in Table II. Noting that incremental selection is mostly used when $M'$ is small, a comparison of Tables I and II shows that the proposed Algorithm II achieves a significant complexity reduction.
TABLE I
PROPOSED APPROACHES

Algorithm I (decremental selection):
  Set:  $\mathbf{G} := \mathbf{H}\left((E_s/N_0)^{-1}\mathbf{I}_N + \mathbf{H}^H\mathbf{H}\right)^{-1}\mathbf{H}^H$,  $r := \{1, 2, \ldots, M\}$;   [$O(M^2 N + N^3)$]
  for $n = 1$ to $(M - M')$
    compute  $p := \arg\min_{j \in r} \mathbf{G}(j,j)$,  $r := r \setminus \{p\}$;   [$O(M(M - M'))$]
    update  $\mathbf{G} := \mathbf{G} + \mathbf{g}_p^H \mathbf{g}_p \big/ \left(1 - \mathbf{G}(p,p)\right)$;   [$O(M^2 (M - M'))$]
  end

Algorithm II (incremental selection):
  Set:  $\mathbf{G} := (E_s/N_0)\,\mathbf{H}\mathbf{H}^H$,  $r := \{1, 2, \ldots, M\}$;   [$O(M^2 N)$]
  for $n = 1$ to $M'$
    compute  $p := \arg\max_{j \in r} \mathbf{G}(j,j)$,  $r := r \setminus \{p\}$;   [$O(M' M)$]
    update  $\mathbf{G} := \mathbf{G} - \mathbf{g}_p^H \mathbf{g}_p \big/ \left(1 + \mathbf{G}(p,p)\right)$;   [$O(M' M^2)$]
  end

TABLE II
GOROKHOV'S APPROACHES [5], [6]

Decremental selection:
  Set:  $\mathbf{A} := \left((E_s/N_0)^{-1}\mathbf{I}_N + \mathbf{H}^H\mathbf{H}\right)^{-1}$,  $r := \{1, 2, \ldots, M\}$;   [$O(M N^2 + N^3)$]
  for $n = 1$ to $(M - M')$
    compute  $p := \arg\min_{j \in r} \mathbf{h}_j \mathbf{A} \mathbf{h}_j^H$,  $r := r \setminus \{p\}$;   [$O(N^2 M (M - M'))$]
    update  $\mathbf{A} := \mathbf{A} + \mathbf{A}\mathbf{h}_p^H \mathbf{h}_p \mathbf{A} \big/ \left(1 - \mathbf{h}_p \mathbf{A}\mathbf{h}_p^H\right)$;   [$O(N^2 (M - M'))$]
  end

Incremental selection:
  Set:  $\mathbf{A} := (E_s/N_0)\,\mathbf{I}_N$,  $r := \{1, 2, \ldots, M\}$;
  for $n = 1$ to $M'$
    compute  $p := \arg\max_{j \in r} \mathbf{h}_j \mathbf{A} \mathbf{h}_j^H$,  $r := r \setminus \{p\}$;   [$O(M M' N^2)$]
    update  $\mathbf{A} := \mathbf{A} - \mathbf{A}\mathbf{h}_p^H \mathbf{h}_p \mathbf{A} \big/ \left(1 + \mathbf{h}_p \mathbf{A}\mathbf{h}_p^H\right)$;   [$O(M' N^2)$]
  end
IV. PERFORMANCE ANALYSIS

The proposed algorithms and Gorokhov's approaches are summarized in Table I and Table II, respectively. As discussed in the previous section, the proposed approaches always have lower computational complexity than the algorithms of [5] and [6].
Table III summarizes Gharavi-Alkhansari's algorithm for incremental selection; the complexity of this algorithm is $O(NMM')$. Although the proposed Algorithm II has about the same order of complexity for incremental selection as Gharavi-Alkhansari's approach, the lack of a decremental counterpart in Gharavi-Alkhansari's approach makes our Algorithm I attractive, because decremental selection becomes essential when $M'$ is close to $M$. One specific case that illustrates this point: if $M - M' = 1$, the decremental selection method always finds the optimal solution in the first step. Moreover, decremental selection is generally expected to perform better than incremental selection, because the antenna appending rule is based on the individual contributions of the appended antennas, while the antenna removal rule takes into account the joint contributions of all remaining antennas [6].
Regarding memory requirements, our methods have the highest need among all the algorithms under comparison. The memory requirement of the proposed algorithms is $O(M^2)$, since they need to store and update the $M \times M$ matrix $\mathbf{G}_n$ in each step. In a practical implementation, however, this is unlikely to be a major issue.
The motivation of this paper is to lower complexity while minimizing the loss in channel capacity. Since the proposed algorithms employ the same antenna selection rules as those of [6] and [7], all methods under comparison have the same performance in terms of capacity. It has been demonstrated in [7] that the performance difference between the optimal selection and all of the sub-optimal approaches compared in this paper is relatively small.
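This behaviour can be checked on small problems by brute force. The sketch below reuses the decremental_selection and incremental_selection functions from the earlier sketches and compares their capacity against exhaustive search over all antenna subsets; the sizes, SNR, and trial count are arbitrary illustrative choices, not the paper's simulation setup.

import itertools
import numpy as np

def subset_capacity(H, rows, snr):
    """log2 det(I_N + (E_s/N_0) Hs^H Hs) for the rows of H indexed by 'rows'."""
    Hs = H[list(rows), :]
    N = H.shape[1]
    return np.log2(np.linalg.det(np.eye(N) + snr * Hs.conj().T @ Hs).real)

rng = np.random.default_rng(2)
M, N, M_sel, snr = 6, 2, 3, 2.0
gap_dec, gap_inc = [], []
for _ in range(200):
    H = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
    c_opt = max(subset_capacity(H, s, snr)
                for s in itertools.combinations(range(M), M_sel))
    gap_dec.append(c_opt - subset_capacity(H, decremental_selection(H, snr, M_sel), snr))
    gap_inc.append(c_opt - subset_capacity(H, incremental_selection(H, snr, M_sel), snr))
print("mean capacity loss vs. optimal: decremental %.4f, incremental %.4f bit/s/Hz"
      % (np.mean(gap_dec), np.mean(gap_inc)))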
V. CONCLUSION

In this paper, we propose two novel fast receive antenna selection algorithms for MIMO communication systems. Compared with the algorithms of [5]-[7] and the optimal selection method, our approaches achieve similar outage capacity with a significant reduction in computational complexity.
REFERENCES
[1] J. H. Winters, "On the capacity of radio communication systems with diversity in a Rayleigh fading environment," IEEE J. Sel. Areas Commun., 1987.
[2] G. J. Foschini and M. J. Gans, "On limits of wireless communications in a fading environment when using multiple antennas," Wireless Personal Commun., vol. 6, pp. 311-335, Mar. 1998.
[3] E. Telatar, "Capacity of multi-antenna Gaussian channels," Eur. Trans. Telecommun., vol. 10, pp. 585-595, Nov. 1999.
[4] M. Win and J. Winters, "Virtual branch analysis of symbol error probability for hybrid selection/maximal-ratio combining in Rayleigh fading," IEEE Trans. Commun., vol. 49, pp. 1926-1934, Nov. 2001.
[5] A. Gorokhov, "Antenna selection algorithms for MEA transmission systems," in Proc. IEEE ICASSP, Orlando, FL, May 2002, pp. 2857-2860.
[6] A. Gorokhov, D. Gore, and A. Paulraj, "Receive antenna selection for MIMO flat-fading channels: theory and algorithms," IEEE Trans. Inform. Theory, vol. 49, pp. 2687-2696, Oct. 2003.
[7] M. Gharavi-Alkhansari and A. B. Gershman, "Fast antenna subset selection in MIMO systems," IEEE Trans. Signal Process., vol. 52, no. 2, Feb. 2004.
TABLE III
GHARAVI-ALKHANSARI'S APPROACH [7]

Incremental selection:
  for $j = 1$ to $M$
    set  $\delta_j := \mathbf{h}_j \mathbf{h}_j^H$;   [$O(MN)$]
  end
  $r := \{1, 2, \ldots, M\}$;
  for $n = 1$ to $M'$
    compute  $p := \arg\max_{j \in r} \delta_j$,  $r := r \setminus \{p\}$;   [$O(M' M)$]
    $\mathbf{b}_n := \dfrac{1}{\sqrt{\delta_p}}\left(\mathbf{h}_p^H - \sum_{i=1}^{n-1} \mathbf{b}_i \left(\mathbf{b}_i^H \mathbf{h}_p^H\right)\right)$;   [$O(N M'^2)$]
    for all $j \in r$
      update  $\delta_j := \delta_j - \left|\mathbf{b}_n^H \mathbf{h}_j^H\right|^2$;   [$O(N M M')$]
    end
  end