On The Stochastic Restricted Modified Almost Unbiased Liu Estimator in Linear Regression Model
https://doi.org/10.1007/s40304-018-0131-3
S. Arumairajan1
1 Introduction
Consider the multiple linear regression model

y = Xβ + ε, ε ∼ (0, σ²I), (1.1)

where y is an n × 1 vector of observations, X is an n × p design matrix of full column rank, and β is a p × 1 vector of unknown parameters. The ordinary least squares estimator (OLSE) of β is

β̂_OLSE = C⁻¹X′y, (1.2)

where C = X′X. The mean squared error matrix of the OLSE is given by

MSE(β̂_OLSE) = σ²C⁻¹. (1.3)
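Equations (1.2) and (1.3) can be sketched numerically as follows; the design matrix, true β and σ² below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Minimal sketch of Eqs. (1.2)-(1.3) on synthetic data.
rng = np.random.default_rng(0)
n, p = 20, 3
X = rng.normal(size=(n, p))
beta = np.array([1.0, 0.5, -0.25])      # assumed true coefficients
sigma2 = 1.0                            # assumed error variance
y = X @ beta + rng.normal(scale=np.sqrt(sigma2), size=n)

C = X.T @ X                              # C = X'X
beta_olse = np.linalg.solve(C, X.T @ y)  # OLSE, Eq. (1.2)
mse_olse = sigma2 * np.linalg.inv(C)     # MSE matrix, Eq. (1.3)
```

When X is ill-conditioned, C⁻¹ has large entries and the MSE matrix above blows up, which is the multicollinearity problem the biased estimators below address.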
Instead of using the ordinary least squares estimator (OLSE), biased estimators have been proposed in regression analysis to overcome the multicollinearity problem. Among them are the ridge estimator (RE) [6], the Liu estimator (LE) [11], and the almost unbiased Liu estimator (AULE) [1]. Recently, Arumairajan and Wijekoon [3] proposed the modified almost unbiased Liu estimator (MAULE) based on the sample information (1.1).
Hoerl and Kennard [6] proposed the ridge estimator as follows:

β̂_RE(k) = (C + kI)⁻¹X′y = W_k β̂_OLSE, k > 0, (1.4)

where W_k = (C + kI)⁻¹C. The bias vector and MSE matrix of the RE are given by

B(β̂_RE(k)) = (W_k − I)β (1.5)

and

MSE(β̂_RE(k)) = σ²W_kC⁻¹W_k′ + (W_k − I)ββ′(W_k − I)′, (1.6)

respectively.
The Liu estimator proposed by Liu [11] is given by

β̂_LE(d) = (C + I)⁻¹(X′y + dβ̂_OLSE) = F_d β̂_OLSE, 0 < d < 1, (1.7)

where F_d = (C + I)⁻¹(C + dI). The bias vector and MSE matrix of the LE are given by

B(β̂_LE(d)) = (F_d − I)β (1.8)

and

MSE(β̂_LE(d)) = σ²F_dC⁻¹F_d′ + (F_d − I)ββ′(F_d − I)′, (1.9)

respectively.
The almost unbiased Liu estimator (AULE) proposed by Akdeniz and Erol [1] is given by

β̂_AULE(d) = [I − (1 − d)²(C + I)⁻²]β̂_OLSE = T_d β̂_OLSE, (1.10)

where T_d = I − (1 − d)²(C + I)⁻². The bias vector and MSE matrix of the AULE are given by

B(β̂_AULE(d)) = (T_d − I)β (1.11)

and

MSE(β̂_AULE(d)) = σ²T_dC⁻¹T_d′ + (T_d − I)ββ′(T_d − I)′, (1.12)

respectively.
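The three shrinkage matrices W_k, F_d and T_d can be sketched numerically as follows; the data and the choices of k and d are synthetic placeholders. All three estimators reduce to the OLSE in the limits k → 0 and d → 1.

```python
import numpy as np

# Shrinkage matrices behind RE, LE and AULE; data are synthetic placeholders.
rng = np.random.default_rng(1)
X = rng.normal(size=(30, 4))
y = rng.normal(size=30)
C = X.T @ X
I = np.eye(4)
k, d = 0.5, 0.8                                  # arbitrary shrinkage values

beta_olse = np.linalg.solve(C, X.T @ y)
W_k = np.linalg.solve(C + k * I, C)              # W_k = (C + kI)^{-1} C
F_d = np.linalg.solve(C + I, C + d * I)          # F_d = (C + I)^{-1}(C + dI)
T_d = I - (1 - d) ** 2 * np.linalg.inv((C + I) @ (C + I))  # T_d

beta_re = W_k @ beta_olse      # ridge estimator
beta_le = F_d @ beta_olse      # Liu estimator
beta_aule = T_d @ beta_olse    # almost unbiased Liu estimator
```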
Recently, Arumairajan and Wijekoon [3] proposed the modified almost unbiased Liu estimator as follows:

β̂_MAULE(d, k) = [I − (1 − d)²(C + I)⁻²](C + kI)⁻¹X′y (1.13)
= F_{d,k} β̂_OLSE, (1.14)

where F_{d,k} = T_d W_k. The bias vector and MSE matrix of the MAULE are given by

B(β̂_MAULE(d, k)) = (F_{d,k} − I)β (1.15)

and

MSE(β̂_MAULE(d, k)) = σ²F_{d,k}C⁻¹F_{d,k}′ + (F_{d,k} − I)ββ′(F_{d,k} − I)′, (1.16)

respectively.
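A sketch of the MAULE computed two ways, directly from (1.13) and through F_{d,k} = T_d W_k as in (1.14); the data and the true β (needed for the bias) are synthetic assumptions.

```python
import numpy as np

# MAULE (Eqs. 1.13-1.16) on synthetic data; true beta assumed known here.
rng = np.random.default_rng(2)
X = rng.normal(size=(30, 4))
beta = np.array([1.0, -0.5, 0.5, 0.2])
sigma2 = 1.0
y = X @ beta + rng.normal(size=30)

C = X.T @ X
I = np.eye(4)
k, d = 0.5, 0.8
W_k = np.linalg.solve(C + k * I, C)
T_d = I - (1 - d) ** 2 * np.linalg.inv((C + I) @ (C + I))
F_dk = T_d @ W_k                                      # F_{d,k} = T_d W_k

beta_olse = np.linalg.solve(C, X.T @ y)
beta_maule = F_dk @ beta_olse                         # Eq. (1.14)
direct = T_d @ np.linalg.solve(C + k * I, X.T @ y)    # Eq. (1.13)

bias_maule = (F_dk - I) @ beta                        # Eq. (1.15)
mse_maule = (sigma2 * F_dk @ np.linalg.inv(C) @ F_dk.T
             + np.outer(bias_maule, bias_maule))      # Eq. (1.16)
```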
An alternative way to deal with the multicollinearity problem is to consider parameter estimation under some restrictions on the unknown parameters, which may be exact or stochastic. In addition to the sample model (1.1), let us be given some prior information about β in the form of stochastic linear restrictions

h = Hβ + v, v ∼ (0, σ²Ω), (1.17)

where h is a q × 1 vector, H is a known q × p matrix of rank q, Ω is a known positive definite matrix, and v is independent of ε. For the model (1.1) together with (1.17), Theil and Goldberger [15] proposed the mixed estimator (ME)

β̂_ME = β̂_OLSE + C⁻¹H′(Ω + HC⁻¹H′)⁻¹(h − Hβ̂_OLSE), (1.18)

which is unbiased, with mean squared error matrix

MSE(β̂_ME) = σ²(C⁻¹ − G) = σ²A, (1.19)

where G = C⁻¹H′(Ω + HC⁻¹H′)⁻¹HC⁻¹ and A = C⁻¹ − G = (C + H′Ω⁻¹H)⁻¹.
By using the ME in place of the OLSE in the LE, Hubert and Wijekoon [7] proposed the stochastic restricted Liu estimator (SRLE) as follows:

β̂_SRLE(d) = F_d β̂_ME. (1.20)

The bias vector and MSE matrix of the SRLE are given by

B(β̂_SRLE(d)) = (F_d − I)β (1.21)

and

MSE(β̂_SRLE(d)) = σ²F_dC⁻¹F_d′ − σ²F_dGF_d′ + (F_d − I)ββ′(F_d − I)′, (1.22)

respectively.
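A numerical sketch of the mixed estimator and the SRLE; the single restriction H, Ω and h below is an illustrative assumption. The identity A = C⁻¹ − G = (C + H′Ω⁻¹H)⁻¹ can be verified numerically.

```python
import numpy as np

# Mixed estimator under h = H beta + v, and SRLE = F_d * ME (Hubert-Wijekoon [7]).
# H, Omega and h are an assumed single stochastic restriction, for illustration.
rng = np.random.default_rng(3)
n, p = 40, 3
X = rng.normal(size=(n, p))
beta = np.array([1.0, -1.0, 0.5])
sigma2 = 0.5
y = X @ beta + rng.normal(scale=np.sqrt(sigma2), size=n)

H = np.array([[1.0, 1.0, 0.0]])   # assumed restriction: beta1 + beta2
Omega = np.array([[1.0]])
h = H @ beta + rng.normal(scale=np.sqrt(sigma2), size=1)

C = X.T @ X
Cinv = np.linalg.inv(C)
I = np.eye(p)
beta_olse = Cinv @ X.T @ y

S = Omega + H @ Cinv @ H.T
beta_me = beta_olse + Cinv @ H.T @ np.linalg.solve(S, h - H @ beta_olse)

G = Cinv @ H.T @ np.linalg.solve(S, H @ Cinv)   # G as defined in the text
A = Cinv - G                                    # MSE(ME) = sigma^2 * A

d = 0.8
F_d = np.linalg.solve(C + I, C + d * I)
beta_srle = F_d @ beta_me                       # SRLE
```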
In the literature, researchers are still trying to find the best estimator to combat the multicollinearity problem by using sample information only, or both sample and prior information. Since a combination of two different estimators may inherit the advantages of both, this motivates us to propose a new estimator by combining the MAULE and the ME; the idea used by Hubert and Wijekoon [7] guides the construction of the new estimator introduced in the next section.
The rest of the paper is organized as follows. We propose the new estimator and obtain its stochastic properties in Sect. 2. The proposed estimator is compared with the OLSE, ME, RE, LE, AULE, SRLE and MAULE in the mean squared error matrix sense in Sect. 3. The optimal shrinkage parameters are obtained in Sect. 4. In Sect. 5, a Monte Carlo simulation study is carried out and a numerical example is used
to illustrate the theoretical findings. Finally, some concluding remarks are given in
Sect. 6.
2 The Proposed Estimator

Following Hubert and Wijekoon [7], we propose a new estimator, namely the stochastic restricted modified almost unbiased Liu estimator (SRMAULE), as follows:

β̂_SRMAULE(d, k) = F_{d,k} β̂_ME. (2.1)

The bias vector, dispersion matrix and MSE matrix of β̂_SRMAULE(d, k) can be obtained as

B(β̂_SRMAULE(d, k)) = (F_{d,k} − I)β, (2.2)

D(β̂_SRMAULE(d, k)) = σ²F_{d,k}C⁻¹F_{d,k}′ − σ²F_{d,k}GF_{d,k}′ (2.3)

and

MSE(β̂_SRMAULE(d, k)) = σ²F_{d,k}C⁻¹F_{d,k}′ − σ²F_{d,k}GF_{d,k}′ + (F_{d,k} − I)ββ′(F_{d,k} − I)′, (2.4)

respectively.
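The SRMAULE and the quantities (2.2)-(2.4) can be sketched as follows; the data and the single restriction are, again, synthetic assumptions.

```python
import numpy as np

# SRMAULE = F_{d,k} * ME with its bias, dispersion and MSE matrix (Eqs. 2.1-2.4).
rng = np.random.default_rng(4)
n, p = 40, 3
X = rng.normal(size=(n, p))
beta = np.array([1.0, -1.0, 0.5])
sigma2 = 0.5
y = X @ beta + rng.normal(scale=np.sqrt(sigma2), size=n)
H = np.array([[1.0, 1.0, 0.0]])   # assumed restriction
Omega = np.array([[1.0]])
h = H @ beta + rng.normal(scale=np.sqrt(sigma2), size=1)

C = X.T @ X
Cinv = np.linalg.inv(C)
I = np.eye(p)
k, d = 0.5, 0.8

beta_olse = Cinv @ X.T @ y
S = Omega + H @ Cinv @ H.T
beta_me = beta_olse + Cinv @ H.T @ np.linalg.solve(S, h - H @ beta_olse)
G = Cinv @ H.T @ np.linalg.solve(S, H @ Cinv)

W_k = np.linalg.solve(C + k * I, C)
T_d = I - (1 - d) ** 2 * np.linalg.inv((C + I) @ (C + I))
F_dk = T_d @ W_k

beta_srmaule = F_dk @ beta_me                                 # Eq. (2.1)
bias = (F_dk - I) @ beta                                      # Eq. (2.2)
disp = sigma2 * (F_dk @ Cinv @ F_dk.T - F_dk @ G @ F_dk.T)    # Eq. (2.3)
mse_srmaule = disp + np.outer(bias, bias)                     # Eq. (2.4)
```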
3 Mean Squared Error Matrix Comparisons

Even though prior information is incorporated along with the sample information in the estimation, it has been noticed in the literature that an estimator based on sample information only can, under certain conditions, be superior in the mean squared error matrix sense to an estimator based on both sample and prior information. Researchers have therefore been comparing all kinds of estimators in the literature to find such superiority conditions. In this section, the proposed estimator is compared with the OLSE, ME, RE, LE, AULE, SRLE and MAULE by the mean squared error matrix criterion.
We consider the MSE matrix difference between the RE and the SRMAULE:

MSE(β̂_RE(k)) − MSE(β̂_SRMAULE(d, k)) = σ²W_kC⁻¹W_k′ − σ²F_{d,k}C⁻¹F_{d,k}′ + σ²F_{d,k}GF_{d,k}′
+ B(β̂_RE(k))B(β̂_RE(k))′ − B(β̂_SRMAULE(d, k))B(β̂_SRMAULE(d, k))′. (3.1)
Theorem 3.1 The SRMAULE is superior to the RE in the mean squared error matrix sense if and only if

B(β̂_SRMAULE(d, k))′[σ²D + B(β̂_RE(k))B(β̂_RE(k))′]⁻¹B(β̂_SRMAULE(d, k)) ≤ 1,

where D = W_kC⁻¹W_k′ − F_{d,k}C⁻¹F_{d,k}′ + F_{d,k}GF_{d,k}′.
Proof Since W_k, T_d and (C + I)⁻² are all functions of the symmetric matrix C, they commute with one another. Hence

W_kC⁻¹W_k′ − F_{d,k}C⁻¹F_{d,k}′ = W_kC⁻¹W_k′ − [I − (1 − d)²(C + I)⁻²]W_kC⁻¹W_k′[I − (1 − d)²(C + I)⁻²]
= (1 − d)²(C + I)⁻²[2I − (1 − d)²(C + I)⁻²]W_kC⁻¹W_k′.

Since 0 < d < 1, each factor on the right-hand side is positive definite, so the matrix W_kC⁻¹W_k′ − F_{d,k}C⁻¹F_{d,k}′ is clearly positive definite. Therefore the matrix D is positive definite, since F_{d,k}GF_{d,k}′ ≥ 0. Now, according to Lemma 2 ("Appendix"), MSE(β̂_RE(k)) − MSE(β̂_SRMAULE(d, k)) ≥ 0 if and only if

B(β̂_SRMAULE(d, k))′[σ²D + B(β̂_RE(k))B(β̂_RE(k))′]⁻¹B(β̂_SRMAULE(d, k)) ≤ 1. □
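The condition of Theorem 3.1 can be checked numerically; the sketch below builds D, evaluates the quadratic form, and compares the verdict with the eigenvalues of the actual MSE difference (synthetic data and an assumed restriction).

```python
import numpy as np

# Numerical check of the Theorem 3.1 condition on synthetic data.
rng = np.random.default_rng(5)
n, p = 40, 3
X = rng.normal(size=(n, p))
beta = np.array([1.0, -1.0, 0.5])   # assumed true beta
sigma2 = 0.5
H = np.array([[1.0, 1.0, 0.0]])     # assumed restriction
Omega = np.array([[1.0]])

C = X.T @ X
Cinv = np.linalg.inv(C)
I = np.eye(p)
k, d = 0.5, 0.8
S = Omega + H @ Cinv @ H.T
G = Cinv @ H.T @ np.linalg.solve(S, H @ Cinv)
W_k = np.linalg.solve(C + k * I, C)
T_d = I - (1 - d) ** 2 * np.linalg.inv((C + I) @ (C + I))
F_dk = T_d @ W_k

b_re = (W_k - I) @ beta
b_sr = (F_dk - I) @ beta
mse_re = sigma2 * W_k @ Cinv @ W_k.T + np.outer(b_re, b_re)
mse_sr = (sigma2 * (F_dk @ Cinv @ F_dk.T - F_dk @ G @ F_dk.T)
          + np.outer(b_sr, b_sr))

D = W_k @ Cinv @ W_k.T - F_dk @ Cinv @ F_dk.T + F_dk @ G @ F_dk.T
lhs = b_sr @ np.linalg.solve(sigma2 * D + np.outer(b_re, b_re), b_sr)
superior = bool(lhs <= 1)    # Theorem 3.1: SRMAULE beats RE iff lhs <= 1
```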
We consider the MSE matrix difference between the AULE and the SRMAULE:

MSE(β̂_AULE(d)) − MSE(β̂_SRMAULE(d, k)) = σ²T_dC⁻¹T_d′ − σ²F_{d,k}C⁻¹F_{d,k}′ + σ²F_{d,k}GF_{d,k}′
+ B(β̂_AULE(d))B(β̂_AULE(d))′ − B(β̂_SRMAULE(d, k))B(β̂_SRMAULE(d, k))′. (3.2)
Theorem 3.2 The SRMAULE is superior to the AULE in the mean squared error matrix sense if and only if

B(β̂_SRMAULE(d, k))′[σ²D₁ + B(β̂_AULE(d))B(β̂_AULE(d))′]⁻¹B(β̂_SRMAULE(d, k)) ≤ 1,

where D₁ = T_dC⁻¹T_d′ − F_{d,k}C⁻¹F_{d,k}′ + F_{d,k}GF_{d,k}′.
Proof Consider

T_dC⁻¹T_d′ − F_{d,k}C⁻¹F_{d,k}′ = T_d(C⁻¹ − W_kC⁻¹W_k′)T_d′
= T_dW_k[(I + kC⁻¹)C⁻¹(I + kC⁻¹) − C⁻¹]W_k′T_d′
= kF_{d,k}C⁻²(2I + kC⁻¹)F_{d,k}′.

Since k > 0, the matrix T_dC⁻¹T_d′ − F_{d,k}C⁻¹F_{d,k}′ is clearly positive definite. It follows that the matrix D₁ is positive definite, since F_{d,k}GF_{d,k}′ ≥ 0. Now, by applying Lemma 2, we can say that MSE(β̂_AULE(d)) − MSE(β̂_SRMAULE(d, k)) ≥ 0 if and only if

B(β̂_SRMAULE(d, k))′[σ²D₁ + B(β̂_AULE(d))B(β̂_AULE(d))′]⁻¹B(β̂_SRMAULE(d, k)) ≤ 1. □
We consider the MSE matrix difference between the LE and the SRMAULE:

MSE(β̂_LE(d)) − MSE(β̂_SRMAULE(d, k)) = σ²F_dC⁻¹F_d′ − σ²F_{d,k}C⁻¹F_{d,k}′ + σ²F_{d,k}GF_{d,k}′
+ B(β̂_LE(d))B(β̂_LE(d))′ − B(β̂_SRMAULE(d, k))B(β̂_SRMAULE(d, k))′. (3.3)

Theorem 3.3 The SRMAULE is superior to the LE in the mean squared error matrix sense if and only if

B(β̂_SRMAULE(d, k))′[σ²D₂ + B(β̂_LE(d))B(β̂_LE(d))′]⁻¹B(β̂_SRMAULE(d, k)) ≤ 1,

where D₂ = F_dC⁻¹F_d′ − F_{d,k}C⁻¹F_{d,k}′ + F_{d,k}GF_{d,k}′.
The MSE matrix difference between the SRLE and the SRMAULE is given as

MSE(β̂_SRLE(d)) − MSE(β̂_SRMAULE(d, k)) = σ²F_dC⁻¹F_d′ − σ²F_dGF_d′ − σ²F_{d,k}C⁻¹F_{d,k}′ + σ²F_{d,k}GF_{d,k}′
+ B(β̂_SRLE(d))B(β̂_SRLE(d))′ − B(β̂_SRMAULE(d, k))B(β̂_SRMAULE(d, k))′. (3.4)
Theorem 3.4 When λ_max(NM⁻¹) < 1, the SRMAULE is superior to the SRLE in the mean squared error matrix sense if and only if

B(β̂_SRMAULE(d, k))′[σ²D₃ + B(β̂_SRLE(d))B(β̂_SRLE(d))′]⁻¹B(β̂_SRMAULE(d, k)) ≤ 1,

where D₃ = (F_dC⁻¹F_d′ + F_{d,k}GF_{d,k}′) − (F_dGF_d′ + F_{d,k}C⁻¹F_{d,k}′) = M − N, M = F_dC⁻¹F_d′ + F_{d,k}GF_{d,k}′, N = F_dGF_d′ + F_{d,k}C⁻¹F_{d,k}′, and λ_max(NM⁻¹) is the largest eigenvalue of the matrix NM⁻¹.
Proof We clearly know that the matrices M and N are positive definite. Now, according to Lemma 1, M − N > 0 if and only if λ_max(NM⁻¹) < 1. This implies that the matrix D₃ > 0. By applying Lemma 2, it can be concluded that MSE(β̂_SRLE(d)) − MSE(β̂_SRMAULE(d, k)) ≥ 0 if and only if

B(β̂_SRMAULE(d, k))′[σ²D₃ + B(β̂_SRLE(d))B(β̂_SRLE(d))′]⁻¹B(β̂_SRMAULE(d, k)) ≤ 1. □
We consider the MSE matrix difference between the OLSE and the SRMAULE:

MSE(β̂_OLSE) − MSE(β̂_SRMAULE(d, k)) = σ²C⁻¹ − σ²F_{d,k}C⁻¹F_{d,k}′ + σ²F_{d,k}GF_{d,k}′
− B(β̂_SRMAULE(d, k))B(β̂_SRMAULE(d, k))′. (3.5)
Theorem 3.5 The SRMAULE is superior to the OLSE in the mean squared error matrix sense if and only if

B(β̂_SRMAULE(d, k))′D₄⁻¹B(β̂_SRMAULE(d, k)) ≤ σ²,

where D₄ = C⁻¹ − F_{d,k}C⁻¹F_{d,k}′ + F_{d,k}GF_{d,k}′.
Proof Observe that

C⁻¹ − F_{d,k}C⁻¹F_{d,k}′ = (C⁻¹ − W_kC⁻¹W_k′) + (W_kC⁻¹W_k′ − F_{d,k}C⁻¹F_{d,k}′). (3.6)

Let us consider

C⁻¹ − W_kC⁻¹W_k′ = W_k[(I + kC⁻¹)C⁻¹(I + kC⁻¹) − C⁻¹]W_k′ = kW_k(2I + kC⁻¹)C⁻²W_k′, (3.7)

which is positive definite for k > 0. Moreover, as in the proof of Theorem 3.1,

W_kC⁻¹W_k′ − F_{d,k}C⁻¹F_{d,k}′ = (1 − d)²(C + I)⁻²[2I − (1 − d)²(C + I)⁻²]W_kC⁻¹W_k′ (3.8)

is positive definite, since 0 < d < 1. From Eqs. (3.6), (3.7) and (3.8), one can say that the matrix C⁻¹ − F_{d,k}C⁻¹F_{d,k}′ is positive definite. Therefore the matrix D₄ is positive definite, since F_{d,k}GF_{d,k}′ ≥ 0. Now, according to Lemma 3 ("Appendix"), MSE(β̂_OLSE) − MSE(β̂_SRMAULE(d, k)) ≥ 0 if and only if

B(β̂_SRMAULE(d, k))′D₄⁻¹B(β̂_SRMAULE(d, k)) ≤ σ². □
We consider the MSE matrix difference between the ME and the SRMAULE:

MSE(β̂_ME) − MSE(β̂_SRMAULE(d, k)) = σ²C⁻¹ − σ²G − σ²F_{d,k}C⁻¹F_{d,k}′ + σ²F_{d,k}GF_{d,k}′
− B(β̂_SRMAULE(d, k))B(β̂_SRMAULE(d, k))′. (3.9)
Theorem 3.6 When λ_max(FE⁻¹) < 1, the SRMAULE is superior to the ME in the mean squared error matrix sense if and only if

B(β̂_SRMAULE(d, k))′D₅⁻¹B(β̂_SRMAULE(d, k)) ≤ σ²,

where D₅ = E − F, E = C⁻¹ − F_{d,k}C⁻¹F_{d,k}′, F = G − F_{d,k}GF_{d,k}′, and λ_max(FE⁻¹) is the largest eigenvalue of the matrix FE⁻¹.

Proof In the proof of Theorem 3.5, it has already been shown that the matrix E = C⁻¹ − F_{d,k}C⁻¹F_{d,k}′ is positive definite. After some straightforward calculation, the matrix F can be written as

F = G − F_{d,k}GF_{d,k}′ = (1 − d)²(C + I)⁻²W_kGW_k′[C² + 2C + d(2 − d)I](C + I)⁻².

Now, we can clearly say that the matrix F = G − F_{d,k}GF_{d,k}′ > 0. According to Lemma 1, the matrix D₅ > 0 if and only if λ_max(FE⁻¹) < 1. Now, by applying Lemma 3, we can conclude that MSE(β̂_ME) − MSE(β̂_SRMAULE(d, k)) ≥ 0 if and only if

B(β̂_SRMAULE(d, k))′D₅⁻¹B(β̂_SRMAULE(d, k)) ≤ σ². □
The MSE matrix difference between the MAULE and the SRMAULE can be obtained as

MSE(β̂_MAULE(d, k)) − MSE(β̂_SRMAULE(d, k)) = σ²F_{d,k}GF_{d,k}′. (3.10)

Since σ²F_{d,k}GF_{d,k}′ is nonnegative definite, the SRMAULE is always superior to the MAULE in the mean squared error matrix sense.
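Equation (3.10) can be verified numerically: since the two estimators share the same bias, the bias terms cancel and the gap is exactly σ²F_{d,k}GF_{d,k}′ (synthetic data, assumed restriction).

```python
import numpy as np

# Check of Eq. (3.10): MSE(MAULE) - MSE(SRMAULE) = sigma^2 F_{d,k} G F_{d,k}'.
rng = np.random.default_rng(6)
n, p = 40, 3
X = rng.normal(size=(n, p))
beta = np.array([1.0, -1.0, 0.5])   # assumed true beta
sigma2 = 0.5
H = np.array([[1.0, 1.0, 0.0]])     # assumed restriction
Omega = np.array([[1.0]])

C = X.T @ X
Cinv = np.linalg.inv(C)
I = np.eye(p)
k, d = 0.5, 0.8
S = Omega + H @ Cinv @ H.T
G = Cinv @ H.T @ np.linalg.solve(S, H @ Cinv)
W_k = np.linalg.solve(C + k * I, C)
T_d = I - (1 - d) ** 2 * np.linalg.inv((C + I) @ (C + I))
F_dk = T_d @ W_k
bias = (F_dk - I) @ beta            # shared by MAULE and SRMAULE

mse_maule = sigma2 * F_dk @ Cinv @ F_dk.T + np.outer(bias, bias)
mse_srmaule = (sigma2 * (F_dk @ Cinv @ F_dk.T - F_dk @ G @ F_dk.T)
               + np.outer(bias, bias))
diff = mse_maule - mse_srmaule      # should equal sigma^2 F_{d,k} G F_{d,k}'
```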
To verify the conditions obtained in the above theorems, the OLSE of β will be used in place of the unknown parameter β.
4 Optimal Shrinkage Parameters

To obtain the optimal values of the shrinkage parameters d and k, we consider the canonical form of the model (1.1),

y = Zα + ε, (4.1)

where Z = XQ, α = Q′β, and Q is the orthogonal matrix whose columns are the eigenvectors of C, so that Z′Z = Q′CQ = Λ = diag(λ₁, λ₂, ..., λ_p).
In the canonical form, the scalar mean squared error of the SRMAULE can be written as

MSE(α̂_SRMAULE(d, k)) = σ² Σ_{i=1}^{p} [(λᵢ + 1)² − (1 − d)²]²λᵢ²aᵢᵢ / [(λᵢ + 1)⁴(λᵢ + k)²]
+ Σ_{i=1}^{p} {[(λᵢ + 1)² − (1 − d)²]λᵢ / [(λᵢ + 1)²(λᵢ + k)] − 1}² αᵢ², (4.2)

where aᵢᵢ denotes the ith diagonal element of the matrix Q′AQ.
Differentiating MSE(α̂_SRMAULE(d, k)) with respect to d for fixed k gives

∂MSE/∂d = 4σ² Σ_{i=1}^{p} (1 − d)λᵢ²[(λᵢ + 1)² − (1 − d)²]aᵢᵢ / [(λᵢ + 1)⁴(λᵢ + k)²]
+ 4 Σ_{i=1}^{p} {(1 − d)λᵢαᵢ² / [(λᵢ + 1)²(λᵢ + k)]}·{[(λᵢ + 1)² − (1 − d)²]λᵢ / [(λᵢ + 1)²(λᵢ + k)] − 1}.

Equating this derivative to zero and solving, the stationarity condition gives

(1 − d)² = [Σ_{i=1}^{p} λᵢ(σ²λᵢaᵢᵢ − kαᵢ²)/((λᵢ + 1)²(λᵢ + k)²)] / [Σ_{i=1}^{p} λᵢ²(σ²aᵢᵢ + αᵢ²)/((λᵢ + 1)⁴(λᵢ + k)²)],

so that the optimal value of d for fixed k is

d_opt = 1 − {[Σ_{i=1}^{p} λᵢ(σ²λᵢaᵢᵢ − kαᵢ²)/((λᵢ + 1)²(λᵢ + k)²)] / [Σ_{i=1}^{p} λᵢ²(σ²aᵢᵢ + αᵢ²)/((λᵢ + 1)⁴(λᵢ + k)²)]}^{1/2}. (4.3)
It can be noticed that the value of d depends on the unknown parameters σ² and αᵢ² when k is fixed. For practical purposes, we can replace them by their estimates σ̂² and α̂ᵢ², respectively. Therefore, the estimated optimal value of d is given by

d̂_opt = 1 − {[Σ_{i=1}^{p} λᵢ(σ̂²λᵢaᵢᵢ − kα̂ᵢ²)/((λᵢ + 1)²(λᵢ + k)²)] / [Σ_{i=1}^{p} λᵢ²(σ̂²aᵢᵢ + α̂ᵢ²)/((λᵢ + 1)⁴(λᵢ + k)²)]}^{1/2}. (4.4)
Now, differentiating MSE(α̂_SRMAULE(d, k)) with respect to k for fixed d, it can be derived that

∂MSE/∂k = −2σ² Σ_{i=1}^{p} [(λᵢ + 1)² − (1 − d)²]²λᵢ²aᵢᵢ / [(λᵢ + 1)⁴(λᵢ + k)³]
− 2 Σ_{i=1}^{p} {[(λᵢ + 1)² − (1 − d)²]λᵢαᵢ² / [(λᵢ + 1)²(λᵢ + k)²]}·{[(λᵢ + 1)² − (1 − d)²]λᵢ / [(λᵢ + 1)²(λᵢ + k)] − 1}. (4.5)

Setting each summand of (4.5) equal to zero and solving for k leads to

kᵢ = {σ²[(λᵢ + 1)² − (1 − d)²]λᵢaᵢᵢ − (1 − d)²λᵢαᵢ²} / [(λᵢ + 1)²αᵢ²]. (4.6)
According to Kibria [8], we can propose the optimal value of k by using the arithmetic mean as follows:

k̂_opt = (1/p) Σ_{i=1}^{p} k̂ᵢ = (1/p) Σ_{i=1}^{p} {σ̂²[(λᵢ + 1)² − (1 − d)²]λᵢaᵢᵢ − (1 − d)²λᵢα̂ᵢ²} / [(λᵢ + 1)²α̂ᵢ²]. (4.7)
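The two estimated shrinkage parameters can be sketched as follows, assuming the stationarity solution (1 − d)² equals the ratio of sums above; `lam` are the eigenvalues of C, `a` the diagonal of Q′AQ, and `alpha` the canonical coefficients, all placeholder values here.

```python
import numpy as np

# Sketch of the estimated optimal shrinkage parameters in canonical form;
# all inputs below are placeholder values, not from the paper's data.

def d_opt(lam, a, alpha, sigma2, k):
    """Optimal d for fixed k, from the stationarity condition on (1-d)^2."""
    num = np.sum(lam * (sigma2 * lam * a - k * alpha**2)
                 / ((lam + 1)**2 * (lam + k)**2))
    den = np.sum(lam**2 * (sigma2 * a + alpha**2)
                 / ((lam + 1)**4 * (lam + k)**2))
    return 1.0 - np.sqrt(num / den)

def k_opt(lam, a, alpha, sigma2, d):
    """Optimal k for fixed d: arithmetic mean of the k_i, following Kibria [8]."""
    u = (lam + 1)**2 - (1 - d)**2
    ki = (sigma2 * u * lam * a - (1 - d)**2 * lam * alpha**2) \
         / ((lam + 1)**2 * alpha**2)
    return ki.mean()

lam = np.array([10.0, 3.0, 0.5, 0.01])    # hypothetical eigenvalues of C
a = np.array([0.05, 0.1, 0.8, 5.0])       # hypothetical diag of Q'AQ
alpha = np.array([0.9, 0.3, 0.2, 0.1])    # hypothetical canonical coefficients
print(d_opt(lam, a, alpha, 1.0, 0.5), k_opt(lam, a, alpha, 1.0, 0.9))
```

In practice one would iterate: estimate k with a starting d, then re-estimate d, as is common with two-parameter estimators.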
5 Numerical Illustration
5.1 Numerical Example

To illustrate the behavior of the proposed estimator, we consider the data set on Total National Research and Development Expenditures as a Percent of Gross National Product, originally due to Gruber [5] and later considered by Akdeniz and Erol [1], Li and Yang [10] and Alheety and Kibria [2]. The data set is given below:
X =
[1.9 2.2 1.9 3.7
 1.8 2.2 2.0 3.8
 1.8 2.4 2.1 3.6
 1.8 2.4 2.2 3.8
 2.0 2.5 2.3 3.8
 2.1 2.6 2.4 3.7
 2.1 2.6 2.6 3.8
 2.2 2.6 2.6 4.0
 2.3 2.8 2.8 3.7
 2.3 2.7 2.8 3.8]

and y = [2.3 2.2 2.2 2.3 2.4 2.5 2.6 2.6 2.7 2.7]′.
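The data above can be entered directly; the spread of the eigenvalues of X′X exhibits the ill-conditioning that motivates the biased estimators.

```python
import numpy as np

# The Gruber (1998) data set as listed above.
X = np.array([
    [1.9, 2.2, 1.9, 3.7], [1.8, 2.2, 2.0, 3.8], [1.8, 2.4, 2.1, 3.6],
    [1.8, 2.4, 2.2, 3.8], [2.0, 2.5, 2.3, 3.8], [2.1, 2.6, 2.4, 3.7],
    [2.1, 2.6, 2.6, 3.8], [2.2, 2.6, 2.6, 4.0], [2.3, 2.8, 2.8, 3.7],
    [2.3, 2.7, 2.8, 3.8],
])
y = np.array([2.3, 2.2, 2.2, 2.3, 2.4, 2.5, 2.6, 2.6, 2.7, 2.7])

lam = np.linalg.eigvalsh(X.T @ X)
print(lam.max() / lam.min())   # large eigenvalue ratio -> severe collinearity
```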
Tables 1, 2, 3 and 4 report the estimated scalar mean squared error (SMSE) values of the OLSE, ME, RE, LE, AULE, MAULE, SRLE and SRMAULE for different k values and four different d values selected within the interval (0, 1). Note that the SMSE values can be obtained by taking the trace of the MSE matrix.
From Table 1, we can say that the SRMAULE is worse than the OLSE, ME, RE, LE and AULE when d = 0.1. Moreover, the SRMAULE has a lower SMSE than the MAULE and
Table 1 Estimated SMSE of OLSE, ME, RE, LE, AULE, MAULE, SRLE and SRMAULE with d = 0.1
Table 2 Estimated SMSE of OLSE, ME, RE, LE, AULE, MAULE, SRLE and SRMAULE with d = 0.7
Table 3 Estimated SMSE of OLSE, ME, RE, LE, AULE, MAULE, SRLE and SRMAULE with d = 0.9
Table 4 Estimated SMSE of OLSE, ME, RE, LE, AULE, MAULE, SRLE and SRMAULE with d = 0.95
SRLE. From Table 2, it can be noticed that the proposed estimator has the smallest SMSE among the estimators when k = 0.1 and d = 0.7. From Table 3, one can say that the SRMAULE has the smallest SMSE among the estimators when k ≤ 0.2 and d = 0.9. From Table 4, it can be said that the SRMAULE has the smallest SMSE among the estimators when k ≤ 0.4 and d = 0.95. Furthermore, the proposed estimator has a lower SMSE than the OLSE, RE, LE, AULE, MAULE and SRLE when k ≤ 0.5 and d = 0.95.
5.2 Monte Carlo Simulation Study

To further illustrate the behavior of the proposed estimator, we perform a Monte Carlo simulation study by considering three levels of multicollinearity. Following McDonald and Galarneau [12], we generate the explanatory variables as follows:

xᵢⱼ = (1 − γ²)^{1/2} zᵢⱼ + γzᵢ,ₚ₊₁, i = 1, 2, ..., n, j = 1, 2, ..., p,

where the zᵢⱼ are independent standard normal pseudo-random numbers and γ is specified so that the correlation between any two explanatory variables is γ². Observations on the dependent variable are then generated by

yᵢ = β₁xᵢ₁ + β₂xᵢ₂ + ··· + βₚxᵢₚ + εᵢ, i = 1, 2, ..., n,
where εᵢ is a normal pseudo-random number with mean zero and variance σ². Newhouse and Oman [13] have noted that if the MSE is a function of σ² and β, and if the explanatory variables are fixed, then, subject to the constraint β′β = 1, the MSE is minimized when β is the normalized eigenvector corresponding to the largest eigenvalue of the X′X matrix. In this study, we choose the normalized eigenvector corresponding to the largest eigenvalue of X′X as the coefficient vector β, n = 50, p = 4, and σ² = 1. Three different degrees of correlation are considered by selecting the values γ = 0.9, 0.99 and 0.999. In the simulation study, we have used the same stochastic constraints as in Sect. 5.1.
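The simulation design above can be sketched as follows; the seed is arbitrary, and γ = 0.99 is shown as one of the three levels.

```python
import numpy as np

# McDonald-Galarneau generator: gamma^2 is the intended correlation between
# any two regressors; n, p as in the study, seed arbitrary.
rng = np.random.default_rng(7)
n, p, gamma = 50, 4, 0.99
Z = rng.normal(size=(n, p + 1))
X = np.sqrt(1 - gamma**2) * Z[:, :p] + gamma * Z[:, [p]]

# beta: normalized eigenvector of X'X belonging to the largest eigenvalue,
# following Newhouse and Oman [13]
w, V = np.linalg.eigh(X.T @ X)
beta = V[:, -1]
y = X @ beta + rng.normal(size=n)    # sigma^2 = 1
```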
Tables 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 and 16 report the estimated SMSE values of the OLSE, ME, RE, LE, AULE, MAULE, SRLE and SRMAULE for different k values and four different d values selected within the interval (0, 1).
From Table 5, it has been observed that the SRMAULE is worse than the OLSE, ME, RE and AULE. Nevertheless, the SRMAULE has a smaller SMSE than the LE, SRLE and MAULE for k ≤ 0.7. From Table 6, we can conclude that the SRMAULE has a smaller SMSE than the OLSE, ME, RE, LE, AULE and MAULE when d = 0.7 and γ = 0.9; however, the SRLE has a smaller SMSE than the SRMAULE. From Table 7, the proposed estimator has the smallest SMSE among the estimators for k = 0.2 and k = 0.3 when d = 0.9 and γ = 0.9. According to Table 8, the SRMAULE has the smallest SMSE among the estimators for k ≤ 0.4 when d = 0.95 and γ = 0.9. Based on Table 9, one can conclude that the SRMAULE is worse than the ME and AULE, while it has a smaller SMSE than the OLSE and MAULE. Moreover, the SRMAULE has a smaller SMSE than the SRLE when k ≤ 0.7. With that,
Table 5 Estimated SMSE of OLSE, ME, RE, LE, AULE, MAULE, SRLE and SRMAULE with d = 0.1
and γ = 0.9
Table 6 Estimated SMSE of OLSE, ME, RE, LE, AULE, MAULE, SRLE and SRMAULE with d = 0.7
and γ = 0.9
Table 7 Estimated SMSE of OLSE, ME, RE, LE, AULE, MAULE, SRLE and SRMAULE with d = 0.9
and γ = 0.9
Table 8 Estimated SMSE of OLSE, ME, RE, LE, AULE, MAULE, SRLE and SRMAULE with d = 0.95
and γ = 0.9
Table 9 Estimated SMSE of OLSE, ME, RE, LE, AULE, MAULE, SRLE and SRMAULE with d = 0.1
and γ = 0.99
Table 10 Estimated SMSE of OLSE, ME, RE, LE, AULE, MAULE, SRLE and SRMAULE with d = 0.7
and γ = 0.99
Table 11 Estimated SMSE of OLSE, ME, RE, LE, AULE, MAULE, SRLE and SRMAULE with d = 0.9
and γ = 0.99
Table 12 Estimated SMSE of OLSE, ME, RE, LE, AULE, MAULE, SRLE and SRMAULE with d = 0.95
and γ = 0.99
Table 13 Estimated SMSE of OLSE, ME, RE, LE, AULE, MAULE, SRLE and SRMAULE with d = 0.1
and γ = 0.999
Table 14 Estimated SMSE of OLSE, ME, RE, LE, AULE, MAULE, SRLE and SRMAULE with d = 0.7
and γ = 0.999
Table 15 Estimated SMSE of OLSE, ME, RE, LE, AULE, MAULE, SRLE and SRMAULE with d = 0.9
and γ = 0.999
Table 16 Estimated SMSE of OLSE, ME, RE, LE, AULE, MAULE, SRLE and SRMAULE with d = 0.95
and γ = 0.999
the SRMAULE has a smaller SMSE than the LE when k ≤ 0.8. From Table 10, it can be noticed that the SRMAULE has the smallest SMSE among the estimators when k = 0.4 and k = 0.5; also, the SRMAULE has a lower SMSE than the OLSE, ME, RE, LE, AULE and MAULE. From Table 11, we can say that the proposed estimator has the smallest SMSE among the estimators when d = 0.9 and γ = 0.99, except for k = 0.1. From Table 12, it can be concluded that the proposed estimator has the smallest SMSE among the estimators when d = 0.95 and γ = 0.99.
Table 13 shows that the SRMAULE is worse than the AULE. Also, the SRMAULE is worse than the ME when k ≥ 0.2, and worse than the RE when k ≥ 0.3. However, the SRMAULE has a smaller SMSE than the SRLE when k ≤ 0.8, and a smaller SMSE than the LE unless k = 1. From Table 14, it has been noticed that the SRMAULE has the smallest SMSE among the estimators when 0.4 ≤ k ≤ 0.9, d = 0.7 and γ = 0.999. From Table 15, when d = 0.9 and γ = 0.999, we can say that the proposed estimator has the smallest SMSE among the estimators except for k = 0.1. From Table 16, it can be observed that the proposed estimator has the best performance among the estimators when d = 0.95 and γ = 0.999.
Based on the results discussed in Theorems 3.1–3.6, one can say that the proposed SRMAULE is superior to the other estimators in the mean squared error matrix sense under certain conditions. Also, as discussed in Sect. 3, the proposed SRMAULE is always superior to the MAULE (see Eq. (3.10)), which agrees with the numerical results.
6 Conclusion
In this paper, we proposed a new biased estimator, namely the stochastic restricted modified almost unbiased Liu estimator (SRMAULE), in the multiple linear regression model to combat the well-known multicollinearity problem. Moreover, necessary and sufficient conditions for the superiority of the proposed estimator over the OLSE, ME, RE, LE, AULE, SRLE and MAULE in the mean squared error matrix sense were obtained. A Monte Carlo simulation study was carried out, and a numerical example was used to illustrate the theoretical findings. From the numerical results, it could be concluded that the proposed estimator performs well when d is large.
Acknowledgements The author is grateful to the editor and the three anonymous referees for their valuable
comments which improved the quality of the paper.
Appendix
Lemma 1 [17] Let M and N be positive definite matrices. Then M − N > 0 if and only if λ_max(NM⁻¹) < 1, where λ_max(NM⁻¹) is the largest eigenvalue of the matrix NM⁻¹.

Lemma 2 [16] Let β̂₁ and β̂₂ be two linear estimators of β. Suppose that the dispersion matrix difference D = D(β̂₁) − D(β̂₂) is positive definite. Then MSE(β̂₁) − MSE(β̂₂) is nonnegative definite if and only if b₂′(D + b₁b₁′)⁻¹b₂ ≤ 1, where bⱼ denotes the bias vector of β̂ⱼ, j = 1, 2.
Lemma 3 [4] Let M be a positive definite matrix, namely M > 0, and let α be some vector. Then M − αα′ ≥ 0 if and only if α′M⁻¹α ≤ 1.
References
1. Akdeniz, F., Erol, H.: Mean squared error matrix comparisons of some biased estimators in linear
regression. Commun. Stat. Theory Methods 32, 2389–2413 (2003)
2. Alheety, M.I., Kibria, B.M.G.: Modified Liu-type estimator based on (r − k) class estimator. Commun.
Stat. Theory Methods 42, 304–319 (2013)
3. Arumairajan, S., Wijekoon, P.: Modified almost unbiased Liu estimator in linear regression model. Commun. Math. Stat. 5, 261–276 (2017)
4. Farebrother, R.W.: Further results on the mean square error of ridge regression. J. R. Stat. Soc. B 38,
248–250 (1976)
5. Gruber, M.H.J.: Improving Efficiency by Shrinkage: The James-Stein and Ridge Regression Estimators.
Dekker Inc., New York (1998)
6. Hoerl, A.E., Kennard, R.W.: Ridge regression: biased estimation for nonorthogonal problems. Tech-
nometrics 12, 55–67 (1970)
7. Hubert, M.H., Wijekoon, P.: Improvement of the Liu estimator in linear regression model. Stat. Pap.
47, 471–479 (2006)
8. Kibria, B.M.G.: Performance of some new ridge regression estimators. Commun. Stat. Theory Methods
32, 419–435 (2003)
9. Li, Y., Yang, H.: A new stochastic mixed Ridge Estimator in linear regression. Stat. Pap. 51, 315–323
(2010)
10. Li, Y., Yang, H.: Two kinds of restricted modified estimators in linear regression model. J. Appl. Stat.
38, 1447–1454 (2011)
11. Liu, K.: A new class of biased estimate in linear regression. Commun. Stat. Theory Methods 22,
393–402 (1993)
12. McDonald, G.C., Galarneau, D.I.: A Monte Carlo evaluation of some Ridge type estimators. J. Am.
Stat. Assoc. 70, 407–416 (1975)
13. Newhouse, J.P., Oman, S.D.: An evaluation of Ridge Estimators. Rand Report, No. R-716-Pr, 1-28
(1971)
14. Ozkale, R.M., Kaçiranlar, S.: The restricted and unrestricted two parameter estimators. Commun. Stat.
Theory Methods 36, 2707–2725 (2007)
15. Theil, H., Goldberger, A.S.: On pure and mixed estimation in Economics. Int. Econ. Rev. 2, 65–77
(1961)
16. Trenkler, G., Toutenburg, H.: Mean square error matrix comparisons between biased estimators—an
overview of recent results. Stat. Pap. 31, 165–179 (1990)
17. Wang, S.G., Wu, M.X., Jia, Z.Z.: Matrix inequalities, 2nd edn. Chinese Science Press, Beijing (2006)