On The Stochastic Restricted Modified Almost Unbiased Liu Estimator in Linear Regression Model


Commun. Math. Stat.

https://doi.org/10.1007/s40304-018-0131-3

On the Stochastic Restricted Modified Almost Unbiased Liu Estimator in Linear Regression Model

S. Arumairajan1

Received: 7 November 2017 / Revised: 29 January 2018 / Accepted: 15 March 2018


© School of Mathematical Sciences, University of Science and Technology of China and Springer-Verlag
GmbH Germany, part of Springer Nature 2018

Abstract In this article, we propose a new biased estimator, namely the stochastic restricted modified almost unbiased Liu estimator, obtained by combining the modified almost unbiased Liu estimator (MAULE) and the mixed estimator (ME) when stochastic restrictions are available and multicollinearity is present. The conditions of superiority of the proposed estimator over the ordinary least squares estimator, ME, ridge estimator, Liu estimator, almost unbiased Liu estimator, stochastic restricted Liu estimator and MAULE in the mean squared error matrix sense are obtained. Finally, a numerical example and a Monte Carlo simulation are given to illustrate the theoretical findings.

Keywords Multicollinearity · Stochastic restrictions · Modified almost unbiased Liu estimator · Stochastic restricted modified almost unbiased Liu estimator · Mean squared error matrix

Mathematics Subject Classification 62J07 · 62F03

1 Introduction

First, we consider the multiple linear regression model


 
y = Xβ + ε,  ε ∼ N(0, σ²In), (1.1)

✉ S. Arumairajan
arumais@gmail.com

1 Department of Mathematics and Statistics, University of Jaffna, Jaffna, Sri Lanka


where y is an n × 1 observable random vector, X is an n × p known design matrix of rank p, β is a p × 1 vector of unknown parameters, In is the identity matrix of order n, and ε is an n × 1 vector of disturbances.
The ordinary least squares estimator (OLSE) for the model (1.1) is given as:

β̂OLSE = C⁻¹X′y, (1.2)

where C = X′X.
The mean squared error matrix of OLSE is given by

MSE(β̂OLSE) = σ²C⁻¹. (1.3)
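As a quick illustration, the OLSE of Eqs. (1.2) and (1.3) can be computed directly. The following Python sketch uses simulated data; the variable names and values are ours, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data for model (1.1): y = X beta + eps with eps ~ N(0, sigma^2 I_n).
n, p = 50, 4
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, 2.0, -1.0, 0.5])   # illustrative coefficients
sigma2 = 1.0
y = X @ beta_true + rng.normal(scale=np.sqrt(sigma2), size=n)

C = X.T @ X                                  # C = X'X
beta_olse = np.linalg.solve(C, X.T @ y)      # eq. (1.2): OLSE
mse_olse = sigma2 * np.linalg.inv(C)         # eq. (1.3): MSE matrix of OLSE
```

Solving the normal equations with `np.linalg.solve` avoids explicitly inverting C for the estimate itself; the inverse is only formed for the MSE matrix.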

Instead of the ordinary least squares estimator (OLSE), biased estimators have been proposed in regression analysis to overcome the multicollinearity problem. Among these are the ridge estimator (RE) [6], the Liu estimator (LE) [11] and the almost unbiased Liu estimator (AULE) [1]. Recently, Arumairajan and Wijekoon [3] proposed the modified almost unbiased Liu estimator (MAULE) based on the sample information (1.1).
Hoerl and Kennard [6] proposed the ridge estimator as follows:

β̂RE(k) = Wk β̂OLSE, (1.4)

where Wk = (I + kC⁻¹)⁻¹ for k ≥ 0 and I is the identity matrix of order p.
The bias vector and MSE matrix of RE are given by

B(β̂RE(k)) = (Wk − I)β (1.5)

and

MSE(β̂RE(k)) = σ²WkC⁻¹Wk′ + (Wk − I)ββ′(Wk − I)′, (1.6)

respectively.
The Liu estimator proposed by Liu [11] is given by

β̂LE(d) = Fd β̂OLSE, (1.7)

where Fd = (C + I)⁻¹(C + dI) for 0 < d < 1.
The bias vector and MSE matrix of LE are given by

B(β̂LE(d)) = (Fd − I)β (1.8)

and

MSE(β̂LE(d)) = σ²FdC⁻¹Fd′ + (Fd − I)ββ′(Fd − I)′, (1.9)

respectively.
The almost unbiased Liu estimator (AULE) proposed by Akdeniz and Erol [1] is
given by

β̂AULE(d) = Td β̂OLSE, (1.10)

where Td = I − (1 − d)²(C + I)⁻².
The bias vector and MSE matrix of AULE are given by

B(β̂AULE(d)) = (Td − I)β (1.11)

and

MSE(β̂AULE(d)) = σ²TdC⁻¹Td′ + (Td − I)ββ′(Td − I)′, (1.12)
respectively.
Recently, Arumairajan and Wijekoon [3] proposed the modified almost unbiased
Liu estimator as follows:
 
β̂MAULE(d, k) = [I − (1 − d)²(C + I)⁻²](C + kI)⁻¹X′y. (1.13)

The MAULE can be rewritten as:

β̂MAULE(d, k) = Fd,k β̂OLSE, (1.14)

where Fd,k = Td Wk.
The bias vector and MSE matrix of MAULE are given by

B(β̂MAULE(d, k)) = (Fd,k − I)β (1.15)

and

MSE(β̂MAULE(d, k)) = σ²Fd,kC⁻¹Fd,k′ + (Fd,k − I)ββ′(Fd,k − I)′, (1.16)
respectively.
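The four biased estimators above differ only in the filter matrix applied to β̂OLSE. A minimal sketch, with simulated data and our own variable names, that builds Wk, Fd, Td and Fd,k as defined in Eqs. (1.4), (1.7), (1.10) and (1.14):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 3))
C = X.T @ X
I = np.eye(3)
k, d = 0.5, 0.7      # illustrative shrinkage parameters

Cinv = np.linalg.inv(C)
W_k = np.linalg.inv(I + k * Cinv)                        # RE filter, eq. (1.4)
F_d = np.linalg.inv(C + I) @ (C + d * I)                 # LE filter, eq. (1.7)
T_d = I - (1 - d) ** 2 * np.linalg.inv(C + I) @ np.linalg.inv(C + I)  # AULE filter, eq. (1.10)
F_dk = T_d @ W_k                                         # MAULE filter, eq. (1.14)
```

Note that Wk = (I + kC⁻¹)⁻¹ = (C + kI)⁻¹C, so the ridge filter can also be formed without inverting C.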
An alternative method to deal with the multicollinearity problem is to consider parameter estimation with some restrictions on the unknown parameters, which may be exact or stochastic restrictions. In addition to the sample model (1.1), let us be given some prior


information about β in the form of a set of q independent stochastic linear restrictions


as follows:
 
h  Hβ + υ, υ ∼ N 0, σ 2 Ω , (1.17)

where h is an q ×1 stochastic known vector, H is a q × p of full row rank q ≤ p with


known elements, υ is an q ×1 random vector of disturbances, and Ω is assumed to be
  definite. Further, it is assumed that υ is stochastically independent
known and positive
of ε, i.e., E εν   0.
The mixed estimator (ME) [15] due to the stochastic prior restriction (1.17) is given by

β̂ME = β̂OLSE + C⁻¹H′(Ω + HC⁻¹H′)⁻¹(h − Hβ̂OLSE). (1.18)

The mean squared error matrix of ME is given as:

MSE(β̂ME) = σ²C⁻¹ − σ²G = σ²A, (1.19)

where G = C⁻¹H′(Ω + HC⁻¹H′)⁻¹HC⁻¹ and A = C⁻¹ − G = (C + H′Ω⁻¹H)⁻¹.
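The mixed estimator of Eq. (1.18) and the matrices G and A of Eq. (1.19) can be sketched as follows; the data and the restriction matrix H are synthetic and chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, q = 40, 4, 1
X = rng.normal(size=(n, p))
beta = np.ones(p)
y = X @ beta + rng.normal(size=n)

H = np.array([[1.0, -2.0, -2.0, -2.0]])     # q x p restriction matrix (illustrative)
Omega = np.eye(q)
h = H @ beta + rng.normal(size=q)           # h = H beta + v, eq. (1.17)

C = X.T @ X
Cinv = np.linalg.inv(C)
beta_olse = Cinv @ X.T @ y

# Mixed estimator, eq. (1.18)
S = np.linalg.inv(Omega + H @ Cinv @ H.T)
beta_me = beta_olse + Cinv @ H.T @ S @ (h - H @ beta_olse)

# G and A from eq. (1.19); A also equals (C + H' Omega^{-1} H)^{-1}
G = Cinv @ H.T @ S @ H @ Cinv
A = Cinv - G
```

The test below checks the matrix identity A = C⁻¹ − G = (C + H′Ω⁻¹H)⁻¹, which is an instance of the Sherman–Morrison–Woodbury formula.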
By replacing OLSE with ME in the LE, Hubert and Wijekoon [7] proposed the stochastic restricted Liu estimator (SRLE) as follows:

β̂SRLE(d) = Fd β̂ME. (1.20)

The bias vector and MSE matrix of SRLE are given by


 
B(β̂SRLE(d)) = (Fd − I)β (1.21)

and

MSE(β̂SRLE(d)) = σ²FdC⁻¹Fd′ − σ²FdGFd′ + (Fd − I)ββ′(Fd − I)′, (1.22)

respectively.
In the literature, researchers are still trying to find the best estimator to combat the multicollinearity problem by using sample information only or both sample and prior information. Since a combination of two different estimators may inherit the advantages of both, we are motivated to propose a new estimator by combining MAULE and ME. Also, the idea used by Hubert and Wijekoon [7] helps us to introduce the new estimator, which we will see in the next section.
The rest of the paper is organized as follows. We propose the new estimator and
obtain its stochastic properties in Sect. 2. The proposed estimator is compared with
OLSE, ME, RE, LE, AULE and MAULE in the mean squared error matrix sense
in Sect. 3. The optimal shrinkage parameters are obtained in Sect. 4. In Sect. 5,
a Monte Carlo simulation study is carried out and a numerical example is used


to illustrate the theoretical findings. Finally, some concluding remarks are given in
Sect. 6.

2 The Proposed Estimator and Its Stochastic Properties

Following Hubert and Wijekoon [7], we propose the new estimator, namely stochastic
restricted modified almost unbiased Liu estimator (SRMAULE) as follows:

β̂SRMAULE(d, k) = Fd,k β̂ME. (2.1)

The bias vector, dispersion matrix and MSE matrix of β̂SRMAULE (d, k) can be obtained
as:
   
B(β̂SRMAULE(d, k)) = (Fd,k − I)β, (2.2)

D(β̂SRMAULE(d, k)) = σ²Fd,kC⁻¹Fd,k′ − σ²Fd,kGFd,k′ (2.3)

and

MSE(β̂SRMAULE(d, k)) = σ²Fd,kC⁻¹Fd,k′ − σ²Fd,kGFd,k′ + (Fd,k − I)ββ′(Fd,k − I)′, (2.4)

respectively.
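Putting the pieces together, the SRMAULE of Eq. (2.1) and its MSE matrix of Eq. (2.4) can be sketched as follows (all data simulated, names ours; here the restriction is taken without noise for simplicity):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 40, 4
X = rng.normal(size=(n, p))
beta = np.ones(p)
sigma2 = 1.0
y = X @ beta + rng.normal(size=n)
H = np.array([[1.0, -2.0, -2.0, -2.0]])
Omega = np.eye(1)
h = H @ beta                                  # noiseless restriction, illustrative

C = X.T @ X
I = np.eye(p)
Cinv = np.linalg.inv(C)
beta_olse = Cinv @ X.T @ y
S = np.linalg.inv(Omega + H @ Cinv @ H.T)
beta_me = beta_olse + Cinv @ H.T @ S @ (h - H @ beta_olse)   # eq. (1.18)
G = Cinv @ H.T @ S @ H @ Cinv                                # eq. (1.19)

d, k = 0.9, 0.2
W_k = np.linalg.inv(I + k * Cinv)
T_d = I - (1 - d) ** 2 * np.linalg.inv(C + I) @ np.linalg.inv(C + I)
F_dk = T_d @ W_k

beta_srmaule = F_dk @ beta_me                 # eq. (2.1)

# MSE matrix, eq. (2.4): dispersion minus restriction gain plus squared bias
bias = (F_dk - I) @ beta
mse_srmaule = (sigma2 * F_dk @ Cinv @ F_dk.T
               - sigma2 * F_dk @ G @ F_dk.T
               + np.outer(bias, bias))
```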

3 Mean Squared Error Matrix Comparisons

Although prior information is incorporated with the sample information in the estimation, it has been noticed in the literature that an estimator based on the sample information alone can, under certain conditions, be superior to an estimator based on both sample and prior information in the mean squared error matrix sense. Researchers have therefore been comparing all kinds of estimators to find such superiority conditions. In this section, the proposed estimator is compared with OLSE, ME, RE, LE, AULE, SRLE and MAULE by the mean squared error matrix criterion.

3.1 Comparison Between RE and SRMAULE

In order to compare the RE and SRMAULE in terms of MSE matrix, we investigate


the following difference.

   

MSE(β̂RE(k)) − MSE(β̂SRMAULE(d, k)) = σ²WkC⁻¹Wk′ − σ²Fd,kC⁻¹Fd,k′ + σ²Fd,kGFd,k′ + B(β̂RE(k))B(β̂RE(k))′ − B(β̂SRMAULE(d, k))B(β̂SRMAULE(d, k))′. (3.1)

Now, we can state the following theorem.

Theorem 3.1 The SRMAULE is superior to RE in the mean squared error matrix sense if and only if

B(β̂SRMAULE(d, k))′[σ²D + B(β̂RE(k))B(β̂RE(k))′]⁻¹B(β̂SRMAULE(d, k)) ≤ 1,

where D = WkC⁻¹Wk′ − Fd,kC⁻¹Fd,k′ + Fd,kGFd,k′.

Proof Let us consider

WkC⁻¹Wk′ − Fd,kC⁻¹Fd,k′
= WkC⁻¹Wk′ − [I − (1 − d)²(C + I)⁻²]WkC⁻¹Wk′[I − (1 − d)²(C + I)⁻²]
= (1 − d)²(C + I)⁻²WkC⁻¹Wk′ + (1 − d)²WkC⁻¹Wk′[(C + I)² − (1 − d)²I](C + I)⁻⁴
= (1 − d)²(C + I)⁻²WkC⁻¹Wk′ + (1 − d)²WkC⁻¹Wk′[C² + 2C + d(2 − d)I](C + I)⁻⁴.

Since 0 < d < 1, the matrix WkC⁻¹Wk′ − Fd,kC⁻¹Fd,k′ is clearly positive definite. Therefore, the matrix D is positive definite since Fd,kGFd,k′ > 0. Now, according to Lemma 2 (“Appendix”), MSE(β̂RE(k)) − MSE(β̂SRMAULE(d, k)) ≥ 0 if and only if

B(β̂SRMAULE(d, k))′[σ²D + B(β̂RE(k))B(β̂RE(k))′]⁻¹B(β̂SRMAULE(d, k)) ≤ 1.

This completes the proof. 
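The scalar condition of Theorem 3.1 is straightforward to check numerically once D and the two bias vectors are formed. A sketch under simulated data (names ours); in practice the unknown β is replaced by β̂OLSE, as noted at the end of Sect. 3:

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 40, 4
X = rng.normal(size=(n, p))
beta = np.ones(p)                 # stands in for the unknown parameter vector
sigma2 = 1.0
H = np.array([[1.0, -2.0, -2.0, -2.0]])
Omega = np.eye(1)

C = X.T @ X
I = np.eye(p)
Cinv = np.linalg.inv(C)
S = np.linalg.inv(Omega + H @ Cinv @ H.T)
G = Cinv @ H.T @ S @ H @ Cinv

d, k = 0.9, 0.2
W_k = np.linalg.inv(I + k * Cinv)
T_d = I - (1 - d) ** 2 * np.linalg.inv(C + I) @ np.linalg.inv(C + I)
F_dk = T_d @ W_k

# D = Wk C^{-1} Wk' - Fdk C^{-1} Fdk' + Fdk G Fdk'  (Theorem 3.1)
D = W_k @ Cinv @ W_k.T - F_dk @ Cinv @ F_dk.T + F_dk @ G @ F_dk.T

b_re = (W_k - I) @ beta           # bias of RE, eq. (1.5)
b_sr = (F_dk - I) @ beta          # bias of SRMAULE, eq. (2.2)

# Scalar superiority condition of Theorem 3.1: lhs <= 1
lhs = b_sr @ np.linalg.inv(sigma2 * D + np.outer(b_re, b_re)) @ b_sr
srmaule_superior_to_re = bool(lhs <= 1.0)
```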


3.2 Comparison Between AULE and SRMAULE

We consider the MSE matrix difference between AULE and SRMAULE as:
   
MSE β̂AULE (d) − MSE β̂SRMAULE (d, k)  σ 2 Td C −1 Td − σ 2 Fd,k C −1 F  d,k
   
+ σ 2 Fd,k G F  d,k F  d,k + B β̂AULE (d) B β̂AULE (d)
   
− B β̂SRMAULE (d, k) B β̂SRMAULE (d, k) .
(3.2)

Now, the following theorem can be stated.

Theorem 3.2 The SRMAULE is superior to AULE in the mean squared error matrix sense if and only if

B(β̂SRMAULE(d, k))′[σ²D1 + B(β̂AULE(d))B(β̂AULE(d))′]⁻¹B(β̂SRMAULE(d, k)) ≤ 1,

where D1 = TdC⁻¹Td′ − Fd,kC⁻¹Fd,k′ + Fd,kGFd,k′.

Proof First, we simplify the following matrix as follows:

TdC⁻¹Td′ − Fd,kC⁻¹Fd,k′
= Td(C⁻¹ − WkC⁻¹Wk′)Td′
= TdWk[(I + kC⁻¹)C⁻¹(I + kC⁻¹) − C⁻¹]Wk′Td′
= kFd,kC⁻²(2I + kC⁻¹)Fd,k′.

Now, we can clearly say that the matrix TdC⁻¹Td′ − Fd,kC⁻¹Fd,k′ is positive definite. It follows that the matrix D1 is positive definite since Fd,kGFd,k′ > 0. Now, by applying Lemma 2, we can say that MSE(β̂AULE(d)) − MSE(β̂SRMAULE(d, k)) ≥ 0 if and only if

B(β̂SRMAULE(d, k))′[σ²D1 + B(β̂AULE(d))B(β̂AULE(d))′]⁻¹B(β̂SRMAULE(d, k)) ≤ 1.

This completes the proof. 


3.3 Comparison Between LE and SRMAULE

We consider the MSE matrix difference between LE and SRMAULE as:


   

MSE(β̂LE(d)) − MSE(β̂SRMAULE(d, k)) = σ²FdC⁻¹Fd′ − σ²Fd,kC⁻¹Fd,k′ + σ²Fd,kGFd,k′ + B(β̂LE(d))B(β̂LE(d))′ − B(β̂SRMAULE(d, k))B(β̂SRMAULE(d, k))′. (3.3)

Now, the following theorem can be stated.


 
Theorem 3.3 When λmax(BA⁻¹) < 1, the SRMAULE is superior to LE in the mean squared error matrix sense if and only if

B(β̂SRMAULE(d, k))′[σ²D2 + B(β̂LE(d))B(β̂LE(d))′]⁻¹B(β̂SRMAULE(d, k)) ≤ 1,

where D2 = FdC⁻¹Fd′ − Fd,kC⁻¹Fd,k′ + Fd,kGFd,k′ = A − B + Fd,kGFd,k′, A = FdC⁻¹Fd′, B = Fd,kC⁻¹Fd,k′, and λmax(BA⁻¹) is the largest eigenvalue of the matrix BA⁻¹.
Proof It is obviously known that the matrices A and B are positive definite. Now, based on Lemma 1 (“Appendix”), one can say that A − B > 0 if and only if λmax(BA⁻¹) < 1. Since the matrix Fd,kGFd,k′ is positive definite, one can say that the matrix D2 is positive definite. Now, according to Lemma 2, MSE(β̂LE(d)) − MSE(β̂SRMAULE(d, k)) ≥ 0 if and only if

B(β̂SRMAULE(d, k))′[σ²D2 + B(β̂LE(d))B(β̂LE(d))′]⁻¹B(β̂SRMAULE(d, k)) ≤ 1.

This completes the proof. 

3.4 Comparison Between SRLE and SRMAULE

The MSE matrix difference between SRLE and SRMAULE is given as:
   
MSE β̂SRLE (d) − MSE β̂SRMAULE (d, k)  σ 2 Fd C −1 Fd − σ 2 Fd G Fd
   
 F
− σ 2 Fd,k C −1 Fd,k 
d,k + σ Fd,k G Fd,k + B β̂SRLE (d) B β̂SRLE (d)
2 (3.4)
   
− B β̂SRMAULE (d, k) B β̂SRMAULE (d, k) .

Now, one can state the following theorem.


 
Theorem 3.4 When λmax(NM⁻¹) < 1, the SRMAULE is superior to SRLE in the mean squared error matrix sense if and only if

B(β̂SRMAULE(d, k))′[σ²D3 + B(β̂SRLE(d))B(β̂SRLE(d))′]⁻¹B(β̂SRMAULE(d, k)) ≤ 1,

where D3 = FdC⁻¹Fd′ + Fd,kGFd,k′ − (FdGFd′ + Fd,kC⁻¹Fd,k′) = M − N, M = FdC⁻¹Fd′ + Fd,kGFd,k′, N = FdGFd′ + Fd,kC⁻¹Fd,k′, and λmax(NM⁻¹) is the largest eigenvalue of the matrix NM⁻¹.

Proof We clearly know that the matrices M and N are positive definite. Now, according to Lemma 1, it can be said that M − N > 0 if and only if λmax(NM⁻¹) < 1. This implies that the matrix D3 > 0. By applying Lemma 2, it can be concluded that

MSE(β̂SRLE(d)) − MSE(β̂SRMAULE(d, k)) ≥ 0

if and only if

B(β̂SRMAULE(d, k))′[σ²D3 + B(β̂SRLE(d))B(β̂SRLE(d))′]⁻¹B(β̂SRMAULE(d, k)) ≤ 1.

This completes the proof. 

3.5 Comparison Between OLSE and SRMAULE

We consider the MSE matrix difference between OLSE and SRMAULE as:
   
MSE β̂OLSE − MSE β̂SRMAULE (d, k)  σ 2 C −1 − σ 2 Fd,k C −1 Fd,k
 
+ σ 2 Fd,k G Fd,k
   
− B β̂SRMAULE (d, k) B β̂SRMAULE (d, k) .

Now, the following theorem can be stated.

Theorem 3.5 The SRMAULE is superior to OLSE in the mean squared error matrix
sense if and only if
   
B(β̂SRMAULE(d, k))′D4⁻¹B(β̂SRMAULE(d, k)) ≤ σ²,

where D4 = C⁻¹ − Fd,kC⁻¹Fd,k′ + Fd,kGFd,k′.


Proof Now, the matrix C⁻¹ − Fd,kC⁻¹Fd,k′ can be simplified as follows:

C⁻¹ − Fd,kC⁻¹Fd,k′
= C⁻¹ − [I − (1 − d)²(C + I)⁻²]WkC⁻¹Wk′[I − (1 − d)²(C + I)⁻²]
= C⁻¹ − WkC⁻¹Wk′ + (1 − d)²WkC⁻¹Wk′(C + I)⁻²
+ (1 − d)²(C + I)⁻²WkC⁻¹Wk′ − (1 − d)⁴(C + I)⁻²WkC⁻¹Wk′(C + I)⁻². (3.6)

Let us consider

C⁻¹ − WkC⁻¹Wk′
= Wk[(I + kC⁻¹)C⁻¹(I + kC⁻¹) − C⁻¹]Wk′
= kWk(2I + kC⁻¹)C⁻²Wk′. (3.7)

Therefore, the matrix C⁻¹ − WkC⁻¹Wk′ is positive definite.


Now, we consider

(1 − d)²(C + I)⁻²WkC⁻¹Wk′ − (1 − d)⁴(C + I)⁻²WkC⁻¹Wk′(C + I)⁻²
= (1 − d)²(C + I)⁻²WkC⁻¹Wk′[I − (1 − d)²(C + I)⁻²]
= (1 − d)²(C + I)⁻²WkC⁻¹Wk′[(C + I)² − (1 − d)²I](C + I)⁻²
= (1 − d)²(C + I)⁻²WkC⁻¹Wk′[C² + 2C + d(2 − d)I](C + I)⁻². (3.8)

Now, we can say that the matrix (1 − d)²(C + I)⁻²WkC⁻¹Wk′ − (1 − d)⁴(C + I)⁻²WkC⁻¹Wk′(C + I)⁻² is positive definite.
From Eqs. (3.6), (3.7) and (3.8), one can say that the matrix C⁻¹ − Fd,kC⁻¹Fd,k′ is positive definite. Therefore, the matrix D4 is positive definite since Fd,kGFd,k′ > 0. Now, according to Lemma 3 (“Appendix”), MSE(β̂OLSE) − MSE(β̂SRMAULE(d, k)) ≥ 0 if and only if

B(β̂SRMAULE(d, k))′D4⁻¹B(β̂SRMAULE(d, k)) ≤ σ².

This completes the proof. 

3.6 Comparison Between ME and SRMAULE

In order to compare the ME and SRMAULE in terms of MSE matrix, we investigate


the following difference.

   

MSE(β̂ME) − MSE(β̂SRMAULE(d, k)) = σ²C⁻¹ − σ²G − σ²Fd,kC⁻¹Fd,k′ + σ²Fd,kGFd,k′ − B(β̂SRMAULE(d, k))B(β̂SRMAULE(d, k))′. (3.9)

Now, the following theorem can be stated.


 
Theorem 3.6 When λmax(FE⁻¹) < 1, the SRMAULE is superior to ME in the mean squared error matrix sense if and only if

B(β̂SRMAULE(d, k))′D5⁻¹B(β̂SRMAULE(d, k)) ≤ σ²,

where D5 = E − F = C⁻¹ − G − Fd,kC⁻¹Fd,k′ + Fd,kGFd,k′, E = C⁻¹ − Fd,kC⁻¹Fd,k′, F = G − Fd,kGFd,k′, and λmax(FE⁻¹) is the largest eigenvalue of the matrix FE⁻¹.
Proof In the proof of Theorem 3.5, it has already been shown that the matrix E = C⁻¹ − Fd,kC⁻¹Fd,k′ is positive definite.
After some straightforward calculation, the matrix F can be written as:

F = G − Fd,kGFd,k′ = (1 − d)²(C + I)⁻²WkGWk′[C² + 2C + d(2 − d)I](C + I)⁻².

Now, we can clearly say that the matrix F = G − Fd,kGFd,k′ > 0. According to Lemma 1, the matrix D5 > 0 if and only if λmax(FE⁻¹) < 1. Now, by applying Lemma 3, we can conclude that

MSE(β̂ME) − MSE(β̂SRMAULE(d, k)) ≥ 0

if and only if

B(β̂SRMAULE(d, k))′D5⁻¹B(β̂SRMAULE(d, k)) ≤ σ².

This completes the proof. 

3.7 Comparison Between MAULE and SRMAULE

The MSE matrix difference between MAULE and SRMAULE can be obtained as:

MSE(β̂MAULE(d, k)) − MSE(β̂SRMAULE(d, k)) = σ²Fd,kGFd,k′. (3.10)

Since the matrix σ²Fd,kGFd,k′ is nonnegative definite, the MSE matrix difference between MAULE and SRMAULE given in Eq. (3.10) is also nonnegative definite. Therefore, the SRMAULE is always superior to MAULE in the mean squared error matrix sense.


To verify the conditions obtained in the above theorems, the OLSE of β will be used in place of the unknown parameter β.

4 Selection of the Shrinkage Parameters k and d

The model given in Eq. (1.1) can be rewritten as follows:

y = Zα + ε, (4.1)

where Z = XΓ, α = Γ′β, and Γ is an orthogonal matrix whose columns constitute the eigenvectors of X′X. Then Z′Z = Γ′X′XΓ = Λ = diag(λ1, λ2, . . . , λp), where λ1, λ2, . . . , λp are the ordered eigenvalues of X′X.
Now, the MSE of α̂SRMAULE(d, k) can be obtained as:

MSE(α̂SRMAULE(d, k)) = σ² Σ_{i=1}^p {[(λi + 1)² − (1 − d)²]²λi²aii} / {(λi + 1)⁴(λi + k)²}
+ Σ_{i=1}^p {[(λi + 1)² − (1 − d)²]λi / [(λi + 1)²(λi + k)] − 1}² αi²,

where aii ≥ 0 is the i-th diagonal element of the matrix Γ′AΓ.



Now, we can obtain the following by differentiating MSE(α̂SRMAULE(d, k)) with respect to d for fixed k:

∂MSE(α̂SRMAULE(d, k))/∂d = 4σ² Σ_{i=1}^p {[(λi + 1)² − (1 − d)²](1 − d)λi²aii} / {(λi + 1)⁴(λi + k)²}
+ 4 Σ_{i=1}^p {[(λi + 1)² − (1 − d)²]λi / [(λi + 1)²(λi + k)] − 1} · {(1 − d)λiαi²} / {(λi + 1)²(λi + k)}. (4.2)

Equating Eq. (4.2) to zero, we can obtain

d = 1 − {Σ_{i=1}^p λi(σ²λiaii − kαi²) / [(λi + 1)²(λi + k)²]} / {Σ_{i=1}^p λi²(σ²aii + αi²) / [(λi + 1)⁴(λi + k)²]}. (4.3)


It has been noticed that the value of d depends on the unknown parameters σ² and αi² when k is fixed. For practical purposes, we can replace them by their estimated values σ̂² and α̂i², respectively. Therefore, the estimated optimum value of d is given by

d̂opt = 1 − {Σ_{i=1}^p λi(σ̂²λiaii − kα̂i²) / [(λi + 1)²(λi + k)²]} / {Σ_{i=1}^p λi²(σ̂²aii + α̂i²) / [(λi + 1)⁴(λi + k)²]}. (4.4)
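For a fixed k, the estimated optimum d of Eq. (4.4) is a ratio of two weighted sums over the eigenvalues λi. A sketch of this computation; the values of `a` and `alpha_hat` below are hypothetical and used only for illustration:

```python
import numpy as np

def d_opt(lam, a, alpha_hat, sigma2_hat, k):
    """Estimated optimal d of eq. (4.4) for a fixed k.

    lam: eigenvalues of X'X; a: diagonal of Gamma' A Gamma;
    alpha_hat: estimated canonical coefficients; sigma2_hat: estimated sigma^2.
    """
    num = np.sum(lam * (sigma2_hat * lam * a - k * alpha_hat ** 2)
                 / ((lam + 1) ** 2 * (lam + k) ** 2))
    den = np.sum(lam ** 2 * (sigma2_hat * a + alpha_hat ** 2)
                 / ((lam + 1) ** 4 * (lam + k) ** 2))
    return 1.0 - num / den

# Eigenvalues from Sect. 5.1; a and alpha_hat are hypothetical inputs.
lam = np.array([302.9626, 0.7283, 0.0447, 0.0345])
a = np.array([0.01, 0.02, 0.05, 0.05])
alpha_hat = np.array([0.5, 0.3, 0.2, 0.1])
d_hat = d_opt(lam, a, alpha_hat, sigma2_hat=0.0015, k=0.5)
```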



Now differentiating MSE(α̂SRMAULE(d, k)) with respect to k for fixed d, it can be derived that

∂MSE(α̂SRMAULE(d, k))/∂k = −2σ² Σ_{i=1}^p {[(λi + 1)² − (1 − d)²]²λi²aii} / {(λi + 1)⁴(λi + k)³}
− 2 Σ_{i=1}^p {[(λi + 1)² − (1 − d)²]λi / [(λi + 1)²(λi + k)] − 1} · {[(λi + 1)² − (1 − d)²]λiαi²} / {(λi + 1)²(λi + k)²}. (4.5)

Hoerl and Kennard [6] and Özkale and Kaçıranlar [14] proposed the value of k by minimizing MSE(α̂SRMAULE(d, k)) with respect to k and equating the numerator to zero. Following them, it can be obtained that

ki = {σ²[(λi + 1)² − (1 − d)²]λiaii − (1 − d)²λiαi²} / [(λi + 1)²αi²]. (4.6)

According to Kibria [8], we can propose the optimal value of k by using the arithmetic mean as follows:

k̂opt = (1/p) Σ_{i=1}^p k̂i = (1/p) Σ_{i=1}^p {σ̂²[(λi + 1)² − (1 − d)²]λiaii − (1 − d)²λiα̂i²} / [(λi + 1)²α̂i²]. (4.7)
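Eq. (4.7) simply averages the individual k̂i of Eq. (4.6). A sketch with the same kind of hypothetical inputs as above (only the eigenvalues come from the paper):

```python
import numpy as np

def k_opt(lam, a, alpha_hat, sigma2_hat, d):
    """Estimated optimal k of eq. (4.7): arithmetic mean of the k_i in eq. (4.6)."""
    g = (lam + 1) ** 2 - (1 - d) ** 2
    k_i = (sigma2_hat * g * lam * a - (1 - d) ** 2 * lam * alpha_hat ** 2) \
          / ((lam + 1) ** 2 * alpha_hat ** 2)
    return k_i.mean()

# Eigenvalues from Sect. 5.1; a and alpha_hat are hypothetical inputs.
lam = np.array([302.9626, 0.7283, 0.0447, 0.0345])
a = np.array([0.01, 0.02, 0.05, 0.05])
alpha_hat = np.array([0.5, 0.3, 0.2, 0.1])
k_hat = k_opt(lam, a, alpha_hat, sigma2_hat=0.0015, d=0.7)
```

In practice d̂opt and k̂opt depend on each other, so one is typically fixed while the other is estimated.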

5 Numerical Illustration

5.1 Numerical Example

To illustrate the behavior of the proposed estimator, we consider the data set on Total National Research and Development Expenditures as a Percent of Gross National Product originally due to Gruber [5], and later considered by Akdeniz and Erol [1], Li and Yang [10] and Alheety and Kibria [2]. The data set is given below:


X =
⎛ 1.9 2.2 1.9 3.7 ⎞
⎜ 1.8 2.2 2.0 3.8 ⎟
⎜ 1.8 2.4 2.1 3.6 ⎟
⎜ 1.8 2.4 2.2 3.8 ⎟
⎜ 2.0 2.5 2.3 3.8 ⎟
⎜ 2.1 2.6 2.4 3.7 ⎟
⎜ 2.1 2.6 2.6 3.8 ⎟
⎜ 2.2 2.6 2.6 4.0 ⎟
⎜ 2.3 2.8 2.8 3.7 ⎟
⎝ 2.3 2.7 2.8 3.8 ⎠

and y = (2.3, 2.2, 2.2, 2.3, 2.4, 2.5, 2.6, 2.6, 2.7, 2.7)′.

The four columns of the 10 × 4 matrix X comprise the data on x1, x2, x3 and x4, respectively, and y is the response variable. From these data, we obtain the following results:
1. The eigenvalues of X′X: 302.9626, 0.7283, 0.0447, 0.0345.
2. The OLSE of β: β̂ = (0.6455, 0.0886, 0.1436, 0.1526)′.
3. The OLSE of σ²: σ̂² = 0.0015.
4. The condition number of X′X is 8781.53, which indicates high multicollinearity among the explanatory variables.
We consider the following stochastic linear restriction according to [9]:

h = Hβ + υ,  H = (1, −2, −2, −2),  υ ∼ N(0, σ̂²).

Tables 1, 2, 3 and 4 report the estimated scalar mean squared error (SMSE) values of OLSE, ME, RE, LE, AULE, MAULE, SRLE and SRMAULE for different k values and four different d values selected within the interval (0, 1). Note that the SMSE values are obtained by taking the trace of the MSE matrix.
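Since SMSE is the trace of the MSE matrix, the OLSE entries of the tables follow directly from the reported eigenvalues and σ̂², because MSE(β̂OLSE) = σ²C⁻¹ so SMSE(OLSE) = σ² Σ 1/λi. A quick sketch; the small gap to the tabulated 0.0808 is consistent with rounding in the reported σ̂² and eigenvalues:

```python
import numpy as np

# SMSE is the trace of the MSE matrix. For OLSE, MSE = sigma^2 C^{-1},
# so SMSE(OLSE) = sigma^2 * sum(1/lambda_i) over the eigenvalues of X'X.
lam = np.array([302.9626, 0.7283, 0.0447, 0.0345])  # eigenvalues from Sect. 5.1
sigma2_hat = 0.0015

smse_olse = sigma2_hat * np.sum(1.0 / lam)
# smse_olse is close to the constant OLSE column (0.0808) of Tables 1-4.
```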
From Table 1, we can say that the SRMAULE is worse than OLSE, ME, RE, LE and AULE when d = 0.1. Moreover, the SRMAULE has a lower SMSE than MAULE and

Table 1 Estimated SMSE of OLSE, ME, RE, LE, AULE, MAULE, SRLE and SRMAULE with d = 0.1

k OLSE ME RE LE AULE MAULE SRLE SRMAULE

1 0.0808 0.0451 0.1373 0.1881 0.1407 0.2489 0.1875 0.2449


0.9 0.0808 0.0451 0.1275 0.1881 0.1407 0.2477 0.1875 0.2363
0.8 0.0808 0.0451 0.1175 0.1881 0.1407 0.2462 0.1875 0.2272
0.7 0.0808 0.0451 0.1074 0.1881 0.1407 0.2444 0.1875 0.2176
0.6 0.0808 0.0451 0.0975 0.1881 0.1407 0.2424 0.1875 0.2073
0.5 0.0808 0.0451 0.0880 0.1881 0.1407 0.2398 0.1875 0.1963
0.4 0.0808 0.0451 0.0795 0.1881 0.1407 0.2364 0.1875 0.1849
0.3 0.0808 0.0451 0.0728 0.1881 0.1407 0.2318 0.1875 0.1730
0.2 0.0808 0.0451 0.0691 0.1881 0.1407 0.2243 0.1875 0.1609
0.1 0.0808 0.0451 0.0706 0.1881 0.1407 0.2088 0.1875 0.1490


Table 2 Estimated SMSE of OLSE, ME, RE, LE, AULE, MAULE, SRLE and SRMAULE with d = 0.7

k OLSE ME RE LE AULE MAULE SRLE SRMAULE

1 0.0808 0.0451 0.1373 0.0619 0.0698 0.1449 0.0438 0.1374


0.9 0.0808 0.0451 0.1275 0.0619 0.0698 0.1348 0.0438 0.1265
0.8 0.0808 0.0451 0.1175 0.0619 0.0698 0.1243 0.0438 0.1150
0.7 0.0808 0.0451 0.1074 0.0619 0.0698 0.1136 0.0438 0.1032
0.6 0.0808 0.0451 0.0975 0.0619 0.0698 0.1028 0.0438 0.0911
0.5 0.0808 0.0451 0.0880 0.0619 0.0698 0.0923 0.0438 0.0789
0.4 0.0808 0.0451 0.0795 0.0619 0.0698 0.0823 0.0438 0.0670
0.3 0.0808 0.0451 0.0728 0.0619 0.0698 0.0736 0.0438 0.0558
0.2 0.0808 0.0451 0.0691 0.0619 0.0698 0.0672 0.0438 0.0464
0.1 0.0808 0.0451 0.0706 0.0619 0.0698 0.0649 0.0438 0.0401

Table 3 Estimated SMSE of OLSE, ME, RE, LE, AULE, MAULE, SRLE and SRMAULE with d = 0.9

k OLSE ME RE LE AULE MAULE SRLE SRMAULE

1 0.0808 0.0451 0.1373 0.0684 0.0793 0.2306 0.0493 0.1294


0.9 0.0808 0.0451 0.1275 0.0684 0.0793 0.2278 0.0493 0.1186
0.8 0.0808 0.0451 0.1175 0.0684 0.0793 0.2245 0.0493 0.1074
0.7 0.0808 0.0451 0.1074 0.0684 0.0793 0.1205 0.0493 0.0959
0.6 0.0808 0.0451 0.0975 0.0684 0.0793 0.1155 0.0493 0.0843
0.5 0.0808 0.0451 0.0880 0.0684 0.0793 0.1091 0.0493 0.0728
0.4 0.0808 0.0451 0.0795 0.0684 0.0793 0.0505 0.0493 0.0618
0.3 0.0808 0.0451 0.0728 0.0684 0.0793 0.0578 0.0493 0.0520
0.2 0.0808 0.0451 0.0691 0.0684 0.0793 0.0671 0.0493 0.0444
0.1 0.0808 0.0451 0.0706 0.0684 0.0793 0.0756 0.0493 0.0409

Table 4 Estimated SMSE of OLSE, ME, RE, LE, AULE, MAULE, SRLE and SRMAULE with d = 0.95

k OLSE ME RE LE AULE MAULE SRLE SRMAULE

1 0.0808 0.0451 0.1373 0.0739 0.0804 0.1375 0.0615 0.1287


0.9 0.0808 0.0451 0.1275 0.0739 0.0804 0.1277 0.0615 0.1179
0.8 0.0808 0.0451 0.1175 0.0739 0.0804 0.1177 0.0615 0.1067
0.7 0.0808 0.0451 0.1074 0.0739 0.0804 0.1076 0.0615 0.0953
0.6 0.0808 0.0451 0.0975 0.0739 0.0804 0.0976 0.0615 0.0637
0.5 0.0808 0.0451 0.0880 0.0739 0.0804 0.0881 0.0615 0.0523
0.4 0.0808 0.0451 0.0795 0.0739 0.0804 0.0795 0.0615 0.0434
0.3 0.0808 0.0451 0.0728 0.0739 0.0804 0.0727 0.0615 0.0417
0.2 0.0808 0.0451 0.0691 0.0739 0.0804 0.0690 0.0615 0.0408
0.1 0.0808 0.0451 0.0706 0.0739 0.0804 0.0704 0.0615 0.0401


SRLE. From Table 2, it can be noticed that the proposed estimator has the smallest SMSE among the estimators when k = 0.1 and d = 0.7. From Table 3, one can say that the SRMAULE has the smallest SMSE among the estimators when k ≤ 0.2 and d = 0.9. From Table 4, it can be said that the SRMAULE has the smallest SMSE among the estimators when k ≤ 0.4 and d = 0.95. Furthermore, the proposed estimator has a lower SMSE than OLSE, RE, LE, AULE, MAULE and SRLE when k ≤ 0.5 and d = 0.95.

5.2 Simulation Study

In order to further illustrate the behavior of the proposed estimator, we perform a Monte
Carlo simulation study by considering three levels of multicollinearity. Following
McDonald and Galarneau [12], we generate explanatory variables as follows:
xij = (1 − γ²)^{1/2} zij + γ zi,p+1,  i = 1, 2, . . . , n,  j = 1, 2, . . . , p,

where zij is an independent standard normal pseudorandom number and γ is specified so that the theoretical correlation between any two explanatory variables is given by γ². A dependent variable is generated by using the equation:

yi = β1xi1 + β2xi2 + β3xi3 + β4xi4 + εi,  i = 1, 2, . . . , n,

where εi is a normal pseudorandom number with mean zero and variance σ². Newhouse and Oman [13] have noted that if the MSE is a function of σ² and β, and if the explanatory variables are fixed, then, subject to the constraint β′β = 1, the MSE is minimized when β is the normalized eigenvector corresponding to the largest eigenvalue of the X′X matrix. In this study, we choose the normalized eigenvector corresponding to the largest eigenvalue of X′X as the coefficient vector β, with n = 50, p = 4 and σ² = 1. Three different levels of correlation are considered by selecting the values γ = 0.9, 0.99 and 0.999. In the simulation study, we have used the same stochastic constraints as in Sect. 5.1.
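The McDonald–Galarneau design and the Newhouse–Oman choice of β described above can be sketched as follows (function and variable names are ours):

```python
import numpy as np

def make_design(n, p, gamma, rng):
    """McDonald-Galarneau design: corr(x_j, x_l) is about gamma^2 for j != l."""
    z = rng.normal(size=(n, p + 1))
    return np.sqrt(1 - gamma ** 2) * z[:, :p] + gamma * z[:, [p]]

rng = np.random.default_rng(5)
n, p, gamma, sigma2 = 50, 4, 0.99, 1.0
X = make_design(n, p, gamma, rng)

# beta: normalized eigenvector of the largest eigenvalue of X'X (Newhouse-Oman);
# np.linalg.eigh returns eigenvalues in ascending order, so take the last column.
eigval, eigvec = np.linalg.eigh(X.T @ X)
beta = eigvec[:, -1]                       # satisfies beta'beta = 1

y = X @ beta + rng.normal(scale=np.sqrt(sigma2), size=n)
```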
Tables 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 and 16 are obtained by using the estimated
SMSE values of OLSE, ME, RE, LE, AULE, MAULE, SRLE and SRMAULE for
different k values and four different d values selected within the interval 0–1.
From Table 5, it has been observed that the SRMAULE is worse than OLSE, ME, RE and AULE. Nevertheless, the SRMAULE has a smaller SMSE than LE, SRLE and MAULE for k ≤ 0.7. From Table 6, we can conclude that the SRMAULE has a smaller SMSE than OLSE, ME, RE, LE, AULE and MAULE when d = 0.7 and γ = 0.9; however, the SRLE has a smaller SMSE than the SRMAULE. From Table 7, the proposed estimator has the smallest SMSE among the estimators for k = 0.2 and k = 0.3 when d = 0.9 and γ = 0.9. According to Table 8, the SRMAULE has the smallest SMSE among the estimators for k ≤ 0.4 when d = 0.95 and γ = 0.9. Based on Table 9, one can conclude that the SRMAULE is worse than ME and AULE, but has a smaller SMSE than OLSE and MAULE. Not only that, the SRMAULE has a smaller SMSE than SRLE when k ≤ 0.7. With that,


Table 5 Estimated SMSE of OLSE, ME, RE, LE, AULE, MAULE, SRLE and SRMAULE with d = 0.1
and γ = 0.9

k OLSE ME RE LE AULE MAULE SRLE SRMAULE

1 0.4759 0.3187 0.4728 0.7844 0.4861 0.8641 0.7824 0.8623


0.9 0.4759 0.3187 0.4494 0.7844 0.4861 0.8399 0.7824 0.8379
0.8 0.4759 0.3187 0.4264 0.7844 0.4861 0.8136 0.7824 0.8113
0.7 0.4759 0.3187 0.4046 0.7844 0.4861 0.7848 0.7824 0.7823
0.6 0.4759 0.3187 0.3849 0.7844 0.4861 0.7534 0.7824 0.7506
0.5 0.4759 0.3187 0.3688 0.7844 0.4861 0.7189 0.7824 0.7157
0.4 0.4759 0.3187 0.3583 0.7844 0.4861 0.6809 0.7824 0.6772
0.3 0.4759 0.3187 0.3570 0.7844 0.4861 0.6390 0.7824 0.6348
0.2 0.4759 0.3187 0.3698 0.7844 0.4861 0.5929 0.7824 0.5878
0.1 0.4759 0.3187 0.4050 0.7844 0.4861 0.5419 0.7824 0.5360

Table 6 Estimated SMSE of OLSE, ME, RE, LE, AULE, MAULE, SRLE and SRMAULE with d = 0.7
and γ = 0.9

k OLSE ME RE LE AULE MAULE SRLE SRMAULE

1 0.4759 0.3187 0.4728 0.3234 0.4026 0.4977 0.2454 0.4650


0.9 0.4759 0.3187 0.4494 0.3234 0.4026 0.4722 0.2454 0.4359
0.8 0.4759 0.3187 0.4264 0.3234 0.4026 0.4465 0.2454 0.4061
0.7 0.4759 0.3187 0.4046 0.3234 0.4026 0.4211 0.2454 0.3758
0.6 0.4759 0.3187 0.3849 0.3234 0.4026 0.3968 0.2454 0.3457
0.5 0.4759 0.3187 0.3688 0.3234 0.4026 0.3746 0.2454 0.3164
0.4 0.4759 0.3187 0.3583 0.3234 0.4026 0.3562 0.2454 0.2894
0.3 0.4759 0.3187 0.3570 0.3234 0.4026 0.3443 0.2454 0.2668
0.2 0.4759 0.3187 0.3698 0.3234 0.4026 0.3429 0.2454 0.2520
0.1 0.4759 0.3187 0.4050 0.3234 0.4026 0.3587 0.2454 0.2505

Table 7 Estimated SMSE of OLSE, ME, RE, LE, AULE, MAULE, SRLE and SRMAULE with d = 0.9
and γ = 0.9

k OLSE ME RE LE AULE MAULE SRLE SRMAULE

1 0.4759 0.3187 0.4728 0.3967 0.4668 0.4753 0.2690 0.4368


0.9 0.4759 0.3187 0.4494 0.3967 0.4668 0.4517 0.2690 0.4090
0.8 0.4759 0.3187 0.4264 0.3967 0.4668 0.4284 0.2690 0.3808
0.7 0.4759 0.3187 0.4046 0.3967 0.4668 0.4061 0.2690 0.3528
0.6 0.4759 0.3187 0.3849 0.3967 0.4668 0.3859 0.2690 0.3257
0.5 0.4759 0.3187 0.3688 0.3967 0.4668 0.3690 0.2690 0.3005
0.4 0.4759 0.3187 0.3583 0.3967 0.4668 0.3576 0.2690 0.2790
0.3 0.4759 0.3187 0.3570 0.3967 0.4668 0.3550 0.2690 0.2638
0.2 0.4759 0.3187 0.3698 0.3967 0.4668 0.3662 0.2690 0.2591
0.1 0.4759 0.3187 0.4050 0.3967 0.4668 0.3991 0.2690 0.2717


Table 8 Estimated SMSE of OLSE, ME, RE, LE, AULE, MAULE, SRLE and SRMAULE with d = 0.95
and γ = 0.9

k OLSE ME RE LE AULE MAULE SRLE SRMAULE

1.0 0.4759 0.3187 0.4728 0.4327 0.4736 0.4734 0.2907 0.4343


0.9 0.4759 0.3187 0.4494 0.4327 0.4736 0.4499 0.2907 0.4066
0.8 0.4759 0.3187 0.4264 0.4327 0.4736 0.4269 0.2907 0.3787
0.7 0.4759 0.3187 0.4046 0.4327 0.4736 0.4050 0.2907 0.3509
0.6 0.4759 0.3187 0.3849 0.4327 0.4736 0.3851 0.2907 0.3241
0.5 0.4759 0.3187 0.3688 0.4327 0.4736 0.3688 0.2907 0.2993
0.4 0.4759 0.3187 0.3583 0.4327 0.4736 0.3581 0.2907 0.2784
0.3 0.4759 0.3187 0.3570 0.4327 0.4736 0.3565 0.2907 0.2639
0.2 0.4759 0.3187 0.3698 0.4327 0.4736 0.3689 0.2907 0.2603
0.1 0.4759 0.3187 0.4050 0.4327 0.4736 0.4035 0.2907 0.2742

Table 9 Estimated SMSE of OLSE, ME, RE, LE, AULE, MAULE, SRLE and SRMAULE with d = 0.1
and γ = 0.99

k OLSE ME RE LE AULE MAULE SRLE SRMAULE

1 4.4595 2.9328 2.3930 3.7661 3.0051 3.8891 3.7504 3.8750


0.9 4.4595 2.9328 2.3824 3.7661 3.0051 3.8341 3.7504 3.8185
0.8 4.4595 2.9328 2.3863 3.7661 3.0051 3.7742 3.7504 3.7567
0.7 4.4595 2.9328 2.4099 3.7661 3.0051 3.7085 3.7504 3.6889
0.6 4.4595 2.9328 2.4610 3.7661 3.0051 3.6363 3.7504 3.6142
0.5 4.4595 2.9328 2.5501 3.7661 3.0051 3.5568 3.7504 3.5316
0.4 4.4595 2.9328 2.6926 3.7661 3.0051 3.4687 3.7504 3.4398
0.3 4.4595 2.9328 2.9110 3.7661 3.0051 3.3710 3.7504 3.3374
0.2 4.4595 2.9328 3.2389 3.7661 3.0051 3.2621 3.7504 3.2228
0.1 4.4595 2.9328 3.7278 3.7661 3.0051 3.1406 3.7504 3.0938

Table 10 Estimated SMSE of OLSE, ME, RE, LE, AULE, MAULE, SRLE and SRMAULE with d = 0.7
and γ = 0.99

k OLSE ME RE LE AULE MAULE SRLE SRMAULE

1 4.4595 2.9328 2.3930 2.6023 3.7309 2.4175 1.8531 2.1012


0.9 4.4595 2.9328 2.3824 2.6023 3.7309 2.3867 1.8531 2.0362
0.8 4.4595 2.9328 2.3863 2.6023 3.7309 2.3655 1.8531 1.9750
0.7 4.4595 2.9328 2.4099 2.6023 3.7309 2.3580 1.8531 1.9202
0.6 4.4595 2.9328 2.4610 2.6023 3.7309 2.3700 1.8531 1.8758
0.5 4.4595 2.9328 2.5501 2.6023 3.7309 2.4099 1.8531 1.8476
0.4 4.4595 2.9328 2.6926 2.6023 3.7309 2.4895 1.8531 1.8441
0.3 4.4595 2.9328 2.9110 2.6023 3.7309 2.6266 1.8531 1.8780
0.2 4.4595 2.9328 3.2389 2.6023 3.7309 2.8477 1.8531 1.9691
0.1 4.4595 2.9328 3.7278 2.6023 3.7309 3.1939 1.8531 2.1484


Table 11 Estimated SMSE of OLSE, ME, RE, LE, AULE, MAULE, SRLE and SRMAULE with d = 0.9
and γ = 0.99

k OLSE ME RE LE AULE MAULE SRLE SRMAULE

1 4.4595 2.9328 2.3930 3.6598 4.3716 2.3940 2.4227 2.0198


0.9 4.4595 2.9328 2.3824 3.6598 4.3716 2.3810 2.4227 1.9664
0.8 4.4595 2.9328 2.3863 3.6598 4.3716 2.3818 2.4227 1.9199
0.7 4.4595 2.9328 2.4099 3.6598 4.3716 2.4017 2.4227 1.8839
0.6 4.4595 2.9328 2.4610 3.6598 4.3716 2.4481 2.4227 1.8635
0.5 4.4595 2.9328 2.5501 3.6598 4.3716 2.5314 2.4227 1.8663
0.4 4.4595 2.9328 2.6926 3.6598 4.3716 2.6665 2.4227 1.9030
0.3 4.4595 2.9328 2.9110 3.6598 4.3716 2.8753 2.4227 1.9898
0.2 4.4595 2.9328 3.2389 3.6598 4.3716 3.1906 2.4227 2.1514
0.1 4.4595 2.9328 3.7278 3.6598 4.3716 3.6627 2.4227 2.4259

Table 12 Estimated SMSE of OLSE, ME, RE, LE, AULE, MAULE, SRLE and SRMAULE with d = 0.95 and γ = 0.99

k OLSE ME RE LE AULE MAULE SRLE SRMAULE

1 4.4595 2.9328 2.3930 4.0371 4.4374 2.3932 2.6589 2.0134


0.9 4.4595 2.9328 2.3824 4.0371 4.4374 2.3820 2.6589 1.9612
0.8 4.4595 2.9328 2.3863 4.0371 4.4374 2.3851 2.6589 1.9162
0.7 4.4595 2.9328 2.4099 4.0371 4.4374 2.4078 2.6589 1.8821
0.6 4.4595 2.9328 2.4610 4.0371 4.4374 2.4577 2.6589 1.8642
0.5 4.4595 2.9328 2.5501 4.0371 4.4374 2.5453 2.6589 1.8701
0.4 4.4595 2.9328 2.6926 4.0371 4.4374 2.6860 2.6589 1.9109
0.3 4.4595 2.9328 2.9110 4.0371 4.4374 2.9020 2.6589 2.0031
0.2 4.4595 2.9328 3.2389 4.0371 4.4374 3.2267 2.6589 2.1717
0.1 4.4595 2.9328 3.7278 4.0371 4.4374 3.7114 2.6589 2.4559

Table 13 Estimated SMSE of OLSE, ME, RE, LE, AULE, MAULE, SRLE and SRMAULE with d = 0.1 and γ = 0.999

k OLSE ME RE LE AULE MAULE SRLE SRMAULE

1 44.2905 29.0824 21.4551 33.6530 28.3272 34.0683 33.5005 33.9307


0.9 44.2905 29.0824 21.5872 33.6530 28.3272 33.7233 33.5005 33.5708
0.8 44.2905 29.0824 21.8734 33.6530 28.3272 33.3455 33.5005 33.1756
0.7 44.2905 29.0824 22.3668 33.6530 28.3272 32.9302 33.5005 32.7397
0.6 44.2905 29.0824 23.1411 33.6530 28.3272 32.4718 33.5005 32.2567
0.5 44.2905 29.0824 24.2991 33.6530 28.3272 31.9635 33.5005 31.7188
0.4 44.2905 29.0824 25.9874 33.6530 28.3272 31.3972 33.5005 31.1163
0.3 44.2905 29.0824 28.4190 33.6530 28.3272 30.7632 33.5005 30.4374
0.2 44.2905 29.0824 31.9109 33.6530 28.3272 30.0497 33.5005 29.6674
0.1 44.2905 29.0824 36.9469 33.6530 28.3272 29.2431 33.5005 28.7881


Table 14 Estimated SMSE of OLSE, ME, RE, LE, AULE, MAULE, SRLE and SRMAULE with d = 0.7 and γ = 0.999

k OLSE ME RE LE AULE MAULE SRLE SRMAULE

1 44.2905 29.0824 21.4551 25.3959 37.0097 21.4798 17.9430 18.3312


0.9 44.2905 29.0824 21.5872 25.3959 37.0097 21.4102 17.9430 17.9214
0.8 44.2905 29.0824 21.8734 25.3959 37.0097 21.4483 17.9430 17.5611
0.7 44.2905 29.0824 22.3668 25.3959 37.0097 21.6350 17.9430 17.2770
0.6 44.2905 29.0824 23.1411 25.3959 37.0097 22.0267 17.9430 17.1069
0.5 44.2905 29.0824 24.2991 25.3959 37.0097 22.7031 17.9430 17.1055
0.4 44.2905 29.0824 25.9874 25.3959 37.0097 23.7787 17.9430 17.3530
0.3 44.2905 29.0824 28.4190 25.3959 37.0097 25.4209 17.9430 17.9685
0.2 44.2905 29.0824 31.9109 25.3959 37.0097 27.8797 17.9430 19.1335
0.1 44.2905 29.0824 36.9469 25.3959 37.0097 31.5396 17.9430 21.1309

Table 15 Estimated SMSE of OLSE, ME, RE, LE, AULE, MAULE, SRLE and SRMAULE with d = 0.9 and γ = 0.999

k OLSE ME RE LE AULE MAULE SRLE SRMAULE

1 44.2905 29.0824 21.4551 36.2869 43.4135 21.4409 23.9679 17.7145


0.9 44.2905 29.0824 21.5872 36.2869 43.4135 21.5487 23.9679 17.4198
0.8 44.2905 29.0824 21.8734 36.2869 43.4135 21.8052 23.9679 17.2047
0.7 44.2905 29.0824 22.3668 36.2869 43.4135 22.2620 23.9679 17.1044
0.6 44.2905 29.0824 23.1411 36.2869 43.4135 22.9907 23.9679 17.1683
0.5 44.2905 29.0824 24.2991 36.2869 43.4135 24.0916 23.9679 17.4669
0.4 44.2905 29.0824 25.9874 36.2869 43.4135 25.7073 23.9679 18.1025
0.3 44.2905 29.0824 28.4190 36.2869 43.4135 28.0457 23.9679 19.2259
0.2 44.2905 29.0824 31.9109 36.2869 43.4135 31.4158 23.9679 21.0647
0.1 44.2905 29.0824 36.9469 36.2869 43.4135 36.2899 23.9679 23.9713

Table 16 Estimated SMSE of OLSE, ME, RE, LE, AULE, MAULE, SRLE and SRMAULE with d = 0.95 and γ = 0.999

k OLSE ME RE LE AULE MAULE SRLE SRMAULE

1 44.2905 29.0824 21.4551 40.0755 44.0696 21.4512 26.3500 17.6681


0.9 44.2905 29.0824 21.5872 40.0755 44.0696 21.5771 26.3500 17.3854
0.8 44.2905 29.0824 21.8734 40.0755 44.0696 21.8558 26.3500 17.1854
0.7 44.2905 29.0824 22.3668 40.0755 44.0696 22.3401 26.3500 17.1041
0.6 44.2905 29.0824 23.1411 40.0755 44.0696 23.1029 26.3500 17.1919
0.5 44.2905 29.0824 24.2991 40.0755 44.0696 24.2465 26.3500 17.5211
0.4 44.2905 29.0824 25.9874 40.0755 44.0696 25.9165 26.3500 18.1961
0.3 44.2905 29.0824 28.4190 40.0755 44.0696 28.3247 26.3500 19.3708
0.2 44.2905 29.0824 31.9109 40.0755 44.0696 31.7860 26.3500 21.2776
0.1 44.2905 29.0824 36.9469 40.0755 44.0696 36.7813 26.3500 24.2755


the SRMAULE has a smaller SMSE than the LE when k ≤ 0.8. From Table 10, it can be noticed that the SRMAULE has the smallest SMSE among all estimators when k = 0.4 and k = 0.5. Also, the SRMAULE has a lower SMSE than the OLSE, ME, RE, LE, AULE and MAULE. From Table 11, we can say that the proposed estimator has the smallest SMSE among all estimators when d = 0.9 and γ = 0.99, except when k = 0.1. From Table 12, it can be concluded that the proposed estimator has the smallest SMSE among all estimators when d = 0.95 and γ = 0.99.
Table 13 shows that the SRMAULE is worse than the AULE. Also, the SRMAULE is worse than the ME when k ≥ 0.2, and worse than the RE when k ≥ 0.3. However, the SRMAULE has a smaller SMSE than the SRLE when k ≤ 0.8, and a smaller SMSE than the LE unless k = 1. From Table 14, it can be noticed that the SRMAULE has the smallest SMSE among all estimators when 0.4 ≤ k ≤ 0.9, d = 0.7 and γ = 0.999. From Table 15, when d = 0.9 and γ = 0.999, we can say that the proposed estimator has the smallest SMSE among all estimators unless k = 0.1. From Table 16, it can be observed that the proposed estimator has the best performance among all estimators when d = 0.95 and γ = 0.999.
Based on the results discussed in Theorems 3.1–3.6, one can say that the proposed estimator SRMAULE is superior to the other estimators in the mean squared error matrix sense under certain conditions. Also, as discussed in Sect. 3.7, the proposed SRMAULE is always superior to the MAULE, which agrees with the numerical results.
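The kind of simulation comparison reported in the tables above can be sketched in a few lines. The following is a minimal, illustrative Monte Carlo estimate of the SMSE of the OLSE, the ridge estimator (RE) and the Liu estimator (LE) under a collinear design; the design-generation scheme, the values of γ, k, d, the sample size and the coefficient vector are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, gamma, sigma2 = 50, 3, 0.99, 1.0   # illustrative settings
k, d = 0.5, 0.7                          # ridge and Liu parameters

# Collinear regressors in the spirit of McDonald and Galarneau (1975):
# x_ij = sqrt(1 - gamma^2) z_ij + gamma z_{i,p+1}
Z = rng.standard_normal((n, p + 1))
X = np.sqrt(1 - gamma**2) * Z[:, :p] + gamma * Z[:, [p]]
beta = np.ones(p) / np.sqrt(p)           # normalized true coefficients

S = X.T @ X
I = np.eye(p)

def smse(estimate, reps=2000):
    """Monte Carlo estimate of E||beta_hat - beta||^2."""
    total = 0.0
    for _ in range(reps):
        y = X @ beta + np.sqrt(sigma2) * rng.standard_normal(n)
        total += np.sum((estimate(y) - beta) ** 2)
    return total / reps

def olse(y):      # OLSE: (X'X)^{-1} X'y
    return np.linalg.solve(S, X.T @ y)

def ridge(y):     # RE: (X'X + kI)^{-1} X'y
    return np.linalg.solve(S + k * I, X.T @ y)

def liu(y):       # LE: (X'X + I)^{-1} (X'y + d * OLSE)
    return np.linalg.solve(S + I, X.T @ y + d * olse(y))

print(f"SMSE OLSE ~ {smse(olse):.4f}")
print(f"SMSE RE   ~ {smse(ridge):.4f}")
print(f"SMSE LE   ~ {smse(liu):.4f}")
```

Under such strong collinearity (γ = 0.99), the biased estimators typically show a markedly smaller estimated SMSE than the OLSE, mirroring the pattern of the tables.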

6 Conclusion

In this paper, we proposed a new biased estimator, namely the stochastic restricted modified almost unbiased Liu estimator (SRMAULE), for the multiple linear regression model to combat the well-known multicollinearity problem. Moreover, necessary and sufficient conditions for the superiority of the proposed estimator over the OLSE, ME, RE, LE, AULE, SRLE and MAULE in the mean squared error matrix sense were obtained. A Monte Carlo simulation study was carried out, and a numerical example was used to illustrate the theoretical findings. From the numerical results, it could be concluded that the proposed estimator performs well when d is large.

Acknowledgements The author is grateful to the editor and the three anonymous referees for their valuable
comments which improved the quality of the paper.

Appendix

Lemma 1 [17] Let M and N be n × n matrices with M > 0 and N > 0 (or N ≥ 0). Then M > N if and only if λ1(NM−1) < 1, where λ1(NM−1) is the largest eigenvalue of the matrix NM−1.
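As a quick numeric illustration of Lemma 1 (the matrices below are arbitrary examples chosen for demonstration, not taken from the paper), both sides of the equivalence can be checked directly:

```python
import numpy as np

# Lemma 1: for M > 0 and N > 0, M - N > 0 holds iff the largest
# eigenvalue of N M^{-1} is strictly below 1.
M = np.array([[4.0, 1.0], [1.0, 3.0]])   # positive definite
N = np.array([[1.0, 0.2], [0.2, 0.8]])   # positive definite

lam_max = np.max(np.linalg.eigvals(N @ np.linalg.inv(M)).real)
M_minus_N_pd = np.all(np.linalg.eigvalsh(M - N) > 0)

# Both sides of the equivalence agree for this example
print(lam_max < 1, M_minus_N_pd)  # True True
```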

Lemma 2 [16] Let β̂1 and β̂2 be two linear estimators of β. Suppose that D = D(β̂1) − D(β̂2) is positive definite. Then MSE(β̂1) − MSE(β̂2) is nonnegative definite if and only if b2′(D + b1 b1′)−1 b2 ≤ 1, where bj denotes the bias vector of β̂j, j = 1, 2.
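Lemma 2 is the main comparison tool used throughout the paper. The following sketch checks its criterion against the definiteness of the MSE difference directly; the dispersion difference D and the bias vectors are toy values chosen purely for demonstration.

```python
import numpy as np

# Lemma 2: when D = D(bhat1) - D(bhat2) > 0,
# MSE(bhat1) - MSE(bhat2) >= 0  iff  b2' (D + b1 b1')^{-1} b2 <= 1,
# using MSE(bhat_j) = D(bhat_j) + b_j b_j'.
D = np.array([[1.5, 0.2], [0.2, 1.0]])   # positive definite difference
b1 = np.array([0.1, -0.2])               # bias of estimator 1
b2 = np.array([0.4, 0.5])                # bias of estimator 2

# Left side of the equivalence: the quadratic-form criterion
crit = b2 @ np.linalg.solve(D + np.outer(b1, b1), b2)

# Right side: MSE difference = D + b1 b1' - b2 b2'; check definiteness
mse_diff = D + np.outer(b1, b1) - np.outer(b2, b2)
nnd = np.min(np.linalg.eigvalsh(mse_diff)) >= -1e-12

print(crit <= 1, nnd)  # True True
```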

Lemma 3 [4] Let M be a positive definite matrix, namely M > 0, and let α be some vector. Then M − αα′ ≥ 0 if and only if α′M−1α ≤ 1.
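Lemma 3 admits the same kind of numeric check (again with an example matrix and vector chosen only for illustration):

```python
import numpy as np

# Lemma 3: for M > 0, M - a a' >= 0 iff a' M^{-1} a <= 1.
M = np.array([[2.0, 0.5], [0.5, 1.0]])   # positive definite
a = np.array([0.8, 0.3])

quad = a @ np.linalg.solve(M, a)                              # a' M^{-1} a
psd = np.min(np.linalg.eigvalsh(M - np.outer(a, a))) >= -1e-12

print(quad <= 1, psd)  # True True
```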

References
1. Akdeniz, F., Erol, H.: Mean squared error matrix comparisons of some biased estimators in linear
regression. Commun. Stat. Theory Methods 32, 2389–2413 (2003)
2. Alheety, M.I., Kibria, B.M.G.: Modified Liu-type estimator based on (r − k) class estimator. Commun.
Stat. Theory Methods 42, 304–319 (2013)
3. Arumairajan, S., Wijekoon, P.: Modified almost unbiased Liu estimator in linear regression model.
Commun. Math. Stat. 5, 261–276 (2017)
4. Farebrother, R.W.: Further results on the mean square error of ridge regression. J. R. Stat. Soc. B 38,
248–250 (1976)
5. Gruber, M.H.J.: Improving Efficiency by Shrinkage: The James-Stein and Ridge Regression Estimators.
Dekker Inc., New York (1998)
6. Hoerl, A.E., Kennard, R.W.: Ridge regression: biased estimation for nonorthogonal problems. Tech-
nometrics 12, 55–67 (1970)
7. Hubert, M.H., Wijekoon, P.: Improvement of the Liu estimator in linear regression model. Stat. Pap.
47, 471–479 (2006)
8. Kibria, B.M.G.: Performance of some new ridge regression estimators. Commun. Stat. Theory Methods
32, 419–435 (2003)
9. Li, Y., Yang, H.: A new stochastic mixed ridge estimator in linear regression. Stat. Pap. 51, 315–323
(2010)
10. Li, Y., Yang, H.: Two kinds of restricted modified estimators in linear regression model. J. Appl. Stat.
38, 1447–1454 (2011)
11. Liu, K.: A new class of biased estimate in linear regression. Commun. Stat. Theory Methods 22,
393–402 (1993)
12. McDonald, G.C., Galarneau, D.I.: A Monte Carlo evaluation of some ridge-type estimators. J. Am.
Stat. Assoc. 70, 407–416 (1975)
13. Newhouse, J.P., Oman, S.D.: An evaluation of ridge estimators. Rand Report No. R-716-PR, 1–28
(1971)
14. Ozkale, R.M., Kaçiranlar, S.: The restricted and unrestricted two parameter estimators. Commun. Stat.
Theory Methods 36, 2707–2725 (2007)
15. Theil, H., Goldberger, A.S.: On pure and mixed estimation in economics. Int. Econ. Rev. 2, 65–77
(1961)
16. Trenkler, G., Toutenburg, H.: Mean square error matrix comparisons between biased estimators—an
overview of recent results. Stat. Pap. 31, 165–179 (1990)
17. Wang, S.G., Wu, M.X., Jia, Z.Z.: Matrix Inequalities, 2nd edn. Chinese Science Press, Beijing (2006)
