
BIRBHUM INSTITUTE OF ENGINEERING & TECHNOLOGY

❖ Name: Somenath Mandal
❖ University Roll No.: 11801322017
❖ Registration No.: 221180120133 of 2022-23
❖ Year: 3rd Year
❖ Semester: 6th Semester
❖ Department: Civil Engineering
❖ Subject Code: CE(PC)602
❖ Subject Name: Engineering Economics, Estimation & Costing
Introduction to the Process of Estimation
Statistical Inference . . .

Statistical inference is the process by which we infer population properties from sample properties.

There are two types of statistical inference:

• Estimation
• Hypothesis Testing

The concepts involved are actually very similar, as we will see in due course. Below, we provide a basic introduction to estimation.

Estimation . . .

The objective of estimation is to approximate the value of a population parameter on the basis of a sample statistic. For example, the sample mean X̄ is used to estimate the population mean µ.

There are two types of estimators:

• Point Estimator

• Interval Estimator

Point Estimator . . .

A point estimator draws inferences about a population by estimating the value of an unknown parameter using a single value or point.

Recall that for a continuous variable, the probability of assuming any particular value is zero. Hence, we are only trying to generate a value that is close to the true value. Point estimators typically do not reflect the effects of larger sample sizes, while interval estimators do . . .

Interval Estimator . . .

An interval estimator draws inferences about a population by estimating the value of an unknown parameter using an interval. Here, we try to construct an interval that “covers” the true population parameter with a specified probability.

As an example, suppose we are trying to estimate the mean summer income of students. Then, an interval estimate might say that the (unknown) mean income is between $380 and $420 with probability 0.95.

Quality of Estimators . . .

The desirability of an estimator is judged by its characteristics. Three important criteria are:

• Unbiasedness
• Consistency
• Efficiency

Details . . .

Unbiasedness . . .

An unbiased estimator of a population parameter is an estimator whose expected value is equal to that parameter. Formally, an estimator µ̂ for parameter µ is said to be unbiased if:

E(µ̂) = µ .   (1)
Example: The sample mean X̄ is an unbiased estimator for the population mean µ, since

E(X̄) = µ .

It is important to realize that other estimators for the population mean exist: the maximum value in a sample, the minimum value in a sample, the average of the maximum and the minimum values in a sample . . .

Being unbiased is a minimal requirement for an estimator. For example, the maximum value in a sample is not unbiased, and hence should not be used as an estimator for µ.
Consistency . . .

An unbiased estimator is said to be consistent if the difference between the estimator and the target population parameter becomes smaller as we increase the sample size. Formally, an unbiased estimator µ̂ for parameter µ is said to be consistent if V(µ̂) approaches zero as n → ∞.

Note that being unbiased is a precondition for an estimator to be consistent.

Example 1: The variance of the sample mean X̄ is σ²/n, which decreases to zero as we increase the sample size n. Hence, the sample mean is a consistent estimator for µ.

Example 2: The variance of the average of two randomly-selected values in a sample does not decrease to zero as we increase n. This variance in fact stays constant!

Efficiency . . .

Suppose we are given two unbiased estimators for a parameter. Then, we say that the estimator with a smaller variance is more efficient.

Example 1: For a normally distributed population, it can be shown that the sample median is an unbiased estimator for µ. It can also be shown, however, that the sample median has a greater variance than that of the sample mean, for the same sample size. Hence, X̄ is a more efficient estimator than the sample median.

Example 2: Consider the following estimator. First, a random portion of the original sample is discarded; then, the mean of the retained values is taken as an estimate for µ. This estimator is unbiased, but is not as efficient as using the entire sample. The intuitive reasoning is that we are not fully utilizing available information, and hence the resulting estimator has a greater variance.
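The mean-versus-median comparison in Example 1 is easy to simulate (a sketch with made-up population parameters, assuming a normal population as the text does):

```python
import random
import statistics

# Sketch: for a normal population both the sample mean and the sample
# median are unbiased for mu, but the median has the larger variance,
# i.e. the mean is the more efficient estimator.
random.seed(3)
mu, sigma, n, trials = 0.0, 1.0, 25, 20000

means, medians = [], []
for _ in range(trials):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    means.append(sum(sample) / n)
    medians.append(statistics.median(sample))

print(round(statistics.pvariance(means), 4))    # about sigma^2/n = 0.04
print(round(statistics.pvariance(medians), 4))  # larger: roughly (pi/2) * sigma^2/n
```

The simulated variance of the median comes out noticeably larger than that of the mean, matching the efficiency ranking stated above.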

Estimating µ When σ2 is Known . . .

For point estimation, the sample mean X̄ is the “best” estimator (according to our criteria above) for the population mean µ.

Suppose the variance of a population is “known.” How does one construct an interval estimate for µ?

The key idea is that from the central limit theorem, we know that when n is sufficiently large, the standardized variable

Z = ( X̄ − µ ) / ( σ/√n )

follows the standard normal distribution. It is important to realize that this is true even though we do not know the value of µ. The value of σ, however, is assumed to be given (this assumption, which could be unrealistic, will be relaxed later).

It follows that for a given α, we have

P( µ − zα/2 σ/√n < X̄ ≤ µ + zα/2 σ/√n ) = 1 − α .

Since our “unknown” is actually µ, the above can be rearranged into:

P( X̄ − zα/2 σ/√n < µ ≤ X̄ + zα/2 σ/√n ) = 1 − α .

That is, the probability for the interval

( X̄ − zα/2 σ/√n , X̄ + zα/2 σ/√n )   (2)

to contain, or to cover, the unknown population mean µ is 1 − α; and we now have a so-called confidence interval for µ. Note that the interval estimator (2) is constructed from X̄, zα/2, σ, and n, all of which are known.

The user-specified value 1 − α is called the confidence
level or coverage probability.

Pictorially, we have: [figure not reproduced]

Interpretation:

If the interval estimator (2) is used repeatedly to estimate the mean µ of a given population, then 100(1 − α)% of the constructed intervals will cover µ.

The often-heard media statement “19 times out of 20” refers to a confidence level of 0.95. Such a statement is good, since it emphasizes the fact that we are correct only 95% of the time.

Example: Demand during Lead Time

A computer company delivers computers directly to customers who order via the Internet. To reduce inventory cost, the company employs an inventory model. The model requires information about the mean demand during delivery lead time between a central manufacturing facility and local warehouses.

Past experience indicates that lead-time demand is normally distributed with a standard deviation of 75 computers per lead time (which is also random).

Construct the 95% confidence interval for the mean demand. Demand data for a sample of 25 lead-time periods are given in the file Xm10-01.xls.

Solution: Since 1 − α = 0.95, we have α = 0.05 and hence α/2 = 0.025, for which z0.025 = 1.96. From the given data file, we obtain the sample mean X̄ = 370.16.

The confidence interval is therefore (see (2))

( 370.16 − 1.96 · 75/√25 , 370.16 + 1.96 · 75/√25 ),
or simply (340.76, 399.56).
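The arithmetic of interval (2) for this example takes only a few lines of Python (a sketch using the numbers given above):

```python
import math

# Interval (2) with the example's numbers: X-bar = 370.16, sigma = 75,
# n = 25, and z_{0.025} = 1.96.
xbar, sigma, n, z = 370.16, 75.0, 25, 1.96
half_width = z * sigma / math.sqrt(n)
print(round(xbar - half_width, 2), round(xbar + half_width, 2))  # 340.76 399.56
```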

Width of Confidence Interval . . .

Suppose we are told with 95% confidence that the average starting salary of accountants is between $15,000 and $100,000. Clearly, this provides little information, despite the high “confidence” level.

Now, suppose instead we are told: with 95% confidence, the average starting salary of accountants is between $42,000 and $45,000.

The second statement of course offers more precise information. Thus, for a given α, the width of a confidence interval conveys the extent of precision of the estimate. To reduce the width, or to increase precision, we can increase the sample size.

In general, recall that the upper and lower confidence limits are:

X̄ ± zα/2 σ/√n .

Hence, the width of the confidence interval is 2 zα/2 σ/√n. It follows that precision depends on α, σ, and n.

Details . . .

— A smaller α implies a wider interval.

— A larger σ implies a wider interval.

— A larger n implies a narrower interval.
Selecting the Sample Size . . .

To control the width of the confidence interval, we can choose a necessary sample size. Formally, suppose we wish to “estimate the mean to within w units.” This means that we wish to construct an interval estimate of the form X̄ ± w.

By solving the equation

w = zα/2 σ/√n ,

we obtain

n = ( zα/2 σ / w )² ,

the required sample size.
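As a sketch, the sample-size formula translates directly into code; since n must be a whole number that meets the precision target, we round the exact solution up:

```python
import math

# Sample-size sketch: solve w = z * sigma / sqrt(n) for n, rounding up
# because the sample size must be an integer at least as large as the
# exact solution.
def required_sample_size(w, sigma, z_half_alpha):
    return math.ceil((z_half_alpha * sigma / w) ** 2)

# With w = 1, sigma = 6, and z = 2.575 (the values of the tree-diameter
# example below), the exact solution is 238.7, so n = 239.
print(required_sample_size(w=1, sigma=6, z_half_alpha=2.575))  # 239
```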

Example: Tree Diameters

A lumber company must estimate the mean diameter of trees in an area of forest to determine whether or not there is sufficient lumber to harvest. They need to estimate this to within 1 inch at a confidence level of 99%. Suppose the tree diameters are normally distributed with a standard deviation of 6 inches. What sample size is sufficient to guarantee this?
Solution: The required precision is ±1 inch. That is, w = 1. For α = 0.01, we have zα/2 = z0.005 = 2.575. Therefore,

n = ( zα/2 σ / w )² = ( 2.575 · 6 / 1 )² = 238.7, which we round up to 239.

Thus, we need to sample at least 239 trees to achieve a 99% confidence interval of X̄ ± 1.

