Business Analytics: Basic Statistics
Agenda
Introduction
Data
Basic Statistics
3. Basic Statistics
I. Probability
3.a. Probability
Probability is a numerical way of describing how likely something is to happen.
One of the fundamental methods of calculating probability is by using set theory.
A set is defined as a collection of objects and each individual object is called an element of that set.
• Example: from the credit cards data, the distinct numbers of credit cards owned form a set:
# Cards = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}
• The numbers on the faces of a die form a set:
Dice = {1, 2, 3, 4, 5, 6}
The sample space (S) is the set of all possible outcomes that might be observed for an experiment.
If each of the elements in the sample space is equally likely, then we can define the probability of
event A as:
• P(A) = (# elements in A)/(# elements in sample space)
• e.g. P(# Cards = 1) = (# of customers having 1 card)/(Total number of customers) = 100/1000 = 0.10 = 10%
• e.g. Probability of rolling an even number on a die
Sample space (S) = {1, 2, 3, 4, 5, 6}
Event (A) = {2, 4, 6}
P(A) = 3/6 = 0.5 = 50%
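A minimal Python sketch (not part of the original slides) of the equally-likely counting rule applied to the die example:

```python
# Equally-likely outcomes: P(A) = (# elements in A) / (# elements in sample space)
sample_space = {1, 2, 3, 4, 5, 6}                       # outcomes of one roll of a die
event_even = {x for x in sample_space if x % 2 == 0}    # A = {2, 4, 6}

p_even = len(event_even) / len(sample_space)
print(p_even)   # 0.5
```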
Why is this important from an analytics perspective?
• What we do: analyze historical data to find patterns, under the assumption that the past is a reflection of the future.
• Probability theory then lets us use those historical patterns to make statements about the future.
3.a. Probability- Other topics
Set operations
• Union (A ∪ B)
• Intersection (A ∩ B)
Venn diagrams
• Basic operations on Venn diagrams
• P(A ∪ B) = P(A) + P(B) – P(A ∩ B)
Conditional probability
• P(A|B) = P(A ∩ B) / P(B)
Bayes' theorem
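These operations can be checked with a short Python sketch; the events A and B below are illustrative choices on the die sample space, not taken from the slides:

```python
# Set operations and conditional probability on the die sample space
S = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}      # "even number"
B = {4, 5, 6}      # "number greater than 3"

def P(E):          # equally-likely probability measure
    return len(E) / len(S)

print(P(A | B))                         # union:        P(A U B) = 4/6
print(P(A & B))                         # intersection: P(A n B) = 2/6
print(P(A) + P(B) - P(A & B))           # addition rule, equals P(A U B)
print(P(A & B) / P(B))                  # conditional:  P(A|B) = 2/3
# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)
print((P(A & B) / P(A)) * P(A) / P(B))  # equals P(A|B) again
```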
3.b. Random variables
I. Definition
3.b. Random variables- Definition
A random variable is a function or a rule which maps each outcome in a sample space to a real
number.
X(w) = x
[Diagram: the random variable X maps each outcome w1, w2, w3, … in the sample space S to a real number x1, x2, x3, …]
So, if w is an element of the sample space S (i.e. w is one of the possible outcomes of the
experiment concerned) and the number x is associated with this outcome, then X(w) = x .
Convention:
• Denote random variable by capital letter “X”
• Denote the outcome or possible values by small letter “x” i.e. X(w) = x
3.b. Random variables- Definition
Example:
Suppose there are 8 balls in a bag. The random variable X is the weight, in kg, of a ball
selected at random. Balls 1, 2 and 3 weigh 0.1kg, balls 4 and 5 weigh 0.15kg and balls 6, 7
and 8 weigh 0.2kg. Using the notation above, write down this information.
Solution:
X(b1) = 0.10 kg, X(b2) = 0.10 kg, X(b3) = 0.10 kg,
X(b4) = 0.15 kg, X(b5) = 0.15 kg,
X(b6) = 0.20 kg, X(b7) = 0.20 kg, X(b8) = 0.20 kg
i.e. X(bi) = x, the weight of ball bi.
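A tiny Python sketch of the same mapping; the ball labels b1 to b8 are simply identifiers for the outcomes:

```python
# The random variable X written out as an explicit mapping from outcomes (balls) to real numbers (weights in kg)
X = {"b1": 0.10, "b2": 0.10, "b3": 0.10,
     "b4": 0.15, "b5": 0.15,
     "b6": 0.20, "b7": 0.20, "b8": 0.20}

print(X["b4"])   # X(b4) = 0.15
```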
3.b. Discrete Random variables
Definition:
The set of all possible values of the outcome (i.e. of x) is discrete (countable)
• e.g. outcome of rolling a die = {1, 2, 3, 4, 5, 6}
• Or # credit cards owned by an individual = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}
Probabilities:
Probabilities are defined on events (subsets of the sample space S).
So what is meant by “P(X = x) ”?
• Suppose the sample space consists of eight outcomes {s1, s2, s3, s4, s5, s6, s7, s8}
• Let the events
– E1 = {s1, s2, s3} be associated with the number x1
– E2 = {s4, s5} be associated with the number x2
– E3 = {s6, s7, s8} be associated with the number x3
• P(X = x1) means P(E1)
• P(X = x2) means P(E2)
• P(X = x3) means P(E3)
3.b. Discrete Random variables
Probability functions
• The function fX (x) = P(X = x) for each x in the range of X is the probability function (PF) of X
• It specifies how the total probability of 1 is divided up amongst the possible values of X
• Thus, gives the probability distribution of X.
• Also known as “probability distribution functions” (pdf)
Following are the requirements for a function to qualify as the probability function of a discrete
random variable:
• fX(x) ≥ 0 for all x within the range of X
• ∑ fX(x) = 1
Cumulative distribution functions
• Gives the probability that X assumes a value that does not exceed x.
• Denoted as FX(x) = P(X ≤ x), where max FX(x) = 1
3.b. Discrete Random variables- Probability
Example:
Suppose there are 8 balls in a bag. The random variable X is the weight, in kg, of a ball
selected at random. Balls 1, 2 and 3 weigh 0.1kg, balls 4 and 5 weigh 0.15kg and balls 6, 7
and 8 weigh 0.2kg. Write down the values of the probability function of X.
Solution:
fX(0.10) = P(X=0.10) = probability the ball b1 or b2 or b3 is selected out of 8 balls = 3/8
fX(0.15) = P(X=0.15) = probability the ball b4 or b5 is selected out of 8 balls = 2/8
fX(0.20) = P(X=0.20) = probability the ball b6 or b7 or b8 is selected out of 8 balls = 3/8
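A short Python sketch (an illustrative alternative to working by hand) that builds the same probability function by counting equally likely balls:

```python
from fractions import Fraction

# Probability function of X (ball weight), obtained by counting equally likely balls
weights = [0.10, 0.10, 0.10, 0.15, 0.15, 0.20, 0.20, 0.20]
n = len(weights)

pf = {x: Fraction(weights.count(x), n) for x in sorted(set(weights))}
print(pf)                  # {0.1: 3/8, 0.15: 1/4 (i.e. 2/8), 0.2: 3/8}
print(sum(pf.values()))    # 1 -- the total-probability check
```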
3.b. Continuous Random variables
Cumulative distribution function:
The cumulative distribution function (CDF) is defined to be the function:
• FX (x) = P(X ≤ x)
For a continuous random variable, FX (x) is a continuous, non-decreasing function, defined for all
real values of x.
• FX(x) = ∫ fX(t) dt, integrated from –∞ to x
3.b. Random variables- Expected values
Definition:
Expected values are numerical summaries of important characteristics of the distributions of
random variables.
Expected values of a Random Variable “X” is denoted as E[X]
Important Expected values are
• Mean
• Variance and Standard deviation
Mean:
• E[X] is a measure of central location
• For discrete case calculated as E[X] = ∑(xi * Pi) OR E[X] = (∑x * fX(x))
• For continuous case calculated as E[X] = ∫ x * fX(x) dx, integrated from –∞ to ∞
• Usually denoted by μ
Variance:
• Var[X] = E[(X – E[X])²]
• Var[X] = E[X²] – (E[X])²
3.b. Random variables- Expected values
Example:
Suppose there are 8 balls in a bag. The random variable X is the weight, in kg, of a ball
selected at random. Balls 1, 2 and 3 weigh 0.1kg, balls 4 and 5 weigh 0.15kg and balls 6, 7
and 8 weigh 0.2kg. Find mean and variance of weight.
Solution:
fX(0.10) = P(X=0.10) = 3/8
fX(0.15) = P(X=0.15) = 2/8
fX(0.20) = P(X=0.20) = 3/8
E[X] = ∑Pi * xi = 3/8 * 0.10 + 2/8 * 0.15 + 3/8 * 0.20 = 1.2/8 = 0.15 kg
Var[X] = E[X²] – (E[X])² = 0.024375 – 0.0225 = 0.001875 kg²
X(bi) = x, the weight (random variable):
Balls b1, b2, b3 → x1 = 0.10
Balls b4, b5 → x2 = 0.15
Balls b6, b7, b8 → x3 = 0.20
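A minimal Python sketch computing the same mean and variance from the probability function:

```python
# Mean and variance of the discrete random variable X (ball weight)
pf = {0.10: 3/8, 0.15: 2/8, 0.20: 3/8}       # probability function from the earlier slide

mean = sum(x * p for x, p in pf.items())     # E[X]
e_x2 = sum(x**2 * p for x, p in pf.items())  # E[X^2]
var = e_x2 - mean**2                         # Var[X] = E[X^2] - (E[X])^2

print(mean)   # ≈ 0.15
print(var)    # ≈ 0.001875
```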
3.c. Discrete PDF- Uniform distribution
Sample space S = {1, 2, 3,…,k} .
Probability measure:
• equal assignment (1/k) to all outcomes i.e. all outcomes are equally likely.
Random variable X defined by X(i) = i , (i = 1, 2, 3,…,k) .
Distribution: P(X = x) = 1/k where x = (1, 2, 3, 4,….,k)
Expected values:
• Mean, μ = (k + 1)/2
• Variance, σ² = (k² – 1)/12
Example: Assigning equal probability of default to a portfolio of credit card holders.
[Chart: discrete uniform distribution with k = 10; P(X = x) = 0.1 for each x = 1, 2, …, 10]
3.c. Discrete PDF- Bernoulli distribution
A Bernoulli trial is an experiment which has only two possible outcomes – s (“success”) and f
(“failure”).
“success” and “failure” are mere labels and should not be taken literally. Instead we could have
“yes” and “no” OR “true” and “false”
Sample space S = {s,f} .
Probability measure:
• P({s}) = p, P({f}) = 1 – p, where 0 < p < 1
Random variable X defined by X(s) = 1, X(f) = 0.
Distribution: P(X = x) = p^x * (1 – p)^(1–x), x = 0, 1; 0 < p < 1
Expected values:
• Mean, μ = p
• Variance, σ² = p(1 – p)
Examples:
• Tossing of a coin. “Head” corresponds to “success” and “Tail” corresponds to “failure”.
• Defaulting on a home loan. “Default” corresponds to “success” and “Non-default” corresponds to “failure”.
• Auto insurance policy. “No claim” corresponds to “success” and “Claim” corresponds to “failure”.
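A minimal Python sketch of the Bernoulli probability function and its moments, using p = 0.25 as on the next slide:

```python
# Bernoulli(p): P(X = x) = p**x * (1 - p)**(1 - x) for x in {0, 1}
p = 0.25

pmf = {x: p**x * (1 - p)**(1 - x) for x in (0, 1)}
print(pmf)                 # {0: 0.75, 1: 0.25}

mean = p                   # E[X] = p
var = p * (1 - p)          # Var[X] = p(1 - p)
print(mean, var)           # 0.25 0.1875
```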
3.c. Discrete PDF- Bernoulli distribution
Bernoulli distribution with probability of success (p) = 0.25.
[Chart: bar chart with P(X = 1) = 0.25 for s (“success”) and P(X = 0) = 0.75 for f (“failure”)]
3.c. Discrete PDF- Binomial distribution
Assumptions
• Each trial has only 2 possible outcomes, success and failure.
• The trials are identical and their number is fixed (usually denoted by n).
• The probability of success p is constant (0 ≤ p ≤ 1).
• All trials are independent.
Binomial Distribution – One Parameter Distr.
• The probability of getting exactly k successes in n trials is given by the probability mass function:
P(X = k) = nCk * p^k * (1 – p)^(n–k), for k = 0, 1, 2, ..., n, where nCk = n!/{(n–k)! * k!}
3.c. Case study for binomial distribution
Each operational loss, independently, is assumed to be insured with probability 60% in a BL (business line);
the probability of 60% is derived from historical data as Number of insured losses / Total number of losses in
the BL over the last 36 months. This implies that the annual number of insured losses is the sum of
Bernoulli trial results and would follow a binomial distribution. If, during a particular year, 20 losses
happen in the BL, what is the probability that the Bank would have insurance in 10 or fewer cases?
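A minimal Python sketch of the case study calculation, assuming the number of insured losses X follows a Binomial(n = 20, p = 0.6) distribution:

```python
from math import comb

# P(number of insured losses <= 10) when each of 20 losses is insured independently with p = 0.6
n, p = 20, 0.60
prob = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(11))   # k = 0, 1, ..., 10
print(round(prob, 4))   # ≈ 0.2447
```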
3.c. Discrete PDF- Poisson distribution
Expresses the probability of a number of events occurring in a fixed period of time, if these events occur with a known average rate and independently of the time since the last event.
Assumptions
• Constant mean (number of events in a pre-specified time interval)
• The interval length between two consecutive events follows an exponential distribution (λ)
• The sum of independent Poisson variables is also Poisson; λ for a 12M period may be taken as 4 x λ for a 3M period
Poisson Distribution – One Parameter Distr.
• Expected number of occurrences in the interval = λ
• The probability that there are exactly k occurrences is equal to
P(X = k) = (λ^k * e^(–λ)) / k!
• k is the number of occurrences of an event and is a non-negative integer, k = 0, 1, 2, ...
• k! is the factorial of k
• e is the base of the natural logarithm (e ≈ 2.718)
• λ (a positive real number) is equal to the expected number of occurrences during the interval
Mean(X) = Variance(X) = λ
Example
• For instance, occurrence of operational risk losses or credit defaults during a time period, if individual events are independent
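A minimal Python sketch of the Poisson probability function; λ = 4 is an illustrative value, not from the slides:

```python
from math import exp, factorial

# Poisson pmf: P(X = k) = lam**k * exp(-lam) / k!
lam = 4                                    # e.g. 4 expected loss events per period (illustrative)

def pmf(k):
    return lam**k * exp(-lam) / factorial(k)

print(round(pmf(0), 4), round(pmf(4), 4))  # ≈ 0.0183, ≈ 0.1954
print(sum(pmf(k) for k in range(3)))       # P(X <= 2) ≈ 0.2381
```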
3.c. Discrete PDF- Negative- Binomial distribution
Discrete probability distribution of the number of failures (r) in a sequence of Bernoulli trials before a specified (non-random) number k of successes occurs.
Special generalized case of the Poisson distribution: the intensity rate (λ) is no longer taken to be constant (it is assumed to follow a Gamma distribution).
Two-parameter distribution.
Negative Binomial Distr. – 2 Param Distr.
• The probability of getting exactly r failures before k successes is given by the probability mass function:
P(X = r) = (k+r–1)Cr * p^k * (1 – p)^r, for r = 0, 1, 2, ..., where nCk = n!/{(n–k)! * k!}
Advantages
• Provides additional flexibility in fitting data
• Caveat: parameter uncertainty may be high with few data points (typical of scenarios where annual frequency data points may be 3 to 6)
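A minimal Python sketch using scipy's negative binomial (number of failures before the k-th success); the parameter values are illustrative, not from the slides:

```python
from scipy.stats import nbinom

# Number of failures r before k successes, each trial succeeding with probability p
k, p = 3, 0.6
print(nbinom.pmf(2, k, p))                    # P(exactly 2 failures before the 3rd success) ≈ 0.207
print(nbinom.mean(k, p), nbinom.var(k, p))    # k(1-p)/p = 2.0 and k(1-p)/p**2 ≈ 3.33
```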
3.c. Continuous PDF- Uniform distribution
Assigns equal probability to all values between its minimum and maximum values.
Random variable X takes a value between two numbers a and b (say).
Probability density function: fX(x) = 1/(b-a), a<x<b
Denoted as X ~ U(a, b)
Expected values:
• Mean, μ = (a + b)/2
• Variance, σ² = (b – a)²/12
Example: Assigning equal probability of default to a portfolio of credit card holders.
3.c. Continuous PDF- Gamma distribution
The Gamma family of distributions is positively skewed and is described by two parameters, “α”
and “λ” (say).
It is bounded at zero and can take various shapes depending on values of parameters.
Random variable X takes a non-zero positive value.
Probability density function: fX(x) = (λ^α * x^(α–1) * e^(–λx)) / Γ(α), x > 0
Denoted as X ~ Gamma(α, λ)
Expected values:
• Mean, μ = α/λ
• Variance, σ² = α/λ²
Special cases:
• Exponential distribution when α = 1: fX(x) = λ e^(–λx), x > 0
Example:
• Used to predict claim amounts in auto insurance.
• Used to predict loss amounts on bank loan defaults.
3.c. Continuous PDF- Gamma distribution
Plotting PDFs for different Gamma distributions using MS Excel.
[Chart and table: Gamma PDF values for several parameter choices, including Ga(20, 0.5), tabulated and plotted over x = 1 to 20]
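As an alternative to the Excel chart, a minimal Python/matplotlib sketch of Gamma PDFs; the (α, λ) parameter pairs below are illustrative choices, using the rate parameterization from the previous slide:

```python
import numpy as np
from scipy.stats import gamma
import matplotlib.pyplot as plt

# Gamma(alpha, lam) pdf; scipy uses shape a = alpha and scale = 1/lam (rate parameterization)
x = np.linspace(0.01, 20, 200)
for alpha, lam in [(2, 0.5), (5, 1.0), (10, 1.0)]:
    plt.plot(x, gamma.pdf(x, alpha, scale=1/lam), label=f"Ga({alpha}, {lam})")

plt.xlabel("Random variable (X)")
plt.ylabel("Probability density")
plt.legend()
plt.show()
```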
3.c. Continuous PDF- Normal distribution
A symmetrical distribution having bell shaped pdf curve.
Widely used to model naturally occurring variables, e.g. height, weight, exam scores etc.
Has two parameters: mean (μ) and variance (σ²).
Random variable X can take any real value, positive or negative.
Probability density function: fX(x) = (1/(σ√(2π))) * exp[–½ {(x – μ)/σ}²]
Denoted as X ~ N(μ, σ2)
It provides good approximations to various other distributions (Central Limit Theorem)
Transformation z = (x- μ)/ σ has N(0, 1) distribution.
The probability is then calculated by looking up the standard normal distribution table for N(0,1).
3.c. Continuous PDF- Normal distribution
Plotting PDFs for different Normal distributions using MS Excel.
[Chart: Normal PDFs plotted over x = –5 to 5 for different parameter choices]
3.c. Continuous PDF- Normal distribution
Problem:
If X ~ N(25,36) , by making use of standard normal probability distribution table, find:
(i) P( X < 28)
(ii) P( X > 30)
(iii) P( X < 20)
Solution:
(i) P(X < 28) = P(Z < (28-25)/sqrt(36)) = P(Z < 3/6) =0.69146
(ii) P(X > 30) = P(Z > 0.833) =1− P(Z < 0.833) =1− 0.79758 = 0.20242
(iii) P(X < 20) = P(Z < −0.833) =1− P(Z < 0.833) =1− 0.79758 = 0.20242
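A minimal Python sketch that reproduces the three probabilities without a printed table:

```python
from scipy.stats import norm

# X ~ N(25, 36): mean 25, variance 36, so standard deviation 6
X = norm(loc=25, scale=6)

print(round(X.cdf(28), 5))        # P(X < 28) ≈ 0.69146
print(round(1 - X.cdf(30), 5))    # P(X > 30) ≈ 0.2023 (the table value 0.20242 uses z rounded to 0.833)
print(round(X.cdf(20), 5))        # P(X < 20) ≈ 0.2023
```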
3.c. Continuous PDF- Lognormal distribution
A positively skewed distribution.
If the random variable X has a lognormal distribution then Y = log(X) is normally distributed.
Random variable X is bounded at zero and is used to model variables taking non-zero positive values.
Defined by two parameters μ and σ², and denoted as X ~ logN(μ, σ²)
Probability density function: fX(x) = (1/(xσ√(2π))) * exp[–½ {(log x – μ)/σ}²], x > 0
Expected values:
• Mean, E[X] = exp(μ + ½σ²)
Example:
• Used to predict claim amounts in auto insurance.
• Used to predict loss amounts on bank loan defaults.
3.c. Continuous PDF- Lognormal distribution
Plotting PDFs for lognormal (0,1) distributions using MS Excel.
[Chart: logN(0,1) PDF plotted over x = 0 to 20, with density values ranging from 0.00 to about 0.45]
3.d. The Central Limit Theorem
Introduction:
It is perhaps one of the most important results in statistics.
It provides the basis for large-sample inference about a population mean when the population
distribution is unknown.
It also provides the basis for large-sample inference about a population proportion, for example, in
opinion polls and surveys.
Definition:
If X1, X2, …, Xn is a sequence of independent, identically distributed (iid) random variables with finite
mean μ and finite (non-zero) variance σ², then the distribution of (<X> – μ)/(σ/√n) approaches the
standard normal distribution, N(0,1), as n → ∞
μ is the population mean from which X1, X2, …, Xn have been extracted.
<X> is the sample mean, calculated as <X> = (1/n) ∑ Xi (summing over i = 1 to n)
For large n, (<X> – μ)/(σ/√n) and (∑ Xi – nμ)/√(nσ²) have an approximate N(0,1) distribution
OR
• <X> ~ N(μ, σ²/n)
• ∑ Xi ~ N(nμ, nσ²)
3.d. The Central Limit Theorem
Example:
It is assumed that the number of claims arriving at an insurance company per working day has
a mean of 40 and a standard deviation of 12. A survey was conducted over 50 working days.
Find the probability that the sample mean number of claims arriving per working day was less
than 35.
Solution:
We have μ = 40, σ = 12, n = 50.
The central limit theorem states that <X> ~ N(40, 12²/50).
We want P(<X> < 35):
P(<X> < 35) = P(Z < (35 – 40)/√(12²/50))
= P(Z < –2.946) = 1 – P(Z < 2.946)
= 1 – 0.9984 = 0.0016
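A minimal Python sketch of the same CLT calculation:

```python
from math import sqrt
from scipy.stats import norm

# CLT approximation: the sample mean is approximately N(mu, sigma**2 / n)
mu, sigma, n = 40, 12, 50
se = sigma / sqrt(n)                              # standard error ≈ 1.697

print(round(norm.cdf(35, loc=mu, scale=se), 4))   # P(sample mean < 35) ≈ 0.0016
```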
3.e. Sampling and Statistical Inference
I. Introduction
II. Random samples
III. Sample Mean
IV. Sample variance
V. The t- result
VI. The F- result
3.e. Sampling and Statistical Inference
Introduction:
When a sample is taken from a population the sample information can be used to infer certain things
about the population.
For example, a population quantity could be its mean or variance.
If we were to keep taking samples from the same population and calculating the mean and variance
for each of the samples, we would find that the mean and variance results form distributions as well.
The distributions of the sample mean and sample variance are called sampling distributions.
3.e. Sampling and Statistical Inference- Normal distribution
The sample mean:
Mean, <X> = (1/n)∑Xi
Distribution:
• (<X> – μ)/(σ /√n) ~ N(0,1)
• <X> ~ N(μ, σ²/n)
• μ is the population mean for which we are trying to draw the inference
3.e. Sampling and Statistical Inference- Normal distribution
The t result:
Distribution:
• (<X> – μ)/(σ/√n) ~ N(0,1) is used to draw inference about μ when the population variance σ² is known.
• But for a population, usually σ² is not known.
• We combine (<X> – μ)/(σ/√n) ~ N(0,1) and (n–1)S²/σ² ~ χ²(n–1) to solve this problem.
• (<X> – μ)/(S/√n) ~ N(0,1)/√(χ²(n–1)/(n–1)) = t(n–1)
• As N(0,1)/√(χ²(k)/k) = t(k)
Example:
State the distribution of (<X> – 100)/(S/√5) for a random sample of 5 values taken from a N(100, σ²)
population. What is the probability that this quantity will exceed 1.533?
Solution:
Distribution: (<X>-100)/(S/ √5) ~ t4
From the t-Distribution table, the probability that this quantity will exceed 1.533 is 10%.
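A minimal Python sketch confirming the tail probability with scipy instead of the t table:

```python
from scipy.stats import t

# (<X> - 100)/(S/sqrt(5)) follows a t distribution with 4 degrees of freedom
print(round(t.sf(1.533, 4), 3))   # P(t4 > 1.533) ≈ 0.10
```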
3.e. Sampling and Statistical Inference- Normal distribution
The F result:
If independent random samples of size n1 and n2 respectively are taken from normal populations
with variances σ1² and σ2², then
• (S1²/σ1²) / (S2²/σ2²) ~ F(n1–1, n2–1)
• The F distribution gives us the distribution of the variance ratio for two normal populations.
3.e. The F-test
Example: William Waugh is examining the earnings for two different industries. He suspects that the earnings
for chemical industry are more divergent than those of petroleum industry. To confirm, he took a sample of 35
chemical manufacturers & a sample of 45 petroleum companies. He measured the sample standard deviation
of earnings across the chemical industry to be $3.5 & that of petroleum industry to be $3.00. Determine if the
earnings of the chemical industry have greater standard deviation than those of the petroleum industry.
Answer:
1) State the hypotheses: H0: σ²(chemical) ≤ σ²(petroleum) versus H1: σ²(chemical) > σ²(petroleum) (a one-tailed test)
2) Compute the test statistic F = S²(chemical)/S²(petroleum) = 3.5²/3.0² ≈ 1.36 with (34, 44) degrees of freedom, and compare it against the critical F value (or the p-value) at the chosen significance level.
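A minimal Python sketch of this calculation; the conclusion is read from the p-value rather than a printed F table:

```python
from scipy.stats import f

# One-tailed F-test for H1: variance of chemical earnings > variance of petroleum earnings
s_chem, n_chem = 3.5, 35
s_petro, n_petro = 3.0, 45

F = s_chem**2 / s_petro**2                      # ≈ 1.361
p_value = f.sf(F, n_chem - 1, n_petro - 1)      # one-tailed p-value with df = (34, 44)
print(round(F, 3), round(p_value, 3))           # p-value is well above 0.05, so H0 is not rejected at the 5% level
```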
3.f. Point Estimate & Confidence Intervals
Point estimates: These are the single (sample) values used to estimate population parameters
Confidence interval: It is a range of values in which the population parameter is expected to lie
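A minimal Python sketch of a large-sample 95% confidence interval for a mean, using the sample figures from the household-income case study later in this section:

```python
from math import sqrt
from scipy.stats import norm

# 95% confidence interval for the population mean (large-sample, z-based)
x_bar, s, n = 20_000, 4_000, 100       # sample mean, sample standard deviation, sample size
z = norm.ppf(0.975)                    # ≈ 1.96
half_width = z * s / sqrt(n)           # ≈ 784

print(x_bar - half_width, x_bar + half_width)   # ≈ (19,216, 20,784)
```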
3g. Hypothesis Testing
A statistical hypothesis test is a method of making statistical decisions from and about
experimental data.
• “How well the findings fit the possibility that chance factors alone might be responsible."
3g. Key steps in Hypothesis Testing
Null Hypothesis (H0): The hypothesis that the researcher wants to reject
Test Statistic
Rejection/Critical Region
Conclusion
3g. Launching a niche course for MBA students?
Sam, a brand manager for a leading financial training center, wants to introduce a new niche finance course for MBA
students. He met some industry stalwarts and found that, with the skills acquired by attending such a course, the
students would be able to land a good job.
He meets a random sample of 100 students and discovers the following characteristics of the market
• Mean household income = $20,000
• Interest level in students = high
• Current knowledge of students for the niche concepts = low
Sam strongly believes the course would be adequately profitable if the students have the buying power for the
course. They would be able to afford the course only if the mean household income is greater than $19,000.
Would you advise Sam to introduce the course?
• What should be the hypothesis?
o Hint: What is the point at which the decision changes (19,000 or 20,000)?
o What about the alternate hypothesis?
• What other information do you need to ensure that the right decision is arrived at?
o Hint: confidence intervals/ significance levels?
o Hint: Is there any other factor apart from mean, which is important? How do I move from population
parameters to standard errors?
• What is the risk still remaining, when you take this decision?
o Hint: Type-I/II errors?
o Hint: P-value
3g. Criterion for Decision Making
• To reach a final decision, Sam has to make a general inference (about the population) from the
sample data.
• Criterion: Mean income across all households in the market area under consideration.
– If the mean population household income is greater than $19,000, then Sam should introduce the new course.
3g. Identifying the Critical Sample Mean Value – Sampling Distribution
[Chart: sampling distribution of the sample mean centered on μ = $19,000, with the critical value (Xc) marked in the right tail]
• Sample mean values greater than $19,000 (that is, x-values on the right-hand side of the sampling distribution centered on μ = $19,000) suggest that H0 may be false.
• More importantly, the farther to the right x is, the stronger the evidence against H0.
3g. Computing the Criterion Value
• The standard deviation for the sample of 100 households is $4,000. The standard error of the mean (sx) is given by:
sx = s/√n = 4,000/√100 = $400
• The critical mean household income xc is found through the following two steps:
– Determine the critical z-value, zc. For α = 0.05, zc = 1.645.
– Substitute the values of zc, sx, and μ (under the assumption that H0 is “just” true):
xc = μ + zc * sx = 19,000 + 1.645 × 400 = $19,658
– In this case, since the observed sample mean ($20,000) is greater than the critical value ($19,658), the null hypothesis is rejected.
Decision Rule
If the sample mean household income is greater than $19,658, reject the null hypothesis and introduce the
new course
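A minimal Python sketch of the critical-value calculation and the resulting decision:

```python
from math import sqrt
from scipy.stats import norm

# One-tailed z-test at alpha = 0.05: H0: mu <= 19,000 vs H1: mu > 19,000
mu0, s, n, x_bar = 19_000, 4_000, 100, 20_000
se = s / sqrt(n)                       # standard error = 400

z_crit = norm.ppf(0.95)                # ≈ 1.645
x_crit = mu0 + z_crit * se             # critical sample mean ≈ 19,658

print(round(x_crit))                   # 19658
print("reject H0" if x_bar > x_crit else "do not reject H0")
```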
3g. Test Statistic
The value of the test statistic is simply the z-value corresponding to x = $20,000:
Z = (x – μ)/sx = (20,000 – 19,000)/400 = 2.5
• There is a significant difference between the hypothesized population parameter (μ = $19,000) and the observed sample mean (x = $20,000).
[Chart: sampling distribution centered on μ = $19,000 with the α = 0.05 rejection region; the observed sample mean of $20,000 falls in the rejection region]
3g. Errors in Estimation
Please note: You are inferring for a population, based only on a sample
• This is no proof that your decision is correct
• It’s just a hypothesis
– There is still a chance that your inference is wrong
– How do I quantify the probability of error in inference?
[Table: inference (reject H0 / do not reject H0) against the actual state (H0 is True / H0 is False); rejecting a true H0 is a Type I error, and failing to reject a false H0 is a Type II error]
3g. P - Value – Actual Significance Level
The p-value is the smallest level of significance at which the null hypothesis can be rejected.
• It is the probability of obtaining an observed value of x (from the sample) as high as $20,000 or more when the actual population mean (μ) is only $19,000, which equals 0.00621
• It is the calculated probability of rejecting the null hypothesis (H0) when that hypothesis (H0) is true (Type I error)
[Chart: sampling distribution centered on μ = $19,000, with the α = 0.05 rejection region and the p-value area beyond the observed sample mean of $20,000]
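A minimal Python sketch of the p-value calculation:

```python
from scipy.stats import norm

# One-tailed p-value for the observed sample mean of $20,000 under H0: mu = 19,000
z = (20_000 - 19_000) / 400        # test statistic = 2.5
p_value = norm.sf(z)               # P(Z >= 2.5)
print(round(p_value, 5))           # ≈ 0.00621
```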
3g. Some variations in the Z-Test
What if Sam surveyed the market and found that student behavior is estimated to be:
– They would find the training too expensive if their household income is < US$19,000 and hence would not have the buying power for the course
– They would perceive the training to be of inferior quality if their household income is > US$19,000 and hence not buy the training
– How would the decision criteria change? What should be the testing strategy?
Hint: from the question wording, infer two-tailed testing
– Appropriately modify the significance value and other parameters
– Use the Z-test
Appropriate change in the decision-making and testing process:
– Students will not attend the course if:
• The household income is > $19,000 and the students perceive the course to be inferior
• The household income is < $19,000
– This becomes a two-tailed test wherein the students will join the course only when the household income lies within a particular band, i.e. the household income should be neither very high nor very low
Two-Tailed Test
[Chart: two-tailed test showing “Reject H0” regions in both tails and a “Do not reject H0” region in the middle]
• Conclusion: if the mean household income lies between $18,216 and $19,784, then the students will attend the course, at 95% confidence
Thank you!
Pristine
702, Raaj Chambers, Old Nagardas Road, Andheri (E), Mumbai-400 069. INDIA
www.edupristine.com
Ph. +91 22 3215 6191