Probability and Statistics Assignment Help
1. True or False.
(a). The significance level of a statistical test is not equal to the probability that the null hypothesis is true.
Solution: True
(b). If a 99% confidence interval for a distribution parameter θ does not include
θ0, the value under the null hypothesis, then the corresponding test with
significance level 1% would reject the null hypothesis.
Solution: True
(c). Increasing the size of the rejection region will lower the power of a test.
Solution: False
(d). Solution: True
(e). If the p-value is 0.02, then the corresponding test will reject the null at the
0.05 level.
Solution: True
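As an illustration of (c) and (e) (not part of the original solutions), the following Python sketch uses an arbitrary one-sided z-test with n = 25 observations and true mean 0.5 to show that enlarging the rejection region (a larger α) raises the power, and then applies the usual decision rule for (e):

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, mu_true = 25, 0.5                 # illustrative choices; H0: mu = 0, sigma = 1 known
reps = 20_000

# z-statistics simulated under the alternative mu = 0.5
z = rng.normal(mu_true, 1, size=(reps, n)).mean(axis=1) * np.sqrt(n)

for alpha in (0.01, 0.05):           # larger alpha = larger rejection region
    power = np.mean(z > norm.ppf(1 - alpha))
    print(f"alpha = {alpha:.2f}: estimated power = {power:.3f}")

# (e): reject H0 at level 0.05 whenever the p-value is at most 0.05
print("p-value 0.02 rejects at the 0.05 level:", 0.02 <= 0.05)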
2. Testing Goodness of Fit.
(a). Suppose n = 100 and X = 38. Compute the Pearson chi-square statistic
for testing the goodness of fit to the multinomial distribution with two cells
with H0 : p = p0 = 0.5.
(b). What is the approximate distribution of the test statistic in (a) under the null hypothesis H0?
(c). What can you say about the P-value of the Pearson chi-square statistic in (a) using the following table of percentiles for chi-square random variables?
(i.e., with 3 degrees of freedom, P(χ2 ≤ q.90) = .90, where q.90 = 6.25)
df    q.90   q.95   q.975   q.99   q.9975
 1    2.71   3.84    5.02    6.63    9.14
 2    4.61   5.99    7.38    9.21   11.98
 3    6.25   7.81    9.35   11.34   14.32
 4    7.78   9.49   11.14   13.28   16.42
(d). Consider the general case of the Pearson chi-square statistic in (a),
where the outcome X = x is kept as a variable (yet to be observed). Show
that the Pearson chi-square statistic is an increasing function of |x−n/2|.
Solution:
(a). With two cells, the observed counts are 38 and 62 and the expected counts under H0: p0 = 0.5 are both 100 × 0.5 = 50, so the Pearson chi-square statistic is
X² = (38 − 50)²/50 + (62 − 50)²/50 = 2.88 + 2.88 = 5.76.
(b). Under the null hypothesis H0, the statistic is approximately distributed as a chi-square random variable with 1 degree of freedom (the number of cells minus one).
(c). The P-value of the Pearson chi-square statistic is the probability that a chi-square random variable with 1 degree of freedom exceeds 5.76, the observed value of the statistic. Since 5.76 is greater than q.975 = 5.02 and less than q.99 = 6.63 (the percentiles of the chi-square distribution with 1 degree of freedom), we know that the P-value is smaller than (1 − .975) = .025 but larger than (1 − .99) = .01.
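A small numerical check of (a)–(c), assuming SciPy is available (this code is not part of the original solution):

from scipy.stats import chisquare, chi2

observed = [38, 62]       # X = 38 successes out of n = 100
expected = [50, 50]       # expected cell counts under H0: p = 0.5

stat, pval = chisquare(observed, f_exp=expected)
print(stat)               # 5.76, matching the hand computation
print(pval)               # roughly 0.016, indeed between .01 and .025

# the same P-value from the chi-square(1) survival function
print(chi2.sf(5.76, df=1))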
The significance level of the test is the probability of rejecting the null hypothesis when it is true: α = P(reject H0 | H0 is true).
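The written solution to part 2(d) is not shown above; a short derivation consistent with the two-cell setup in (a), in LaTeX:

\[
X^2 = \frac{(x - n/2)^2}{n/2} + \frac{\bigl((n - x) - n/2\bigr)^2}{n/2}
    = \frac{2(x - n/2)^2}{n/2}
    = \frac{4}{n}\,(x - n/2)^2 ,
\]

which is an increasing function of |x − n/2|; at x = 38 and n = 100 it equals 4(144)/100 = 5.76, as in (a).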
3. Reliability Analysis
(a). Consider the case of a beta prior with a = 1 and b = 1. Sketch a plot of the prior density of θ and of the posterior density of θ given S = 2. For each density, give the distribution's mean (expected value) and identify it on your plot.
Solution:
The posterior distribution of θ given S = s is beta(a∗ = a + s, b∗ = b + (n − s)). The prior beta(1, 1) has mean 1/2, and the posterior mean is (a + s)/(a + b + n).
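The conjugate update behind this formula is a standard calculation (not reproduced in the original): with s successes in n Bernoulli(θ) trials and a beta(a, b) prior,

\[
\pi(\theta \mid S = s) \;\propto\; \theta^{s}(1 - \theta)^{n - s}\,\theta^{a - 1}(1 - \theta)^{b - 1}
  \;=\; \theta^{a + s - 1}(1 - \theta)^{b + n - s - 1},
\]

which is the kernel of a beta(a + s, b + (n − s)) density.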
(b). Repeat (a) for the case of a beta(a = 1, b = 10) prior for θ.
Solution:
Since the mean of a beta(a, b) distribution is a/(a + b), the prior mean is 1/(10 + 1) = 1/11, and the posterior mean is (a + s)/(a + b + n) = 3/21 = 1/7.
Solution:
The prior in (a) is a uniform distribution on the interval 0 < θ < 1. It is a flat prior and represents ignorance about θ: any two intervals of θ with the same width receive the same prior probability.
The prior in (b) gives higher density to values of θ closer to zero. The mean of the prior in (b) is 1/11, which is much smaller than the mean of the uniform prior in (a), which is 1/2.
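The requested sketches can be reproduced with the short Python script below. The sample size n is not stated in the excerpt above; n = 10 with s = 2 is assumed here because it matches the posterior mean 3/21 quoted in (b), so treat this as an illustrative sketch rather than the original figure:

import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import beta

n, s = 10, 2                          # assumed values (consistent with the 3/21 posterior mean)
theta = np.linspace(0, 1, 400)

fig, axes = plt.subplots(1, 2, figsize=(10, 4), sharey=True)
for ax, (a, b) in zip(axes, [(1, 1), (1, 10)]):
    prior = beta(a, b)
    post = beta(a + s, b + (n - s))
    ax.plot(theta, prior.pdf(theta), label=f"prior beta({a},{b}), mean = {prior.mean():.3f}")
    ax.plot(theta, post.pdf(theta), label=f"posterior beta({a + s},{b + n - s}), mean = {post.mean():.3f}")
    ax.axvline(prior.mean(), linestyle=":", color="C0")   # mark the prior mean
    ax.axvline(post.mean(), linestyle=":", color="C1")    # mark the posterior mean
    ax.set_xlabel("theta")
    ax.legend()
plt.tight_layout()
plt.show()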
Solution:
The marginal pmf of X is obtained by integrating over the prior: m(x) = ∫₀¹ f(x | θ) π(θ) dθ, where f(x | θ) is the pmf of a Bernoulli(θ) random variable and π(θ) is the pdf of a beta(a, b) distribution.
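Assuming this block concerns the marginal distribution of a single observation X before the sample is taken, carrying the integral out gives a Bernoulli distribution:

\[
P(X = 1) = \int_0^1 \theta\,\pi(\theta)\,d\theta = E[\theta] = \frac{a}{a + b},
\]

so X is marginally Bernoulli(a/(a + b)); the same argument with the posterior in place of the prior yields the result used in (e).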
(e). What is the marginal distribution of X after the sample is taken? (Hint:
specify the joint distribution of X and θ using the posterior distribution of
θ.)
Solution:
The marginal distribution of X after the sample is computed using the same argument as (c), replacing the prior distribution with the posterior distribution for θ given S = s.
X is Bernoulli(p) with p = ∫₀¹ θ π(θ | S) dθ = E[θ | S] = (a + s)/(a + b + n).
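As a numerical sanity check (again assuming the illustrative values a = 1, b = 1, n = 10, s = 2), drawing θ from the posterior and then X given θ from Bernoulli(θ) should give a success probability near (a + s)/(a + b + n) = 3/12 = 0.25:

import numpy as np

rng = np.random.default_rng(1)
a, b, n, s = 1, 1, 10, 2              # illustrative values; n = 10 is an assumption

# theta ~ posterior beta(a + s, b + (n - s)), then X | theta ~ Bernoulli(theta)
theta = rng.beta(a + s, b + (n - s), size=200_000)
x = rng.binomial(1, theta)

print(x.mean())                       # close to (a + s) / (a + b + n) = 0.25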