Statistics With R
Hypothesis testing and distributions
Steven Buechler
Department of Mathematics 276B Hurley Hall; 1-6233
Fall, 2007
With categorical data about all you can do is examine the frequency table:
> f <- factor(c(rep("A", 6), rep("B", 4), rep("C", 8)))
> table(f)
f
A B C
6 4 8
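A common next step (not shown on the slide) is to picture the same frequencies with a bar plot; barplot() accepts the table directly:

```r
# Build the same factor as above and tabulate it
f <- factor(c(rep("A", 6), rep("B", 4), rep("C", 8)))
tab <- table(f)
print(tab)

# barplot() draws one bar per level, with height equal to the count
barplot(tab, main = "Frequencies of f", ylab = "Count")
```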
Plots are driven by the input and user-defined parameters. Options allow on-the-fly visualization with one-line commands, or publication-quality annotated diagrams.
plot(...) creates a graphics panel displaying the result.
Good for simply picturing what you're doing. These can be saved for future use.
Other commands can generate JPEG, PNG, PDF, etc., files.
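As a sketch of saving a plot to a file: base R graphics devices such as pdf() redirect all plotting to a file until dev.off() is called (the file name below is just an example):

```r
# Open a PDF graphics device; plots now go to the file, not the screen
pdf("scatter.pdf")                     # example file name
plot(rnorm(20), main = "Saved to PDF") # any plotting commands go here
dev.off()                              # close the device to finish the file

# The same open / plot / close pattern works for png(), jpeg(), etc.
```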
The first visualization is a scatter plot that plots the index on the x axis and the value on the y axis. This is the default when plot is given a numeric vector.
Scatter Plots
For viewing the distribution of values
[Scatter plot of x1 against Index]
Create a scatter plot of x1, x2, x3 with notation that separates them.
> plot(x1, xlim = c(1, 40), ylim = c(-4, 4), main = "Plot all Points")
> points(x2, pch = 19, col = "orange")
> points(x3, pch = 19, col = "blue")
> abline(h = mean(x3), lwd = 3)
[Plot of x1, x2, and x3 against Index, with a horizontal line at mean(x3)]
Combine all points and create a histogram.
> xx <- c(x1, x2, x3)
> hist(xx)
[Histogram of xx: Frequency against xx]
Histograms in R
There is a theory of histograms that suggests a bin width that is most informative. As expected, it is possible to fill or cross-hatch the bars, overlay other plots, etc.
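In hist() the binning rule can be chosen through the breaks argument; Sturges' rule is the default, and "FD" (Freedman-Diaconis) is one of the built-in alternatives. A small sketch on simulated data:

```r
set.seed(1)                          # reproducible example data
xx <- rnorm(200)

hist(xx)                             # default: Sturges' rule for the bins
hist(xx, breaks = "FD")              # Freedman-Diaconis rule: often more bins
hist(xx, breaks = 25, col = "gray")  # or request roughly 25 bins directly
```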
There are several levels of precision in answering this. In deciding if a particular statistical test can be applied, it is usually good enough if it "looks normal." However, histograms or the related density plots aren't the accepted way.
What is a Distribution?
Reminder of Basic Definitions
A discrete random variable is a numerical quantity that takes values with some randomness from a discrete set; often a subset of integers. The probability distribution of a discrete random variable specifies the probability associated with each possible value.
What is a Distribution?
Continuous Random Variables
A continuous random variable X takes values in an interval of real numbers. There is a probability associated with X falling between two numbers a < b. The density function f_X(x) is such that Prob(a ≤ X ≤ b) is the area bounded by the graph of y = f_X(x), the x axis, and the vertical lines x = a and x = b. In other words,

Prob(a ≤ X ≤ b) = ∫_a^b f_X(x) dx.
The cumulative distribution function for X is F_X(x) = Prob(X ≤ x). It's the area under the curve of the density function to the left of x. The quantile function of X, Q_X(u), is the value x of X such that Prob(X ≤ x) = u. In other words, it is the inverse of F_X(x).
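In R terms (anticipating the normal-distribution functions below), pnorm plays the role of F_X and qnorm plays the role of Q_X, so one undoes the other:

```r
# The quantile function is the inverse of the distribution function:
# qnorm(pnorm(x)) recovers x, and pnorm(qnorm(u)) recovers u
x <- 1.3
u <- pnorm(x)     # u = Prob(X <= 1.3) for a standard normal
qnorm(u)          # back to 1.3
pnorm(qnorm(0.9)) # back to 0.9
```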
Normal Distribution
A normally distributed random variable with mean µ and standard deviation σ is one with density function

f_X(x) = (1 / (σ √(2π))) e^(−(x − µ)² / (2σ²)).

Its graph is a bell curve centered at µ whose spread is determined by σ. The standard normal distribution is one with mean 0 and standard deviation 1.
Normal Distribution
Computing Values in R
The distribution function for the normal with mean = mean and standard deviation = sd is pnorm(x, mean, sd). The quantile function of the normal is qnorm(p, mean, sd). The function rnorm(n, mean, sd) randomly generates n values of a normally distributed random variable with given mean and sd. Default values: mean=0, sd=1.
Normal Distribution
Examples
> pnorm(2)
[1] 0.9772
> pnorm(0)
[1] 0.5
> qnorm(0.95)
[1] 1.645
> qnorm(0.025, mean = 2, sd = 0.5)
[1] 1.02
> rnorm(4, 2, 2)
[1] 1.00381 3.94258 -0.01914 2.30742
Collected data are finitely many values from a random variable X; a finite sample. While X has a precise distribution, we can only estimate it from the finite sample. How do we test a sample against a hypothesized distribution? Is it normal, t, Chi-squared, ...? More generally, how do we test whether two samples come from the same distribution?
Quantile-Quantile Plots
Quantiles are Calculable
It's hard going from a sample to estimate a density function. We can, however, calculate the sample quantiles. If two samples come from the same distribution they should have the same quantiles. Given two samples A and B, let x_i and y_i be the i-th quantile of A and B, respectively. Graph the points (x_i, y_i) to form the Quantile-Quantile Plot (Q-Q plot) of A and B. When A and B have the same distribution the Q-Q plot is a 45° straight line.
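A minimal hand-made version of this construction, using quantile() on two simulated samples (everything here is illustrative), looks like:

```r
set.seed(2)
A <- rnorm(100)                 # two samples from the same distribution
B <- rnorm(100)

probs <- seq(0.01, 0.99, by = 0.01)
xi <- quantile(A, probs)        # i-th quantiles of A
yi <- quantile(B, probs)        # i-th quantiles of B

plot(xi, yi, main = "Q-Q plot of A vs. B",
     xlab = "Quantiles of A", ylab = "Quantiles of B")
abline(0, 1)                    # the 45-degree line the points should follow
```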
In many instances we need to know if a sample comes from a normal distribution. In this case we compare the sample quantiles against the calculated quantiles of a normal distribution.
The Q-Q Normal Plot of A is then the Q-Q plot of A against the standard normal distribution. NOTE: This will be a straight line if the distribution of A is normal of any mean and standard deviation. Happily, there is an R function that does all of this: qqnorm.
[Q-Q normal plots: Theoretical Quantiles on the x axis, Sample Quantiles on the y axis]
Q-Q Line
The Q-Q line is drawn so that it passes through the first and third quartiles. All of the points should be on this line when the sample is normal. In this example the distribution appears to be shifted to the left from a normal.
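In R, qqnorm() draws the points and qqline() adds this line; a short sketch on simulated data:

```r
set.seed(3)
v <- rnorm(100, mean = 2, sd = 0.5)  # normal, but not standard normal

qqnorm(v)   # sample quantiles against standard normal quantiles
qqline(v)   # line through the first and third quartiles

# A normal sample of any mean and sd still gives a straight line;
# a shifted or skewed sample bends away from the qqline.
```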
[Scatter plot of v2 against v1, and a Q-Q normal plot: Theoretical Quantiles against Sample Quantiles]
Test of a null hypothesis against an alternative hypothesis. There are five steps, the first four of which should be done before inspecting the data. Step 1. Declare the null hypothesis H0 and the alternative hypothesis H1. In a sequence matching problem, H0 may be that two sequences are uniformly independent, in which case the probability of a match is 0.25. H1 may be "probability of a match = 0.35", or "probability of a match > 0.25".
A hypothesis that completely specifies the parameters is called simple. If it leaves some parameter undetermined it is composite. A hypothesis is one-sided if it proposes that a parameter is > some value or < some value; it is two-sided if it simply says the parameter is ≠ some value.
Types of Error
Rejecting H0 when it is actually true is called a Type I Error. In biomedical settings it can be considered a false positive. (The null hypothesis says nothing is happening, but we decide there is disease.) Step 2. Specify an acceptable level of Type I error, α, normally 0.05 or 0.01. This is the threshold used in deciding to reject H0 or not. If α = 0.05 and we determine the probability of our data assuming H0 is 0.0001, then we reject H0.
Step 3. Select a test statistic. This is a quantity calculated from the data whose value leads me to reject the null hypothesis or not. For matching sequences one choice would be the number of matches. For a contingency table compute Chi-squared. Normally compute the value of the statistic from the data assuming H0 is true. A great deal of theory, experience and care can go into selecting the right statistic.
Step 4. Identify the values of the test statistic that lead to rejection of the null hypothesis. This is the rejection region: the set of values so extreme that, assuming H0 is true, the statistic falls among them with probability at most α.
The statistic for the number Y of matches between two sequences of nucleotides is a binomial random variable. Let n be the common length of the two sequences (assume they are the same). Under the null hypothesis that there are only random connections between the sequences, the probability of a match at any point is p = 0.25. We reject the null hypothesis if the observed value of Y is so large that the chance of obtaining it is < 0.05.
There is a specific formula for the probability of Y matches in n trials with probability of a match = 0.25. We can similarly calculate the significance threshold K so that Prob(Y ≥ K | p = 0.25) ≤ 0.05. When n = 100, Prob(Y ≥ 32) = .069 and Prob(Y ≥ 33) = .044. Take 33 as the significance threshold. Reject the null hypothesis if there are at least 33 matches.
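These threshold probabilities can be checked with the binomial distribution function pbinom, since Prob(Y ≥ k) = 1 − pbinom(k − 1, n, p):

```r
n <- 100
p <- 0.25

# Upper-tail probabilities under the null hypothesis
1 - pbinom(31, n, p)   # Prob(Y >= 32): about 0.069, still above 0.05
1 - pbinom(32, n, p)   # Prob(Y >= 33): about 0.044, below 0.05

# qbinom finds the cutoff directly: smallest k with Prob(Y <= k) >= 0.95
qbinom(0.95, n, p) + 1 # the significance threshold, 33
```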
Step 5. Obtain the data, calculate the value of the statistic assuming the null hypothesis and compare with the threshold.
P-Values
Substitute for Step 4
Once the data are obtained, calculate the null hypothesis probability of obtaining the observed value of the statistic or one more extreme. This is called the p-value. If it is less than the selected Type I Error threshold, then we reject the null hypothesis.
P-Values
Example
Compare sequences of length 26 under the null hypothesis of only random matches; i.e., p = 0.25. Suppose there are 11 matches in our data. In a binomial distribution with 26 trials and p = 0.25, the probability of 11 or more matches is about 0.04. So, with the Type I Error rate, α, at 0.05 we would reject the null hypothesis.
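The same number comes straight from pbinom; this single line is the whole p-value computation:

```r
# p-value: probability, under H0 (p = 0.25), of 11 or more matches in 26
pval <- 1 - pbinom(10, 26, 0.25)
pval          # about 0.04

pval < 0.05   # TRUE: reject the null hypothesis at alpha = 0.05
```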