## Chi-square Tutorial - Radford University | Virginia

Basically, you reject the null hypothesis when your test value falls into the rejection region. There are four main ways you’ll compute test values and either support or reject your null hypothesis. Which method you choose depends mainly on the type of statistic you are testing, such as a proportion.


As with most test statistics, the larger the difference between observed and expected, the larger the test statistic becomes. To give an example, let's say your null hypothesis is a 3:1 ratio of smooth wings to wrinkled wings in offspring from a bunch of *Drosophila* crosses. You observe 770 flies with smooth wings and 230 flies with wrinkled wings; the expected values are 750 smooth-winged and 250 wrinkled-winged flies. Entering these numbers into the equation, the chi-square value is 2.13. If you had observed 760 smooth-winged flies and 240 wrinkled-wing flies, which is closer to the null hypothesis, your chi-square value would have been smaller, at 0.53; if you'd observed 800 smooth-winged and 200 wrinkled-wing flies, which is further from the null hypothesis, your chi-square value would have been 13.33.
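The three scenarios above are easy to verify numerically; here is a minimal stdlib-only sketch of the arithmetic:

```python
# Expected counts under the 3:1 null hypothesis for 1000 offspring
expected = [750, 250]

# Three observed outcomes: the actual data, one closer to the null, one further
results = {}
for observed in ([770, 230], [760, 240], [800, 200]):
    chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    results[tuple(observed)] = round(chi2, 2)

print(results)  # {(770, 230): 2.13, (760, 240): 0.53, (800, 200): 13.33}
```

The middle scenario, closest to the 3:1 expectation, gives the smallest statistic, exactly as described above.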



Mannan and Meslow (1984) studied bird foraging behavior in a forest in Oregon. In a managed forest, 54% of the canopy volume was Douglas fir, 40% was ponderosa pine, 5% was grand fir, and 1% was western larch. They made 156 observations of foraging by red-breasted nuthatches; 70 observations (45% of the total) in Douglas fir, 79 (51%) in ponderosa pine, 3 (2%) in grand fir, and 4 (3%) in western larch. The biological null hypothesis is that the birds forage randomly, without regard to what species of tree they're in; the statistical null hypothesis is that the proportions of foraging events are equal to the proportions of canopy volume. The difference in proportions is significant (chi-square=13.59, 3 d.f., *P*=0.0035).
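This result can be reproduced with a few lines of stdlib-only Python (a sketch; 7.815 is the critical chi-square value for 3 d.f. at *P*=0.05):

```python
# Canopy volume proportions define the expected foraging counts
canopy = {"Douglas fir": 0.54, "ponderosa pine": 0.40,
          "grand fir": 0.05, "western larch": 0.01}
observed = {"Douglas fir": 70, "ponderosa pine": 79,
            "grand fir": 3, "western larch": 4}

n = sum(observed.values())  # 156 foraging observations
chi2 = sum((observed[tree] - n * p) ** 2 / (n * p)
           for tree, p in canopy.items())

print(round(chi2, 2))   # 13.59
# Four categories, extrinsic null hypothesis -> 3 degrees of freedom;
# 13.59 exceeds the 0.05 critical value of 7.815, so reject the null
print(chi2 > 7.815)     # True
```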


The shape of the chi-square distribution depends on the number of degrees of freedom. For an extrinsic null hypothesis (the much more common situation, where you know the proportions predicted by the null hypothesis before collecting the data), the number of degrees of freedom is simply the number of values of the variable, minus one. Thus if you are testing a null hypothesis of a 1:1 sex ratio, there are two possible values (male and female), and therefore one degree of freedom. This is because once you know how many of the total are females (a number which is "free" to vary from 0 to the sample size), the number of males is determined. If there are three values of the variable (such as red, pink, and white), there are two degrees of freedom, and so on.


McDonald (1989) examined variation at the *Mpi* locus in the amphipod crustacean *Platorchestia platensis* collected from a single location on Long Island, New York. There were two alleles, *Mpi*^{90} and *Mpi*^{100}, and the genotype frequencies in samples from multiple dates pooled together were 1203 *Mpi*^{90/90}, 2919 *Mpi*^{90/100}, and 1678 *Mpi*^{100/100}. The estimate of the *Mpi*^{90} allele proportion from the data is 5325/11600=0.459. Using the Hardy-Weinberg formula and this estimated allele proportion, the expected genotype proportions are 0.211 *Mpi*^{90/90}, 0.497 *Mpi*^{90/100}, and 0.293 *Mpi*^{100/100}. There are three categories (the three genotypes) and one parameter estimated from the data (the *Mpi*^{90} allele proportion), so there is one degree of freedom. The result is chi-square=1.08, 1 d.f., *P*=0.299, which is not significant. You cannot reject the null hypothesis that the data fit the expected Hardy-Weinberg proportions.
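A stdlib-only sketch of this intrinsic-hypothesis calculation follows; it matches the text by using the expected proportions rounded to three decimal places, and it uses the closed-form *P*-value for one degree of freedom, *P* = erfc(√(chi²/2)):

```python
import math

observed = {"90/90": 1203, "90/100": 2919, "100/100": 1678}
n = sum(observed.values())                                    # 5800 amphipods

# Estimated Mpi90 allele proportion: (2*1203 + 2919) / 11600
p90 = (2 * observed["90/90"] + observed["90/100"]) / (2 * n)

# Hardy-Weinberg expected genotype proportions, rounded as in the text
expected_prop = {"90/90": 0.211, "90/100": 0.497, "100/100": 0.293}
chi2 = sum((observed[g] - n * pr) ** 2 / (n * pr)
           for g, pr in expected_prop.items())

# Three genotypes, minus one, minus one parameter estimated from
# the data (the allele proportion) = 1 degree of freedom
p_value = math.erfc(math.sqrt(chi2 / 2))
print(round(p90, 3), round(chi2, 2), round(p_value, 2))
```

Because only one parameter is estimated, the degrees of freedom drop from two to one; the *P*-value formula above is the survival function of the chi-square distribution with 1 d.f.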


You calculate the test statistic by taking an observed number (*O*), subtracting the expected number (*E*), then squaring this difference. The larger the deviation from the null hypothesis, the larger the difference between observed and expected becomes, and squaring the differences makes them all positive. You then divide each squared difference by the expected number, and you add up these standardized differences. The test statistic is approximately equal to the log-likelihood ratio used in the *G*-test of goodness-of-fit. It is conventionally called a "chi-square" statistic, although this is somewhat confusing because it's just one of many test statistics that follow the theoretical chi-square distribution. The equation is

chi^{2} = Σ(*O*−*E*)^{2}/*E*

summed over all categories of the variable.
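The verbal recipe above maps directly onto a short function (an illustrative stdlib-only sketch; `chi_square` is not from any particular statistics library):

```python
def chi_square(observed, expected):
    """Goodness-of-fit statistic: the sum over categories of (O - E)^2 / E."""
    if len(observed) != len(expected):
        raise ValueError("observed and expected must have the same length")
    # Subtract, square (making every term positive), standardize by E, and sum
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# The fly-cross data from earlier on this page
print(round(chi_square([770, 230], [750, 250]), 2))  # 2.13
```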