Assumptions
- The populations from which the samples were obtained must be normally or approximately normally distributed.
- The samples must be independent.
- The variances of the populations must be equal.
Hypotheses
The null hypothesis is that all population means are equal; the alternative hypothesis is that at least one mean is different. In the following, lower case letters apply to the individual samples and capital letters apply to the entire set collectively. That is, n is one of many sample sizes, but N is the total sample size.
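As a sketch in symbols (the subscripted $\mu_i$ notation is added here, not part of the original), for k populations:

$$H_0:\ \mu_1 = \mu_2 = \cdots = \mu_k \qquad\qquad H_1:\ \mu_i \neq \mu_j \ \text{for at least one pair}\ i \neq j$$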
Grand Mean
The grand mean of a set of samples is the total of all the data values divided by
the total sample size. This requires that you have all of the sample data available
to you, which is usually the case, but not always. It turns out that all that is
necessary to perform a one-way analysis of variance are the number of samples, the sample
means, the sample variances, and the sample sizes.
Another way to find the grand mean is to find the weighted average of the
sample means. The weight applied is the sample size.
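Both descriptions give the same formula; writing $\bar{x}_{GM}$ for the grand mean and $\bar{x}_i$, $n_i$ for the mean and size of the $i$-th of the $k$ samples:

$$\bar{x}_{GM} = \frac{\sum x}{N} = \frac{\sum_{i=1}^{k} n_i\,\bar{x}_i}{\sum_{i=1}^{k} n_i}$$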
Total Variation
There are two pieces of variation to account for: the between group variation and the within group variation; the total variation is their sum. The whole idea behind the analysis of variance is to compare the ratio of the between group variance to the within group variance. If the variance caused by the interaction between the samples is much larger than the variance that appears within each group, that is evidence that the means are not all the same.
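As a sketch in symbols (the decomposition itself is standard; only the Total row of the summary table below states it explicitly), with $\bar{x}_{GM}$ the grand mean:

$$SS(T) = \sum\left(x - \bar{x}_{GM}\right)^2 = SS(B) + SS(W)$$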
Between Group Variation
The variance due to the interaction between the samples is denoted MS(B), for Mean Square Between groups. This is the between group variation, SS(B), divided by its degrees of freedom, k - 1.
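The between group sum of squares itself is not written out above; the standard formula, in the notation already in use (a sketch, not quoted from the original), measures how far each sample mean lies from the grand mean, weighted by sample size:

$$SS(B) = \sum_{i=1}^{k} n_i\left(\bar{x}_i - \bar{x}_{GM}\right)^2, \qquad MS(B) = \frac{SS(B)}{k-1}$$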
Within Group Variation
The variance due to the differences within individual samples is denoted MS(W), for Mean Square Within groups. This is the within group variation, SS(W), divided by its degrees of freedom, N - k.
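Likewise, the within group sum of squares pools the individual sample variances $s_i^2$ (again a standard formula added here as a sketch):

$$SS(W) = \sum_{i=1}^{k} (n_i - 1)\,s_i^2, \qquad MS(W) = \frac{SS(W)}{N-k}$$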
F test statistic
Recall that an F variable is the ratio of two independent chi-square variables
divided by their respective degrees of freedom. Also recall that the F test
statistic is the ratio of two sample variances; it turns out that is exactly what
we have here. The F test statistic is found by dividing the between group
variance by the within group variance. The degrees of freedom for the
numerator are the degrees of freedom for the between group (k-1) and the
degrees of freedom for the denominator are the degrees of freedom for the within group (N-k).
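Putting the pieces together, here is a minimal sketch in Python of the computation described above, using only the summaries that the Grand Mean section says are sufficient (the function and variable names are illustrative, not from the original):

```python
# A minimal sketch (not from the original notes) of one-way ANOVA computed
# from summary statistics only: sample sizes, sample means, sample variances.

def one_way_anova_from_summaries(group_sizes, group_means, group_vars):
    """Return (F, df_between, df_within) for a one-way ANOVA."""
    k = len(group_sizes)                 # number of samples
    N = sum(group_sizes)                 # total sample size

    # Grand mean as the weighted average of the sample means.
    grand_mean = sum(n * m for n, m in zip(group_sizes, group_means)) / N

    # Between group variation and its mean square, df = k - 1.
    ss_between = sum(n * (m - grand_mean) ** 2
                     for n, m in zip(group_sizes, group_means))
    ms_between = ss_between / (k - 1)

    # Within group variation and its mean square, df = N - k.
    ss_within = sum((n - 1) * v for n, v in zip(group_sizes, group_vars))
    ms_within = ss_within / (N - k)

    # F test statistic: between group variance over within group variance.
    return ms_between / ms_within, k - 1, N - k


# Hypothetical example: three samples summarized by size, mean, and variance.
F, df1, df2 = one_way_anova_from_summaries(
    group_sizes=[5, 6, 5],
    group_means=[10.2, 12.5, 9.8],
    group_vars=[2.1, 3.4, 2.8],
)
print(f"F({df1}, {df2}) = {F:.3f}")
```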
Summary Table
All of this sounds like a lot to remember, and it is. However, there is a table which makes things really nice.

| Source  | SS            | df  | MS          | F           |
|---------|---------------|-----|-------------|-------------|
| Between | SS(B)         | k-1 | SS(B)/(k-1) | MS(B)/MS(W) |
| Within  | SS(W)         | N-k | SS(W)/(N-k) |             |
| Total   | SS(W) + SS(B) | N-1 |             |             |
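If the raw data are available, the same F statistic can be cross-checked with SciPy (SciPy is not mentioned in the notes above; this is just a convenience check). `scipy.stats.f_oneway` takes the samples themselves and returns the F statistic and its p-value:

```python
# Sketch of a cross-check on raw data; the sample values are hypothetical.
from scipy import stats

sample_a = [9.1, 11.0, 10.5, 9.8, 10.6]
sample_b = [12.0, 13.1, 11.8, 12.9, 12.4, 12.8]
sample_c = [9.2, 10.1, 9.5, 10.4, 9.8]

result = stats.f_oneway(sample_a, sample_b, sample_c)
print(result.statistic, result.pvalue)
```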