To explain the idea behind ANOVA, let's consider two hypothetical outcomes to our experiment. In the left display, we have a situation where the differences between the sample means are large relative to the variability within the groups: the differences between the first and second mean, between the second and third, and between the first and third are all large compared to the spread of the observations within each of the three groups. So, this suggests that there's evidence that the means in the three groups are different. In contrast, in the display on the right, we see that the differences in the means are quite small compared to the variability within each group. This suggests that the differences in the means may simply be due to sampling variability.

So, the idea here is to compare the sample variance of the means to the sample variance within the groups. Remember, the sample variance is simply the square of the sample standard deviation. This is why this whole methodology is called Analysis of Variance. However, things are not so easy that we can simply look at the box plots. The reason is that, because of the square root law, the chance variability in the sample mean is smaller than the chance variability of the data. So, we have to do some kind of computation to assess the situation.

If we have k groups, we write down the data as follows. The observations of the first group go into the first column, and each observation has two indices. The first index simply counts the observations, and that runs from 1 up to some number n1. The second index simply means we are in group 1. We do the same thing for group 2, so there are n2 observations in the second group, all the way up to group k, where there are nk observations.
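As a concrete illustration of this layout, here is a minimal Python sketch with made-up data (the values and group sizes are hypothetical, chosen only to show that the groups need not be the same size):

```python
# Three hypothetical treatment groups with unequal sizes (n1=4, n2=3, n3=5).
groups = [
    [5.1, 4.9, 5.3, 5.0],        # group 1: n1 = 4 observations
    [6.2, 6.0, 6.4],             # group 2: n2 = 3 observations
    [4.8, 5.2, 4.7, 5.0, 4.9],   # group 3: n3 = 5 observations
]

# Capital N: the total number of observations, the sum of the little n's.
N = sum(len(g) for g in groups)

# y bar sub j: the mean of the jth group.
group_means = [sum(g) / len(g) for g in groups]

# y bar bar: the grand mean, i.e. the mean of ALL N observations
# (not the mean of the group means, since the group sizes differ).
grand_mean = sum(x for g in groups for x in g) / N

print(N, group_means, grand_mean)
```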
So, in particular, this means that there doesn't have to be the same number of observations in each group. In total, there are capital N observations, which is simply the sum of the little n's. Finally, y bar sub j is simply the mean of the jth group, and y bar bar is the overall mean, which is also called the grand mean.

The analysis of variance computes two important quantities. The first one is called the treatment sum of squares. There, we simply look at the difference of the jth group mean from the overall mean, we square the difference, and then we sum over all rows and columns. This term has k - 1 degrees of freedom. If we divide the term by its degrees of freedom, we get what's called the treatment mean square, MST. The treatment mean square is essentially the sample variance of the treatment means, so it measures the variability of the treatment means y bar j.

The other quantity we are interested in is the error sum of squares. For that, we look at the squared difference of each observation and its corresponding group mean, and then again we sum over all rows and columns. This term has N - k degrees of freedom. Again, dividing by the degrees of freedom gives what's called the error mean square, MSE, and that measures the variability within the groups.

So, these two terms, the treatment mean square and the error mean square, make formal the idea we had on the previous slide, where we looked at the variability between the treatment means and the variability within the groups.
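These two computations can be sketched in a few lines of Python. The data below are hypothetical, used only to make the formulas concrete; note that summing the squared group-mean deviations "over all rows and columns" means each group mean is counted once per observation in its group:

```python
# Hypothetical data: k = 3 groups with unequal sizes.
groups = [
    [5.1, 4.9, 5.3, 5.0],
    [6.2, 6.0, 6.4],
    [4.8, 5.2, 4.7, 5.0, 4.9],
]

k = len(groups)                              # number of groups
N = sum(len(g) for g in groups)              # total number of observations
group_means = [sum(g) / len(g) for g in groups]
grand_mean = sum(x for g in groups for x in g) / N

# Treatment sum of squares: squared difference of each group mean from the
# grand mean, summed over all rows and columns (so the jth group's term
# appears n_j times).
SST = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, group_means))
MST = SST / (k - 1)   # treatment mean square: k - 1 degrees of freedom

# Error sum of squares: squared difference of each observation from its own
# group mean, summed over all rows and columns.
SSE = sum((x - m) ** 2 for g, m in zip(groups, group_means) for x in g)
MSE = SSE / (N - k)   # error mean square: N - k degrees of freedom

print(MST, MSE)
```

MST measures the variability between the treatment means, and MSE the variability within the groups; comparing the two is exactly the informal box-plot comparison made precise.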