Week 8 Discussion

Consider the different post hoc tests discussed in the readings and respond to the following:

- Describe the general rationale behind using post hoc tests (i.e., when they are used and why). One of the advantages of using an ANOVA (compared to using t-tests) is also a disadvantage: using an ANOVA makes it necessary to use post hoc tests if there is a significant main effect, and that need stems from the very advantage the ANOVA provides.
- Explain why using an ANOVA naturally leads to the need for post hoc tests (hint: consider what you are examining when you conduct a post hoc analysis).
- Conducting a post hoc test is similar to conducting multiple t-tests, so it would seem natural to want to bypass the ANOVA and just use repeated t-tests. Explain why this approach is not necessarily a good idea and why an ANOVA followed by a post hoc analysis is beneficial.
- Describe an experimental hypothesis and explain which post hoc test you would use if you find a significant overall effect. Include in your explanation the pros and cons of each test in making your decision.
An ANOVA tells us whether there is an overall difference between our groups, but it does not tell us which specific groups differed; post hoc tests do. Because post hoc tests are run to confirm where the differences between groups occurred, they should only be run when we have shown an overall statistically significant difference in group means (i.e., a statistically significant one-way ANOVA result). Post hoc tests attempt to control the experimentwise (familywise) error rate (usually alpha = 0.05) in the same way that a one-way ANOVA is used in place of multiple t-tests. Post hoc tests are termed a posteriori tests; that is, they are performed after the event (the event in this case being the study).
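To make that workflow concrete, here is a minimal sketch in Python (the three groups and their scores are made-up illustration data, not from the readings): run the omnibus one-way ANOVA first, and only follow up with Tukey's HSD pairwise comparisons if the overall effect is significant.

```python
# Minimal sketch: omnibus one-way ANOVA first, post hoc comparisons only if significant.
# The three groups (A, B, C) and their scores are hypothetical illustration data.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

group_a = [82, 79, 88, 91, 76, 85]
group_b = [70, 74, 68, 77, 72, 69]
group_c = [80, 83, 78, 86, 81, 84]

# Omnibus test: is there any overall difference among the group means?
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"One-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:
    # Post hoc step: which specific pairs of groups differ?
    scores = np.concatenate([group_a, group_b, group_c])
    labels = ["A"] * len(group_a) + ["B"] * len(group_b) + ["C"] * len(group_c)
    print(pairwise_tukeyhsd(endog=scores, groups=labels, alpha=0.05))
else:
    print("Overall effect not significant; no post hoc tests are run.")
```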
For a one-way ANOVA, typically only two post hoc tests need to be considered. If the data meet the assumption of homogeneity of variances, we use Tukey's honestly significant difference (HSD) post hoc test (in SPSS Statistics, Tukey's HSD test is simply referred to as "Tukey" in the post hoc multiple comparisons dialogue box). If the data do not meet the homogeneity of variances assumption, we should consider running the Games-Howell post hoc test instead.
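That decision rule can be sketched in code as well. The snippet below (same hypothetical data as above) checks homogeneity of variances with Levene's test and then picks between Tukey's HSD and Games-Howell; pingouin's pairwise_gameshowell function is assumed for the Games-Howell step.

```python
# Sketch of the test-selection logic: check homogeneity of variances with Levene's
# test, then choose Tukey's HSD or Games-Howell accordingly. Data are hypothetical.
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd
import pingouin as pg

df = pd.DataFrame({
    "score": [82, 79, 88, 91, 76, 85, 70, 74, 68, 77, 72, 69, 80, 83, 78, 86, 81, 84],
    "group": ["A"] * 6 + ["B"] * 6 + ["C"] * 6,
})

samples = [g["score"].values for _, g in df.groupby("group")]
levene_stat, levene_p = stats.levene(*samples)

if levene_p > 0.05:
    # No evidence against equal variances: Tukey's HSD is appropriate.
    print(pairwise_tukeyhsd(endog=df["score"], groups=df["group"], alpha=0.05))
else:
    # Homogeneity of variances is doubtful: use Games-Howell instead.
    print(pg.pairwise_gameshowell(data=df, dv="score", between="group"))
```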
Every time we conduct a t-test there is a chance of making a Type I error, usually set at 5% (alpha = 0.05). Running two t-tests on the same data roughly doubles our chance of "making a mistake" to about 10%. The exact familywise error rate for multiple independent t-tests is 1 - (1 - alpha)^c, where c is the number of comparisons, so it is not simply 5% multiplied by the number of tests; for a small number of comparisons, however, the results are very similar (three t-tests give roughly 15%, and so on). These inflated error rates are unacceptable. An ANOVA controls for this so that the Type I error rate stays at 5%, and we can be more confident that any statistically significant result we find is not just a by-product of running lots of tests.
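A quick calculation shows how fast the familywise error rate grows under the stated formula (this is a sketch assuming independent comparisons, each run at alpha = 0.05):

```python
# Familywise (experimentwise) Type I error rate for c independent comparisons,
# each run at alpha = 0.05: 1 - (1 - alpha)**c.
alpha = 0.05
for c in range(1, 7):
    familywise = 1 - (1 - alpha) ** c
    print(f"{c} comparison(s): familywise error rate = {familywise:.3f}")
# Grows from 0.050 (one test) to about 0.143 (three tests) and 0.265 (six tests),
# which is why repeated t-tests without an ANOVA or a correction are risky.
```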