In: Statistics and Probability
When should you run post hoc tests?
Why would you conduct a post hoc test?
Explain why you would not just run three separate t-tests: one comparing groups 1 and 2, one comparing groups 1 and 3, and one comparing groups 2 and 3.
Provide an example in which you would need to run a post-hoc test.
Every time you conduct a t-test there is a chance of making a Type I error, usually set at 5%. Running two t-tests on the same data increases the chance of "making a mistake" to roughly 10%. The familywise error rate for k independent tests is 1 - (1 - 0.05)^k, not simply 5% multiplied by the number of tests, but for a small number of comparisons the two are very similar: three t-tests give roughly 15% (actually 14.3%), and so on. These error rates are unacceptable. An ANOVA controls for this so that the Type I error remains at 5%, and we can be more confident that any statistically significant result is not just an artifact of running lots of tests.
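The inflation of the Type I error rate described above can be checked with a quick sketch (the 1 - (1 - alpha)^k formula assumes the tests are independent):

```python
# Familywise Type I error rate for k independent tests at alpha = 0.05:
# P(at least one Type I error) = 1 - (1 - alpha)^k
alpha = 0.05
for k in (1, 2, 3):
    fwer = 1 - (1 - alpha) ** k
    print(f"{k} test(s): familywise error rate = {fwer:.4f}")
```

This prints 0.05 for one test, 0.0975 (about 10%) for two, and 0.1426 (about 14.3%) for three, matching the figures quoted above.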
If the between-groups F-test in a one-way ANOVA is significant, we reject the null hypothesis that all the group means are equal. We then run a post-hoc test to find which pairwise comparisons are significant.
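As a minimal sketch of this workflow, a one-way ANOVA can be run with `scipy.stats.f_oneway` (the three groups and their scores below are invented for illustration):

```python
from scipy import stats

# Hypothetical scores for three groups (data invented for illustration)
group1 = [10, 12, 11, 13, 12, 11]
group2 = [14, 15, 16, 14, 15, 16]
group3 = [10, 11, 10, 12, 11, 10]

# One-way ANOVA: tests the null hypothesis that all group means are equal
f_stat, p_value = stats.f_oneway(group1, group2, group3)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# If p < 0.05, reject the null and follow up with a post-hoc test
# to find which specific pairs of groups differ.
```

With these made-up data the F-test is significant, so a post-hoc test would be the next step.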
Post hoc (Latin, meaning "after this") tests analyze the results of your experimental data after the overall test. They are often based on the familywise error rate: the probability of at least one Type I error in a set (family) of comparisons. One of the most common post-hoc tests is Fisher's Least Significant Difference (LSD).
Example:

LSD = sqrt(2*MSE/r) * t(alpha/2, error df) = sqrt(2*6.508/6) * t(0.025, 20) = sqrt(2*6.508/6) * 2.086 ≈ 3.07
ANOVA

| Source of Variation | SS | df | MS | F | P-value | F crit |
|---|---|---|---|---|---|---|
| Between Groups | 382.7917 | 3 | 127.5972 | 19.6052 | 3.59E-06 | 3.0984 |
| Within Groups | 130.1667 | 20 | 6.5083 | | | |
| Total | 512.9583 | 23 | | | | |
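The LSD calculation can be sketched directly from the ANOVA table values (MSE is the within-groups mean square; r = 6 replicates per group follows from 24 total observations in 4 groups):

```python
import math
from scipy import stats

# Values taken from the ANOVA table above
mse = 6.508333    # Mean Square Within (error)
df_error = 20     # error degrees of freedom
r = 6             # observations per group (24 total / 4 groups)
alpha = 0.05

# Two-tailed critical t value at alpha/2 with the error df
t_crit = stats.t.ppf(1 - alpha / 2, df_error)

# Fisher's Least Significant Difference
lsd = math.sqrt(2 * mse / r) * t_crit
print(f"t({alpha/2}, {df_error}) = {t_crit:.3f}, LSD = {lsd:.2f}")
```

Any pair of group means that differ by more than the LSD (about 3.07 here) is declared significantly different.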