What is the cause of an increased risk of Type I errors when t-tests are conducted, and how might researchers eliminate this increased risk in a study?
1.
Sometimes a t-test fails to detect a statistically significant difference. This can happen for two reasons: either no difference truly exists, or the test has too little power to detect it because the sample size is small. A larger sample size gives the test more power, enabling it to detect a difference when one truly exists.
To increase the power of a test, therefore, the most direct step is to increase the sample size, as the simulation below illustrates.
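The hypothetical simulation below (a minimal sketch; NumPy, SciPy, and all of the numbers used are illustrative assumptions not taken from the original answer) shows this directly: two groups that truly differ by half a standard deviation are sampled repeatedly, and the fraction of runs in which the t-test reaches p < 0.05 estimates its power at each sample size.

```python
# Minimal sketch: estimate the power of a two-sample t-test by simulation.
# The effect size (0.5 SD) and sample sizes are made up for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05
true_diff = 0.5          # the real difference between the two populations
n_simulations = 2000

for n in (10, 30, 100):  # per-group sample sizes
    detections = 0
    for _ in range(n_simulations):
        group_a = rng.normal(0.0, 1.0, n)
        group_b = rng.normal(true_diff, 1.0, n)
        if stats.ttest_ind(group_a, group_b).pvalue < alpha:
            detections += 1
    # The detection rate estimates the power of the test at this sample size.
    print(f"n per group = {n:3d}  estimated power = {detections / n_simulations:.2f}")
```

The detection rate rises steeply as n grows, which is the concrete sense in which a larger sample "gives the test more power."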
The increased risk of a Type I error, however, does not come from any single t-test but from running many t-tests in the same study, for example comparing every pair of groups instead of using one overall test. Each test carries its own chance α of rejecting a true null hypothesis, so the probability of making at least one false rejection grows with every additional test. Researchers can reduce this risk by comparing all groups at once with a single test such as ANOVA, or by applying a multiple-comparison correction such as Bonferroni (dividing α by the number of tests), as the simulation below shows.
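In the hypothetical simulation below (the group count, sample size, and use of SciPy are illustrative assumptions, not part of the original answer), five groups are drawn from the same population, so every significant pairwise t-test is a false positive. It estimates how often at least one Type I error occurs across the ten pairwise tests, with and without a Bonferroni-corrected threshold.

```python
# Minimal sketch: 5 groups drawn from the SAME normal population, so any
# significant pairwise t-test is a Type I error.  Compare the familywise
# error rate with the raw threshold alpha vs. the Bonferroni threshold alpha/k.
import itertools
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, n_groups, n_per_group, n_simulations = 0.05, 5, 20, 2000
k = n_groups * (n_groups - 1) // 2   # number of pairwise t-tests (10 here)

errors_raw = errors_bonferroni = 0
for _ in range(n_simulations):
    groups = [rng.normal(0.0, 1.0, n_per_group) for _ in range(n_groups)]
    pvals = [stats.ttest_ind(a, b).pvalue
             for a, b in itertools.combinations(groups, 2)]
    errors_raw += min(pvals) < alpha             # any uncorrected false rejection
    errors_bonferroni += min(pvals) < alpha / k  # any false rejection after correction

print(f"{k} pairwise t-tests per simulated study")
print(f"P(at least one Type I error), uncorrected: {errors_raw / n_simulations:.2f}")
print(f"P(at least one Type I error), Bonferroni:  {errors_bonferroni / n_simulations:.2f}")
```

For k fully independent tests at level α the familywise error rate is 1 − (1 − α)^k, about 0.40 for k = 10 and α = 0.05; the pairwise tests here share data, so the simulated uncorrected rate will be somewhat lower, but still several times the nominal 5%, while the Bonferroni correction pulls it back near 0.05.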
2.
The probability of committing a Type I error (rejecting the null hypothesis when it is actually true) is called α (alpha); it is also known as the level of statistical significance.
If a study is designed with α = 0.05, for example, then the investigator has set 5% as the maximum chance of incorrectly rejecting the null hypothesis (and erroneously inferring that an effect exists). This is the level of reasonable doubt that the investigator is willing to accept when using statistical tests to analyze the data after the study is completed.
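As a concrete illustration (a minimal sketch; the sample size and number of simulated studies are arbitrary assumptions), the code below generates data for which the null hypothesis is true by construction and confirms that a t-test conducted at α = 0.05 falsely rejects it in roughly 5% of studies.

```python
# Minimal sketch of what alpha = 0.05 means: both groups come from the SAME
# population, so the null hypothesis is true, yet a level-0.05 t-test still
# rejects it in about 5% of simulated studies.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
alpha, n, n_simulations = 0.05, 30, 5000

false_rejections = 0
for _ in range(n_simulations):
    group_a = rng.normal(0.0, 1.0, n)   # H0 is true: same mean in both groups
    group_b = rng.normal(0.0, 1.0, n)
    if stats.ttest_ind(group_a, group_b).pvalue < alpha:
        false_rejections += 1

print(f"Observed Type I error rate: {false_rejections / n_simulations:.3f} "
      f"(chosen alpha = {alpha})")
```

Choosing a smaller α (say 0.01) lowers this false-rejection rate, but at the cost of reduced power, which connects point 2 back to the sample-size discussion in point 1.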