In: Statistics and Probability
On what basis would a researcher decide to select α = 0.01 instead of 0.05?
Reducing the alpha level from 0.05 to 0.01 reduces the chance of a false positive (a Type I error), but it also makes it harder to detect differences with a t-test. Any significant results you obtain would therefore be more trustworthy, but there would probably be fewer of them.
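A minimal sketch of this trade-off, using hypothetical data: the same t-test p-value can lead to rejecting the null hypothesis at α = 0.05 but not at α = 0.01. The group means, spreads, and sample sizes below are made up for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group_a = rng.normal(loc=10.0, scale=2.0, size=30)  # hypothetical control scores
group_b = rng.normal(loc=11.0, scale=2.0, size=30)  # hypothetical treatment scores

# Independent-samples t-test on the two hypothetical groups
t_stat, p_value = stats.ttest_ind(group_a, group_b)

# The same p-value is judged against two different alpha levels
for alpha in (0.05, 0.01):
    decision = "reject H0" if p_value < alpha else "fail to reject H0"
    print(f"p = {p_value:.4f}, alpha = {alpha}: {decision}")
```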
At our University we use informal language to describe the strength of evidence provided by a probability value, placing p-values into broad descriptive categories.
The alpha level you set relates to the formal decision you make rather than this informal interpretation, but both can be reported together.
Lower alpha levels are sometimes used when you are carrying out multiple tests at the same time. A common approach (a Bonferroni correction) is to divide the alpha level by the number of tests being carried out. For example, if you needed to carry out 5 tests you might start from an overall alpha level of 0.05 and divide it by 5 to obtain a per-test alpha level of 0.01. Again, the reason for this is to reduce the Type I error risk.
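A minimal sketch of that adjustment: the family-wise alpha is split equally across the planned tests, and each p-value is compared against the per-test threshold. The five p-values are hypothetical placeholders.

```python
family_alpha = 0.05
n_tests = 5
per_test_alpha = family_alpha / n_tests  # 0.01, as in the example above

p_values = [0.004, 0.030, 0.008, 0.900, 0.049]  # hypothetical results of 5 tests

for i, p in enumerate(p_values, start=1):
    decision = "significant" if p < per_test_alpha else "not significant"
    print(f"test {i}: p = {p:.3f} -> {decision} at per-test alpha = {per_test_alpha}")
```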
Significance level (α). The lower the significance level, the lower the power of the test. If you reduce the significance level (e.g., from 0.05 to 0.01), the region of acceptance gets larger, so you are less likely to reject the null hypothesis, including when it is false. You are therefore more likely to make a Type II error. In short, reducing the significance level reduces the power of the test, and vice versa.
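A minimal sketch of this relationship, using statsmodels' power calculations for an independent-samples t-test. The effect size (Cohen's d) and the per-group sample size are hypothetical; the point is simply that the computed power drops when alpha is lowered from 0.05 to 0.01.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
effect_size = 0.5   # hypothetical medium effect (Cohen's d)
n_per_group = 30    # hypothetical sample size per group

# Power of the same design at two different significance levels
for alpha in (0.05, 0.01):
    power = analysis.power(effect_size=effect_size, nobs1=n_per_group,
                           alpha=alpha, ratio=1.0, alternative='two-sided')
    print(f"alpha = {alpha}: power = {power:.3f}")
```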