As α increases, so does the power to detect an effect. Why, then, do we restrict α from being larger than .05?
The significance level α is the probability of a Type I error. A Type I error occurs when we reject a true null hypothesis, so a larger value of α increases the probability of a Type I error.
A Type II error occurs when we fail to reject a false null hypothesis, and its probability, β, decreases as α increases. Since the power of a hypothesis test is 1 − β = 1 − P(Type II error), the power increases as α increases.
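To see this relationship numerically, here is a minimal sketch assuming a one-sided z-test with known σ; the specific numbers (μ0 = 0, μ1 = 0.5, σ = 1, n = 25) are illustrative assumptions, not part of the question. Power rises steadily as α rises:

```python
# Minimal sketch: power of a one-sided z-test as a function of alpha.
# All numeric values below are illustrative assumptions.
from scipy.stats import norm

mu0, mu1, sigma, n = 0.0, 0.5, 1.0, 25       # hypothetical test setup
effect = (mu1 - mu0) / (sigma / n ** 0.5)    # standardized effect size

for alpha in (0.01, 0.05, 0.10, 0.20):
    z_crit = norm.ppf(1 - alpha)             # rejection cutoff under H0
    beta = norm.cdf(z_crit - effect)         # P(Type II error) at mu1
    print(f"alpha={alpha:.2f}  beta={beta:.3f}  power={1 - beta:.3f}")
```

With these assumed values, power climbs from about 0.57 at α = 0.01 to about 0.95 at α = 0.20, which is exactly the trade-off the question asks about.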
Now let us take the example of a manufacturing company that inspects the number of defects in a sample drawn from a lot.
For a higher value of α, if we reject the null hypothesis, there is a higher chance of rejecting a good-quality lot (the Type I error increases, which can cause a loss of revenue for the producer), and if we fail to reject the null hypothesis, there is a lower chance of accepting a bad-quality lot (the Type II error decreases, which is good for the consumer).
To reduce the producer's risk, we cannot set α too high; similarly, to reduce the consumer's risk, we cannot set α too low. To balance these two errors, a widely accepted significance level is 5%, although it can vary based on the type of study. The sketch below illustrates the trade-off.
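Here is a minimal sketch of the lot-inspection trade-off; all numbers are illustrative assumptions (a sample of n = 100 items, a "good" lot with a 2% defect rate, a "bad" lot with 8%). Accepting the lot when the observed defects are at most some cutoff c, a looser cutoff shrinks the producer's risk (α) but inflates the consumer's risk (β), and vice versa:

```python
# Minimal sketch: producer's vs. consumer's risk in acceptance sampling.
# n, p_good, p_bad, and the cutoffs are all hypothetical values.
from scipy.stats import binom

n, p_good, p_bad = 100, 0.02, 0.08       # assumed sampling plan

for c in (2, 3, 4, 5):                   # accept the lot if defects <= c
    alpha = 1 - binom.cdf(c, n, p_good)  # reject a good lot (Type I)
    beta = binom.cdf(c, n, p_bad)        # accept a bad lot (Type II)
    print(f"cutoff c={c}  producer risk={alpha:.3f}  consumer risk={beta:.3f}")
```

Under these assumptions, moving the cutoff from c = 2 to c = 5 drops the producer's risk from roughly 0.32 to 0.02 while the consumer's risk grows from about 0.01 to 0.19, so neither party can be protected for free; a middle-ground α such as 5% is the conventional compromise.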