In: Statistics and Probability
How does significance level of a statistical test relate to our decision about whether an independent variable had an effect? How does our choice of significance level impact the likelihood of making a Type I and a Type II error? Be sure to thoroughly distinguish between the two types of errors.
Suppose we need to test whether an independent variable has a significant effect.
We construct the null (H0) and alternative (H1) hypotheses as
H0: The variable has no effect vs. H1: The variable has an effect.
The test is carried out at a significance level of alpha = 0.05 (say).
The probability value below which the null hypothesis is rejected is called the α (alpha) level, or simply α. It is also called the significance level.
So if our p-value comes out smaller than the significance level alpha (0.05 here), we reject the null hypothesis; otherwise we fail to reject it.
This is how the level of significance determines our decision about whether the variable has a significant effect.
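As a minimal sketch of that decision rule (assuming, for illustration, a two-sample comparison of an outcome measured with and without the independent variable, analyzed with SciPy's `ttest_ind`):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical data: outcome measured without and with the independent variable.
control = rng.normal(loc=50.0, scale=10.0, size=40)
treatment = rng.normal(loc=55.0, scale=10.0, size=40)

alpha = 0.05  # chosen significance level

# Two-sample t-test: H0 says the two group means are equal (no effect).
t_stat, p_value = stats.ttest_ind(control, treatment)

if p_value < alpha:
    print(f"p = {p_value:.4f} < {alpha}: reject H0, evidence of an effect")
else:
    print(f"p = {p_value:.4f} >= {alpha}: fail to reject H0, no evidence of an effect")
```

The data, group means, and sample sizes here are arbitrary; only the comparison of the p-value against the chosen α matters for the decision.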
The significance level also governs the two types of errors. Suppose it is set to 0.05 (5%): we are accepting a 5% probability of rejecting the null hypothesis when it is actually true. Rejecting a true null hypothesis is a Type I error (a false positive: we conclude the variable has an effect when it does not). On the other hand, if we fail to reject the null hypothesis when the alternative is actually true, we commit a Type II error (a false negative: we miss a real effect); its probability is denoted β. Lowering α makes a Type I error less likely but, for a fixed sample size, makes a Type II error more likely.
Picture the sampling distributions under H0 and H1 plotted together: the area α is the region where we reject H0 even though H0 is true, and the area β is the region where we fail to reject H0 even though H1 is true.
The picture makes it clear that if we move the critical value (the rejection cutoff) to the right, α decreases and β increases; if we move it to the left, the opposite happens. This is the relationship between α, the level of significance, and the Type I and Type II errors.
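To see this trade-off numerically, here is a small Monte Carlo sketch (the effect size, sample size, and number of simulations are arbitrary choices for illustration). It estimates how often each type of error occurs at different α levels: simulating under H0 gives the Type I error rate, and simulating under a specific alternative gives the Type II error rate.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, n_sims = 30, 5_000
effect = 5.0  # assumed true mean shift under H1 (illustrative only)

def rejection_rate(mean_shift: float, alpha: float) -> float:
    """Fraction of simulated two-sample t-tests that reject H0 at the given alpha."""
    rejections = 0
    for _ in range(n_sims):
        a = rng.normal(50.0, 10.0, n)
        b = rng.normal(50.0 + mean_shift, 10.0, n)
        _, p = stats.ttest_ind(a, b)
        rejections += p < alpha
    return rejections / n_sims

for alpha in (0.10, 0.05, 0.01):
    type1 = rejection_rate(0.0, alpha)          # H0 true: any rejection is a Type I error
    type2 = 1 - rejection_rate(effect, alpha)   # H1 true: failing to reject is a Type II error
    print(f"alpha={alpha:.2f}  Type I ≈ {type1:.3f}  Type II ≈ {type2:.3f}")
```

The estimated Type I error rate tracks α itself, while the Type II error rate grows as α shrinks, which is exactly the trade-off described above.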