Lead concentrations in drinking water should be below the EPA action level of 15 parts per billion (ppb). We want to know whether the drinking water is safe, and will conduct the hypothesis test shown below.
H0: Water is safe vs. Ha: Water is not safe
(explanation please)
(a) Describe the Type I error in the context of the problem and give potential consequences of such an error.
(b) Describe the Type II error in the context of the problem and give potential consequences of such an error.
(c) Considering the potential consequences of Type I and Type II errors, which would be the more detrimental error to commit? Explain/defend your answer.
(d) Considering the potential consequences of the errors from (c), which significance level (1%, 5%, or 10%) should you choose? Explain why.
Solution:
(Part a) Type I error: A Type I error is the rejection of a true null hypothesis. In the context of this problem, a Type I error means the test indicates the water is not safe when in fact it is safe. The consequence is a false alarm: residents might stop using perfectly good water, and the utility could incur unnecessary costs for re-testing, treatment, or pipe replacement.
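To make the false-alarm idea concrete, here is a minimal simulation sketch (not part of the original problem). It assumes a one-sided one-sample t-test of H0: mean lead <= 15 ppb, a hypothetical sample size of 20, and a measurement standard deviation of 3 ppb; when the water is truly at the boundary of safe, the fraction of tests that wrongly flag it as unsafe settles near the chosen significance level.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05      # significance level for this illustration
n = 20            # hypothetical number of water samples per test
sigma = 3.0       # assumed measurement spread in ppb
true_mean = 15.0  # boundary case: the water is (just barely) safe

trials = 10_000
false_alarms = 0
for _ in range(trials):
    sample = rng.normal(true_mean, sigma, n)
    # One-sided t-test: reject H0 (declare the water unsafe) if the
    # sample mean is significantly above the 15 ppb action level.
    _, p_value = stats.ttest_1samp(sample, 15.0, alternative="greater")
    if p_value < alpha:
        false_alarms += 1

# Fraction of truly safe batches wrongly declared unsafe, roughly alpha.
print(f"Estimated Type I error rate: {false_alarms / trials:.3f}")
```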
(Part b) Type II error: A Type II error is the failure to reject a false null hypothesis. In the context of this problem, a Type II error means the test indicates the water is safe when in reality it is not safe. The consequence is that people continue to drink lead-contaminated water, which can cause serious health damage, particularly in children.
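By the same token, here is a rough sketch (hypothetical numbers again) estimating beta, the Type II error rate: the water is assumed truly unsafe at a mean of 17 ppb, and we count how often the same test fails to detect it.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, n, sigma = 0.05, 20, 3.0
true_mean = 17.0  # the water is actually above the 15 ppb action level

trials = 10_000
misses = 0
for _ in range(trials):
    sample = rng.normal(true_mean, sigma, n)
    _, p_value = stats.ttest_1samp(sample, 15.0, alternative="greater")
    if p_value >= alpha:  # fail to reject H0: unsafe water declared safe
        misses += 1

# This fraction estimates beta, the probability of missing contamination.
print(f"Estimated Type II error rate (beta): {misses / trials:.3f}")
```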
(Part c): Considering the potential consequences of the two errors, the more detrimental error to commit is the Type II error: declaring contaminated water safe means people keep consuming water with excessive lead, which is injurious to their health. A Type I error, by contrast, only leads to avoidable cost and inconvenience.
(Part d): Because the Type II error is the more damaging one here, we should choose the largest of the offered significance levels, 10%. For a fixed sample size, raising the significance level (alpha) lowers the probability of a Type II error (beta), giving the test more power to detect unsafe water. The cost of this choice is a higher chance of a false alarm, but as argued in (c), a false alarm is the less serious mistake.
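To illustrate the trade-off behind this choice, the following sketch (same assumed numbers: n = 20, sd = 3 ppb, true mean 17 ppb) computes beta at each of the three candidate significance levels using the noncentral t distribution; beta shrinks as alpha grows.

```python
import numpy as np
from scipy import stats

n, sigma = 20, 3.0           # assumed sample size and spread (ppb)
mu0, true_mean = 15.0, 17.0  # action level vs. assumed true contamination
df = n - 1
ncp = (true_mean - mu0) / (sigma / np.sqrt(n))  # noncentrality parameter

for alpha in (0.01, 0.05, 0.10):
    t_crit = stats.t.ppf(1 - alpha, df)    # one-sided rejection cutoff
    beta = stats.nct.cdf(t_crit, df, ncp)  # P(fail to reject | water unsafe)
    print(f"alpha = {alpha:.2f} -> beta = {beta:.3f}")
```

Under these assumed conditions the printout shows beta falling as alpha rises from 1% to 10%, which is the quantitative reason for preferring the 10% level when missing unsafe water is the costlier mistake.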