In: Math
Explain intuitively why minimizing Type I error and maximizing the power of a test are contradictory goals. Also draw the power function graph and label Type I error, Type II error, and power.
A Type I error is rejecting a true null hypothesis in favor of a false alternative hypothesis, and a Type II error is failing to reject a false null hypothesis in favor of a true alternative hypothesis. The probability of a Type I error is typically known as alpha (α), while the probability of a Type II error is typically known as beta (β).
Now on to power. Bullard describes multiple ways to interpret power correctly:
Simply put, power is the probability of not making a Type II error, according to Neil Weiss in Introductory Statistics.
Mathematically, power is 1 − β. The power of a hypothesis test lies between 0 and 1; if the power is close to 1, the test is very good at detecting a false null hypothesis. Beta is commonly set at 0.2, but may be set smaller by the researchers.
Consequently, power is conventionally at least 0.8, and may be higher. Power lower than 0.8, while not impossible, would typically be considered too low for most areas of research.
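The identity power = 1 − β can be checked by simulation. The following sketch uses a hypothetical scenario of our own choosing (a one-sided z-test of H0: μ = 0 against a true mean of 0.5, known σ = 1, n = 25, α = 0.05), not an example from the text: it repeatedly draws data under the alternative and counts how often H0 is rejected.

```python
import random
from statistics import NormalDist

# Hypothetical setup (our own illustration): one-sided z-test of
# H0: mu = 0 vs H1: mu > 0, when the true mean is actually 0.5.
random.seed(0)
mu_true, sigma, n, alpha = 0.5, 1.0, 25, 0.05
z_crit = NormalDist().inv_cdf(1 - alpha)  # reject H0 when z > z_crit

def one_test() -> bool:
    """Draw one sample under the alternative and report whether H0 is rejected."""
    sample = [random.gauss(mu_true, sigma) for _ in range(n)]
    z = (sum(sample) / n - 0.0) / (sigma / n ** 0.5)
    return z > z_crit

trials = 20_000
rejections = sum(one_test() for _ in range(trials))
power_mc = rejections / trials  # estimated power
beta_mc = 1 - power_mc          # estimated Type II error rate
print(f"estimated power ~ {power_mc:.3f}, beta ~ {beta_mc:.3f}")
```

The simulated rejection rate (the power) and the simulated Type II error rate sum to 1, as the definition requires; with these particular numbers the power lands near the conventional 0.8 floor.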
Bullard also states there are four primary factors affecting power: the significance level (α), the sample size (n), the effect size, and the variability of the data (σ²). Power is increased when a researcher increases the sample size, selects a larger effect size to detect, or raises the significance level; greater variance, by contrast, reduces power.
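These relationships can be made concrete with the standard power formula for a one-sided z-test with known σ, where power = 1 − Φ(z₁₋α − effect·√n / σ). The numbers below are a hypothetical baseline of our own (effect 0.5, σ = 1, n = 25, α = 0.05), not values from the text:

```python
from statistics import NormalDist

def z_test_power(effect: float, sigma: float, n: int, alpha: float) -> float:
    """Power of a one-sided z-test of H0: mu = mu0 vs H1: mu = mu0 + effect,
    assuming normal data with known sigma."""
    nd = NormalDist()
    return 1 - nd.cdf(nd.inv_cdf(1 - alpha) - effect * n ** 0.5 / sigma)

base = z_test_power(effect=0.5, sigma=1.0, n=25, alpha=0.05)
print(f"baseline power:        {base:.3f}")
print(f"larger sample (n=50):  {z_test_power(0.5, 1.0, 50, 0.05):.3f}")
print(f"larger effect (0.8):   {z_test_power(0.8, 1.0, 25, 0.05):.3f}")
print(f"larger alpha (0.10):   {z_test_power(0.5, 1.0, 25, 0.10):.3f}")
print(f"larger sigma (2.0):    {z_test_power(0.5, 2.0, 25, 0.05):.3f}")
```

Varying one factor at a time shows each direction claimed above: more data, a bigger effect, or a more lenient α raises power, while noisier measurements lower it.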
In reality, a researcher wants both Type I and Type II errors to be small. In terms of significance level and power, Weiss says this means we want a small significance level (close to 0) and a large power (close to 1).
The most common error in statistical testing is Type I, where a true H0 is rejected. It seems that researchers are at times overly zealous in their desire to reject the null hypothesis and prove their point. One way to protect against Type I error is to reduce the alpha level to, say, 0.01. With an alpha level of 0.01, there will be only a 1% chance of rejecting a true H0. The change in alpha will also affect the Type II error, in the opposite direction: decreasing alpha from 0.05 to 0.01 increases the chance of a Type II error (makes it harder to reject the null hypothesis).
The effect on statistical power is the opposite of the effect on the Type II error, i.e., a decrease in the α level will increase the Type II error and decrease the power.
Choosing the α level is a judgement call. In drug research studies, the α level may be set at 0.01 or even 0.001. In clinical and diagnostic studies, α is commonly set at 0.05. In laboratory method validation studies, α is usually set in the range of 0.05 to 0.01. In laboratory quality control, the α level is determined by the choice of control limits and the number of control measurements. Alpha, or the false-rejection rate, may be very high (0.05 to 0.14) when Levey-Jennings control charts are used with 1 to 3 control measurements and control limits set at the mean plus or minus 2 SDs. Efforts to reduce false rejections by widening the control limits, e.g., reducing α to 0.01 or less by using 3 SD control limits, will also lower the power, i.e., lower error detection.
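The α-versus-β trade-off described above can be put in numbers for a simple one-sided z-test. The scenario (effect 0.5, σ = 1, n = 25) is a hypothetical one of our own choosing, not drawn from the drug-research or laboratory settings just mentioned:

```python
from statistics import NormalDist

def beta_of_alpha(alpha: float, effect: float = 0.5,
                  sigma: float = 1.0, n: int = 25) -> float:
    """Type II error rate (beta) of a one-sided z-test run at the given alpha,
    for a fixed true effect, known sigma, and fixed sample size."""
    nd = NormalDist()
    return nd.cdf(nd.inv_cdf(1 - alpha) - effect * n ** 0.5 / sigma)

# Tightening alpha (fewer false rejections) inflates beta (missed detections).
for alpha in (0.05, 0.01, 0.001):
    beta = beta_of_alpha(alpha)
    print(f"alpha={alpha:<6}  beta={beta:.3f}  power={1 - beta:.3f}")
```

With everything else held fixed, each step down in α buys fewer false rejections at the cost of a larger β and lower power, which is exactly the trade-off the preceding paragraphs describe.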
α is the Type I error.
β is the Type II error.
1 − β is the power.
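These labels all live on the power function π(μ) = P(reject H0 | true mean μ). A small sketch, again using our assumed one-sided z-test (σ = 1, n = 25, α = 0.05), evaluates that curve at the null and at one alternative:

```python
from statistics import NormalDist

def power_function(mu: float, mu0: float = 0.0, sigma: float = 1.0,
                   n: int = 25, alpha: float = 0.05) -> float:
    """pi(mu) = P(reject H0 | true mean mu) for a one-sided z-test of H0: mu <= mu0.
    At mu = mu0 the curve equals alpha (Type I error); at an alternative mu1 > mu0,
    pi(mu1) is the power and 1 - pi(mu1) is beta (Type II error)."""
    nd = NormalDist()
    return 1 - nd.cdf(nd.inv_cdf(1 - alpha) - (mu - mu0) * n ** 0.5 / sigma)

print(f"pi(0.0) = {power_function(0.0):.3f}  (alpha, the Type I error)")
print(f"pi(0.5) = {power_function(0.5):.3f}  (power at mu = 0.5)")
print(f"beta    = {1 - power_function(0.5):.3f}  (Type II error at mu = 0.5)")
```

Plotting π(μ) against μ gives the requested graph: the curve's height at μ₀ is α, its height at a chosen alternative is the power, and the gap between that height and 1 is β.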