In: Statistics and Probability
What is a Type I Error? What is the impact on Type II Error if we reduce the likelihood of Type I Error? Please upload in Microsoft Word.
In hypothesis testing, claims about the parameters of the distribution of a given population are tested to see whether they are valid or not.
The initial unbiased claim about the parameters of the distribution, made before running any test, is called the null hypothesis, denoted by H0 or H. The alternative claim, which the statistician needs to support by taking data samples and running an experiment, is called the alternative hypothesis, denoted by H1 or K.
Let's take an example.
Suppose we are testing the mean of a given population. Let us take
H: µ = 1 against K: µ > 1
Here I am given that the mean is 1, but my alternative belief is that the mean (µ) of the population is greater than 1. I run the necessary experiment and, depending on the results, I either reject the null and accept the alternative, or reject the alternative and accept the null.
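As a concrete illustration, here is a minimal Python sketch of such a one-sided test, assuming NumPy and SciPy (1.6 or later, for the alternative= argument) are available; the sample data are simulated purely for illustration and are not from the question.

```python
import numpy as np
from scipy import stats

# Hypothetical sample from the population whose mean we are testing (illustration only).
rng = np.random.default_rng(0)
sample = rng.normal(loc=1.3, scale=1.0, size=30)

# One-sided one-sample t-test of H: mu = 1 against K: mu > 1.
t_stat, p_value = stats.ttest_1samp(sample, popmean=1.0, alternative="greater")

alpha = 0.05  # chosen significance level = P[Type I error] we are willing to tolerate
print(f"t = {t_stat:.3f}, p-value = {p_value:.4f}")
if p_value < alpha:
    print("Reject the null hypothesis H: mu = 1 in favour of K: mu > 1")
else:
    print("Fail to reject the null hypothesis H: mu = 1")
```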
Type I error – rejection of the null hypothesis when it is actually true. You may wonder why it would get rejected if it is true; this happens when, by chance variation in the sample or through some flaw in the experiment, the data end up looking inconsistent with the null.
Type II error – acceptance of the null hypothesis when it is actually false. It likewise occurs when the sample, by chance or through experimental flaws, happens to look consistent with the null.
Mathematically speaking:
Ω = the sample space containing the possible values of the data
W = the rejection region, i.e. if the sample data point x falls in this region we reject the null hypothesis
Wᶜ = the acceptance region, i.e. if the sample data point x lies in this region we accept the null hypothesis
P[Type I error] = P[x ∈ W | µ = 1]. Here, although the mean really is 1, by chance (or due to some flaw in the experiment) the sample point has fallen in the rejection region, so the null gets rejected.
P[Type II error] = P[x ∈ Wᶜ | µ > 1]. Here, although our null hypothesis isn't true (the mean is greater than 1), the sample point falls in the acceptance region, and hence the null gets accepted.
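To make these two probabilities concrete, here is a small simulation sketch in Python (NumPy assumed). It assumes a normal population with known standard deviation 1 and sample size 30, uses the one-sided z-test rejection region W = {sample mean > c} at α = 0.05, and estimates both error probabilities by repeated sampling. The specific alternative mean 1.3 is my own illustrative choice, since P[Type II error] has to be evaluated at a particular value of µ > 1.

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma, n_sims = 30, 1.0, 100_000

# Rejection region W for a one-sided z-test of H: mu = 1 vs K: mu > 1 at alpha = 0.05:
# reject when the sample mean exceeds the critical value c.
alpha = 0.05
c = 1.0 + 1.645 * sigma / np.sqrt(n)   # 1.645 is the upper 5% point of N(0, 1)

# P[Type I error]: generate data with the null actually true (mu = 1)
# and count how often the sample mean still lands in W.
means_h0 = rng.normal(1.0, sigma, size=(n_sims, n)).mean(axis=1)
type1_rate = np.mean(means_h0 > c)

# P[Type II error]: generate data with the alternative true (here mu = 1.3)
# and count how often the sample mean lands in the acceptance region W^c.
mu_alt = 1.3
means_h1 = rng.normal(mu_alt, sigma, size=(n_sims, n)).mean(axis=1)
type2_rate = np.mean(means_h1 <= c)

print(f"Estimated P[Type I error]  ≈ {type1_rate:.3f}  (should be close to {alpha})")
print(f"Estimated P[Type II error] ≈ {type2_rate:.3f}  (at mu = {mu_alt})")
```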
Now there is a trade-off between P[Type I error] and P[Type II error].
For a fixed sample size, reducing P[Type I error] means shrinking the rejection region W, which enlarges the acceptance region Wᶜ and therefore increases P[Type II error]. (The quantity 1 − P[Type II error] is called the power of the test, the probability of correctly rejecting the null when the alternative is true.)
So from this relationship we can see that if we try to decrease the likelihood or probability of the Type I error, then the likelihood or probability of the Type II error increases, and vice versa.
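This trade-off can be seen numerically. The sketch below uses the same assumed setup as above (normal population, known σ = 1, n = 30, illustrative alternative µ = 1.3) and computes β = P[Type II error] for several choices of α = P[Type I error]; β rises as α falls.

```python
import numpy as np
from scipy import stats

n, sigma, mu_alt = 30, 1.0, 1.3   # assumed setup: known sigma, illustrative alternative mean
se = sigma / np.sqrt(n)

# For each choice of alpha, the critical value c moves, and beta = P[Type II error]
# (evaluated at mu = mu_alt) moves in the opposite direction.
for alpha in [0.10, 0.05, 0.01, 0.001]:
    c = 1.0 + stats.norm.ppf(1 - alpha) * se      # boundary of the rejection region W
    beta = stats.norm.cdf((c - mu_alt) / se)      # P[sample mean <= c | mu = mu_alt]
    print(f"alpha = {alpha:>6}:  beta ≈ {beta:.3f},  power ≈ {1 - beta:.3f}")
```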
Note: Here I'm testing mean = 1 against mean > 1, but one can also choose the alternative to be mean < 1; the null can likewise be changed depending on the population, and the value can be anything, not necessarily 1. One can also test the variance parameter of the population, or any other parameter, or test whether a particular sample comes from a given distribution or not. This is just an example; it all depends on the experimenter.
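For instance, a test of whether a sample comes from a given distribution can be set up in exactly the same null-vs-alternative framework. The sketch below uses SciPy's Kolmogorov–Smirnov test against the standard normal; the data are simulated purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
sample = rng.normal(loc=0.0, scale=1.0, size=200)   # illustration only

# H: the sample comes from a standard normal distribution, vs K: it does not.
ks_stat, p_value = stats.kstest(sample, "norm")
print(f"KS statistic = {ks_stat:.3f}, p-value = {p_value:.3f}")
# A small p-value would lead us to reject the null that the data follow N(0, 1);
# a Type I error here would mean rejecting normality even though the data really are normal.
```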