In: Statistics and Probability
In hypothesis testing we set alpha, the significance level, which limits the probability of making a Type 1 error if the null hypothesis were true. When does a Type 2 error occur? Give a real-world example of a decision and explain what the null hypothesis, alternative hypothesis, Type 1 and Type 2 errors would be – which would be worse in this case: making a Type 1 error or making a Type 2 error?
A Type 2 error occurs when we fail to reject the null hypothesis even though it is actually false. This is a false negative. The probability of making a Type 2 error is beta, so the power of the test, the probability of correctly detecting a real effect, is 1 - beta.
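To make the relationship between alpha, beta, and power concrete, here is a minimal simulation sketch in Python. The specific numbers (a one-sided z-test of a mean of 100 against a true mean of 103, sigma = 10, n = 50) are illustrative assumptions, not part of the question.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05          # significance level: cap on the Type 1 error rate
n = 50                # sample size per experiment
null_mean = 100       # mean under H0
alt_mean = 103        # assumed true mean when H1 holds (illustrative effect size)
sigma = 10
trials = 20_000

def rejects_h0(sample):
    """One-sided z-test: reject H0 (mean = 100) in favour of H1 (mean > 100)?"""
    z = (sample.mean() - null_mean) / (sigma / np.sqrt(n))
    return z > stats.norm.ppf(1 - alpha)

# Type 1 error rate: how often we reject H0 when H0 is actually true.
type1 = sum(rejects_h0(rng.normal(null_mean, sigma, n)) for _ in range(trials)) / trials
# Type 2 error rate (beta): how often we FAIL to reject H0 when H1 is actually true.
beta = sum(not rejects_h0(rng.normal(alt_mean, sigma, n)) for _ in range(trials)) / trials

print(f"Estimated Type 1 error rate: {type1:.3f} (should be near alpha = {alpha})")
print(f"Estimated Type 2 error rate (beta): {beta:.3f}")
print(f"Estimated power (1 - beta): {1 - beta:.3f}")
```

Raising n or the effect size lowers beta and raises the power, while the Type 1 error rate stays pinned near alpha.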
Let us consider a real-life example. Suppose you run a website and want to increase visits to one part of it, so you add a feature that gives users free access to otherwise-paid premium content for 7 days. The null hypothesis is that the feature has no effect on visits; the alternative hypothesis is that the feature increases visits. In the first two weeks visits rise by 25% at 85% confidence, so you conclude the feature works, but the visits then begin to drop; the feature never really had an effect, so rejecting the null hypothesis here was a Type 1 error. Extending this, suppose two competing (but identical) websites test the same feature, and performance for both dips in the first 10 days of the run. Website A decides the feature has no effect and withdraws it, while website B keeps it on. Later, website B's performance keeps skyrocketing while website A is still dealing with its old problems: the feature really did work, so website A's failure to reject the null hypothesis was a Type 2 error.
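Below is a rough sketch of the two-website scenario, assuming (purely for illustration) a true lift in conversion rate from 10% to 12% and 100 visitors per day per arm. It shows how a 10-day look, like website A's, often misses a real effect (a Type 2 error), while a longer run detects it far more reliably.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha = 0.05
visitors_per_day = 100   # hypothetical traffic per arm per day
baseline_rate = 0.10     # assumed conversion rate without the feature
lifted_rate = 0.12       # assumed true rate with the feature (the lift is real)

def detects_lift(days):
    """Two-proportion z-test of H0 'no lift' vs H1 'the feature increases conversions'."""
    n = visitors_per_day * days
    control = rng.binomial(n, baseline_rate)
    treated = rng.binomial(n, lifted_rate)
    p_pool = (control + treated) / (2 * n)
    se = np.sqrt(p_pool * (1 - p_pool) * 2 / n)
    z = (treated / n - control / n) / se
    return z > stats.norm.ppf(1 - alpha)   # reject H0 only if the lift looks significant

trials = 5_000
for days in (10, 60):
    detected = sum(detects_lift(days) for _ in range(trials)) / trials
    print(f"{days:3d} days: chance of detecting the real lift = {detected:.2f}, "
          f"Type 2 error rate (beta) = {1 - detected:.2f}")
```

With these assumed numbers the 10-day test misses the real lift much of the time, which is exactly the Type 2 error website A makes by withdrawing the feature too early.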