How can one quantify the imprecision that comes about from making a conclusion from your best fit line?
The imprecision that comes from drawing a conclusion from a best-fit line can be quantified through the Type I and Type II error rates. For a better understanding, here is the concept behind each type of error.
Type I error: a Type I error is rejecting the null hypothesis when it is actually true. Its probability is also called the level of significance and is denoted by α.
Suppose you have fitted a regression line to the available sample data and rejected the null hypothesis based on findings from the best-fit line. If the actual population does not support your decision (i.e. the null hypothesis is in fact true), then you have committed an error, and this error is called a Type I error. It is quantified as the probability of rejecting the null hypothesis when it is true.
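To make this concrete, here is a minimal Python sketch (not part of the original answer) that estimates the Type I error rate by simulation. The data are generated with a true slope of zero, so every rejection of the null hypothesis "slope = 0" is a Type I error; the sample size, noise level, and α = 0.05 are illustrative assumptions.

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)
alpha = 0.05            # chosen level of significance (assumed)
n_trials = 10_000
x = np.linspace(0, 10, 30)

rejections = 0
for _ in range(n_trials):
    y = 5.0 + rng.normal(scale=2.0, size=x.size)  # true slope is 0, so H0 is true
    result = linregress(x, y)                     # t-test of H0: slope = 0
    if result.pvalue < alpha:
        rejections += 1                           # rejecting here is a Type I error

print(f"Estimated Type I error rate: {rejections / n_trials:.3f}")
# Should land close to alpha = 0.05, since H0 is true by construction.
```

The estimated rejection rate converges to α as the number of trials grows, which is exactly the sense in which α quantifies this kind of imprecision.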
Type II error: a Type II error is accepting (failing to reject) the null hypothesis when it is actually false. Its probability is denoted by β.
Suppose you have fitted a regression line to the available sample data and did not reject the null hypothesis based on findings from the best-fit line. If the actual population does not support your decision (i.e. the null hypothesis is in fact false), then you have committed an error, and this error is called a Type II error. It is quantified as the probability of accepting the null hypothesis when it is false.
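A companion sketch (again, an illustration rather than part of the original answer) estimates β the same way: the data are generated with a nonzero true slope, so every failure to reject "slope = 0" is a Type II error. The true slope of 0.3, noise level, sample size, and α are all assumed values.

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(1)
alpha = 0.05
n_trials = 10_000
x = np.linspace(0, 10, 30)

failures_to_reject = 0
for _ in range(n_trials):
    y = 5.0 + 0.3 * x + rng.normal(scale=2.0, size=x.size)  # true slope 0.3, H0 false
    result = linregress(x, y)                               # t-test of H0: slope = 0
    if result.pvalue >= alpha:
        failures_to_reject += 1                             # not rejecting is a Type II error

beta = failures_to_reject / n_trials
print(f"Estimated Type II error rate (beta): {beta:.3f}")
print(f"Estimated power: {1 - beta:.3f}")
```

Unlike α, which you choose, β depends on the true effect size, the noise, and the sample size; rerunning the sketch with a larger sample or a steeper true slope drives β down and the power (1 − β) up.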