1. What are the assumptions for various forms of hypothesis testing?
2. Compare the sampling distribution with the population distribution. Consider how variance may or may not differ between the two.
3. If we reject a null hypothesis of no difference, what are the odds that we made a correct decision?
4. Type I and Type II error. How is alpha related to this? How is the critical region related to type II error? If the null is true, what is the probability of type II error?
5. What values can alpha be and not be? Can alpha be 0? Why?
6. How can we increase the probability that a confidence interval will include the population parameter? How can we increase the width of a confidence interval? How can we decrease the width?
1) The primary assumptions for any statistical test are:
(A) The sample on which the test is performed must be a randomly selected sample.
(B) The sample should be sufficiently large (> 50) so that it contains a sufficient amount of information about the process.
(C) The variable must be measured at an interval-ratio level.
2) If the population distribution is normal, the sample mean is an unbiased estimator of the population mean, so the sampling distribution of the mean is centered where the population is. Its spread, however, differs: under simple random sampling the variance of the sample mean is sigma^2/n, so the sampling distribution is narrower than the population distribution and tightens as n grows. Whether an estimated variance matches the population variance depends on the sampling strategy; e.g., if a simple random sample is drawn with the same selection probability for all units, then the sample variance S^2 is an unbiased estimator of the population variance.
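To see the variance comparison concretely, here is a minimal simulation sketch in Python (numpy); the population mean 10, standard deviation 2, and sample size 50 are made-up demo values, not part of the question:

    import numpy as np

    rng = np.random.default_rng(0)
    mu, sigma, n, reps = 10.0, 2.0, 50, 100_000  # assumed demo parameters

    # Draws from the population vs. means of repeated samples of size n.
    population = rng.normal(mu, sigma, size=reps)
    sample_means = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)

    print(np.var(population))    # close to sigma^2 = 4
    print(np.var(sample_means))  # close to sigma^2 / n = 4/50 = 0.08

Both distributions are centered at mu, but the sampling distribution of the mean is much tighter.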
3) The odds that we made a correct decision in rejecting the null can be measured by the power of the test, i.e., the probability of rejecting the null hypothesis when, in fact, it is false (1 - beta).
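For a concrete power number, here is a minimal sketch for a one-sided z-test in Python (scipy); the null mean 10, true mean 10.5, sigma 2, n 50, and alpha 0.05 are assumed values for illustration only:

    import numpy as np
    from scipy.stats import norm

    mu0, mu1, sigma, n, alpha = 10.0, 10.5, 2.0, 50, 0.05  # assumed values

    z_crit = norm.ppf(1 - alpha)                # one-sided critical value
    shift = (mu1 - mu0) / (sigma / np.sqrt(n))  # standardized true effect
    power = norm.sf(z_crit - shift)             # P(reject H0 | H0 false)
    print(power)                                # about 0.55 here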
4) In testing, alpha is the significance level chosen by the analyst before the test: the intended probability of rejecting the null hypothesis when it is true. The actual Type I error probability may or may not equal alpha; in particular, when the distribution of the test statistic is discrete, the achievable probability of rejecting a true null can fall below the chosen alpha. A Type II error, denoted beta, is the probability of accepting the null hypothesis when it is false. Alpha and beta have no linear relationship, but they are related through the test: if you force alpha down, beta automatically rises, and vice versa.
The critical region is the set of test-statistic values for which we reject the null hypothesis. We can write the Type II error in terms of the critical region: beta = 1 - P(the statistic falls in the critical region given that the null hypothesis is false), which is 1 - power of the test.
A Type II error is accepting the null hypothesis when it is false, so if the null hypothesis is true a Type II error cannot occur: its probability is 0.
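Both error rates can be checked empirically. Below is a minimal Monte Carlo sketch in Python (numpy/scipy) using the same assumed z-test setup as above (null mean 10, true mean 10.5, sigma 2, n 50, alpha 0.05):

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(1)
    mu0, mu1, sigma, n, alpha, reps = 10.0, 10.5, 2.0, 50, 0.05, 100_000
    z_crit = norm.ppf(1 - alpha)  # boundary of the one-sided critical region

    def reject_rate(true_mean):
        samples = rng.normal(true_mean, sigma, size=(reps, n))
        z = (samples.mean(axis=1) - mu0) / (sigma / np.sqrt(n))
        return np.mean(z > z_crit)  # share of statistics in the critical region

    print(reject_rate(mu0))      # Type I error rate, close to alpha = 0.05
    print(1 - reject_rate(mu1))  # Type II error rate beta = 1 - power, ~0.45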
5) Alpha is a probability, so it can take any value between 0 and 1. In theory alpha can be 0, but if you force alpha to 0 (since it is decided by the performer of the test) you never reject the null hypothesis, so beta, the Type II error probability, automatically rises toward 1 and the power of the test shrinks toward 0.
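The cost of shrinking alpha shows up directly in the critical value and in beta. A short analytic sketch, reusing the assumed z-test parameters from above:

    import numpy as np
    from scipy.stats import norm

    mu0, mu1, sigma, n = 10.0, 10.5, 2.0, 50  # assumed demo values
    shift = (mu1 - mu0) / (sigma / np.sqrt(n))

    for alpha in (0.10, 0.05, 0.01, 0.001, 1e-6):
        z_crit = norm.ppf(1 - alpha)     # grows without bound as alpha -> 0
        beta = norm.cdf(z_crit - shift)  # Type II error at the true mean
        print(alpha, round(z_crit, 3), round(beta, 3))

As alpha approaches 0 the critical value runs off to infinity and beta approaches 1, which is why alpha = 0 is useless in practice.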
6) We can increase the probability that a confidence interval includes the population parameter by increasing the width of the interval, which is achieved by taking a sufficiently small alpha (a higher confidence level 1 - alpha). Conversely, we can decrease the width by increasing alpha, or, without giving up confidence, by increasing the sample size.
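The width formula makes both levers explicit: for a normal mean with known sigma, the interval is xbar +/- z_(1-alpha/2) * sigma/sqrt(n), so its width is 2 * z_(1-alpha/2) * sigma/sqrt(n). A minimal sketch with assumed values sigma = 2 and n = 50:

    import numpy as np
    from scipy.stats import norm

    sigma, n = 2.0, 50  # assumed demo values

    for alpha in (0.10, 0.05, 0.01):
        z = norm.ppf(1 - alpha / 2)
        width = 2 * z * sigma / np.sqrt(n)  # narrower as alpha grows
        print(f"alpha={alpha}: coverage={1 - alpha:.0%}, width={width:.3f}")

Increasing n shrinks the width at any fixed alpha without touching the coverage probability.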