I have a textbook problem in which I calculate coverage rates for confidence intervals for the sample skewness statistic, comparing a normal population with a chi-square distribution with 5 degrees of freedom. I know what the coverage rate is and how it is calculated; what I don't understand is the specific question below.
Question: Explain why you need to run a simulation to compare the coverage rates under different distributions?
The answer hinges on the simulation method for estimating the coverage probability of a confidence interval. The method has three steps (a code sketch follows the list):
1. Simulate many samples of size n from the population.
2. Compute the confidence interval for each sample.
3. Compute the proportion of samples for which the (known) population parameter is contained in the confidence interval. That proportion is the empirical coverage probability, an estimate of the CI's true coverage probability.
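Here is a minimal sketch of those three steps in Python, assuming a standard t-interval for the population mean as the confidence interval (the textbook's skewness interval would slot in the same way). The function name `estimate_coverage` and the `sample_dist` callable are illustrative choices, not part of the original problem.

```python
# A minimal sketch of the three-step coverage simulation, assuming a
# t-based confidence interval for the population mean (NumPy/SciPy only).
import numpy as np
from scipy import stats

def estimate_coverage(sample_dist, true_param, n=20, n_sims=10_000,
                      alpha=0.05, rng=None):
    """Estimate coverage of the (1 - alpha) t-interval for the mean."""
    rng = np.random.default_rng(rng)
    tcrit = stats.t.ppf(1 - alpha / 2, df=n - 1)
    covered = 0
    for _ in range(n_sims):
        # Step 1: simulate one sample of size n from the population.
        x = sample_dist(rng, n)
        # Step 2: compute the confidence interval for this sample.
        half_width = tcrit * x.std(ddof=1) / np.sqrt(n)
        lo, hi = x.mean() - half_width, x.mean() + half_width
        # Step 3 (per sample): check whether the known parameter is covered.
        covered += (lo <= true_param <= hi)
    # Step 3 (overall): the proportion of covering intervals estimates the
    # coverage probability.
    return covered / n_sims
```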
You might ask: isn't the coverage probability always 1 - α = 0.95? No. That holds only when the population is normally distributed (which is never exactly true in practice) or when the sample size is large enough to invoke the Central Limit Theorem. Simulation lets you estimate the coverage probability for small samples when the population is not normal: you can simulate from skewed or heavy-tailed distributions and see how skewness and kurtosis affect the coverage probability.
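Using the sketch above, a comparison of the two populations in your problem might look like this. The chi-square distribution with 5 degrees of freedom has mean 5; the exact coverage numbers you see will depend on the interval formula and the sample size, so treat the comment about coverage falling below 0.95 as a typical outcome rather than a guarantee.

```python
# Normal population: empirical coverage should be close to the nominal 0.95.
normal_cov = estimate_coverage(
    lambda rng, n: rng.normal(loc=0.0, scale=1.0, size=n),
    true_param=0.0, n=10)

# Chi-square(5) population (mean = 5): skewness typically pulls the
# empirical coverage somewhat below the nominal 0.95 for small n.
chisq_cov = estimate_coverage(
    lambda rng, n: rng.chisquare(df=5, size=n),
    true_param=5.0, n=10)

print(f"Normal:        {normal_cov:.3f}")
print(f"Chi-square(5): {chisq_cov:.3f}")
```

The gap between the two empirical coverage rates is exactly what the textbook question is asking you to demonstrate: it cannot be read off the nominal 1 - α, so you have to simulate it.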