I am interested in testing whether total prey consumption by sea otters in cold water differs from some hypothesized value. From pilot data, I have estimated the variance at 6.4. I would like to be able to detect a difference of about 2 grams of prey eaten per unit time. Assuming I set my type I error rate at 0.05, what sample size do I need to achieve a power of 90%?
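For reference, here is a quick sketch of this sample-size calculation using the normal (z) approximation for a two-sided one-sample test; this is an assumption on my part about the design, and an exact t-based calculation (e.g. in G*Power or statsmodels) would return a slightly larger n:

```python
from math import ceil, sqrt
from statistics import NormalDist

# Quantities stated in the question
variance = 6.4          # pilot estimate of the variance
sigma = sqrt(variance)  # standard deviation, about 2.53 grams
delta = 2.0             # difference to detect (grams per unit time)
alpha = 0.05            # two-sided type I error rate
power = 0.90            # desired power

z = NormalDist()
z_alpha = z.inv_cdf(1 - alpha / 2)  # about 1.96
z_beta = z.inv_cdf(power)           # about 1.28

# z-approximation: n = ((z_alpha + z_beta) * sigma / delta)^2, rounded up
n = ceil(((z_alpha + z_beta) * sigma / delta) ** 2)
print(n)  # 17
```

The z-approximation gives n = 17; because the variance comes from pilot data and a t-test would be used in practice, the exact answer is a bit larger (typically one or two more subjects).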
Suppose I can only measure prey consumption in 10 sea otters; what will my minimum detectable difference be, assuming all else remains the same?
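Under the same z-approximation (again assuming a two-sided one-sample test), the sample-size formula can be inverted to solve for the detectable difference at a fixed n = 10:

```python
from math import sqrt
from statistics import NormalDist

sigma = sqrt(6.4)  # pilot standard deviation, about 2.53 grams
n = 10
alpha, power = 0.05, 0.90

z = NormalDist()
z_alpha = z.inv_cdf(1 - alpha / 2)
z_beta = z.inv_cdf(power)

# Invert the sample-size formula: delta = (z_alpha + z_beta) * sigma / sqrt(n)
mdd = (z_alpha + z_beta) * sigma / sqrt(n)
print(round(mdd, 2))  # 2.59
```

So with only 10 otters, the minimum detectable difference rises to roughly 2.6 grams per unit time by this approximation; a t-based calculation would give a slightly larger value.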
Lastly, suppose I fix my minimum detectable difference at 4 and collect data from 10 sea otters. What kind of power will I have, assuming again that my variance and type I error rate are unchanged from above?
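Solving the same relationship for power instead (z-approximation, two-sided test assumed; the small contribution from the opposite tail is ignored):

```python
from math import sqrt
from statistics import NormalDist

sigma = sqrt(6.4)
n = 10
delta = 4.0
alpha = 0.05

z = NormalDist()
z_alpha = z.inv_cdf(1 - alpha / 2)

# Power is approximately Phi(delta / (sigma / sqrt(n)) - z_alpha)
se = sigma / sqrt(n)                 # standard error = 0.8 exactly, since 6.4/10 = 0.64
power = z.cdf(delta / se - z_alpha)  # Phi(5.0 - 1.96)
print(round(power, 3))  # 0.999
```

A difference of 4 grams is five standard errors wide here, so power is essentially 1 by this approximation; the exact t-based power with 9 degrees of freedom would be marginally lower but still very high.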