In: Statistics and Probability
7. Suppose a researcher wishes to discern if there is a
significant difference in the running times of two computer programs.
He performs a hypothesis test with the null hypothesis being that
there is no difference in their average running times, and the
alternative hypothesis being that there is a difference. The
significance level is set at α = 0.05. The first program was run n1 = 5
times with an average running time of x̄1 = 53.72 seconds and a
sample standard deviation of s1 = 11.72 seconds. The second program
was run n2 = 6 times with an average running time of x̄2 = 60.64
seconds and a sample standard deviation of s2 = 8.21 seconds. What
conclusion should the researcher reach? Hints: You may treat the test
statistic for the difference of means as following a t-distribution
with 4 degrees of freedom. You may use the following formula for the
standard error:
SE(x̄2 − x̄1) = √(s1²/n1 + s2²/n2)
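A worked check of the computation, a minimal sketch in Python using only the standard library; the two-tailed critical value t_{0.025, 4} ≈ 2.776 is taken from a standard t-table, not computed here:

```python
import math

# Sample statistics from the problem
n1, x1_bar, s1 = 5, 53.72, 11.72   # first program
n2, x2_bar, s2 = 6, 60.64, 8.21    # second program
df = 4                             # degrees of freedom given in the hint

# Standard error of the difference of sample means
se = math.sqrt(s1**2 / n1 + s2**2 / n2)

# t statistic under H0: no difference in mean running times
t_stat = (x2_bar - x1_bar) / se

# Two-tailed critical value t_{0.025, 4} from a standard t-table
t_crit = 2.776

print(f"SE = {se:.3f}")       # SE ≈ 6.221
print(f"t  = {t_stat:.3f}")   # t ≈ 1.112
print("reject H0" if abs(t_stat) > t_crit else "fail to reject H0")
```

Since |t| ≈ 1.112 is well below the critical value 2.776, the sample does not give evidence at the 5% level of a difference in average running times, so the researcher would fail to reject the null hypothesis.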