Most video games have several difficulty settings, from "Easy" to "Hard" (many of these have very colorful names). A video game designer wants to determine how to structure the difficulty levels for a new game so that the average time it takes the typical player to play through "Hard" mode is longer than the time to play through "Easy" mode.
A sample (Group 1) of 15 typical players took an average of 5.3 hours to complete the game on "Hard", with a standard deviation of 1.4. Another sample (Group 2) of 10 typical players took an average of 3.9 hours to complete the game on "Easy", with a standard deviation of 1.3. Calculate the test statistic to test the hypothesis described above, assuming the two population standard deviations are not equal (Case 2). Take all calculations toward the answer to three decimal places.
Group 1 ("Hard"): n1 = 15, x̄1 = 5.3 hours, s1 = 1.4
Group 2 ("Easy"): n2 = 10, x̄2 = 3.9 hours, s2 = 1.3
Claim: The average time it takes a typical player to play through "Hard" mode (Group 1) is longer than the average time to play through "Easy" mode (Group 2), i.e. μ1 > μ2.
The null and alternative hypotheses:
H0: μ1 = μ2 (the mean completion times are the same)
Ha: μ1 > μ2 (the mean "Hard" completion time is longer; this is the claim, so the test is right-tailed)
Because the two population standard deviations are assumed unequal (Case 2), the standard error is not pooled.
Test statistic (unpooled):
t = (x̄1 - x̄2) / sqrt(s1²/n1 + s2²/n2)
  = (5.3 - 3.9) / sqrt(1.96/15 + 1.69/10)
  = 1.4 / sqrt(0.131 + 0.169)
  = 1.4 / 0.548
  ≈ 2.555
(Each intermediate value is rounded to three decimal places as the problem instructs; carrying full precision gives t ≈ 2.557.)
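As a quick numerical check, here is a minimal Python sketch of the same unpooled calculation; the variable names are illustrative and not part of the problem.

```python
# Minimal sketch: unpooled (Case 2) two-sample t statistic from the
# summary statistics above. Variable names are illustrative.
from math import sqrt

n1, xbar1, s1 = 15, 5.3, 1.4   # Group 1: "Hard" mode
n2, xbar2, s2 = 10, 3.9, 1.3   # Group 2: "Easy" mode

se = sqrt(s1**2 / n1 + s2**2 / n2)      # unpooled standard error
t_stat = (xbar1 - xbar2) / se
print(round(se, 3), round(t_stat, 3))   # ~0.547 and ~2.557 at full precision
```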
Degrees of freedom = min(n1 - 1, n2 - 1) = min(15 - 1, 10 - 1) = min(14, 9) = 9 (the conservative choice when the variances are not pooled)
Level of significance: α = 0.05 (not stated in the problem; 0.05 is the conventional choice)
Critical value: t(0.05, df = 9) = 1.833 (from the t table, right-tailed)
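If SciPy is available, the table value can be double-checked with t.ppf; this is only a verification sketch, and the hand calculation above stands on its own.

```python
# Sketch: confirm the right-tailed critical value read from the t table.
from scipy.stats import t

alpha, df = 0.05, 9
t_crit = t.ppf(1 - alpha, df)   # upper-tail critical value
print(round(t_crit, 3))         # expect 1.833
```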
Since the test statistic (2.555) is greater than the critical value (1.833), we reject the null hypothesis.
Conclusion: At the 0.05 significance level, there is sufficient evidence to support the claim that the average time a typical player takes to complete the game on "Hard" is longer than the average time to complete it on "Easy".
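For completeness, SciPy's ttest_ind_from_stats can run the unpooled test directly from the summary statistics. Note that it uses the Welch-Satterthwaite degrees of freedom rather than the conservative min(n1 - 1, n2 - 1) = 9 used above, so its p-value will differ slightly from a table-based answer; the alternative="greater" keyword assumes SciPy 1.6 or newer.

```python
# Sketch: Welch (unpooled) two-sample t test from summary statistics.
from scipy.stats import ttest_ind_from_stats

res = ttest_ind_from_stats(
    mean1=5.3, std1=1.4, nobs1=15,   # Group 1: "Hard" mode
    mean2=3.9, std2=1.3, nobs2=10,   # Group 2: "Easy" mode
    equal_var=False,                 # unpooled (Case 2)
    alternative="greater",           # right-tailed: mu1 > mu2
)
print(round(res.statistic, 3), round(res.pvalue, 3))  # expect t ≈ 2.557
```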