statistical significance:
=>There is always some likelihood that the changes you observe in your participants’ knowledge, attitudes, and behaviors are due to chance rather than to the program.
=>Testing for statistical significance helps you learn how likely it is that these changes occurred randomly and do not represent differences due to the program.
=>To learn whether the difference is statistically significant, you will have to compare the probability number you get from your test (the p-value) to the critical probability value you determined ahead of time (the alpha level).
=>If the p-value is less than the alpha level, you can conclude that the difference you observed is statistically significant.
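A minimal sketch of this comparison in Python, using made-up pretest/posttest scores and SciPy's paired t-test (the actual test you run will depend on your evaluation design):

```python
from scipy import stats

# Made-up example scores for ten participants (not real evaluation data).
pretest  = [72, 65, 80, 58, 90, 77, 68, 84, 61, 73]
posttest = [78, 70, 85, 66, 92, 83, 75, 88, 70, 80]

alpha = 0.05  # critical probability value chosen before running the test

# Paired t-test: compares each participant's posttest score to their own pretest score.
t_stat, p_value = stats.ttest_rel(posttest, pretest)

if p_value < alpha:
    print(f"p = {p_value:.3f} < {alpha}: the difference is statistically significant")
else:
    print(f"p = {p_value:.3f} >= {alpha}: the difference could plausibly be due to chance")
```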
P-Value: the probability of observing a difference at least as large as the one you found if, in reality, the program had no effect.
=>P-values range from 0 to 1.
=>The lower the p-value, the stronger the evidence that the observed difference is not simply due to chance.
Alpha (α) level: the error rate that you are willing to accept.
=>Alpha is often set at .05 or .01.
=>The alpha level is also known as the Type I error rate.
=>An alpha of .05 means that you are willing to accept a 5% chance of concluding that your program made a difference when, in reality, it did not.
=>A p-value below .05 is accepted in most social science fields as statistically significant, and .05 is the most common alpha level used in EE evaluations.
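The "Type I error rate" label can be illustrated with a small simulation (a sketch, not from the original text): when the program truly has no effect, roughly 5% of evaluations will still come out significant at alpha = .05.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05
n_simulations = 10_000
false_positives = 0

for _ in range(n_simulations):
    # Pretest and posttest scores drawn from the same distribution: no real change.
    pre = rng.normal(loc=80, scale=10, size=30)
    post = rng.normal(loc=80, scale=10, size=30)
    _, p = stats.ttest_rel(post, pre)
    if p < alpha:
        false_positives += 1

print(f"Significant results despite no true effect: {false_positives / n_simulations:.1%}")
# Should print a value close to 5%, matching the chosen alpha level.
```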
effect size:
=>When a difference is statistically significant, it does not necessarily mean that it is big, important, or helpful in decision-making.
=>It simply means you can be confident that there is a difference.
=>Let’s say, for example, that you evaluate the effect of an EE activity on student knowledge using pre and posttests.
=>The mean score on the pretest was 83 out of 100 while the mean score on the posttest was 84.
=>Although you find that the difference in scores is statistically significant (because of a large sample size), the difference is very slight, suggesting that the program did not lead to a meaningful increase in student knowledge.
=>To know if an observed difference is not only statistically significant but also important or meaningful, you will need to calculate its effect size.
=>Rather than reporting the difference in terms of, for example, the number of points earned on a test or the number of pounds of recycling collected, effect size is standardized.
=>In other words, all effect sizes are calculated on a common scale -- which allows you to compare the effectiveness of different programs on the same outcome.
=>There are different ways to calculate effect size depending on the evaluation design you use.
=>Generally, effect size is calculated by taking the difference between the two groups (e.g., the mean of the treatment group minus the mean of the control group) and dividing it by the standard deviation of one of the groups.
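A minimal sketch of this calculation (a standardized mean difference, often called Cohen's d), using made-up treatment and control group scores:

```python
import statistics

# Made-up example scores (not real evaluation data).
treatment = [84, 88, 79, 91, 86, 82, 90, 85]
control   = [80, 83, 78, 85, 81, 79, 84, 82]

mean_diff = statistics.mean(treatment) - statistics.mean(control)

# Standardize by the control group's standard deviation; a pooled standard
# deviation is another common choice, depending on the design.
effect_size = mean_diff / statistics.stdev(control)

print(f"Effect size = {effect_size:.2f}")
# Common rough benchmarks: about 0.2 is small, 0.5 medium, 0.8 large.
```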