Describe the difference between statistical significance and effect
size. How is each used in describing the results of an
experiment?

**statistical significance:**

=>There is always some likelihood that the changes you observe in your participants’ knowledge, attitudes, and behaviors are due to chance rather than to the program.

=>Testing for statistical significance helps you learn how likely it is that these changes occurred randomly and do not represent differences due to the program.

=>To learn whether the difference is statistically significant, you will have to compare the probability number you get from your test (the p-value) to the critical probability value you determined ahead of time (the alpha level).

=>If the p-value is less than the alpha value, you can conclude that the difference you observed is statistically significant.

**P-value**: the probability of obtaining a difference at least as large as the one you observed if the program actually had no effect (that is, by chance alone).

=>P-values range from 0 to 1.

=>The lower the p-value, the less likely it is that the observed difference is due to chance alone, and the stronger the evidence that it reflects your program.

**Alpha (α) level**: the error rate that you are
willing to accept.

=>Alpha is often set at .05 or .01.

=>The alpha level is also known as the Type I error rate.

=>An alpha of .05 means that you are willing to accept a 5% chance of concluding that your program made a difference when, in fact, it did not.

=>A p-value of less than .05 is accepted in most social science fields as statistically significant, and .05 is the most common alpha level used in EE evaluations.
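
To make the p-value-versus-alpha comparison concrete, here is a minimal Python sketch using a one-sample z-test. The numbers, and the assumption that the population standard deviation is known, are illustrative only:

```python
from statistics import NormalDist

def z_test_p_value(sample_mean, pop_mean, sigma, n):
    """Two-sided p-value for H0: the true mean equals pop_mean."""
    z = (sample_mean - pop_mean) / (sigma / n ** 0.5)
    # P(|Z| >= |z|) under the standard normal distribution
    return 2 * (1 - NormalDist().cdf(abs(z)))

alpha = 0.05                           # Type I error rate chosen in advance
p = z_test_p_value(84, 83, 10, 900)    # hypothetical means, sigma, and n
significant = p < alpha                # reject H0 only if p < alpha
```

With n = 900, even a one-point difference yields p of about .003, below alpha = .05 -- an early hint that statistical significance alone says nothing about the size of a difference.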

**effect size:**

=>When a difference is statistically significant, it does not necessarily mean that it is big, important, or helpful in decision-making.

=>It simply means you can be confident that there is a difference.

=>Let’s say, for example, that you evaluate the effect of an EE activity on student knowledge using pre- and posttests.

=>The mean score on the pretest was 83 out of 100 while the mean score on the posttest was 84.

=>Although you find that the difference in scores is statistically significant (because of a large sample size), the difference is very slight, suggesting that the program did not lead to a meaningful increase in student knowledge.

=>To know if an observed difference is not only statistically significant but also important or meaningful, you will need to calculate its effect size.

=>Rather than reporting the difference in terms of, for example, the number of points earned on a test or the number of pounds of recycling collected, effect size is standardized.

=>In other words, all effect sizes are calculated on a common scale -- which allows you to compare the effectiveness of different programs on the same outcome.

=>There are different ways to calculate effect size depending
on the evaluation design you use. Generally, effect size is
calculated by taking the difference between the two groups (e.g.,
the mean of treatment group *minus* the mean of the control
group) and dividing it by the standard deviation of one of the
groups.
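
The general calculation above can be sketched as Cohen's d with a pooled standard deviation; the sample data below are made up for illustration:

```python
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Standardized mean difference, divided by the pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    pooled = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled

post = [84, 86, 82, 85, 83]   # hypothetical posttest scores
pre  = [83, 85, 81, 84, 82]   # hypothetical pretest scores
d = cohens_d(post, pre)
```

Because d is standardized (expressed in standard-deviation units), the same value can be compared across programs even when the raw outcomes are measured on different scales.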

Describe how the following are related:
- Statistical significance and effect size
- Difference between means and effect size
- Effect size and variability
- Sample size and power of a statistical test
- Effect size and power of a statistical test

How are power, effect size, and sample size
related?
Distinguish between statistical significance and
practical significance.
What is the difference between a critical value and a
p-value?

What is the difference between practical and statistical
significance?
A. Statistical significance is associated with p values, but
practical significance is not.
B. Practical significance is associated with p values, but
statistical significance is not.
C. There is no difference.
D. Neither A nor B is true.

Explain the difference between the statistical significance and
the economic significance of OLS estimates.

Explain the difference between statistical and practical
significance.
Explain the difference between the null and alternative
hypotheses.
When should a one-tailed test be used? What are the
disadvantages to using a one-tailed test?
When should you use a two-tailed test?
Define a Type I error. In the behavioral sciences, what is the
likelihood of a Type I error?
Define a Type II error. In the behavioral sciences, what is the
likelihood of a Type II error?

1. What is the difference between practical significance and
statistical significance? Give an example of something that might
be statistically significant, but not practically significant.
2. What is a Type I error? Give an example.
3. What is a Type II error? Give an example.

Research Method
What is the difference between statistical significance and
clinical significance? Search the literature and cite at least one
or two examples that illustrate the differences between the
two.

What is the difference between statistical and economic
significance? Give an example. (Your own example, NOT a pill for
cancer).

How do the significance level, power, effect size, and standard
deviation influence the sample size?
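
As a rough illustration of how these quantities trade off, here is a minimal Python sketch of a standard sample-size approximation, assuming a two-sided, two-sample comparison of means and the normal approximation:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate n per group: n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # critical value for the chosen alpha
    z_beta = z(power)            # quantile corresponding to the desired power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

n_small = sample_size_per_group(0.2)   # small effect needs a large sample
n_large = sample_size_per_group(0.8)   # large effect needs far fewer cases
```

Since n scales with 1/d², halving the detectable effect size roughly quadruples the required sample; raising the desired power or lowering alpha also pushes n upward.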

