If data set A has a larger standard deviation than data set B, what would be different about their distributions?
Standard deviation is a measure of dispersion in statistics. "Dispersion" tells us how much the data is spread out. Specifically, it shows how far the data values are spread out around the mean, or average.
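If it helps to see the calculation, here is a minimal Python sketch (the data set is made up purely for illustration) that computes the mean and the standard deviation using the standard library:

import statistics

# A made-up data set, just for illustration
data = [52, 58, 60, 61, 63, 66, 70]

mean = statistics.mean(data)       # the average of the values
spread = statistics.pstdev(data)   # population standard deviation: the typical distance from the mean

print(f"mean = {mean:.2f}")
print(f"standard deviation = {spread:.2f}")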
If data set A has the larger standard deviation, its distribution is more spread out: the values sit farther, on average, from the mean, so a bell-shaped curve drawn over set A would look flatter and wider.
Data set B, with the smaller standard deviation, has values that are tightly clustered around its mean, so its curve would look narrower and more peaked. A concrete sketch of this contrast follows below.
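As that sketch (the two lists are invented for illustration), here are two small data sets with the same mean but very different spreads, compared in Python:

import statistics

# Both invented data sets have a mean of 60
set_a = [35, 45, 60, 75, 85]   # values far from the mean -> larger spread
set_b = [55, 58, 60, 62, 65]   # values close to the mean -> smaller spread

print(statistics.mean(set_a), statistics.mean(set_b))   # 60 and 60
print(statistics.pstdev(set_a))   # about 18.4 -- large standard deviation
print(statistics.pstdev(set_b))   # about 3.4  -- small standard deviation

Even though the two sets share the same mean, the larger standard deviation of set_a reflects how much farther its values sit from that mean.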
To see this in a practical scenario, consider test scores on two exams, each out of 100. Say the first exam has a mean of 60 and the second a mean of 90, and the standard deviations are 10 for the first exam and 15 for the second. What can we say about the tests? The results on the second exam were more spread out: the test takers' scores differed from one another far more than on the first exam, where the scores were packed more tightly around the mean.
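Assuming the scores on each exam are roughly bell-shaped (an assumption, since the numbers above are only an example), about two-thirds of scores fall within one standard deviation of the mean. A quick Python sketch of those ranges:

# Hypothetical exam figures from the example above: (mean, standard deviation)
exams = {"Exam 1": (60, 10), "Exam 2": (90, 15)}

for name, (mean, sd) in exams.items():
    # For roughly bell-shaped scores, about 68% fall within one standard deviation of the mean
    low, high = mean - sd, mean + sd
    print(f"{name}: most scores roughly between {low} and {high}")

The wider interval for the second exam is just the larger standard deviation expressed as a range of typical scores.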