When you start out with statistics, there are a lot of terms that can be super confusing. Take mean, median, and mode, for example: they sound similar but mean completely different things, yet they are central to understanding how statistical models and methods work. Another pair of terms just as central are **range** and **standard deviation**.

## Home on the Range

When we think about it in mathematical terms, *range* is a pretty straightforward term. It means the distance between the highest value and the lowest value. Let's take a look at three data sets for an idea of their ranges.

The mean of each data set is the same, so we may be tempted to think that the data are the same. But a look at the range says otherwise. In the first dataset, *X₁*, the range is 25 − 5 = 20, while dataset *X₃* has a range of 90 − (−60) = 150! This represents vast differences in the data that we have to account for in some way.

The range also represents the *variability* of the data. Datasets with a large range are said to have large variability, while datasets with smaller ranges are said to have small variability. Generally, smaller variability is better because it represents more precise measurements and yields more accurate analyses.

The range is a descriptive term that is useful for describing data. Its chief use is in calculating quartiles and interquartile range. But while range is a good gauge of the variability of the data, there is a more accurate and useful one: *standard deviation*.
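Because the range is just the distance between the highest and lowest values, it is a one-liner to compute. Here's a minimal sketch in Python; the dataset is hypothetical, chosen only to share *X₁*'s low of 5 and high of 25:

```python
def data_range(values):
    """Return the range: the highest value minus the lowest value."""
    return max(values) - min(values)

# Hypothetical data with the same extremes as the X1 example (5 and 25)
x1_like = [5, 12, 15, 18, 25]
print(data_range(x1_like))  # 25 - 5 = 20
```

Note that only the two extreme values matter; everything in between has no effect on the range, which is exactly why it is a coarser measure than the standard deviation.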

## Good Ol’ Standard Deviation

Standard deviation is the standard way that we understand and report variability. The most awesome thing about standard deviation is that we can use it not only to describe data but also to conduct further analyses such as ANOVA or multiple linear regression.

Standard deviation is a reliable method for determining how variable the data is for both a sample and a population. Of course, we cannot truly know the standard deviation for a population, but with the standard deviation of a sample, we can infer it.
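Python's standard library makes the sample-versus-population distinction concrete: `statistics.stdev` gives the sample estimate (dividing the sum of squares by *n* − 1), while `statistics.pstdev` treats the data as the entire population (dividing by *n*). A quick sketch with hypothetical data:

```python
import statistics

sample = [12, 14, 15, 16, 18]  # hypothetical sample with a mean of 15

s = statistics.stdev(sample)       # sample estimate, divides by n - 1
sigma = statistics.pstdev(sample)  # population value, divides by n

# The n - 1 divisor makes the sample estimate a bit larger here
print(s > sigma)  # True
```

The *n* − 1 divisor (Bessel's correction) compensates for the fact that a sample tends to underestimate how spread out the full population really is.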

The deviation is how much a score varies from the overall mean of the data. In the case of our example data, it would be how much each value differs from the mean of 15. We generally use *s* to represent the standard deviation. For our data, each deviation is simply the value minus the mean of 15.

Just like the range, the larger the difference between the highest and lowest values, the greater the deviation and the higher the variability. On a side note, your deviations should always add up to zero.
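You can check that sum-to-zero property yourself. A quick sketch with hypothetical values whose mean is 15:

```python
values = [12, 14, 15, 16, 18]  # hypothetical data with a mean of 15
mean = sum(values) / len(values)

# Each deviation is the value minus the mean
deviations = [v - mean for v in values]

print(deviations)       # [-3.0, -1.0, 0.0, 1.0, 3.0]
print(sum(deviations))  # 0.0 -- the positives and negatives cancel out
```

This cancellation is exactly why the raw deviations can't serve as a summary of variability on their own.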

### Calculating Standard Deviation

It may seem odd that the deviation scores add up to zero while the standard deviation is a non-zero value. This is because of the way that standard deviation is calculated: from a sum of *squared* deviations, which are never negative, instead of the raw deviation scores. The formula for the sample standard deviation looks like

*s* = √( Σ(*xᵢ* − *x̄*)² / (*n* − 1) )
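As a sketch, that formula translates directly into code. The dataset below is hypothetical, since the original example values aren't reproduced here:

```python
import math

def sample_sd(values):
    """Sample standard deviation: sqrt of the sum of squared
    deviations from the mean, divided by n - 1."""
    n = len(values)
    mean = sum(values) / n
    squared_deviations = [(v - mean) ** 2 for v in values]
    return math.sqrt(sum(squared_deviations) / (n - 1))

print(round(sample_sd([12, 14, 15, 16, 18]), 2))  # 2.24
```

Squaring does double duty: it keeps every term positive (so nothing cancels) and it weights large deviations more heavily than small ones.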

So, for our *X₁* dataset, the standard deviation is 7.9 while *X₃*'s is 54.0. This represents a HUGE difference in variability. The standard deviation for *X₂* is 1.58, which indicates much less deviation.

*But John, how much standard deviation is too much?*

Another great question, and one that I wish I had a hard and fast answer for. In general, the closer your standard deviation is to zero, the less variability there is in your data. That would mean that your values are relatively close to the mean, just like we see in the *X₂* dataset. One rule of thumb that I was taught early in my statistics career was that a *good* standard deviation should be smaller than the value of the mean. So *X₃*'s standard deviation of 54.0 is definitely NOT good. It means that, on average, the values differ wildly from the mean.

Ultimately, both the range and the standard deviation give you an idea about the variability of your data, or how much each value differs from the mean. The smaller your range or standard deviation, the lower your variability and the better suited your data are for further analysis. The range is useful, but the standard deviation is considered the more reliable and useful measure for statistical analyses. In any case, both are necessary for truly understanding patterns in your data. Happy statistics!
