
How do You Calculate a Confidence Interval?

Have you ever used the phrase “give or take” as in, “I’ll be there at 6 o’clock give or take five minutes”? That common phrase means that an acceptable time to arrive would be between 5:55 and 6:05. In other words, we can be confident that the time you actually arrive will be within that interval. In statistics, we call these give or take intervals confidence intervals.

Calculating a Confidence Interval

A confidence interval represents the upper and lower bounds of the range of error we are willing to accept for a single measure in relation to the other measurements in the sample or population. Calculating a confidence interval requires knowing both the sample mean and the standard error.

When measuring the mean of a sample, there is always some error that affects how well the sample mean estimates the true population mean. We call this error the standard error (SE). You could consider this the "give-or-take" amount.

Determining the confidence interval depends on the sample mean, the standard error (SE), and a z-score of +/- 1.96 representing 95% of the sample distribution curve. It is calculated using the following equation:

confidence interval = sample mean ± (1.96 × SE)

This means that if a measure falls within give-or-take (1.96 × SE) of the true mean, μ, then that individual measure is not statistically different from the population. It's when a measure and its confidence interval do NOT contain the true mean that we say the measure is statistically different.
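The formula above is easy to sketch in code. This is a minimal illustration (the mean and SE values here are made-up numbers, not from the article):

```python
Z_95 = 1.96  # z-score covering 95% of a normal distribution

def confidence_interval(mean, se, z=Z_95):
    """Return the (lower, upper) bounds of the confidence interval."""
    margin = z * se
    return (mean - margin, mean + margin)

# Hypothetical example: sample mean of 100.0 with a standard error of 2.0
lower, upper = confidence_interval(mean=100.0, se=2.0)
print(lower, upper)  # 96.08 103.92
```

Passing a different z-score (e.g., 2.576 for 99%) would widen the interval accordingly.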

In the Real World

There is a sandwich shop near my house that sells foot-long subs. Well, at least they say the subs are 12 inches, but they don’t really have any data to back that up. So, being the nerd that I am, I pack up my trusty ruler and calculator and head off to collect some data.

Following extensive measurements and a full stomach, I have measured 30 subs and come up with a sample mean of 11.9" and a standard error of 0.5". This means that we could call a sandwich a "foot-long" as long as its length (X) is between 10.9" and 12.9", and be 95% confident that we're right.
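Plugging the sub-shop numbers from above into the formula shows where those bounds come from:

```python
# Sample mean and standard error from the sandwich measurements
mean, se = 11.9, 0.5

margin = 1.96 * se                       # 0.98
lower, upper = mean - margin, mean + margin
print(round(lower, 2), round(upper, 2))  # 10.92 12.88
```

Rounded to one decimal place, that gives the 10.9" to 12.9" range quoted above.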

Let’s take a look at our results:

The blue lines show each measurement along with its confidence interval. Notice that most of the measured values, with their error included, contain the population mean (μ). However, also notice that some lines lie entirely above or below the mean.

This means that their individual confidence intervals (i.e., their measured value +/- (1.96 x 0.5")) don't include the population mean. Those measurements are statistically different from the population, which may or may not be a good thing depending on how long you prefer your sub sandwich.
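That check can be sketched in code, too. The sub lengths below are hypothetical (the article doesn't list the individual measurements), and the "true" population mean of 12" is assumed from the shop's foot-long claim:

```python
Z_95 = 1.96
SE = 0.5         # standard error from the sandwich example
POP_MEAN = 12.0  # assumed "true" foot-long length

# Hypothetical individual sub measurements, in inches
measurements = [11.9, 12.1, 10.8, 13.2, 11.95]

# Flag any sub whose individual confidence interval misses the mean
flagged = []
for x in measurements:
    lower, upper = x - Z_95 * SE, x + Z_95 * SE
    if not (lower <= POP_MEAN <= upper):
        flagged.append(x)

print(flagged)  # [10.8, 13.2] -- statistically different from 12"
```

The 10.8" and 13.2" subs are the ones whose blue lines would lie entirely below or above the mean in the chart.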

The real goal of confidence intervals is to determine whether a measure (be it sample mean or individual measure) is statistically different from the population mean. This is a central component of hypothesis testing. But that is for another post.
