I am addicted to caffeine. I drink coffee and sodas, and I even like to wake up with a little bit of chocolate. Sadly, my hands start to shake shortly after I drink something with caffeine. But I have always wondered which drink makes my hands shake more: coffee or soda. The best way to test this is to determine the difference between the groups using an independent sample t-test.
What is a t-Test
In the case of social science and other forms of research, the researcher wants to determine the difference between two groups. For example, let’s say that researchers want to determine whether a new drug works. They would need a control group and an experimental group.
In the case of experimental designs, we operate under the idea that some sort of treatment will cause an outcome. So we compare a group that does receive the treatment to a similar one that does not. The group that receives the treatment is the experimental group and the one that does not is called the control group.
A t-test allows you to compare the two groups statistically based on some measure. In basic terms, the t-test compares the means and errors of two groups to determine if they are significantly different.
Terms to Know
There are also a few terms that help you to understand what makes a t-test tick.
The mean is the average of a group’s particular measures. You use the difference between the two means to determine if they are different enough to be significant.
The standard error of difference is the combined standard error of the two groups. It is determined by the standard error of each group with respect to the number of participants in each group.
The overall equation for a t-test is

t = (M1 − M2) / SEdiff

where M1 and M2 are the means of the two groups and SEdiff is the standard error of the difference.
However, the sample size of the two groups makes all the difference when it comes to t-tests. This matters because the sample sizes set the degrees of freedom, and the degrees of freedom determine the value of the t-statistic necessary for significance, but we'll get to that in a minute.
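To make the arithmetic concrete, here is a minimal sketch of the pooled-variance (Student's) independent sample t-test in Python. The function name and the way the pieces are split up are my own choices for illustration; the formula itself is the one described above.

```python
import math

def independent_t(sample1, sample2):
    """Pooled-variance independent sample t-test -- a sketch, not a library."""
    n1, n2 = len(sample1), len(sample2)
    m1 = sum(sample1) / n1
    m2 = sum(sample2) / n2
    # sample variances (dividing by n - 1)
    v1 = sum((x - m1) ** 2 for x in sample1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in sample2) / (n2 - 1)
    # pooled variance, then the standard error of the difference
    pooled = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    se_diff = math.sqrt(pooled * (1 / n1 + 1 / n2))
    t = (m1 - m2) / se_diff
    df = n1 + n2 - 2  # degrees of freedom, as discussed below
    return t, df
```

Notice that the sample sizes show up twice: once inside the standard error of the difference and once in the degrees of freedom.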
When it comes to comparing two groups, there are a couple of ways to think about it.
First, you can consider the same group receiving some sort of treatment. Here, it is best to think of the sample measurements before the treatment and the sample measurements after the treatment, so the same participants are in both the before group and the after group. This kind of setup requires a dependent sample t-test, in which the same participants are in both groups.
Second, there can be different participants (or a different number of participants) in the two groups. This type of test is called an independent sample t-test. This is definitely not a case of before and after treatment, because some participants are not in both groups. This is the kind of setup that would be required for testing control-experimental designs.
One way to visualize the difference is shown below.

[Image: dependent sample design vs. independent sample (control-experimental) design]
I want to be sure to clarify that it is possible to have two groups receive separate treatments with an independent sample t-test. The image above shows a typical control-experimental setup, but it is just as easy to set up two experimental treatments. As long as the groups are different in some way, then an independent sample t-test is the way to go.
Is There a Difference
Let’s try a real example. Earlier, I mentioned that I like caffeine, but it makes my hands shake. In this case, I want to see whether soda makes hands shaky for more or less time than coffee. So my treatment is drinking soda or coffee, and my measurement is how long the hands are shaky.
In this case, the two different treatments are drinking coffee and drinking soda. The outcome that I will be measuring is the length of time that hands are shaky after drinking the different drinks. The next step is to gather up some participants.
When it comes to participants, the gold standard of statistical and experimental design is to have groups of randomly selected individuals. This actually helps to explain the error since the fluctuations in measurements are random. So, I have two groups of randomly selected participants. In the coffee-drinking group, I have 7 participants and in the soda-drinking group, I have 6.
After the participants drink their respective drinks, I measure the length of time that their hands are shaky and get the following data:
When you plug these numbers into the t-test formula, you come up with a value of -2.99. But what does that number mean? To figure that out, you need a t-table.
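In practice, you would rarely do this arithmetic by hand. Here is a sketch using SciPy's `ttest_ind`, which runs the pooled-variance test and also returns the p-value directly. The measurements below are made up for illustration; they are not the actual data from my experiment and will not reproduce the -2.99 above.

```python
from scipy import stats

# hypothetical minutes of shakiness -- invented numbers, for illustration only
coffee = [18, 22, 25, 20, 24, 21, 23]  # 7 participants
soda = [26, 28, 25, 30, 27, 29]        # 6 participants

t_stat, p_value = stats.ttest_ind(coffee, soda)  # pooled-variance t-test
print(t_stat, p_value)
```

The sign of the t-statistic just tells you which group's mean was larger; it is the absolute value that gets compared against the critical value.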
The t-table is a table of t-values based on a t-distribution. Essentially, the distribution yields a table of critical values that represent the lowest value of a t-test that would be considered significantly different from the population (or group) mean. For example, when you look at the t-table below, you can see a table of values in several columns.
The row along the top indicates the level of significance that you are seeking. This represents how significantly different the scores must be from each other. Researchers usually accept a level of significance of 0.05 (5%) or lower.
The first column of the table represents the degrees of freedom present in the sample. Usually, the degrees of freedom are the sample size minus one (N – 1 = df). In the case of a t-test, there are two samples, so the degrees of freedom are N1 + N2 – 2 = df.
Once you determine the significance level (first row) and the degrees of freedom (first column), the intersection of the two in the chart is the critical value for your particular study. In the case of the shaky hands, the critical value is 2.201. Since the absolute value of our experimental t-statistic is higher than the critical value, our two groups are significantly different. If the value had been lower, then the difference between our two groups would not be statistically significant.
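If you would rather compute the critical value than look it up, SciPy's t distribution can do it. A sketch for a two-tailed test at the 0.05 level with our degrees of freedom:

```python
from scipy import stats

alpha = 0.05
df = 7 + 6 - 2  # N1 + N2 - 2 = 11
# two-tailed test: put alpha/2 in each tail of the t-distribution
critical = stats.t.ppf(1 - alpha / 2, df)
print(round(critical, 3))  # 2.201

# decision rule: significant if |t| exceeds the critical value
print(abs(-2.99) > critical)  # True
```

This matches the 2.201 from the table, and confirms that |-2.99| clears the bar.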
Hence, you can report that drinking soda has a significantly different effect on the length of time hands are shaky than coffee, t(11) = -2.99, p < 0.05.
A t-test is a means of comparing two means. It is most often used to determine the difference between groups that have received some kind of treatment. An independent sample t-test determines the difference between two groups with different participants in each group. The test itself is the difference between means in terms of the shared standard error. Once a number has been calculated, it is compared to a table of critical values to determine its significance.
I hope that this helps, and I look forward to seeing your questions below. Happy statistics!