In probability theory, **independent** means something quite similar to its everyday meaning. If you’re an *independent* type of person, then the opinions or actions of others shouldn’t affect how you behave; if you’re *independently* wealthy, you have enough money that you don’t need to work (lucky you!).

When it comes to probability, **independence** means that when one event occurs, it has no impact on the probability of another event occurring. Here’s an example. Let’s say that we have two events, A and B.

A = it rains in Tokyo

B = a Little League game is cancelled in Poughkeepsie, NY

The two events are **independent**; if it rains in Tokyo, this will have no impact on the probability that a Little League game is cancelled in upstate New York. However, if we change the events as follows:

A = it rains in Poughkeepsie, NY

B = a Little League game is cancelled in Poughkeepsie, NY

We now have **dependent events**. If it rains in Poughkeepsie, the probability that the Little League game gets cancelled increases greatly. Now that we have a conceptual basis for independence, let’s look at a few classic examples.

## Coin flips: A classic example of independent events

When you flip a fair coin there is a 50% chance it lands heads and a 50% chance it lands tails. Now, if you flip a coin nine times, and it lands heads each time, what is the probability it lands tails on the **tenth toss**?

Answer: Still 50%! Nine heads in a row is quite unlikely, but it has already occurred, and because each coin toss is an independent event, the outcomes of the previous flips have no impact on the tenth flip. Thus, the probability for each individual toss, regardless of what came before, is 50/50.

This is a classic case of independence—the fact that certain events have occurred (in this case, nine heads being tossed in a row) has no impact on the probability of a subsequent event. However, for those untrained in probability theory, the occurrence of a “hot streak” can cause all kinds of fallacious thinking. We will look at this more closely in the next section. For now, let’s explore the concept of independent probability a bit further by applying a bit of math.
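If you'd like to see this empirically, here's a short Python sketch (the simulation setup and variable names are my own, not from the original article) that simulates many ten-flip sequences and checks that, among the sequences starting with nine heads, the tenth flip still lands tails about half the time:

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

# Simulate many ten-flip sequences; among those that begin with nine
# heads, count how often the tenth flip lands tails.
trials = 1_000_000
nine_heads = 0
tenth_tails = 0

for _ in range(trials):
    flips = [random.random() < 0.5 for _ in range(10)]  # True = heads
    if all(flips[:9]):          # first nine flips were all heads
        nine_heads += 1
        if not flips[9]:        # tenth flip was tails
            tenth_tails += 1

print(nine_heads)                # roughly trials / 512
print(tenth_tails / nine_heads)  # close to 0.5, despite the "hot streak"
```

Even conditioned on a nine-head streak, the tenth flip behaves like any other fair-coin flip.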

## The multiplication rule for independent events

In our coin-toss example, the two possible outcomes of a single flip are mutually exclusive, or disjoint, meaning they cannot occur simultaneously (that is, you can’t flip a coin once and have it land both heads and tails). Successive flips, however, are independent events, and for independent events there is a simple multiplication rule. Two events, A and B, are independent if and only if

P(A, B) = P(A) × P(B)

To untangle this a bit, this means that the probability of both events occurring, P(A, B), is equal to the product of each event occurring on its own, P(A) × P(B). Let’s apply this to our coin tosses by defining our events as follows:

H = a fair coin is tossed and lands heads

T = a fair coin is tossed and lands tails

As we already know, our probabilities for a fair coin are:

P(H) = 0.5

P(T) = 0.5

Now, if we toss a coin twice in a row, there are four possible outcomes:

**H, H**

**H, T**

**T, H**

**T, T**

Thus, the probability of any one of these outcomes should be 1 in 4, or 0.25. We can confirm this by applying the rule above, calculating the probability of getting heads on the first flip and then tails on the second. Applying our rule for independence, we get:

P(H, T) = P(H) × P(T) = 0.5 × 0.5 = 0.25

Applying this rule to any of the other permutations (H, H; T, H; T, T) will also yield 0.25.
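As a quick sanity check, the multiplication rule for two flips can be sketched in a few lines of Python (a minimal illustration, assuming a fair coin):

```python
from itertools import product

# Fair-coin probabilities for a single flip
p = {"H": 0.5, "T": 0.5}

# Multiplication rule for independent flips:
# P(first, second) = P(first) * P(second)
probs = {first + second: p[first] * p[second]
         for first, second in product("HT", repeat=2)}

print(probs)                # each of HH, HT, TH, TT comes out to 0.25
print(sum(probs.values()))  # the four outcomes together cover everything: 1.0
```

Each of the four sequences gets probability 0.25, and together they account for all possible outcomes.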

Now let’s go back to our (somewhat outlandish) example. What is the probability of getting nine heads in a row? That would be:

P(9 heads in a row) = 0.5^9 = 1/512 ≈ 0.002

or approximately 1 in 500.

That is, if you repeated the nine-toss experiment *500 times*, you’d expect to see nine heads in a row just once. Highly unlikely, but not impossible. Beware, however, the fallacy of the “hot streak”: even after the coin lands heads nine times in a row, the probability that the next toss lands heads remains unchanged at 0.5. This is the very basis of independence.
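The arithmetic above is easy to verify directly (a one-off check, assuming a fair coin):

```python
# Probability of nine heads in a row with a fair coin
p_nine_heads = 0.5 ** 9

print(p_nine_heads)      # 0.001953125, i.e. exactly 1/512
print(1 / p_nine_heads)  # 512.0, roughly the "1 in 500" quoted above
```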

## A conclusion: “Hot hands” in basketball and the Gambler’s Fallacy

In basketball, a “hot hand” refers to a player who has not missed a basket in a long streak of shots. Such a streak often leads sports commentators to (wrongly) conclude that because the player is “on fire,” he or she is likelier to make subsequent shots. From the perspective of probability theory, such talk is nonsense, because each shot in basketball is independent of the last. In the world of sports commentary, this nonsense is mostly harmless; however, in the world of gambling, when real money is at stake, such faulty logic can be devastating.

Consider the infamous events at Monte Carlo Casino on August 18, 1913. On that date, a roulette wheel landed on black 26 times in a row—an event with a probability of about 1 in 67 million. As the wheel landed on black after black after black, the gamblers at the table (who must have lacked any understanding of independence) wrongly concluded that the next spin *must* yield red, to correct the apparent “imbalance” of the wheel. (Note that this is the inverse of the “hot hand” fallacy, but just as misguided—the occurrence of the same event over and over led the gamblers to wrongly believe that the opposite event *had to occur*.)

Of course, as we now know from our coin flip example, the probability that the next spin would land on red or black was the same as it always was, as each spin is independent of all previous spins of the wheel. This misunderstanding cost these gamblers millions of francs, as they continued to bet on the next spin being red, while the wheel continued to come up black.
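The quoted odds for the Monte Carlo streak can be checked the same way. Note that this sketch treats black as a 50/50 outcome, which is what the “1 in 67 million” figure assumes; a real roulette wheel’s green zero pocket makes black slightly less likely, so the true streak probability is even smaller:

```python
# Probability of 26 blacks in a row, treating black as a 50/50 outcome.
# (A real wheel has a green zero pocket, so P(black) is a bit below 0.5.)
p_black = 0.5
streak = p_black ** 26

print(streak)      # about 1.49e-08
print(1 / streak)  # 67108864.0, i.e. about 1 in 67 million
```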

What can we conclude from this? For one, having a solid understanding of independent events will clarify your thinking about events in the real world. And who knows—if you’re a gambler, it might save you a fortune!

Need more practice with independent events? Check out our statistics lessons and videos!
