# All about that Bayes - An Intro to Probability

#### RANDOM VARIABLES

In this world, things keep happening around us. Each event that occurs can be described by a Random Variable. A Random Variable represents an event, like an election, snow, or hail. Random variables have outcomes attached to them, and each outcome has a value between 0 and 1 - the likelihood of that outcome happening. We hear the outcomes of random variables all the time: "There is a 50% chance of precipitation", "The Seattle Seahawks have a 90% chance of winning the game".

#### SIMPLE PROBABILITY

Where do we get these numbers from? From past data.

| Year | 2008 | 2009 | 2010 | 2011 | 2012 | 2013 | 2014 | 2015 |
|------|------|------|------|------|------|------|------|------|
| Rain | Rainy | Dry | Rainy | Rainy | Rainy | Dry | Dry | Rainy |
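As a sketch of how such a number falls out of past data, we can count outcomes directly (plain Python, with the values copied from the table above):

```python
# Rain data for Seattle, 2008-2015, copied from the table above
rain = ["Rainy", "Dry", "Rainy", "Rainy", "Rainy", "Dry", "Dry", "Rainy"]

# Simple probability: favorable outcomes divided by total observations
p_rainy = rain.count("Rainy") / len(rain)
print(p_rainy)  # 5 rainy years out of 8 -> 0.625
```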

#### PROBABILITY OF 2 EVENTS

What is the probability of Event A and Event B happening together? Consider the following table, with data about the *Rain* and *Sun* Seattle received over the past few years.

| Year | 2008 | 2009 | 2010 | 2011 | 2012 | 2013 | 2014 | 2015 |
|------|------|------|------|------|------|------|------|------|
| Rain | Rainy | Dry | Rainy | Rainy | Rainy | Dry | Dry | Rainy |
| Sun | Sunny | Sunny | Sunny | Cloudy | Cloudy | Cloudy | Sunny | Sunny |

*Table 1*

Using the above information, can you compute the probability that it will be Sunny and Rainy in 2016?

We can get this number easily from the **Joint Distribution**:

| SUN \ RAIN | Rainy | Dry |
|------------|-------|-----|
| **Sunny**  | 3/8   | 2/8 |
| **Cloudy** | 2/8   | 1/8 |

*Table 2*

In 3 out of the 8 examples above, it is *Sunny* and *Rainy* at the same time. Similarly, in 1 out of 8 times it is *Cloudy* and *Dry*. So we can compute the probability of multiple events happening at the same time using the *Joint Distribution*. If there are more than 2 variables, the table will have a higher dimension.
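A minimal sketch of building the joint distribution from the raw observations, using `Counter` to tally each (Sun, Rain) pair and `Fraction` to keep the probabilities exact:

```python
from collections import Counter
from fractions import Fraction

# Observations copied from Table 1
rain = ["Rainy", "Dry", "Rainy", "Rainy", "Rainy", "Dry", "Dry", "Rainy"]
sun  = ["Sunny", "Sunny", "Sunny", "Cloudy", "Cloudy", "Cloudy", "Sunny", "Sunny"]

# Joint distribution: count each (Sun, Rain) pair and divide by the total
n = len(rain)
joint = {pair: Fraction(count, n) for pair, count in Counter(zip(sun, rain)).items()}

print(joint[("Sunny", "Rainy")])   # 3/8
print(joint[("Cloudy", "Dry")])    # 1/8
```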

We can extend this table further to include **Marginalization**. *Marginalization* is just a fancy word for adding up all the probabilities in each row, and in each column, respectively.

| SUN \ RAIN | Rainy | Dry   | Margin |
|------------|-------|-------|--------|
| **Sunny**  | 0.375 | 0.25  | 0.625  |
| **Cloudy** | 0.25  | 0.125 | 0.375  |
| **Margin** | 0.625 | 0.375 | 1      |

*Table 3*

Why are margins helpful? They remove the effect of one of the two events in the table. So, if we want to know the probability that it will rain (irrespective of any other event), we can read it off the marginal row as 0.625. From Table 1, we can confirm this by counting the individual instances of rain: 5/8 = 0.625.

#### CONDITIONAL PROBABILITY

What do we do when one of the outcomes is already given to us? On this new day in 2016, it is very sunny, but what is the probability that it will rain?

This is written as $P(\text{Rain} \mid \text{Sun})$, which is read as: the probability that it will rain, given that there is sun.

This is computed in the same way as we compute normal probability, but we will just look at the cases where Sun = Sunny in Table 1. There are 5 instances of Sun = Sunny in Table 1, and in 3 of those cases Rain = Rainy. So the probability is $P(\text{Rain} \mid \text{Sun}) = 3/5 = 0.6$.

We can also compute this from Table 3. The total probability of Sun is 0.625 (Row 1 marginal probability), and the probability of Rain and Sun together is 0.375.

$$P(\text{Rain} \mid \text{Sun}) = \frac{P(\text{Rain}, \text{Sun})}{P(\text{Sun})} = \frac{0.375}{0.625} = 0.6$$
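The same division, sketched over the joint values from the table above:

```python
# Joint distribution values from the table above: (Sun, Rain) -> probability
joint = {
    ("Sunny", "Rainy"): 0.375, ("Sunny", "Dry"): 0.25,
    ("Cloudy", "Rainy"): 0.25, ("Cloudy", "Dry"): 0.125,
}

# P(Rain | Sun) = P(Rain, Sun) / P(Sun)
p_sunny = joint[("Sunny", "Rainy")] + joint[("Sunny", "Dry")]  # marginal P(Sun)
p_rain_given_sun = joint[("Sunny", "Rainy")] / p_sunny
print(p_rain_given_sun)  # 0.375 / 0.625 = 0.6
```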

#### DIFFERENCE BETWEEN CONDITIONAL AND JOINT PROBABILITY

Conditional and Joint probability are often mistaken for each other because of the similarity in their naming convention. So what is the difference between $P(A, B)$ and $P(A \mid B)$?

The first is Joint Probability and the second is Conditional Probability.

Joint probability computes the probability of 2 events happening together. In the case above - what is the probability that Event A and Event B both happen together? We do not know whether either of these events actually happened, and are computing the probability of both of them happening together.

Conditional probability is similar, but with one difference - we already know that one of the events (e.g. Event B) did happen. So we are looking for the probability of Event A when we know that Event B already happened, i.e. when the probability of Event B is 1. This is a subtle but significant difference in how we look at things.

#### BAYES RULE

From the definition of conditional probability, we can write the joint probability in two ways:

$$P(A, B) = P(A \mid B)\,P(B)$$

$$P(A, B) = P(B \mid A)\,P(A)$$

Equating the two and dividing by $P(B)$:

$$P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}$$

This is the **Bayes Rule**.

Bayes Rule is interesting, and significant, because we can use it to discover the conditional probability of something using the conditional probability going the other direction. For example: to find the probability of death given smoking, $P(\text{death} \mid \text{smoker})$, we can get this unknown from $P(\text{smoker} \mid \text{death})$, which is much easier to collect data for, as it is easier to find out whether a person who died was a smoker or a non-smoker.
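Bayes Rule is one line of code. A minimal helper, sanity-checked here by reversing the weather conditional from earlier (both marginals in Table 3 happen to be 0.625):

```python
def bayes_rule(p_b_given_a, p_a, p_b):
    """P(A | B) = P(B | A) * P(A) / P(B)."""
    return p_b_given_a * p_a / p_b

# Reverse the direction of the weather conditional:
# P(Sun | Rain) from P(Rain | Sun) = 0.6, P(Sun) = 0.625, P(Rain) = 0.625
print(bayes_rule(0.6, 0.625, 0.625))  # ≈ 0.6
```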

Let's look at some real examples of probability in action. Consider a prosecutor who wants to know whether to charge someone with a crime, given the forensic evidence of fingerprints and the town's population.

The data we have is the following:

- One person in a town of 100,000 committed a crime. The probability that a given person is guilty is $P(G) = 1/100{,}000 = 0.00001$, where $P(G)$ is the probability of a person being guilty of having committed the crime.
- The forensics experts tell us that if someone commits a crime, they leave behind fingerprints 99% of the time: $P(F \mid G) = 0.99$, where $P(F \mid G)$ is the probability of fingerprints, given that the crime was committed.
- There are usually 3 people's fingerprints in any given location, so $P(F) = 3/100{,}000 = 0.00003$. This is because only 3 out of the 100,000 people in town could have their fingerprints at the scene.

We need to compute $P(G \mid F)$ - the probability that the suspect is guilty, given that their fingerprints were found.

Using *Bayes Rule* we know that:

$$P(G \mid F) = \frac{P(F \mid G)\,P(G)}{P(F)}$$

Plugging in the values that we already know:

$$P(G \mid F) = \frac{0.99 \times 0.00001}{0.00003} = 0.33$$

This is a good enough probability to get in touch with the suspect and get his side of the story. However, when the prosecutor talks to the detective, the detective points out that the suspect actually lives at the crime scene. This makes it highly likely to find the suspect's fingerprints in that location, and the new probability of finding fingerprints becomes $P(F) = 0.99$.

Plugging those values back into the Bayes Rule formula, we get:

$$P(G \mid F) = \frac{0.99 \times 0.00001}{0.99} = 0.00001$$

So it completely changes the probability of the suspect being guilty.
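The prosecutor's two computations, sketched in Python:

```python
def bayes_rule(p_f_given_g, p_g, p_f):
    """P(G | F) = P(F | G) * P(G) / P(F)."""
    return p_f_given_g * p_g / p_f

p_g = 1 / 100_000        # one guilty person in a town of 100,000
p_f_given_g = 0.99       # criminals leave fingerprints 99% of the time
p_f = 3 / 100_000        # roughly 3 people's fingerprints at any location

print(bayes_rule(p_f_given_g, p_g, p_f))    # ≈ 0.33

# The suspect lives at the crime scene, so finding their
# fingerprints there is almost certain: P(F) = 0.99
print(bayes_rule(p_f_given_g, p_g, 0.99))   # ≈ 0.00001
```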

This example is interesting because we computed the probability $P(G \mid F)$ using the probability $P(F \mid G)$. This is because we have more data from previously solved crimes about how many people actually leave fingerprints behind, and the correlation of that with them being guilty.

Another motivation for using conditional probability is that the conditional probability in one direction is often less stable than the conditional probability in the other direction. For example, the probability of a disease given a symptom, $P(D \mid S)$, is less stable than the probability of a symptom given the disease, $P(S \mid D)$.

So, consider a situation where you think that you might have a horrible disease, *Severenitis*. You know that Severenitis is very rare, and the probability that someone actually has it is 0.0001. There is a test for it that is 99% accurate. You go get the test, and it comes back positive. You think, "oh no! I am 99% likely to have the disease". Is this correct? Let's do the math.

Let $P(H \leftarrow w)$ be the probability of Health being *well*, and $P(H \leftarrow s)$ be the probability of Health being *sick*. Similarly, let $P(T \leftarrow p)$ be the probability of the Test being *positive* and $P(T \leftarrow n)$ be the probability of the Test being *negative*.

We know that the probability you have the disease is low: $P(H \leftarrow s) = 0.0001$. We also know that the test is 99% accurate. What does this mean? It means that if you are sick, the test will accurately detect it 99% of the time: $P(T \leftarrow p \mid H \leftarrow s) = 0.99$.

We need to find the probability that you are *sick* given that the test is *positive*, or $P(H \leftarrow s \mid T \leftarrow p)$.

Using Bayes Rule:

$$P(H \leftarrow s \mid T \leftarrow p) = \frac{P(T \leftarrow p \mid H \leftarrow s)\,P(H \leftarrow s)}{P(T \leftarrow p)}$$

We know the numerator, but not the denominator. However, it is easy enough to compute the denominator using some clever math!

We know that the total probability of the test coming back positive is the sum over both health states:

$$P(T \leftarrow p) = P(T \leftarrow p \mid H \leftarrow s)\,P(H \leftarrow s) + P(T \leftarrow p \mid H \leftarrow w)\,P(H \leftarrow w)$$

Since the test is 99% accurate, it gives a false positive for a well person only 1% of the time: $P(T \leftarrow p \mid H \leftarrow w) = 0.01$. Therefore:

$$P(T \leftarrow p) = 0.99 \times 0.0001 + 0.01 \times 0.9999 = 0.010098$$

Substituting this back into the Bayes Rule formula, we get:

$$P(H \leftarrow s \mid T \leftarrow p) = \frac{0.99 \times 0.0001}{0.010098} \approx 0.0098$$

So even with a positive result from a 99% accurate test, you are less than 1% likely to actually have the disease.

This is the reason why doctors are hesitant to order expensive tests if it is unlikely that you have the disease. Even though the test is accurate, rare diseases are so rare that their very rarity dominates the accuracy of the test.
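The Severenitis arithmetic, sketched in Python (assuming, as above, that 99% accuracy also means a 1% false-positive rate):

```python
p_sick = 0.0001              # P(H <- s): the disease is very rare
p_well = 1 - p_sick          # P(H <- w)
p_pos_given_sick = 0.99      # the test catches the disease 99% of the time
p_pos_given_well = 0.01      # assumed 1% false-positive rate

# Total probability of a positive test, summed over both health states
p_pos = p_pos_given_sick * p_sick + p_pos_given_well * p_well

# Bayes Rule: P(sick | positive)
p_sick_given_pos = p_pos_given_sick * p_sick / p_pos
print(p_sick_given_pos)  # ≈ 0.0098, i.e. under 1%
```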

#### NAIVE BAYES

When someone applies *Naive Bayes* to a problem, they are assuming *conditional independence* of all the events. This means:

$$P(B, C, D, \ldots \mid A) = P(B \mid A)\,P(C \mid A)\,P(D \mid A)\cdots$$

When this is plugged into Bayes Rule:

$$P(A \mid B, C, D, \ldots) = \frac{P(B \mid A)\,P(C \mid A)\,P(D \mid A)\cdots P(A)}{P(B, C, D, \ldots)}$$

Let $P(B, C, D, \ldots) = \alpha$, which is the normalization constant. Then:

$$P(A \mid B, C, D, \ldots) = \frac{1}{\alpha}\,P(A)\,P(B \mid A)\,P(C \mid A)\,P(D \mid A)\cdots$$

What we have done here is assume that the events B, C, D, etc. are not dependent on each other given A, thereby reducing a very high dimensional table to several low dimensional tables. If we have 100 features, and each feature can take 2 values, we would need a table of size $2^{100}$. However, by assuming independence of events, we reduce this to one hundred 4-element tables.

The Naive Bayes assumption is rarely ever true, but it often works because we are not interested in the exact probability - only in the fact that the correct class gets the highest probability.
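A toy sketch of a Naive Bayes classification. The class priors and the *Sunny* likelihoods follow the earlier weather tables; the *Cold* feature and its numbers are invented purely for illustration:

```python
# Class priors P(Rain) and per-feature likelihood tables P(feature | Rain).
# "Sunny" values follow the earlier tables; "Cold" values are invented.
priors = {"Rainy": 0.625, "Dry": 0.375}
likelihoods = {
    "Rainy": {"Sunny": 0.60, "Cold": 0.70},
    "Dry":   {"Sunny": 0.67, "Cold": 0.40},
}

def score(cls, features):
    # (1/alpha) * P(cls) * product of P(f | cls); alpha is the same for
    # every class, so it can be dropped when we only compare scores
    s = priors[cls]
    for f in features:
        s *= likelihoods[cls][f]
    return s

observed = ["Sunny", "Cold"]
scores = {c: score(c, observed) for c in priors}
best = max(scores, key=scores.get)
print(best)  # "Rainy" wins: 0.625 * 0.6 * 0.7 > 0.375 * 0.67 * 0.4
```

Because we only compare the per-class scores, the normalization constant never needs to be computed at all.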
