[Question] What’s the big deal about Bayes’ Theorem?
I guess that this kind of question gets asked (and answered) a lot. But I’ve tried to read a few posts here about Bayes’ Theorem and they seem to talk about slightly different things than the question that is bothering me. Maybe I should’ve read a few more, but since I’m also interested in how people use this theorem in their everyday life, I’ve decided to ask the question anyway.
Bayes’ Theorem is a (not very) special case of this nameless theorem:
If D, E and F are mutually-exclusive events with non-zero probabilities d, e and f respectively, then d/(d+e) = d/(d+f) × (d+f)/(d+e).
Which is true because that’s how real numbers work. To translate this theorem into a more familiar form, you can simply replace D with A∧B, E with ¬A∧B and F with A∧¬B and look up the definition of P(A|B), which is P(A∧B) / (P(A∧B) + P(¬A∧B)).
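If it helps, here is that substitution written out explicitly. This is just the step described above, with my own underbrace labels added for readability:

```latex
% Substituting D = A∧B, E = ¬A∧B, F = A∧¬B into the nameless theorem:
\[
\underbrace{\frac{P(A\wedge B)}{P(A\wedge B)+P(\neg A\wedge B)}}_{P(A\mid B)}
=
\underbrace{\frac{P(A\wedge B)}{P(A\wedge B)+P(A\wedge\neg B)}}_{P(B\mid A)}
\times
\underbrace{\frac{P(A\wedge B)+P(A\wedge\neg B)}{P(A\wedge B)+P(\neg A\wedge B)}}_{P(A)/P(B)}
\]
% i.e. the familiar P(A|B) = P(B|A) × P(A) / P(B).
```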
You might notice that this theorem is not exactly hard to prove. It should probably be obvious to anybody who understood the definition of a probability space in university. You don’t need to understand probability spaces that well either—you can boil everything down (losing some generality in the process) to this theorem:
If D, E and F are non-intersecting figures with non-zero areas d, e and f, drawn inside a rectangle with area 1, then d/(d+e) = d/(d+f) × (d+f)/(d+e).
You might think of the rectangle as a target you are shooting at, and of D, E and F as some of the places on the target your bullet can hit.
And you can boil it down even further, losing almost all of the generality, but keeping most of the applicability to real-life scenarios.
If D, E and F are non-empty sets with d, e and f elements respectively, then d/(d+e) = d/(d+f) × (d+f)/(d+e).
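To make the counting version concrete, here is a minimal sketch in Python. The scenario and the counts are entirely made up for illustration; D, E and F play the same roles as A∧B, ¬A∧B and A∧¬B in the substitution above.

```python
# Made-up counts: A = "has the condition", B = "tests positive".
d = 90    # |A ∧ B|:  have the condition and test positive
e = 495   # |¬A ∧ B|: don't have the condition but test positive anyway
f = 10    # |A ∧ ¬B|: have the condition and test negative

p_a_given_b = d / (d + e)                        # left-hand side: P(A|B)
bayes_rhs = (d / (d + f)) * ((d + f) / (d + e))  # P(B|A) × P(A)/P(B)

print(p_a_given_b, bayes_rhs)  # both ≈ 0.154: most positive tests are false positives
```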
Okay, so I think that Bayes’ Theorem is very simple. Do I think that it is useless? Not at all—it is used all the time. Perhaps it is used all the time in part because it is so simple. So we have a mathematical concept that is easy to use, easy to understand (if you know a bit about probabilities), and there are many cases where it gives counterintuitive (but correct) results. So why am I not happy with declaring it the Best Thing in the World? Well, if you put it that way, maybe I am. But there are still many other mathematical concepts that fit the bill.
For example, Bayes’ Theorem tells us something about conditional probabilities. There is a related concept of independent events. Basically, A and B are independent iff the probability of A happening does not change whether or not B happens: P(A)=P(A|B). (The definition used in math is a bit different because of the trade-off between generality and clarity. I used a less general but easier-to-understand version.) For example, it is (probably) true for A=”6 comes up on a d6 roll” and B=”it is raining”, and is not true for A=”6 comes up on a d6 roll” and B=”an even number comes up on the same roll”. There are a lot of questions about independence with somewhat counterintuitive answers (a small numeric check of the definition, using the dice example, follows the questions below). For example:
A and B are not independent. B and C are also not independent. Are A and C necessarily not independent?
In 90% of cases where A happens, B does not. In 90% of cases where B happens, A does not. Does it mean that A and B are not independent?
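For what it’s worth, the definition itself is easy to check numerically on a small example. Here is a quick sketch (my own code, using the dice example from above) that just compares P(A) with P(A|B) by counting outcomes:

```python
from fractions import Fraction

outcomes = range(1, 7)                   # the six equally likely faces of a d6
a = {o for o in outcomes if o == 6}      # A: "6 comes up"
b = {o for o in outcomes if o % 2 == 0}  # B: "an even number comes up"

p_a = Fraction(len(a), len(outcomes))       # P(A)   = 1/6
p_a_given_b = Fraction(len(a & b), len(b))  # P(A|B) = 1/3

print(p_a, p_a_given_b, p_a == p_a_given_b)  # 1/6 1/3 False -> not independent
```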
Even more important than probability is logic (at least, in my opinion). And people often make mistakes in basic logic. For example, implication: A⇒B. If A is true then B is also true. People make all kinds of mistakes with it. I often give this problem to students when I teach a logic mini-course:
My cat sneezes every day when it’s going to rain. It sneezed. Does it mean that it is going to rain today?
Some students tried to assure me that yes, the first statement about my cat implies that it is some kind of psychic. A similar logical mistake in a real-life situation could probably convince somebody of the existence of supernatural powers of some kind. Or of some other dumb thing.
What I’m trying to say is, while the perfectly rational human we strive to be should understand Bayes’ Theorem really well and use it when appropriate, they should also understand a lot of other things really well and use them when appropriate. And the first half of my question is “What makes this theorem so special compared to other very useful things?”. If you think it was already covered in one of the posts here, please give me a link.
The other half of my question is “How exactly do you use this theorem in your life?”. Because it seems to me that it is really hard to do that. If you are doing some serious research, you can probably obtain a lot of statistical data, and sometimes you can obtain things like P(A), P(B) and P(B|A), use the theorem, and it will give you P(A|B). But if you try to quickly use this theorem in your daily life, you likely won’t know at least some of P(A), P(B) and P(B|A). So you would probably guess them? Even worse, at least some of them are probably going to be very small. And while humans are relatively good at distinguishing a probability of 50% from a probability of 0.5%, they are in many cases awful at distinguishing probabilities like 0.005% and 0.00005%. A wrong choice could leave you with P(A|B)=70% instead of P(A|B)=0.7%.
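To show how sensitive the result is to a mis-guessed small prior, here is a small sketch; all the numbers are made up purely to illustrate the swing. The same (hypothetical) likelihoods combined with priors of 0.00005%, 0.005% and 0.5% give wildly different posteriors.

```python
def posterior(p_a, p_b_given_a, p_b_given_not_a):
    """Bayes' Theorem: P(A|B) = P(B|A)P(A) / (P(B|A)P(A) + P(B|~A)P(~A))."""
    p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)
    return p_b_given_a * p_a / p_b

# Hypothetical likelihoods: the evidence B is 1000x more likely under A than under not-A.
p_b_given_a, p_b_given_not_a = 0.5, 0.0005

for p_a in (0.0000005, 0.00005, 0.005):  # guessed priors: 0.00005%, 0.005%, 0.5%
    print(f"prior {p_a:.5%} -> posterior {posterior(p_a, p_b_given_a, p_b_given_not_a):.2%}")
```

Under these made-up numbers, the posterior swings from roughly 0.05% to roughly 83% depending on which tiny-sounding prior you pick, which is exactly the kind of mistake described above.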
So the other half of my question is this: if you use Bayes’ Theorem in your daily life, how do you try to avoid making mistakes? If you only use it in at least somewhat serious research, do you really find it that useful compared to all other statements in probability theory?