I can conceive of the puzzle as one where all the relevant beliefs - (R1), (T), (AME), etc. - have degree 1.
So, in order to answer the puzzles, you have to start with probabilistic beliefs rather than with binary true-false beliefs. The problem is currently somewhat like the question “Is it true or false that the sun will rise tomorrow?” To a very good approximation, the sun will rise tomorrow. But the earth’s rotation could stop, the sun could get eaten by a black hole, or any of several other possibilities could occur, which means it is not absolutely certain that the sun will rise tomorrow. So how can we express our confidence that the sun will rise tomorrow? As a probability—a big one, like 0.999999999999.
Why not just round up to one? Because although the gap between 0.999999999999 and 1 may seem small, it actually takes an infinite amount of evidence to bridge that gap. You may know this as the problem of induction.
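One way to see why that gap is infinite: evidence is naturally measured in log odds, and the log odds of certainty diverge. A minimal sketch in Python (the `log10_odds` helper is a name made up for illustration):

```python
import math

def log10_odds(p):
    # Log (base 10) odds: a convenient scale for weighing evidence.
    return math.log10(p / (1 - p))

# 0.999999999999 corresponds to a large but finite weight of evidence...
print(log10_odds(0.999999999999))  # roughly 12
# ...whereas p = 1 would need infinite evidence: p/(1-p) divides by zero.
```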
So anyhow, let’s take problem 1. How confident are you in P1, P2, and P3? Let’s say about 0.99 each—you could make a hundred such statements and only get one wrong, or so you think. So how about T? Well, if it follows from P1, P2, and P3, then you believe it with degree about 0.97.
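Where 0.97 comes from, assuming (as the text implicitly does) that the three premises are independent, so belief in the conjunction is the product:

```python
# Degrees of belief in the three premises, from the text
p1 = p2 = p3 = 0.99

# If T follows from P1, P2, and P3 (assumed independent),
# belief in T is the product of the three degrees of belief.
p_t = p1 * p2 * p3
print(p_t)  # about 0.97
```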
Now Ms. Math comes and tells you you’re wrong. What happens? You apply Bayes’ theorem. When something is wrong, Ms. Math can spot it 90% of the time, and when it’s right, she only thinks it’s wrong 0.01% of the time. So Bayes’ rule says to multiply your probability of ~T by 0.9/(0.03 × 0.9 + 0.97 × 0.0001), giving an end result of T being true with probability only about 0.004.
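The update above, sketched numerically; the two likelihoods (90% detection, 0.01% false-alarm) come from the text:

```python
# Prior: belief in T is 0.99^3, so prior P(~T) is about 0.03
p_not_t = 1 - 0.99 ** 3
p_t = 1 - p_not_t

# Ms. Math's reliability (likelihoods of her saying "wrong"):
p_says_wrong_given_not_t = 0.9     # she spots 90% of errors
p_says_wrong_given_t = 0.0001      # false alarms 0.01% of the time

# Bayes' theorem: P(~T | she says it's wrong)
evidence = (p_says_wrong_given_not_t * p_not_t
            + p_says_wrong_given_t * p_t)
posterior_not_t = p_says_wrong_given_not_t * p_not_t / evidence
posterior_t = 1 - posterior_not_t
print(posterior_t)  # a fraction of a percent
```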
Note that at no point did any beliefs “defeat” other ones. You just multiplied them together. If Ms. Math had talked to you first, and then you had gotten your answer after, the end result would be the same. The second problem is slightly trickier because not only do you have to apply probability theory correctly, you also have to avoid applying it incorrectly. Basically, you have to be good at remembering to use conditional probabilities when applying (AME).
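The order-independence claim is easy to check numerically. A minimal sketch with made-up likelihood numbers: updating on two pieces of evidence gives the same posterior in either order, because Bayes’ rule just multiplies the odds by likelihood ratios, and multiplication commutes.

```python
def update(prior, p_e_given_h, p_e_given_not_h):
    # One Bayesian update of P(H) on evidence e.
    num = p_e_given_h * prior
    return num / (num + p_e_given_not_h * (1 - prior))

p0 = 0.97  # prior belief in T, as in the example

# Two pieces of evidence with hypothetical likelihoods
a = update(update(p0, 0.1, 0.9), 0.2, 0.8)  # order: first A, then B
b = update(update(p0, 0.2, 0.8), 0.1, 0.9)  # order: first B, then A
print(a, b)  # same posterior either way (up to rounding)
```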
I suspect that you only conceive that you can conceive of that. In addition to the post linked above, I would suggest reading this, and this, and perhaps a textbook on probability. A belief doesn’t become a probability merely by being a belief; it has to behave according to certain rules (the axioms of probability).
I can’t believe people apply Bayes’ theorem when confronted with counter-evidence. What evidence do we have that Bayesian probability theory describes the way we reason inductively?
Oh, if you want to model what people actually do, I agree it’s much more complicated. Merely doing things correctly is quite simple by comparison.
It doesn’t necessarily describe the way we actually reason (because of cognitive biases that affect our ability to make inferences), but it does describe the way we should reason.