But from the decision algorithm’s point of view, the situation looks more like being asked to pay up because 2+2=4. How do we resolve this tension?
If probability is all in the mind, how is this different? What is the difference between eventually calculating an unknown digit and simply waiting for the world to determine the outcome of a coin toss? I don’t see any difference at all.
Logical uncertainty is weird because it doesn’t exactly obey the rules of probability. You can’t have a consistent probability assignment that says axioms are 100% true but the millionth digit of pi has a 50% chance of being odd. So I won’t be very surprised if the correct way to treat logical uncertainty turns out to be not completely Bayesian.
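To spell out the inconsistency being claimed (a sketch of the standard coherence argument, with the logical-omniscience requirement made explicit as an assumption): write A for the conjunction of the axioms and D for “the millionth digit of pi is odd”, and suppose coherence demands P(X) ≥ P(A) whenever A ⊢ X.

$$
A \vdash D \;\Rightarrow\; P(D) \ge P(A) = 1 \;\Rightarrow\; P(D) = 1,
\qquad
A \vdash \neg D \;\Rightarrow\; P(\neg D) \ge P(A) = 1 \;\Rightarrow\; P(D) = 0.
$$

One of the two entailments holds, so P(D) must be 0 or 1, and the 50% assignment violates coherence in exactly that sense.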
You can’t have a consistent probability assignment that says axioms are 100% true but the millionth digit of pi has a 50% chance of being odd.
Why not? If you haven’t actually worked out the millionth digit of pi, then this probability assignment is consistent given your current state of knowledge. It’s inconsistent given logical omniscience, but then if you were logically omniscient you wouldn’t assign a 50% chance in the first place. To me, the act of observing a logical fact doesn’t seem any different from the act of making any other observation.
If you knew that the universe doesn’t obey Newtonian mechanics precisely, it would be inconsistent to assign it a high probability, but that doesn’t mean that an early physicist who doesn’t have that knowledge is violating the rules of probability by thinking that the universe does follow Newtonian mechanics. It’s only after you make that observation that such an assignment becomes inconsistent.
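As a toy illustration of the “computing a digit is just another observation” point (a sketch only, not anything proposed in the thread; it assumes the mpmath library and uses the 1000th digit rather than the millionth to keep it cheap): the credence sits at 0.5 while the computation hasn’t been run, and conditions on the result once it has, exactly like a coin flip.

```python
# Toy sketch: treat "computing the digit" exactly like "observing the coin".
# Assumes mpmath is installed; the 1000th decimal digit stands in for the millionth.
from mpmath import mp

def decimal_digit_of_pi(n):
    """Return the n-th decimal digit of pi (n = 1 is the first digit after the point)."""
    mp.dps = n + 10                  # working precision: n digits plus a few guard digits
    pi_str = str(+mp.pi)             # unary plus evaluates pi at the current precision
    return int(pi_str[n + 1])        # index past the leading "3."

credence_odd = 0.5                   # before the computation: no information about parity
digit = decimal_digit_of_pi(1000)    # the "observation" is just running the computation
credence_odd = 1.0 if digit % 2 else 0.0   # update on the result, as with a seen coin flip
print(digit, credence_odd)
```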
Actually, this strikes me as a special case of dealing with the fact that your own decision process is imperfect.