Exactly-0 isn’t on the table at all. Close-enough-to-0-that-you-can-represent-it-that-way-without-too-much-disclaiming is reserved for propositions like “a square circle and Batman teamed up to, not kill, but kidnap and replace with a convincing inert android, Meredith”. Princess Diana’s odds of having killed Meredith are minuscule, but not zero or even compellingly zerolike, compared to those.
I don’t know if this means I disagree with Eliezer but I’m pretty sure the probability of a contradiction has to be 0 and the probability of a tautology has to be 1. Else really weird things start happening and you can’t do deduction. Like, what is the probability of A given A ^ B?
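Spelling that out with the standard conditional-probability identity (just the textbook definition, nothing new from this thread):

$$P(A \mid A \wedge B) = \frac{P(A \wedge (A \wedge B))}{P(A \wedge B)} = \frac{P(A \wedge B)}{P(A \wedge B)} = 1, \qquad \text{provided } P(A \wedge B) > 0.$$

Getting exactly 1 out of that calculation is the sort of deductive step that stops working if tautologies and contradictions aren’t pinned to 1 and 0.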
*cackles evilly and cracks metaphorical knuckles*
The circle is defined as the locus of points at an equal distance from a center on a plane. A square is defined as a regular quadrilateral—i.e. a shape with four sides of equal length separated by four angles of equal magnitude. If you allow that “distance” may be generalized to be applicable to geometries other than Euclidean...
...what is the shape of a circle on a chessboard, where “distance” is measured by the number of king-moves?
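For the curious, a minimal sketch of that chessboard metric in Python (king-move distance is the Chebyshev distance; the board size, centre, and function names are just illustrative):

```python
# Sketch (my own illustration, not from the thread): the points at king-move
# (Chebyshev) distance exactly r from a centre form the perimeter of a square,
# so a "circle" under this metric looks square.

def king_distance(a, b):
    """Number of king moves between squares a and b (Chebyshev distance)."""
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

def circle(center, radius, board_size=8):
    """All board squares at exactly `radius` king moves from `center`."""
    return sorted(
        (x, y)
        for x in range(board_size)
        for y in range(board_size)
        if king_distance((x, y), center) == radius
    )

if __name__ == "__main__":
    for square in circle(center=(3, 3), radius=2):
        print(square)  # the 16 squares forming the perimeter of a 5x5 square
```

Every point of that “circle” lies on the perimeter of a square, which is rather the point.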
I believe this is a useful object lesson in the difficulty of constructing properly impossible propositions.
Edited to make the square have four sides, not three. What was I thinking...?
And when you superimpose a middle finger onto Riemannian space...
Edit: But upvoted because it is always good to get this reminder.
Something can be metaphysically/logically impossible without it being okay to assign exactly-0 to it. Epistemic probability is what we’re really representing here—I mean, even something as uncertain-to-me as the current weather conditions in the red spot on Jupiter is exactly one way. But it’s not useful to represent that single-ness of weather conditions because I can’t access them. I similarly can’t usefully access absolute epistemic certainty about even simple math and logic. I’m a broken machine; I cannot handle perfect surety.
It isn’t that simple. Most of the results we get from Bayes’ theorem we get by deduction. For example, the Dutch book argument, the most common justification given for Bayesian epistemology in the first place, relies on deduction. So does nearly every other important result we get from Bayes’ theorem. So when you say to someone “take this evidence and act rationally,” that may imply that that person not get her deductions wrong. This is why, afaict, most Bayesians assume logical omniscience. See here. Apparently there have been attempts to weaken logical omniscience, and maybe someone here has one in mind… but I haven’t heard it. Obviously it is the case that humans, as a matter of psychological fact, can screw up deduction. But this is basically like saying that, as a matter of psychological fact, humans aren’t perfect Bayesian rationalists. The whole theory isn’t supposed to be descriptive; it is an ideal to strive toward.
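For concreteness, a toy Dutch-book calculation in Python (the numbers and names are mine, purely illustrative):

```python
# Toy Dutch-book illustration (my own numbers, not from the comment above):
# an agent whose credences in A and not-A sum to more than 1 will accept a
# pair of bets that loses money no matter what happens.

p_A, p_not_A = 0.6, 0.6           # incoherent: the credences sum to 1.2
stake = 1.0                       # each bet pays `stake` if it wins

# The bookie sells the agent a bet on A at price p_A * stake and a bet on
# not-A at price p_not_A * stake; the agent regards both prices as fair.
cost = (p_A + p_not_A) * stake

for a_is_true in (True, False):
    payout = stake                # exactly one of the two bets pays off
    print(f"A is {a_is_true}: net = {payout - cost:.2f}")
# Both outcomes print net = -0.20: a guaranteed loss either way.
```

The “guaranteed loss whatever happens” conclusion is itself a piece of deduction, which is the point being made above.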
I have also seen Eliezer tempted to consider a ‘0’ probability in response to a ‘divide by infinity’ situation. (I think there is a ‘mathsy’ way to represent that kind of ‘0’.)
That’s called a limit. What’s special is not the “zero” but the “infinity”: you don’t talk about a value “infinity” (attempting to have one causes you to lose various other useful properties), but rather that as some input increases without bound, the output approaches zero.
“The limit of 1/x as x approaches infinity is zero.”
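In symbols (the standard epsilon-style reading, nothing specific to this thread):

$$\lim_{x \to \infty} \frac{1}{x} = 0 \quad\text{means}\quad \text{for every } \varepsilon > 0 \text{ there is an } N \text{ such that } x > N \implies \left|\frac{1}{x} - 0\right| < \varepsilon.$$

The quantity 1/x never equals zero for any finite x; the limit statement is about where it is headed, which is why it sidesteps assigning the value “infinity” to anything.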
The concept of limits is a great way to look at this. A limit is a thing unto its own: a compound statement about where a quantity is heading, confirmed as much as is humanly possible.
Another notion is what Gödel brings to the table: his results about whether a formal system can be both consistent and complete are relevant here.
Well, it is quite fascinating that no one gets a 0 probability. Just to ask, does Meredith get a 0 probability? I will move past understanding the exclusion of 0; I just want to make sure I understand. Anyway, when I say 0, I understand it to mean functionally 0, which is the same as .0000000001, which is also functionally 0, correct? Thank you for your patience.
Meredith could have committed suicide. She’s probably more likely to be responsible for the death than Princess Diana. And she’s much more likely than the team of Batman-and-square-circle.
Were there any fatal wounds that she could not have inflicted?
Well, maybe she had superpowers. Or was killed by her time-traveling past self. When you get to probabilities this low, boy do you ever get to make shit up.
Maybe she was killed by her time-traveling future granddaughter. I was tempted to rule it out based on the anthropic principle (I don’t expect to exist in a world in which someone was killed by someone who wouldn’t exist if the victim was killed). But come to think of it, I haven’t assigned 0 to specific operational mechanisms behind time travel.