Er, my bad. I missed your point. I see it now, duh.
So my friend thinks something S has a probability of zero, but I know otherwise and point out that it is possible given an assumption which I know my friend believes has a 0.1 chance of being true. He says “Oh right. I guess S is possible after all.” What has just happened? What do we say when we see the dart land at a specific point on the line?
Your friend had incorrectly computed the implications of his prior for the problem in question. On your prompting he re-ran the computation, and got the right answer (or at least a different answer) this time.
Perfect Bayesians are normally assumed to be logically omniscient, so this just wouldn’t happen to them in the first place.
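To make the re-run computation concrete: once the friend assigns P(A) = 0.1 to the assumption and grants that S is possible given A, the law of total probability forces a positive lower bound on P(S). A minimal sketch in Python; the value of P(S|A) below is purely an illustrative assumption:

```python
# Sketch: lower-bounding P(S) via the law of total probability.
# P(S) = P(S|A) * P(A) + P(S|not A) * P(not A)  >=  P(S|A) * P(A)
# since P(S|not A) * P(not A) >= 0.

p_a = 0.1          # friend's credence in the assumption A
p_s_given_a = 0.5  # hypothetical: probability of S if A holds

lower_bound = p_s_given_a * p_a
print(f"P(S) >= {lower_bound}")  # 0.05 > 0, so the original P(S) = 0 was inconsistent
```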
What do we say when we see the dart land at a specific point on the line?
In order to specify a point on the line you need an infinite amount of evidence, which is sufficient to counteract the infinitesimal prior. (The dart won’t hit a rational number or anything else that has a finite exact description.)
Or if you only have a finite precision observation, then you have only narrowed the dart’s position to some finite interval, and each point in that interval still has probability 0.
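A quick numeric illustration of that last point, using a uniform distribution on [0, 1] as a stand-in for the dart (the interval and measurement precision are arbitrary choices):

```python
# Sketch: for a continuous distribution, every exact point has probability 0,
# while any finite-precision interval around it has positive probability.

def cdf(x: float) -> float:
    """CDF of the uniform distribution on [0, 1]: F(x) = x, clamped to [0, 1]."""
    return min(max(x, 0.0), 1.0)

x = 0.372  # some exact landing point

# P(X == x) = F(x) - F(x) = 0, no matter which point x we pick:
print(cdf(x) - cdf(x))                  # 0.0

# A measurement with precision +/- 0.001 only narrows X to an interval:
print(cdf(x + 0.001) - cdf(x - 0.001))  # 0.002 > 0
```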
So my friend thinks something S has a probability of zero, but I know otherwise and point out that it is possible given an assumption which I know my friend believes has a 0.1 chance of being true. He says “Oh right. I guess S is possible after all.” What has just happened?
You wasted a great gambling opportunity.
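To spell out the quip: anyone who genuinely assigns P(S) = 0 should accept arbitrarily lopsided odds against S. A toy expected-value computation, with the stake, the odds, and your own credence in S chosen purely for illustration:

```python
# Sketch: expected value of betting against someone who assigns P(S) = 0.
# If the friend thinks S is literally impossible, he should happily lay 1000:1 odds.

my_p_s = 0.05    # hypothetical: your actual credence in S
stake = 1.0      # you risk $1
payout = 1000.0  # friend pays $1000 if S happens (he "knows" it can't)

expected_value = my_p_s * payout - (1 - my_p_s) * stake
print(f"Your expected profit: ${expected_value:.2f}")  # 0.05*1000 - 0.95*1 = $49.05
```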
Pengvado gives one good answer. I’ll add that when your friend says something has a probability of zero, he most likely means something different from what a Bayesian agent means by the same statement. Often people give probability estimates that don’t take their own fallibility into account, without actually intending to imply that they needn’t. That is, if asked to actually bet on something they will essentially use a different probability figure that incorporates their confidence in their reasoning. In fact, I’ve engaged with philosophers who insist that you have to do it that way.
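One simple way to model that adjustment: mix the stated probability with an ignorance prior, weighted by the chance that your reasoning went wrong. Every number below is an assumption for illustration, not a claim about what any particular person should use:

```python
# Sketch: folding fallibility into a betting probability.
# If your reasoning is sound, you'd bet at p_stated; if it's broken,
# fall back to an ignorance prior over S.

p_stated = 0.0         # the naive "probability zero" claim
p_error = 0.02         # hypothetical chance your reasoning is flawed
ignorance_prior = 0.5  # credence in S conditional on your reasoning being broken

p_betting = (1 - p_error) * p_stated + p_error * ignorance_prior
print(p_betting)  # 0.01 -- never exactly 0, because the fallibility term survives
```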
What do we say when we see the dart land at a specific point on the line?
“Did not! Look closer, you missed by 1/infinity miles!”