It means your model was inapplicable to the event. Careful Bayesian reasoners don’t have any 0s or 1s in their predictions of observations. They may keep an explicit separation between mathematical statements and observations, such as giving probability 1 to “all circles in Euclidean planes have π as the ratio of their circumference to their diameter”, with the non-1 probability falling on “is that thing I see actually a circle in a flat plane?”
Likewise, it’s fine to give probability 1 to “a fair die will roll integers between 1 and 6 inclusive with equal probability”, and then, when a 7 comes up, say “that’s evidence that it’s not a fair die”.
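To make that concrete, here’s a minimal sketch in Python (the hypotheses, priors, and the “weird die” likelihoods are made up purely for illustration): the probability-1 statement lives inside the “fair die” model, while the prediction for the next roll mixes over hypotheses, so a 7 is just evidence about which hypothesis is right.

```python
# Illustrative sketch: the conditional statement "a fair d6 rolls 1-6 with equal
# probability" gets probability 1, but the agent's *prediction* for the next roll
# mixes over hypotheses, so an observed 7 updates the hypotheses rather than
# contradicting the math.

priors = {"fair_d6": 0.99, "weird_die": 0.01}   # made-up prior credences

def likelihood(hypothesis, face):
    """P(observing this face | hypothesis)."""
    if hypothesis == "fair_d6":
        return 1 / 6 if 1 <= face <= 6 else 0.0   # the fair model says 7 is impossible
    return 1 / 10 if 1 <= face <= 10 else 0.0     # hypothetical "weird die" with faces 1-10

# Predictive probability of rolling a 7: small but not 0, because some credence
# sits on the weird-die hypothesis.
p_seven = sum(priors[h] * likelihood(h, 7) for h in priors)
print(p_seven)   # 0.001

# Observe a 7 and apply Bayes' rule: the credence shifts to "weird_die".
unnormalized = {h: priors[h] * likelihood(h, 7) for h in priors}
total = sum(unnormalized.values())
print({h: p / total for h, p in unnormalized.items()})   # {'fair_d6': 0.0, 'weird_die': 1.0}

# A truly careful agent would also keep a sliver of credence on "I misread the
# die", so even that 0.0 is a rounding convenience, per the next paragraph.
```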
Anyone who assigns a probability of 0 or 1 to a future experience is wrong. There’s an infinitesimal chance that the simulation ends, or your Boltzmann brain has a glitch, or aliens are messing with gravity, or whatever. In casual use we often round these off, which is convenient but not strictly correct.
Note that there’s absolutely no way to GET a 0 or 1 probability out of a Bayesian calculation unless it was already in the prior. Any sane prior can move arbitrarily close to 0 or 1 with sufficient observations, but can’t actually get all the way there: update size is proportional to surprise, so it takes a LOT of evidence to shift even a tiny bit closer once you’re already near 0 or 1.
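Here’s a small sketch of that dynamic, with arbitrary illustrative numbers (the 10:1 likelihood ratio and the starting prior are made up): repeated evidence pushes a non-extreme credence toward 1 without ever reaching it, while a prior of exactly 0 or 1 never moves at all.

```python
# Illustrative Bayes updates: a non-extreme prior creeps toward 1 but never
# reaches it, and a prior of exactly 0 or 1 is immune to any evidence.

def update(p_h, p_e_given_h, p_e_given_not_h):
    """Posterior P(H | E) from the prior P(H) and the two likelihoods of E."""
    numerator = p_e_given_h * p_h
    return numerator / (numerator + p_e_given_not_h * (1 - p_h))

p = 0.5                                    # arbitrary non-extreme prior
for n in range(1, 13):
    # each observation is 10x more likely under H than under not-H
    p = update(p, p_e_given_h=0.9, p_e_given_not_h=0.09)
    if n in (1, 4, 8, 12):
        print(f"after {n:2d} observations: P(H) = {p:.15f}")   # approaches but never equals 1

# Priors of exactly 1 or 0 ignore even overwhelming counter-evidence:
print(update(1.0, p_e_given_h=0.001, p_e_given_not_h=0.999))   # stays 1.0
print(update(0.0, p_e_given_h=0.999, p_e_given_not_h=0.001))   # stays 0.0
```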
For real, limited-calculation agents and humans, one can also model a meta-credence about “are my model and the probability assignments I have even vaguely close to correct?”, which ALSO is not 1.
I’ll give 2 examples:
What’s the probability that the halting problem for a program you are given is solvable by your decider?
The answer is that it has probability 1 (in the asymptotic-density sense of the paper below), but that doesn’t mean the decider can be extended to cover all cases.
https://arxiv.org/abs/math/0504351
Another example: what’s the probability that our physical constants are what they are, especially the constants that seem tuned for life?
If the constants are arbitrary real numbers, the answer is probability 0, and this holds no matter what value you pick.
This is how we can defuse the fine-tuning argument, the claim that the cosmos’s constants have improbable values that seem tuned for life, since any particular value has probability 0, whether or not it could sustain life:
https://en.wikipedia.org/wiki/Fine-tuned_universe
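To make the probability-0 claim concrete, here’s a minimal sketch (the belief distribution and the constant are made up purely for illustration): under any continuous distribution, intervals carry probability mass, but any single exact value does not.

```python
# Illustrative sketch: with a continuous credence over a constant's value,
# intervals have nonzero probability, but the probability of any *exact* real
# value (the width-zero limit) is 0, no matter which value you pick.

from scipy.stats import norm

# Hypothetical credence over some dimensionless constant; nothing here depends
# on these particular numbers.
belief = norm(loc=1 / 137, scale=0.0005)
x0 = 1 / 137

# A modest interval around x0 carries real probability mass...
print(belief.cdf(x0 + 0.001) - belief.cdf(x0 - 0.001))   # ~0.95

# ...but as the interval shrinks toward the single point x0, that mass goes to 0.
for half_width in (1e-4, 1e-7, 1e-10):
    p = belief.cdf(x0 + half_width) - belief.cdf(x0 - half_width)
    print(f"P(within +/-{half_width:g}) = {p:.3e}")

# The *density* at x0 is finite (here it's large), but a density is not a probability.
print(belief.pdf(x0))
```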
I think anytime you say “what is the probability that …”, as if it were an objective fact or measure rather than an agent’s tool for prediction (which would be framed as “what is this agent’s probability assignment over …”), you’re somewhat outside a Bayesian framework.
In my view, those are incomplete propositions: your probability assignment may be a convenience in making predictions, but it’s not directly updatable. Bayesian calculations are about how to predict evidence and how to update on that evidence. “What is the chance that this decider can solve the halting problem for this program in that timeframe?” is something that can be updated with evidence. Likewise “what is the chance that I will measure this constant next week and find it off by more than 10% from last week?”.
“What is true of the universe, in an unobservable way” is not really a question for Bayes-style probability calculations. That doesn’t keep agents from having beliefs about it; it’s just that there’s no general mechanism for correctly making those beliefs better.