Austerity: The model should only include events the agent thinks are genuinely possible. If she is certain something cannot happen, the theory shouldn’t force her to rank or measure preferences for that scenario. (Formally, we can regard zero-probability events as excluded from the relevant algebra.)
I’d want to mention that in infinite contexts, probability-0 events can still be possible.
(A possible example: the constants of our universe are currently modeled as real numbers, and under any continuous distribution a specific real number has probability 0 of being picked.)
It’s very important to recognize when you are in a domain where probability 0 does not mean impossible.
The dual case holds as well: in general, probability 1 does not mean certainty.
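A minimal sketch of the point, using Python's uniform sampler as a stand-in for an idealized continuous draw (`random.random()` actually returns one of finitely many floats, so this is an illustration of the real-valued case, not a proof):

```python
import random

# Stand-in for drawing a real number uniformly from [0, 1).
x = random.random()

# Some specific outcome always occurs...
assert 0.0 <= x < 1.0

# ...yet in the idealized uniform distribution on [0, 1), every
# singleton {c} has measure 0, so P(X = c) = 0 for all c.
# "Probability 0" therefore cannot mean "impossible": the outcome
# we just observed was itself a probability-0 event.

# Dually, the complement [0, 1) minus {c} has probability 1,
# but it is not certain, since X = c remains possible.
```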
If I understand correctly, possible probability-0 events are ruled out in Kolmogorov’s atomless system of probability mentioned in footnote 7.
Wait, how does the atomless property ensure that if the probability of an event is 0, then the event can never happen at all, as a matter of logic?
The atomless property and the property that only contradictions take value 0 could both be consequences of the axioms in question. The Kolmogorov paper (translated from the French by Jeffrey) has the details, but from skimming it I don’t immediately understand how it works.
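For reference, the standard measure-theoretic statement of atomlessness (my paraphrase, not a quote from the Kolmogorov paper) is:

```latex
% P on (\Omega, \mathcal{F}) is atomless iff every event of positive
% probability contains a strictly smaller event of positive probability:
\forall A \in \mathcal{F}:\; P(A) > 0 \;\Rightarrow\;
  \exists B \in \mathcal{F},\; B \subseteq A,\; 0 < P(B) < P(A).
```

Note that this condition on its own doesn’t obviously rule out possible probability-0 events, which may be why the paper needs more machinery than the bare atomless property to get “probability 0 only for contradictions.”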