This is a great reminder and one that people continue to get tripped up on. We are probably looking at a case where people mistake what they know (the ontological) for what is (the ontic). Or, put another way, they mistake ontology for metaphysics.
Also known as the Mind Projection Fallacy.
One important problem with probabilities appears when they are applied to one-time future events, which thus have no frequency. The most important examples are the appearance of AI and global catastrophe. As such events have no frequency, their probability must mean something else.
The frequencies do not necessarily have to be actual; they can be calculated over simulated worlds as well, as long as the observer is well defined.
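A minimal sketch of that idea in Python, assuming a made-up world model (the parameter range and the event below are purely illustrative): the "frequency" of a one-shot event is just the fraction of simulated worlds in which it occurs, once the event and the observer are pinned down.

```python
import random

# Counting a "frequency" for a one-shot event over simulated worlds rather
# than actual repetitions. The world model is hypothetical; only the
# counting procedure is the point.
def event_occurs_in(world_rng):
    """One simulated world: draw unknown background parameters, then the event."""
    underlying_rate = world_rng.uniform(0.0, 0.5)  # our uncertainty about how the world works
    return world_rng.random() < underlying_rate    # does the one-shot event happen in this world?

rng = random.Random(42)
n_worlds = 100_000
hits = sum(event_occurs_in(rng) for _ in range(n_worlds))
print(f"frequency over simulated worlds: {hits / n_worlds:.3f}")
```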
What reasons do we have to believe that probabilities of one-time events mean anything?
We need something like probability to make informed decisions about one-time events, like the probability of FAI vs. UFAI.
That… doesn’t actually answer the question—especially since it begs the question!
This objection applies equally to all models, regardless of whether they involve probabilities or not; a model may fail to accurately represent the thing that it’s trying to represent. But this doesn’t make it meaningless.
Yes, my question is: in the case of “probabilities of one-time events”, what is the thing that the model is trying to represent?
There are a bunch of things “probability” could be representing in a model for a one-shot event, which (with the exception of a few notable corner cases) all imply each other and fit into mathematical models the same way. We might mean:
* The frequency that will be observed if this one-shot situation is transformed into an iterated one
* The odds that will give this the best expected value if we’re prompted to bet under a proper scoring rule
* The number that will best express our uncertainty to someone calibrated on iterated events
* The fraction of iterated events which would combine to produce that particular event
...or a number of other things. All of these have one thing in common: they share a mathematical structure that satisfies the axioms of Cox’s theorem, which means we can calculate and combine one-shot probabilities, including one-shot probabilities with different interpretations, without needing to be precise about what we mean philosophically. It’s only in corner cases, like Sleeping Beauty, that the isomorphism between definitions breaks down and we have to stop using the word and be more precise.
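To make the link between the first two readings concrete, here is a rough Python sketch (the event, its frequency of 0.3, and the choice of the Brier score are illustrative assumptions): under a proper scoring rule, the single report that minimises your average penalty over the iterated version of the event is its long-run frequency, so the "betting odds" number and the "frequency if iterated" number coincide.

```python
import numpy as np

# Under a proper scoring rule (here the Brier score, penalty = (report - outcome)^2),
# the constant report with the lowest average penalty over the iterated version of
# the event is the event's long-run frequency.
rng = np.random.default_rng(0)
true_freq = 0.3                             # hypothetical frequency if the event is iterated
outcomes = rng.random(100_000) < true_freq  # the one-shot event, turned into an iterated one

def brier_penalty(report, outcomes):
    """Mean Brier penalty (lower is better) for always reporting `report`."""
    return np.mean((report - outcomes) ** 2)

reports = np.linspace(0.0, 1.0, 101)
best = min(reports, key=lambda r: brier_penalty(r, outcomes))
print(f"report minimising Brier penalty: {best:.2f}  (long-run frequency: {true_freq})")
```

The Brier score is only one choice; any proper scoring rule (the log score, for instance) has the same minimiser, which is what lets the different operationalisations be combined without deciding between them.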
Thanks, operationalising this in the four different ways and explaining that they have the same mathematical structure helped me understand what’s going on with probability a great deal better than I did before.
> The odds that will give this the best expected value if we’re prompted to bet under a proper scoring rule

Isn’t this one incoherent unless we already have a notion of probability?

> The number that will best express our uncertainty to someone calibrated on iterated events

This seems weirdly indirect. What are the criteria for “best” here?

> The fraction of iterated events which would combine to produce that particular event

I’m not sure what you mean. Could you elaborate?

> The frequency that will be observed if this one-shot situation is transformed into an iterated one

Is there some principled method for determining a unique (or unique up to some sort of isomorphism) way of transforming a one-shot event into an iterated one?