There is likely a broader discussion of this topic that I haven't read, so please point me to such a thread if my point is already addressed there. But it seems to me that there is a simpler resolution to this issue (as well as an obvious limitation to this way of thinking): there is an almost immediate stage, once we enter highly abstract hypotheticals, where probability assessment breaks down completely.
For example, there are uncountably many different parent universes we could have, and there are even uncountably many possible laws of physics that could govern our universe. It is literally impossible for all of these scenarios to be "possible" in the sense of a well-defined probability measure, simply because if an uncountable sum of nonnegative real numbers is to add up to 1, only countably many terms can be nonzero.
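For concreteness, here is a minimal sketch of the standard argument behind that last claim (the notation $p_i$ and $A_n$ is mine, purely for illustration). Define an uncountable sum of nonnegative terms as the supremum over finite partial sums:

$$\sum_{i \in I} p_i \;=\; \sup_{\substack{F \subseteq I \\ F \text{ finite}}} \sum_{i \in F} p_i .$$

If uncountably many of the $p_i$ were strictly positive, then, since $\{i : p_i > 0\} = \bigcup_{n \ge 1} \{i : p_i > 1/n\}$ and a countable union of finite sets is countable, at least one of the sets $A_n = \{i : p_i > 1/n\}$ would have to be infinite. Choosing $m$ indices from that $A_n$ already gives a finite partial sum exceeding $m/n$, so the total sum would be infinite rather than 1. Hence at most countably many hypotheses can receive nonzero probability.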
This is closely related to the axiomatic problem of cause and effect, a famous example being the question "why is there something rather than nothing?": you have to have an axiomatic foundation before you can make calculations, but the very act of adopting that foundation excludes a lot of very interesting material. In this case, if you want to form probabilistic expectations, you need a solid axiomatic framework stipulating how the calculations are made.
Just as with the laws of physics, this framework should agree with empirically derived probabilities, but, just as in physics, there will be seemingly well-formulated questions that the current laws cannot address. In cases like hobos who claim special powers, the framework may be ill-equipped to make a definitive prediction. More generally, its scope will be limited as a matter of mathematical necessity, and many hypotheses about spirituality, religion, and other universes, where we would want to assign positive but marginal probabilities, will likely fall completely outside its light cone.