When you speak of “the probability”, what information do you mean it to take into account, and what information do you mean it not to take into account? What does a rational agent need to know for the agent’s subjective probability to become equal to the probability? (Not a rhetorical question.)
“The probability” means something like the following: take a random selection of universe-histories, each starting from a state consistent with my/your observable past and proceeding 100 years forward, with no uncaused discontinuities in the laws of physics, to a compact portion of a wave function (that is, “one quantum universe”, modulo any quantum computers that are turned on). What fraction of those universes satisfy the given end state?
Yes, I’m doing what I can to duck the measure problem of universes, sorry. And of course this is underdefined and unobservable. Yet it contains the basic elements: knowledge of, and uncertainty about, the current state of the universe, and definite laws of physics, assumed to exist independently, which strongly constrain the possible outcomes from a given initial state.
On a more practical level, it seems to be the case that, given enough information about and study of a class of situations, post-hoc polynomially computable models, which use non-determinism to stand in for the details that have been abstracted away, can predict some salient aspects of those situations under certain constraints. For instance, the statement “42% of technological societies of intelligent biological agents with access to fissile materials destroy themselves in a nuclear holocaust” could, subject to the definitions of terms that would be necessary to build a useful model, be a true or false statement.
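To make the flavour of such a model concrete, here is a minimal sketch in Python. Everything in it is a hypothetical stand-in (the distribution of annual risk, the 100-year horizon, the trial count); the point is only the shape of the computation: abstracted-away details become random draws, and “the probability” is read off as the fraction of sampled histories that satisfy the end state.

```python
import random

def simulate_society(years=100, rng=random):
    """Toy stochastic model of one society-history.

    The annual risk of self-destruction stands in for all the
    geopolitical detail that has been abstracted away; the range
    below is a made-up placeholder, not an estimated parameter.
    """
    annual_risk = rng.uniform(0.001, 0.01)  # abstracted details -> one random draw
    for _ in range(years):
        if rng.random() < annual_risk:
            return True  # this history ends in the given end state
    return False

def estimated_probability(trials=100_000):
    """Fraction of sampled histories that satisfy the end state."""
    return sum(simulate_society() for _ in range(trials)) / trials

print(f"P(self-destruction within 100 years) ≈ {estimated_probability():.3f}")
```

Note that the uniform draw of `annual_risk` and the per-year coin flips are doing two different jobs; the next paragraph separates out that layering explicitly.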
Note that this allows for three completely different kinds of uncertainty: uncertainty about the appropriate model(s), uncertainty about the correct parameters for those models, and uncertainty inherent within a given model. In almost all questions involving predicting nonlinear interactions of intelligent agents, the first kind of uncertainty currently dominates. That is the kind of uncertainty I’m trying (and of course failing) to capture with the error bar in the exponent. Still, I think my failure, which at least acknowledges the overwhelming probability that I’m wrong (albeit in a limited sense), is better than a form of estimation that presents an estimate garnered from a clearly limited set of models as a final one.
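The three layers can be stacked as a nested sampling loop. In this purely illustrative sketch (both models, the 50/50 model weight, and the priors are invented), the outer draw expresses model uncertainty, the middle draw parameter uncertainty, and the innermost randomness the uncertainty inherent within the chosen model:

```python
import random

# Two invented toy models of "does the end state occur?".
def model_a(p):
    # Model A: a single Bernoulli draw with parameter p.
    return random.random() < p

def model_b(base_risk, years=100):
    # Model B: a compounding annual risk, as in the sketch above.
    return any(random.random() < base_risk for _ in range(years))

def one_history():
    if random.random() < 0.5:            # layer 1: which model is appropriate?
        p = random.uniform(0.2, 0.6)     # layer 2: prior over model A's parameter
        return model_a(p)                # layer 3: the model's own non-determinism
    else:
        base_risk = random.uniform(0.001, 0.01)
        return model_b(base_risk)

trials = 100_000
print(sum(one_history() for _ in range(trials)) / trials)
```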
In other words: I’m probably wrong. You’re probably wrong too. Since giving an estimate under 95% requires certain specific extrapolations, while almost any induction points to estimates over 95%, I would expect most rational people to arrive at an estimate over 95%, and I would suspect any community in the reverse situation of being subject to biases (of which selection bias is the most innocuous). This suspicion would not apply when dealing with individuals.
See the posts “Priors as Mathematical Objects” and “Probability is Subjectively Objective”, linked from the Priors wiki article.
To get the right answer, you need to make an honest effort to construct a model which is an unbiased composite of evidence-based models (a toy sketch of such a composite appears below). Metaphorical reasoning is permitted as weak evidence, but cannot be the only sort of evidence.
And you also need to be lucky. I mean, unless you have the resources to fully simulate universes, you can never know that you have the right answer. But the process above, iterated, will tend to improve your answer.
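As for what an “unbiased composite of evidence-based models” might look like arithmetically, here is a toy sketch; the estimates and weights are invented, and the weighting-by-evidence step is the hard part being waved away:

```python
# Hypothetical per-model estimates of P(end state), with weights meant to be
# proportional to how well the evidence supports each model (summing to 1).
# All numbers are invented for illustration.
estimates = [0.97, 0.99, 0.90]
weights   = [0.5, 0.3, 0.2]

composite = sum(w * p for w, p in zip(weights, estimates))
print(f"composite estimate: {composite:.3f}")  # prints 0.962
```

Iterating the process described above then amounts to revisiting both the estimates and the weights as new evidence arrives.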