It seems to be a recurring idea on this site that it’s not only possible but even rationally necessary to attach a probability to absolutely anything, and that this is the correct measure of uncertainty. This is an overinterpretation of the Bayesian model of rationality.
I can see how one might balk at this, but I don’t think it’s an “overinterpretation”.
What strikes me as fatuous is the need to assign actual numbers to propositions, such that one would say “I think there is a 4.3% probability of us getting wiped out by an asteroid”.
But you can refrain from this kind of silliness even as you admit that probabilities must be real numbers, and that therefore it makes sense to think of various propositions, no matter how fuzzily defined, in terms of your ranking of their plausibilities. One consequence of the Bayesian model is that plausibilities are comparable.
So you can certainly list out the known risks, and for each of them ask the question: “What are my reasons for ranking this one as more or less likely than that other one?” You may not end up with precise numbers, but that’s not the point. The point is to think through the precise components of your background knowledge that go into your assessment, doing your best to mitigate bias whenever possible.
The objective, and I think it’s achievable, is to finish with a better reasoned position than you had on starting the procedure.
The Bayesian model requires that probabilities be self-consistent. It breaks Bayesianism to believe that God exists with probability 90% and that God doesn’t exist with probability 90% at the same time, or to make a confident prediction of the second coming of Jesus and then not update the probability of Jesus existing once it doesn’t take place.
But there is no reason to prefer one prior distribution to another, and people’s priors are in fact all over the probability space. I’ve heard quite a few times here that Kolmogorov complexity weighting somehow fixes the problem, but the most it can do is leave different priors within a few billion orders of magnitude of each other. There is nothing like a single unique “Kolmogorov prior”, or even a reasonably compact family of such priors. So why should anyone commit themselves to a prior distribution?
Another argument that fails is that a sufficient amount of evidence might possibly cause some priors to converge. But first, our evidence for gray goo or FAI is barely existent, and second, even an infinite amount of evidence will leave far too much up to priors: grue/bleen Bayesians will never agree with green/blue Bayesians.
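To make those two claims concrete, here is a minimal sketch (my notation, not the commenter’s: U and V are any two universal prefix machines, c_{U,V} is the machine-dependent constant from the invariance theorem, and H_1, H_2 are any two hypotheses):

% The "Kolmogorov prior" is only pinned down up to the choice of universal machine:
\[
K_U(x) \le K_V(x) + c_{U,V}
\quad\Longrightarrow\quad
2^{-c_{U,V}} \;\le\; \frac{2^{-K_U(x)}}{2^{-K_V(x)}} \;\le\; 2^{\,c_{V,U}},
\]
% so two such priors can disagree by a factor as large as 2^c, and nothing singles out one machine.
% Evidence that two hypotheses predict equally well never moves their ratio:
\[
\frac{P(H_1 \mid E)}{P(H_2 \mid E)}
= \frac{P(E \mid H_1)}{P(E \mid H_2)} \cdot \frac{P(H_1)}{P(H_2)}
= \frac{P(H_1)}{P(H_2)}
\quad\text{whenever } P(E \mid H_1) = P(E \mid H_2),
\]
% which is exactly the situation of grue/bleen versus green/blue observers before the critical time.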
I have no priors.
So let me see if I got this straight: Having no priors, you’d consider a possible extinction during the next hundred years to be exactly as likely to occur from, say, a high-energy physics experiment causing an uncontrollable effect that makes Earth uninhabitable, a non-friendly AI wiping out humans, or the Earth just inexplicably, and in blatant violation of the conservation of energy, stopping perfectly in its orbit and plunging straight into the Sun, since none of those scenarios have any precedents.
Even that scenario seems to suggest priors. Insane priors, but priors nonetheless.
You didn’t get it straight. Having no priors means I’m allowed to answer that I don’t know without attaching a number to it.
Conservation of energy is ridiculously well documented. It’s not impossible that it will stop working on a particular date in the near future, but it seems extremely unlikely (see: no number). A world in which that were true would be highly different from my idea of what the world is like. The other risks you mentioned don’t seem to require such severe violations of what seems to be how the world works.
I will not give you a number for any of them. P(earth just stopping|sanity) feels somewhat estimable, and perhaps if the world is insane all planning is for naught anyway, so we might get away with using it.
By the way, considering how many people here seem to think the simulation argument isn’t ridiculous, this should put a very strong limit on any claims about P(sanity). For example, if you think we’re 10^-10 likely to be in a simulation, you cannot meaningfully talk about probabilities less than 10^-10 unless you think you have a good idea of what kinds of simulations are run, and such a claim would be really baseless.
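A sketch of the arithmetic behind that last point (my notation: S stands for “we are in a simulation”, X for any event whose probability someone wants to call astronomically small):

% Total probability floors P(X) by the simulation branch:
\[
P(X) = P(X \mid S)\,P(S) + P(X \mid \neg S)\,P(\neg S) \;\ge\; P(X \mid S)\,P(S).
\]
% With P(S) = 10^{-10}, any claim that P(X) is far below 10^{-10} rests entirely on
% P(X | S) being tiny, which cannot be argued without knowing what kinds of simulations are run.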
It seems to be a recurring idea on this site that it’s not only possible but even rationally necessary to attach a probability to absolutely anything, and that this is the correct measure of uncertainty. This is an overinterpretation of the Bayesian model of rationality.
Having no priors means I’m allowed to answer that I don’t know without attaching a number to it.
I think the breakdown in communication here is the heretofore unstated question: in what sense is this position “Bayesian”? Just having likelihood ratios with no prior is like having a vector space without an affine space; there’s no point of correspondence with reality unless you declare one.
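The identity behind that complaint, as a one-line sketch: evidence only ever multiplies odds, so with no prior there is nothing for it to multiply.

% Bayes' theorem in odds form: the likelihood ratio is the entire contribution of the evidence.
\[
\underbrace{\frac{P(H \mid E)}{P(\neg H \mid E)}}_{\text{posterior odds}}
\;=\;
\underbrace{\frac{P(E \mid H)}{P(E \mid \neg H)}}_{\text{likelihood ratio}}
\times
\underbrace{\frac{P(H)}{P(\neg H)}}_{\text{prior odds}}
\]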
there’s no point of correspondence with reality unless you declare one.
Well, it’s called “subjective” for a reason. If we agree that no prior is privileged, why should anybody commit themselves to one? If different Bayesians can have completely unrelated priors, why can’t a single Bayesian have one for Wednesdays and another for Fridays?
I tried some back-of-the-envelope math to see if some middle way is possible, like limiting priors to those weighted by Kolmogorov complexity, or having a prior with some probability reserved for “all hypotheses not considered”, but all such attempts seem to lead nowhere.
Now, if you think some priors are better than others, you have just introduced a pre-prior, and it’s not obvious that any particular pre-prior should be privileged either.
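The regress can be written out in one line (a sketch; P_i here are the candidate priors and w_i the weights some rule assigns them, both my notation):

% A weighted family of candidate priors is itself just another prior,
\[
P(\cdot) \;=\; \sum_i w_i\,P_i(\cdot), \qquad w_i \ge 0,\; \sum_i w_i = 1,
\]
% and the weights w_i are precisely the "pre-prior" that now needs its own justification.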
Another argument that fails is that a sufficient amount of evidence might possibly cause some priors to converge. But first, our evidence for gray goo or FAI is barely existent, and second, even an infinite amount of evidence will leave far too much up to priors: grue/bleen Bayesians will never agree with green/blue Bayesians.
Even if Aumann threatens to spank them and send them to their rooms for not playing nice?
What strikes me as fatuous is the need to assign actual numbers to propositions, such that one would say “I think there is a 4.3% probability of us getting wiped out by an asteroid”.
The mistake here is not the number but the way of saying it: as if this is your guess at the value of a number out there in the world. Better to say
“My subjective probability of an asteroid strike wiping us out is currently 4.3%”
though of course the spurious precision of the “.3” would be more obviously silly in such a context.
Even if Aumann threatens to spank them and send them to their rooms for not playing nice?
Aumann assumes shared priors, which they explicitly don’t have. And you cannot just “assume shared pre-priors”, or use any other such workaround.
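For reference, a rough statement of the result being invoked, which makes the dependence on a shared prior explicit (my notation: \mathcal{I}_1 and \mathcal{I}_2 are the two agents’ private information):

% Aumann (1976), "Agreeing to Disagree":
\[
\text{common prior } P
\;+\;
\text{common knowledge of } P(A \mid \mathcal{I}_1) \text{ and } P(A \mid \mathcal{I}_2)
\;\Longrightarrow\;
P(A \mid \mathcal{I}_1) = P(A \mid \mathcal{I}_2).
\]
% Drop the common prior and the theorem's hypothesis is simply not met; it says nothing about agents with different priors.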