The Bayesian model requires that probabilities be self-consistent. It breaks Bayesianism to believe that God exists with probability 90% and that God doesn’t exist with probability 90% at the same time, or to make a confident prediction of the second coming of Jesus and then not update the probability that Jesus exists once it doesn’t take place.
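Spelled out, the two constraints in play are coherence and conditionalization; a minimal formal sketch of the two examples above (the inequality condition is standard Bayes, nothing specific to this thread):

```latex
% Coherence: a proposition and its negation must have probabilities
% summing to one, so P(God) = 0.9 and P(no God) = 0.9 cannot both hold:
P(A) + P(\neg A) = 1
% Conditionalization: if H (Jesus exists) confidently predicted E (the
% second coming) and E fails to occur, Bayes' theorem forces P(H) down:
P(H \mid \neg E) = \frac{P(\neg E \mid H)\,P(H)}{P(\neg E)} < P(H)
\quad \text{whenever } P(\neg E \mid H) < P(\neg E)
```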
But there is no reason to prefer one prior distribution to another, and people’s priors are in fact all over the probability space. I’ve heard quite a few times here that Kolmogorov complexity weighting somehow fixes the problem, but the most it can do is leave different priors within a few billion orders of magnitude of each other. There is nothing like a single unique “Kolmogorov prior”, or even a reasonably compact family of such priors. So why should anyone commit to a prior distribution at all?
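For concreteness, the machine-dependence being pointed at here is the additive constant in the invariance theorem; this is standard Kolmogorov-complexity notation, not something from this thread:

```latex
% For any two universal machines U and V there is a constant c_{U,V},
% depending only on the machines, such that for all strings x:
|K_U(x) - K_V(x)| \le c_{U,V}
% The corresponding complexity-weighted priors therefore agree only up
% to a multiplicative factor that can be astronomically large:
2^{-c_{U,V}} \;\le\; \frac{2^{-K_U(x)}}{2^{-K_V(x)}} \;\le\; 2^{c_{U,V}}
% Nothing privileges a particular U, so there is one "Kolmogorov prior"
% per machine rather than a single canonical one.
```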
Another argument that fails is that a sufficient amount of evidence might cause some priors to converge. But first, our evidence for gray goo or FAI is barely existent in any form; and second, even an infinite amount of evidence leaves far too much up to the priors: grue/bleen Bayesians will never agree with green/blue Bayesians.
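To see why no amount of evidence settles this, here is the usual Goodman setup in symbols (the time t and the hypothesis names are the standard textbook ones, not anything introduced in this thread):

```latex
% "Grue" means green if observed before time t, blue afterwards. Any
% evidence E gathered before t is predicted equally well by both, so
P(E \mid \text{green}) = P(E \mid \text{grue})
% and the posterior odds never move from the prior odds, no matter how
% many observations accumulate:
\frac{P(\text{green} \mid E)}{P(\text{grue} \mid E)}
= \underbrace{\frac{P(E \mid \text{green})}{P(E \mid \text{grue})}}_{=\,1}
\cdot \frac{P(\text{green})}{P(\text{grue})}
= \frac{P(\text{green})}{P(\text{grue})}
```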
I have no priors.

So let me see if I got this straight: having no priors, you’d consider a possible extinction during the next hundred years to be exactly as likely to arise from, say, a high-energy physics experiment causing an uncontrollable effect that makes Earth uninhabitable, a non-friendly AI wiping out humanity, or the Earth just inexplicably, in blatant violation of the conservation of energy, stopping dead in its orbit and plunging straight into the Sun, since none of those scenarios has any precedent?

Even that scenario seems to suggest priors. Insane priors, but priors nonetheless.
You didn’t get it straight. Having no priors means I’m allowed to answer that I don’t know without attaching a number to it.
Conservation of energy is ridiculously well documented. It’s not impossible that it will stop working on some particular date in the near future, but it seems extremely unlikely (see: no number). A world in which that happened would be highly different from my idea of what the world is like. The other risks you mentioned don’t seem to require such severe violations of how the world appears to work.
I will not give you a number for any of them. P(Earth just stopping | sanity) feels somewhat estimable, and perhaps if the world is insane all planning is for naught anyway, so we might get away with using it.
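The move here is a law-of-total-probability split; S (“the world is sane”) is my label for the comment’s conditioning event:

```latex
% Decompose any risk X over the sanity hypothesis S:
P(X) = P(X \mid S)\,P(S) + P(X \mid \neg S)\,P(\neg S)
% P(X | S) is the arguably estimable factor. The suggestion is that the
% second term can be dropped for planning purposes: if the world is
% insane, no plan based on its apparent workings helps anyway, so only
% the P(X | S) branch is decision-relevant.
```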
By the way, considering how many people here seem to think the simulation argument isn’t ridiculous, this should put a very strong limit on any claims about P(sanity). For example, if you think we’re 10^-10 likely to be in a simulation, you cannot meaningfully talk about probabilities of less than 10^-10 unless you think you have a good idea of what kinds of simulations are run, and such a claim would be really baseless.
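Made explicit, the floor works like this (s = 10^-10 is the illustrative figure from the comment above, not a real estimate):

```latex
% Let S = "we are in a simulation" with P(S) = s = 10^{-10}.
% For any event X, the law of total probability gives a lower bound:
P(X) = P(X \mid S)\,s + P(X \mid \neg S)(1 - s) \;\ge\; P(X \mid S)\,s
% With no good idea of what kinds of simulations are run, P(X | S)
% cannot be argued to be small. If it may be of order 1, the simulation
% term alone contributes up to about 10^{-10}, so any claimed
% P(X) far below 10^{-10} is baseless.
```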
It seems to be a recurring idea on this site that it is not only possible but rationally necessary to attach a probability to absolutely anything, and that this is the correct measure of uncertainty. This is an overinterpretation of the Bayesian model of rationality.
Having no priors means I’m allowed to answer that I don’t know without attaching a number to it.
I think the breakdown in communication here is the heretofore unstated question: in what sense is this position “Bayesian”? Just having likelihood ratios with no prior is like having a vector space without an affine space; there’s no point of correspondence with reality unless you declare one.
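In the odds form of Bayes’ theorem the analogy is almost literal (standard notation, nothing specific to this thread):

```latex
% Posterior odds = likelihood ratio x prior odds:
\frac{P(H \mid E)}{P(\neg H \mid E)}
= \frac{P(E \mid H)}{P(E \mid \neg H)} \cdot \frac{P(H)}{P(\neg H)}
% The likelihood ratio behaves like a vector: it says how far the
% evidence moves you, but not from where. Without prior odds as an
% origin, no posterior is ever pinned down.
```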
there’s no point of correspondence with reality unless you declare one.
Well, it’s called “subjective” for a reason. If we agree that no prior is privileged, why should anybody commit themselves to one? And if different Bayesians can have completely unrelated priors, why can’t a single Bayesian have one prior for Wednesdays and another for Fridays?
I tried some back-of-the-envelope math to see if some middle way is possible, like limiting priors to those weighted by Kolmogorov complexity, or using a prior that reserves some probability for “all hypotheses not considered”, but all such attempts seem to lead nowhere.
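For the record, one way to write down the second attempt, and where it stalls; the reserved mass p_0 and the catch-all Ω are my notation for the sketch, not anything worked out in the thread:

```latex
% Reserve mass p_0 for a catch-all hypothesis Omega ("all hypotheses
% not considered") and spread the rest by complexity weighting:
P(\Omega) = p_0, \qquad
P(H_i) = (1 - p_0)\,\frac{2^{-K(H_i)}}{\sum_j 2^{-K(H_j)}}
% Updating then requires P(E \mid \Omega), the probability of the
% evidence under "some hypothesis I haven't thought of", which is
% undefined, so the catch-all mass can never be coherently updated.
```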
Now if you think some priors are better than others, you have just introduced a pre-prior, and it’s not obvious that any particular pre-prior should be privileged either.
Another argument that fails is that a sufficient amount of evidence might cause some priors to converge. But first, our evidence for gray goo or FAI is barely existent in any form; and second, even an infinite amount of evidence leaves far too much up to the priors: grue/bleen Bayesians will never agree with green/blue Bayesians.
Even if Aumann threatens to spank them and send them to their rooms for not playing nice?
Aumann’s theorem assumes shared priors, which they explicitly don’t have. And you cannot “assume shared pre-priors”, or use any other such workaround.