So let me see if I got this straight: having no priors, you’d consider a possible extinction during the next hundred years to be exactly as likely to come from, say, a high-energy physics experiment causing an uncontrollable effect that makes Earth uninhabitable, a non-friendly AI wiping out humans, or the Earth just inexplicably, in blatant violation of conservation of energy, stopping dead in its orbit and plunging straight into the Sun, since none of those scenarios has any precedent.
You didn’t get it straight. Having no priors means I’m allowed to answer that I don’t know without attaching a number to it.
Conservation of energy is ridiculously well documented. It’s not impossible that it will stop holding on some particular date in the near future, but it seems extremely unlikely (see: no number). A world in which that happened would be very different from my idea of what the world is like. The other risks you mention don’t seem to require such severe violations of how the world appears to work.
I will not give you a number for any of them. P(Earth just stopping | sanity) feels somewhat estimable, and perhaps if the world is insane all planning is for naught anyway, so we might get away with using it.
By the way, considering how many people here seem to think the simulation argument isn’t ridiculous, this should put a very strong limit on any claims about P(sanity). For example, if you think there’s a 10^-10 chance we’re in a simulation, you cannot meaningfully talk about probabilities smaller than 10^-10 unless you think you have a good idea of what kinds of simulations are run, and such a claim would be baseless.
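To spell out the arithmetic behind this bound (my decomposition, not something stated in the thread): for any event $E$,

$$P(E) \;=\; P(E \mid \text{sim})\,P(\text{sim}) \;+\; P(E \mid \neg\text{sim})\,P(\neg\text{sim}) \;\ge\; P(\text{sim})\,P(E \mid \text{sim}).$$

So if $P(\text{sim}) \ge 10^{-10}$, then claiming, say, $P(E) < 10^{-11}$ commits you to $P(E \mid \text{sim}) < 10^{-1}$, which is exactly the kind of claim about what simulations are run that the comment calls baseless. The same decomposition works with $\text{sane}/\neg\text{sane}$ in place of $\text{sim}/\neg\text{sim}$.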
It seems to be a recurring idea on this site that it’s not only possible but rationally necessary to attach a probability to absolutely anything, and that this is the correct measure of uncertainty. That is an overinterpretation of the Bayesian model of rationality.
Having no priors means I’m allowed to answer that I don’t know without attaching a number to it.
I think the breakdown in communication here is the heretofore unstated question: in what sense is this position “Bayesian”? Just having likelihood ratios with no prior is like having a vector space without an affine space; there’s no point of correspondence with reality unless you declare one.
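To make the analogy concrete (my worked form, not the original commenter’s): Bayes’ theorem in odds form separates what the evidence contributes from where you start,

$$\underbrace{\frac{P(H \mid E)}{P(\neg H \mid E)}}_{\text{posterior odds}} \;=\; \underbrace{\frac{P(E \mid H)}{P(E \mid \neg H)}}_{\text{likelihood ratio}} \;\times\; \underbrace{\frac{P(H)}{P(\neg H)}}_{\text{prior odds}}.$$

The likelihood ratio behaves like a displacement vector: it says how far the evidence should move you, but without prior odds (a point) there is no posterior to land on.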
there’s no point of correspondence with reality unless you declare one.
Well, it’s called “subjective” for a reason. If we agree that no prior is privileged, why should anybody commit themselves to one? If different Bayesians can have completely unrelated priors, why can’t a single Bayesian have one prior for Wednesdays and another for Fridays?
I tried some back-of-the-envelope math to see if some middle way is possible, like limiting priors to those weighted by Kolmogorov complexity, or having a prior that reserves some probability for “all hypotheses not considered”, but all such attempts seem to lead nowhere.
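For concreteness, the Kolmogorov-complexity-weighted option is roughly the Solomonoff-style universal prior; this sketch is my gloss of the standard construction, not math from the comment:

$$P_U(h) \;\propto\; 2^{-K_U(h)},$$

where $K_U(h)$ is the length of the shortest program that outputs hypothesis $h$ on a universal machine $U$. The catch, anticipating the next paragraph: $K_U$ depends on the choice of $U$ (the invariance theorem only gives $|K_U(h) - K_V(h)| \le c_{UV}$ for a constant independent of $h$), so choosing $U$ is itself a pre-prior-style choice.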
Now if you think some priors are better than others, you’ve just introduced a pre-prior, and it’s not obvious that a particular pre-prior should be privileged either.
So let me see if I got this straight: having no priors, you’d consider a possible extinction during the next hundred years to be exactly as likely to come from, say, a high-energy physics experiment causing an uncontrollable effect that makes Earth uninhabitable, a non-friendly AI wiping out humans, or the Earth just inexplicably, in blatant violation of conservation of energy, stopping dead in its orbit and plunging straight into the Sun, since none of those scenarios has any precedent.
Even that scenario seems to suggest priors. Insane priors, but priors nonetheless.