WRT point D, it should be possible to come up with some sort of formula that gives the relative utility, according to maxipok, of working on various risks. Something that takes into account:
1. The current probability of a particular risk causing existential disaster
2. The total resources in dollars currently expended on that risk
3. The relative reduction in risk that a 1% increase in resources on that risk would bring
These I think are all that are needed when considering donations. When considering time rather than money, you also need to take into account:
4. The dollar value of one hour of a well-suited person’s leisure time spent on the risk
5. The relative value of one’s own time on the risk compared to that of the arbitrary well-suited person used as the benchmark
This is to take into account that it might be rational to work on AI risk even as you donated to, say, a nanotech-related risk organisation, if your skillset was particularly well suited to it.
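To make the proposal concrete, here is a minimal sketch of the kind of formula meant above: rank risks by the estimated reduction in extinction probability per marginal dollar, which is what maxipok cares about. This is only one possible formalisation, not necessarily the one intended, and every name and number in it is a placeholder rather than a real estimate.

```python
# Hypothetical maxipok-style comparison: rank risks by the estimated
# reduction in extinction probability bought by one extra dollar.
# Inputs per risk correspond to points 1-3 above; all values are placeholders.

def marginal_value_per_dollar(p_disaster, current_spending, rel_reduction_per_1pct):
    """Estimated drop in extinction probability per extra dollar spent on a risk.

    p_disaster:             current probability of the risk causing existential disaster
    current_spending:       total dollars currently expended on the risk
    rel_reduction_per_1pct: fraction by which the risk falls if spending rises by 1%
    """
    dollars_in_one_percent = 0.01 * current_spending   # cost of a 1% funding increase
    absolute_reduction = p_disaster * rel_reduction_per_1pct
    return absolute_reduction / dollars_in_one_percent

# Placeholder inputs: (probability, current spending in dollars, relative reduction per +1%)
risks = {
    "asteroid impact": (1e-6, 50e6, 0.001),
    "risk X":          (1e-2, 10e6, 0.005),
}

ranked = sorted(
    ((marginal_value_per_dollar(*params), name) for name, params in risks.items()),
    reverse=True,
)
for value, name in ranked:
    print(f"{name}: {value:.2e} reduction in extinction probability per dollar")
```

Points 4 and 5 would then convert the per-dollar figure into a per-hour figure for one's own time: divide by the dollar value of a well-suited person's hour and multiply by one's relative effectiveness.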
1. The current probability of a particular risk causing existential disaster
2. The total resources in dollars currently expended on that risk
3. The relative reduction in risk that a 1% increase in resources on that risk would bring
How can #1 and especially #3 be anything more than ass pulls? I don’t even see how to calculate #2 in a reasonable way for most risks.
What superior method of comparing such charities are you comparing this to?
Our track record of long-term prediction of any kind is so dismal that I doubt one way of pulling numbers out of one’s ass can be meaningfully described as superior to another. Either way, the numbers come from the same place.
The only exception to this I can think of is asteroid impacts, and we actually seem to be spending adequately on them.
It seems to be a recurring idea on this site that it’s not only possible but even rationally necessary to attach a probability to absolutely anything, and that this is the correct measure of uncertainty. This is an overinterpretation of the Bayesian model of rationality.
I can see how one might balk at this, but I don’t think it’s an “overinterpretation”.
What strikes me as fatuous is the need to assign actual numbers to propositions, such that one would say “I think there is a 4.3% probability of us getting wiped out by an asteroid”.
But you can refrain from this kind of silliness even as you admit that probabilities must be real numbers, and that therefore it makes sense to think of various propositions, no matter how fuzzily defined, in terms of your ranking of their plausibilities. One consequence of the Bayesian model is that plausibilities are comparable.
So you can certainly list out the known risks, and for each of them ask the question: “What are my reasons for ranking this one as more or less likely than this other?” You may not end up with precise numbers, but that’s not the point. The point is to think through the precise components of your background knowledge that go into your assessment, doing your best to mitigate bias whenever possible.
The objective, and I think it’s achievable, is to finish with a better reasoned position than you had on starting the procedure.
The mistake here is not the number but the way of saying it: as if this is your guess at the value of a number out there in the world. Better to say
“My subjective probability of an asteroid strike wiping us out is currently 4.3%”
though of course the spurious precision of the “.3” would be more obviously silly in such a context.
The Bayesian model requires that probabilities be self-consistent. It breaks Bayesianism to believe that God exists with probability 90% and that God doesn’t exist with probability 90% at the same time, or to make a confident prediction of the second coming of Jesus and then not update the probability of Jesus existing once it doesn’t take place.
But there is no reason to prefer one prior distribution to another, and people’s priors are in fact all over the probability space. I’ve heard quite a few times here that Kolmogorov complexity weighting somehow fixes the problem, but the most it can do is leave different priors within a few billion orders of magnitude of each other. There is nothing like a single unique “Kolmogorov prior”, or even a reasonably compact family of such priors. So why should anyone commit oneself to a prior distribution?
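(For reference, the textbook invariance theorem is the precise version of this point, and it is exactly this weak: for any two universal prefix machines $U$ and $V$ there is a constant $c_{UV}$, depending on the machines but not on the hypothesis $h$, with

$$K_U(h) \le K_V(h) + c_{UV}, \qquad \text{hence} \qquad 2^{-K_U(h)} \ge 2^{-c_{UV}} \cdot 2^{-K_V(h)}.$$

Nothing forces $c_{UV}$ to be small, so a complexity-weighted prior is only pinned down up to a multiplicative factor that depends entirely on the choice of universal machine.)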
Another argument that fails is that a sufficient amount of evidence might cause convergence of some priors. But first, our evidence for gray goo or FAI is barely existent, and second, even an infinite amount of evidence will leave far too much up to priors: grue/bleen Bayesians will never agree with green/blue Bayesians.
I have no priors.
So let me see if I got this straight: Having no priors, you’d consider a possible extinction during the next hundred years to be exactly as likely to occur from, say, a high-energy physics experiment causing an uncontrollable effect that makes Earth uninhabitable, a non-friendly AI wiping out humans, or the Earth just inexplicably, and in blatant violation of the conservation of energy, stopping perfectly in its orbit and plunging straight into the Sun, since none of those scenarios have any precedents?
Even that scenario seems to suggest priors. Insane priors, but priors nonetheless.
You didn’t get it straight. Having no priors means I’m allowed to answer that I don’t know without attaching a number to it.
Conservation of energy is ridiculously well documented. It’s not impossible that it will stop working on a particular date in the near future, but it seems extremely unlikely (see: no number). The world in which that were true would be highly different from my idea of what the world is like. The other risks you mentioned don’t seem to require such severe violations of what seems to be how the world works.
I will not give you a number for any of them. P(earth just stopping|sanity) feels somewhat estimable, and perhaps if the world is insane all planning is for naught anyway, so we might get away with using it.
By the way, considering how many people here seem to think the simulation argument isn’t ridiculous, this should put a very strong limit on any claims about P(sanity). For example, if you think we’re 10^-10 likely to be in a simulation, you cannot meaningfully talk about probabilities less than 10^-10 unless you think you have a good idea of what kinds of simulations are run, and such a claim would be really baseless.
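(Spelling out that limit, with “sim” standing for being in a simulation, $A$ being any event someone wants to call astronomically unlikely, and taking the example figure $P(\text{sim}) = 10^{-10}$:

$$P(A) = P(A \mid \text{sim})\,P(\text{sim}) + P(A \mid \lnot\text{sim})\,P(\lnot\text{sim}) \ge P(A \mid \text{sim}) \cdot 10^{-10}.$$

Without a good idea of what kinds of simulations are run, $P(A \mid \text{sim})$ cannot be bounded much below 1, so no claim of the form $P(A) \ll 10^{-10}$ can be justified.)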
I think the breakdown in communication here is the heretofore unstated question: “in what sense is this position ‘Bayesian’?” Just having likelihood ratios with no prior is like having a vector space without an affine space; there’s no point of correspondence with reality unless you declare one.
Well, it’s called “subjective” for a reason. If we agree that no prior is privileged, why should anybody commit themselves to one? If different Bayesians can have completely unrelated priors, why can’t a single Bayesian have one prior for Wednesdays and another for Fridays?
I tried some back-of-the-envelope math to see if some middle way is possible, like limiting priors to those weighted by Kolmogorov complexity, or having a prior that reserves some probability for “all hypotheses not considered”, but all such attempts seem to lead nowhere.
Now if you think some priors are better than others, you’ve just introduced a pre-prior, and it’s not obvious why a particular pre-prior should be privileged either.
Even if Aumann threatens to spank them and send them to their rooms for not playing nice?
Aumann assumes shared priors, which they explicitly don’t have. And you cannot “assume shared pre-priors”, or any other such workaround.
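(For reference, the theorem being invoked: if two agents share a common prior $P$ with private information partitions $\mathcal{I}_1$ and $\mathcal{I}_2$, and at some state the values of their posteriors $P(A \mid \mathcal{I}_1)$ and $P(A \mid \mathcal{I}_2)$ are common knowledge between them, then $P(A \mid \mathcal{I}_1) = P(A \mid \mathcal{I}_2)$. Drop the common prior and the conclusion no longer follows, which is the point being made here.)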
Right, so how shall we assess whether these risks are worth addressing?
You assume a good way of assessing existential risk even exists. How difficult is it to accept that it doesn’t? It is irrational to deny the existence of unknown unknowns.
It’s quite likely that a few more existential risks will get decent estimates the way asteroid impacts did, but there’s no reason to expect it to be typical, and it will most likely be serendipitous.
No, I don’t assume that there’s a good way. I’m assuming only that we will either act or not act, and therefore we will find that we have decided between action and inaction one way or another, whether we like it or not. So I’m asking, for the third time: how shall we make that decision?
Using some embarrassingly bad reasoning, self-serving lies, and inertia, the way we make all decisions as a society. We will devote an unreasonable amount of resources to risks that aren’t serious, and stay entirely unaware of the most dangerous risks. No matter which decision procedure we take, this will be the result.
It is clear from your repeated evasions that you have no proposal to improve on the decision procedure I propose.
What evasions? I thought I’d clearly stated that I view your decision procedure as pretty much “make up a bunch of random numbers, multiply, and compare”.
An improvement would be to skip this rationality theater and admit we don’t have a clue.
AND THEN DECIDE HOW?
By tossing a coin or using a Ouija board? None of the alternatives proposed has a better track record.