Keep in mind that estimation is the best we have. You can’t appeal to Nature for not having been given a warning that meets a sufficient standard of rigor. Avoiding all actions that are uncertain but carry huge consequences is certainly a bad strategy. Any one such action might have a big chance of not working out, but taking none of them is guaranteed to be unhelpful.
You can’t appeal to Nature for not having been given a warning that meets a sufficient standard of rigor.
From a Bayesian point of view, your prior should place low probability on a figure like “8 lives per dollar”. Therefore, lots of evidence is required to overcome that prior.
From a decision-theoretic point of view, believing sketchy arguments (with no offense intended to Anna; I look forward to reading the paper when it is written) that reach extreme conclusions is a bad general strategy. There would have to be a reason why this argument was somehow different from all other arguments of this form.
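To make the first point concrete, here is a minimal sketch of the odds-form arithmetic. The prior (1 in 10,000 that an intervention really achieves something like “8 lives per dollar”) and the target confidence (50%) are purely illustrative assumptions, not estimates made anywhere in this thread; the point is only that a small prior demands a correspondingly large Bayes factor from the evidence.

```python
# Minimal sketch of "a low prior needs a lot of evidence".
# Both numbers below are illustrative assumptions, not real estimates.

prior = 1e-4            # assumed prior probability that a given intervention
                        # really achieves something like "8 lives per dollar"
target_posterior = 0.5  # how confident we'd want to be before acting on the figure

prior_odds = prior / (1 - prior)
target_odds = target_posterior / (1 - target_posterior)

# Bayes' rule in odds form: posterior_odds = bayes_factor * prior_odds,
# so the evidence must supply a likelihood ratio of at least:
required_bayes_factor = target_odds / prior_odds

print(f"prior odds            ~ {prior_odds:.2e}")
print(f"required Bayes factor ~ {required_bayes_factor:.2e}")
# With a 1-in-10,000 prior, even reaching 50% confidence requires evidence
# roughly 10,000 times more likely if the figure is right than if it is wrong.
```

Whether any written-up argument could plausibly supply a likelihood ratio of that size is exactly what the two points above are questioning.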
Avoiding all actions that are uncertain but carry huge consequences is certainly a bad strategy. Any one such action might have a big chance of not working out, but taking none of them is guaranteed to be unhelpful.
If there were tons of actions lying around with similarly huge potential positive consequences, then I would be first in line to take them (for exactly the reason you gave). As it stands, it seems like in reality I get a one-time chance to reduce p(bad singularity) by some small amount. More explicitly, it seems like SIAI’s research program reduces xrisk by some small amount, and a handful of other programs would also reduce xrisk by some small amount. There is no combined set of programs that cumulatively reduces xrisk by some large amount (say > 3% to be explicit).
I have to admit that I’m a little bit confused about how to reason here. The issue is that any action I can personally take will only decrease xrisk by some small amount anyway. But to me the situation feels different if society can collectively decrease xrisk by some large amount, versus if even collectively we can only decrease it by some small amount. My current estimate is that we are in the latter case, not the former: even if xrisk research had unlimited funding, we could only decrease total xrisk by something like 1%. My intuitions here are further complicated by the fact that I also think humans are very bad at estimating small probabilities, so the 1% figure could very easily be a gross overestimate, whereas a 5% figure starts to get into the range where humans are somewhat better at estimating and is less likely to be such a bad overestimate.
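One way to see how the “small reduction in xrisk” framing interacts with a figure like “8 lives per dollar” is to decompose the cost-effectiveness estimate explicitly. Every input below (the population counted, the achievable risk reduction, the hypothetical budget) is an illustrative assumption chosen only to show the shape of the calculation:

```python
# Back-of-the-envelope sketch: a "lives per dollar" figure decomposes into
# (people at stake) * (achievable reduction in p(catastrophe)) / (dollars spent).
# All inputs are illustrative assumptions, not estimates from this discussion.

people_at_stake = 7e9              # current population only; ignores future generations
achievable_risk_reduction = 0.01   # the pessimistic "even with unlimited funding, ~1%" figure
program_budget = 1e9               # hypothetical total spending needed to achieve that reduction

expected_lives_saved = people_at_stake * achievable_risk_reduction
lives_per_dollar = expected_lives_saved / program_budget

print(f"expected lives saved ~ {expected_lives_saved:.2e}")
print(f"lives per dollar     ~ {lives_per_dollar:.3f}")
# ~0.07 lives per dollar under these inputs; a figure like "8 lives per dollar"
# needs a much larger population at stake (e.g. future generations), a larger
# achievable risk reduction, or a much smaller budget.
```

Under these deliberately conservative inputs the figure comes out near 0.07 lives per dollar; much of the disagreement above can be read as a disagreement about which of the three inputs should be orders of magnitude larger or smaller.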
From a Bayesian point of view, your prior should place low probability on a figure like “8 lives per dollar”. Therefore, lots of evidence is required to overcome that prior.
My prior contains no such provisions; there are many possible worlds where tiny applications of resources have apparently disproportionate effect, and from the outside they don’t look so unlikely to me.
There are good reasons to be suspicious of claims of unusual effectiveness, but I recommend making that reasoning explicit and seeing what it actually says about this situation, and how strongly it says it.
There are also good reasons to be suspicious of arguments involving tiny probabilities, but keep two things in mind: first, you probably aren’t 97% confident that we have so little control over the future (I’ve thought about it a lot and am much more optimistic); and second, even in a pessimistic scenario it is clearly worth thinking seriously about how to handle this sort of uncertainty, because there is quite a lot to gain.
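A rough sketch of why there is “quite a lot to gain” even under uncertainty between the pessimistic and optimistic pictures: weight each scenario by a credence and look at the credence-weighted stake. The credences and the 1% / 10% figures below are illustrative assumptions only:

```python
# Sketch of reasoning under uncertainty about how much control we have.
# Credences and payoff numbers are illustrative assumptions, not estimates
# made by anyone in this thread.

scenarios = {
    # name: (credence in this scenario, achievable reduction in p(catastrophe))
    "pessimistic: little collective control": (0.5, 0.01),
    "optimistic: substantial collective control": (0.5, 0.10),
}

people_at_stake = 7e9  # current population only, as a deliberately conservative stake

expected_reduction = sum(credence * reduction for credence, reduction in scenarios.values())
expected_lives = expected_reduction * people_at_stake

print(f"credence-weighted risk reduction ~ {expected_reduction:.3f}")
print(f"expected lives at stake          ~ {expected_lives:.2e}")
# Even with half the weight on the pessimistic 1% scenario, the credence-weighted
# stake remains enormous, which is the sense in which it is worth thinking
# seriously about how to handle the uncertainty.
```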
Of course this isn’t an argument that you should support the SIAI in particular (though it may be worth doing some information-gathering to understand what they are currently doing), but rather that you should continue to optimize in good faith.
Can you clarify what you mean by this?
Only that you consider the arguments you have advanced in good faith, as a difficulty and a piece of evidence rather than as potential excuses.