I don’t think the hypothetical is true (by a large margin)
“A large margin” which way?
But why only “possibly”?
“Possibly” because:
I’d have to reevaluate the odds, my confidence in them, and my confidence in that confidence (probably no more meta than that) before actually changing my behavior based on the argument;
I’d have to compare against other potential x-risk prevention measures, which might turn out to be similarly and surprisingly important when evaluated just as thoroughly and at the same level of detail;
and even if convinced that yes, AI indeed has a 10% or greater chance of wiping out the human race as we know it AND would not replace it with something “better” in some sense of the word, AND that yes, MIRI can reduce this chance to a mere 1%, AND no, other x-risk prevention efforts are not nearly as effective at improving humanity’s odds of surviving (in some form) the next century or millennium, I would still have to convince myself that donating to MIRI, advocating for it, volunteering, or doing pro bono research for it would be an effective strategy.
Do you mean that the “high confidence” is conditional only on the “convincing” argument, but that “convincing” corresponds to relatively low confidence in the argument itself? What is the hypothetical here?
Not sure I follow the question… I am no Bayesian; to me, the argument being convincing is a statement about the odds of the argument being true, while the confidence in the predicted outcomes depends on how narrow a distribution the argument produces, provided it’s true.
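(One way to make that distinction concrete; the notation here is mine, not part of the exchange. Let A be the event that the argument is sound and X the quantity it predicts; then

$$\text{“convincing”:}\ \ P(A)\ \text{is high},\qquad \text{“confident in the prediction”:}\ \ \operatorname{Var}(X \mid A)\ \text{is small},$$

and the two can come apart: an argument can be probably sound yet predict only a wide range of outcomes, or vice versa.)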
I see. I thought you were more in tune with Eliezer on this issue. I was simply trying to see what would make me take the MIRI research much more seriously. I am fascinated by the mathematical side of it, which is hopefully of high enough quality to attract expert attention, but I am currently much more skeptical of its effects on the odds of humanity surviving the next century or two.
I changed specifics to variables because I was interested more in the broader point than the specific case.
Asteroid tracking involved spending ~$100MM to eliminate most of the expected losses from civilization-wrecking asteroids. Generously, it might have eliminated as much as a 10^-6 extinction risk (if we had found a dinosaur-killer on course, our civilization would have mobilized to divert it). At the same tradeoff, getting rid of a 9% extinction risk would seem to be worth $9T or more. Billions are spent on biodefense and nuclear nonproliferation programs each year.
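Spelling out the arithmetic behind those figures (the linear extrapolation from the asteroid benchmark to a 9% risk is an assumption, not something established above):

$$\frac{\$10^{8}}{10^{-6}} = \$10^{14}\ \text{per unit of extinction probability},\qquad 0.09 \times \$10^{14} = \$9\times 10^{12} = \$9\text{T}.$$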
So it seems to me that a 9% figure ‘overshoots’ the relevant thresholds at work in other areas: a believed cost per increment of existential risk reduction much lower than those benchmarks would already seem to suffice for more-than-adequate support (national governments, large foundations, and plenty of scientific talent would step in before that point, judging from experience with nuclear weapons, climate change, cancer research, etc.).
For comparison, consider someone who says that she will donate to malaria relief iff there is solidly convincing proof that at least 1,000 cases of malaria affecting currently living people will be averted per dollar in the short term. This threshold is irrelevant in a world with a Gates Foundation, GiveWell, and so on: she will never get the chance to act on it, because those with less stringent thresholds act first.
I was trying to clarify whether you were using an extreme example to make the point in principle, or were saying that your threshold for action would actually be in that vicinity.
9% is far too high.