Right, but of course the absolute, certain implication from "AGI is created" to "all biological life on Earth is eaten by nanotechnology made by an unaligned AI with worthless goals" requires justification, and justification for that level of certainty is completely missing.
In general, such confidently made predictions about the technological future have a poor historical track record. There are multiple holes in the Eliezer/MIRI story, and there is no formal, canonical write-up of why they're so confident in their apparently secret knowledge. There's a lot of informal, non-canonical, nontechnical material (List of Lethalities, security mindset, and so on) that gestures at the ideas, but it leaves too many holes and potential objections to support their claimed level of confidence. They haven't published anything formal since 2021, and very little since 2017.
We need more than that if we’re going to confidently prefer nuclear devastation over AGI.
The trade-off you’re gesturing at is really risk of AGI vs. risk of nuclear devastation. So you don’t need absolute certainty on either side in order to be willing to make it.
Did you intend to say "risk off", or "risk of"?
If the former, then I don’t understand your comment and maybe a rewording would help me.
If the latter, then I'll just reiterate that I'm referring to Eliezer's explicitly stated willingness to trade off the actuality of (not just some risk of) nuclear devastation to prevent the creation of AGI (though again, to be clear, I am not claiming he advocated a nuclear first strike). The only potential uncertainty in that trade-off is the consequences of AGI (though I think Eliezer has been clear that he expects certain doom), and, I suppose, what follows nuclear devastation as well.