I got the impression Eliezer’s claiming that a dangerous superintelligence is merely sufficient for nanotech.
No, I’m pretty confident Eliezer thinks AGI is both necessary and sufficient for nanotech. (Realistically/probabilistically speaking, given plausible levels of future investment into each tech. Obviously it’s not logically necessary or sufficient.) Cf. my summary of Nate’s view in Nate’s reply to Joe Carlsmith:
Nate agrees that if there’s a sphexish way to build world-saving nanosystems, then this should immediately be the top priority, and would be the best way to save the world (that’s currently known to us). Nate doesn’t predict that this is feasible, but it is on his list of the least-unlikely ways things could turn out well, out of the paths Nate can currently name in advance. (Most of Nate’s hope for the future comes from some other surprise occurring that he hasn’t already thought of.)
(I read “sphexish” here as a special case of “narrow AI” / “shallow cognition”, doing more things as a matter of pre-programmed reflex rather than as a matter of strategic choice.)