AGIs matter more for human strategic considerations than ASIs do, since they mark the point where human effort gets screened off: research becomes automated (and control over the world is probably effectively lost, unless it's going to be eventually handed back). Even the first AGIs will likely work much faster than humans, and that advantage is sufficient on its own, without needing to be stronger in other ways. AGI is probably sufficiently easier than ASI that humans only get to build AGIs, not ASIs.
When AGIs build ASIs, they face AI risk themselves, which gives them a motivation to put it off a bit. So I don't expect ASIs before AGIs have had a couple of years to bootstrap nanotech and adapt themselves to new hardware, running at scale without first significantly changing their design, so as to get a better handle on AI alignment. Whether a resulting ASI is going to use "deep learning" at all is something humans might need some time to get up to speed on.