Wasn’t hardware overhang the argument that if AGI is more bottlenecked by software than by hardware, then conceptual insights on the software side could cause a discontinuity as people suddenly figure out how to use that hardware effectively? I’m not sure how your counterargument really works there, since the AI that arrives “a bit earlier” either precedes or follows that conceptual breakthrough. If it precedes the breakthrough, then it doesn’t benefit from the conceptual insight, so it won’t be powerful enough to take advantage of the overhang; and if it follows it, then it has a discontinuous advantage over previous systems and can take advantage of the hardware overhang.
---
Separately, your comment also feels related to my argument that focusing on just superintelligence is a useful simplifying assumption, since a superintelligence is almost by definition capable of taking over the world. But it simplifies things a little too much: if we focus only on the superintelligence case, we might miss the emergence of a “dumb” AGI which nevertheless has the “crucial capabilities” necessary for a world takeover.
In those terms, “having sufficient offensive cybersecurity capability that a hacking attempt can snowball into a world takeover” would be one such crucial capability allowing for a discontinuity.