This is a fascinating argument, and it’s shifting my perspective on plausible timelines to AGI risk.
I think you’re absolutely right about current systems. But there are no guarantees for how long that will stay true. The amount of compute necessary to run a better-than-human AGI is hotly debated and highly debatable. (ASI isn’t necessary for real threats.)
This is probably still true for the next ten years, but I’m not sure it holds even that long. Algorithmic improvements have been doubling efficiency roughly every 18 months since network approaches became widespread; even if that pace doesn’t hold, improvements will continue, and Moore’s law (or at least Kurzweil’s law) will probably keep going at close to its current rate.
Over ten years, that’s on the order of five doublings of compute and five doublings of algorithmic efficiency (assuming some slowdown). That’s a world with roughly a thousand times more space-for-intelligence, and it seems plausible that a slightly-smarter-than-human AGI could steal enough money to rent adequate compute, and hide successfully while still operating at adequate speed to outmaneuver the rest of the world.
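As a back-of-envelope check (the doubling counts here are my rounded assumptions, not precise figures), the combined growth works out like this:

```python
# Back-of-envelope: growth in "space-for-intelligence" over ten years,
# assuming ~five doublings each of compute and algorithmic efficiency
# (both slowed somewhat from their recent ~18-month doubling pace).
compute_doublings = 5
algorithm_doublings = 5

growth_factor = 2 ** (compute_doublings + algorithm_doublings)
print(growth_factor)  # 1024 -- roughly a thousandfold
```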
How much intelligence is necessary to outsmart humanity? I’d put the lower bound at just above human intelligence. And I’d say that GPT-5, properly scaffolded to agency, might be adequate.
If algorithmic or compute improvements slow down, or if I’m wrong about how much intelligence is dangerous, we’ve got longer. And we’ve probably got at least a little longer in any case, since those are pretty minimal thresholds.
Does that sound roughly right?