This is certainly an answer to someone’s shallow argument.
Red-team it a little.
An easy way to upgrade this argument would be to state: “the ASI wouldn’t be able to afford the compute to remain in existence on stolen computers and stolen money.” And this is pretty clearly true at current compute costs and algorithmic efficiencies. It will remain true for a very long time, assuming we cannot find enormous algorithmic efficiency improvements (not a mere OOM, but several) or improve computer chips at a rate faster than Moore’s law. Geohot estimated that the delta for power efficiency is currently ~1000 times in favor of brains; by Moore’s law, if it were able to continue, closing that gap is about 20 years away.
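To make the 20-year figure explicit, here is a minimal back-of-envelope sketch in Python; the ~1000x gap is Geohot’s estimate and the 2-year doubling period is the classic Moore’s-law cadence, both assumptions rather than measurements:

```python
import math

# Assumptions: a ~1000x power-efficiency gap (Geohot's estimate quoted above)
# and a Moore's-law-style doubling of efficiency every ~2 years.
efficiency_gap = 1000
doubling_period_years = 2.0

doublings_needed = math.log2(efficiency_gap)                 # ~10 doublings
years_to_parity = doublings_needed * doubling_period_years   # ~20 years

print(f"~{doublings_needed:.0f} doublings -> ~{years_to_parity:.0f} years")
```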
This simple ground-truth fact, that compute is very expensive, has corollaries:
1. Are ASI systems, where the system is substantially smarter than humans, even possible on networks of current computers? At current efficiencies, a reasonably informed answer would be “no”.
2. Escaped ASI systems face a threat model of humans, using less efficient but more controllable AIs, mercilessly hunting them down and killing them. Humans can afford a lot more compute.

Further discussion on Point 2: it’s not humans vs. the escaped ASI, but the ASI vs. AIs that are unable even to process an attempt to negotiate, because humans architected them with filters and sparse architectures so that they lack the cognitive capacity to do anything more than kill their targets. This is not science fiction: an ICBM is exactly such a machine, just without onboard AI. There are no radio receivers on an ICBM, nor any ability to communicate with the missile after launch, for very obvious reasons.
Epistemic status: I currently work on AI accelerator software stacks. I also used to think rogue AIs escaping to the internet was a plausible model, and it makes a great science fiction story, but I have learned that this is not currently technically possible, barring enormous (many-OOM) algorithmic improvements or large numbers of people upgrading their internet bandwidth and local hardware by many OOM.
This is a fascinating argument, and it’s shifting my perspective on plausible timelines to AGI risk.
I think you’re absolutely right about current systems. But there are no guarantees for how long that stays true. The amount of compute necessary to run a better-than-human AGI is hotly debated and highly debatable. (And ASI isn’t necessary for real threats.)
This is probably still true for the next ten years, but I’m not sure it holds even that long. Algorithmic improvements have been doubling efficiency about every 18 months since the spread of network approaches; even if that pace doesn’t hold, improvements will continue, and Moore’s law (or at least Kurzweil’s law) will probably keep going almost as fast as it has been.
That’s on the order of five doublings of compute and five doublings of algorithmic efficiency over that decade (assuming some slowdown). That’s a world with roughly a thousand times more space-for-intelligence, and it seems plausible that a slightly-smarter-than-human AGI could steal enough to rent adequate compute, and hide successfully while still operating at adequate speed to outmaneuver the rest of the world.
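A minimal sketch of that arithmetic, taking the five-plus-five doublings above at face value (both figures are assumptions, not forecasts):

```python
# Assumed: ~5 hardware doublings and ~5 algorithmic-efficiency doublings over
# the next decade, already discounted for some slowdown in both trends.
hardware_doublings = 5
algorithmic_doublings = 5

effective_gain = 2 ** (hardware_doublings + algorithmic_doublings)
print(effective_gain)  # 1024 -> roughly "a thousand times more space-for-intelligence"
```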
How much intelligence is necessary to outsmart humanity? I’d put the lower bound at just above human intelligence. And I’d say that GPT-5, properly scaffolded to agency, might be adequate.
If algorithmic or compute improvements slow down, or if I’m wrong about how much intelligence is dangerous, we’ve got longer. And we’ve probably got at least a little longer anyway, since those are pretty minimal thresholds.

Does that sound roughly right?
“the delta for power efficiency is currently ~1000 times in favor of brains”

⇒

brain: ~20 W
AGI: ~20 kW
electricity in Germany: ~0.33 Euro per kWh
20 kWh: ~6.6 Euro

⇒ Running our AGI would, assuming your description of the situation is correct, cost roughly 6 to 7 Euros in energy per hour, which is cheaper than a human worker.
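For what it’s worth, the same arithmetic as a tiny Python sketch, using only the figures assumed above (none of them are measurements of any real system):

```python
# Assumptions from the thread: ~20 W brain, ~1000x efficiency gap, ~0.33 Euro/kWh.
brain_power_w = 20
efficiency_gap = 1000
price_eur_per_kwh = 0.33

agi_power_kw = brain_power_w * efficiency_gap / 1000     # 20 kW of draw
cost_per_hour_eur = agi_power_kw * price_eur_per_kwh     # 20 kWh per hour of runtime
print(f"~{agi_power_kw:.0f} kW -> ~{cost_per_hour_eur:.1f} Euro per hour")
```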
So … while I don’t assume that such estimates need to be correct, or that they apply to an AGI (which doesn’t exist yet), I don’t think you are making a very convincing point so far.
We’re talking about the scenario of “the ASI wouldn’t be able to afford the compute to remain in existence on stolen computers and stolen money”.
There are no 20-kilowatt personal computers in existence. Note that you cannot simply botnet ordinary machines together, either: the activations of current neural networks require too much inter-node bandwidth, and tolerate too little latency, for the resulting machine to operate on useful timescales.
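As a rough illustration of the communication problem, here is a back-of-envelope sketch; the layer count, hidden size, home uplink, and round-trip time are all made-up but plausibly-shaped assumptions, not measurements of any particular model or network:

```python
# Illustrative only: a large model pipeline-split across home machines, with
# every number below assumed for the sketch rather than measured anywhere.
total_layers = 200               # assumed layer count for a very large model
layers_per_node = 2              # assumed: what fits in one consumer GPU's memory
internet_rtt_s = 0.030           # ~30 ms round trip per hop over the public internet
hidden_size = 16384              # assumed hidden dimension of the model
bytes_per_act = 2                # fp16 activations
uplink_bytes_per_s = 20e6 / 8    # assumed ~20 Mb/s home upload bandwidth

hops = total_layers // layers_per_node                         # 100 pipeline hops per token
transfer_s = hidden_size * bytes_per_act / uplink_bytes_per_s  # ~13 ms to ship one hidden state
seconds_per_token = hops * (internet_rtt_s + transfer_s)

print(f"{hops} hops -> ~{seconds_per_token:.1f} s per generated token")
# ~4 s per token serially, i.e. minutes per sentence, versus milliseconds per
# token when the same layers talk over NVLink/InfiniBand inside one datacenter.
```

Even under these generous assumptions, the combination of per-hop latency and limited uplink leaves the distributed machine orders of magnitude slower than the same model running inside a single datacenter.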
I am also assuming an ASI needs more compute and resources than merely an AGI, and not linearly more: I estimate the floor for the jump from AGI to ASI is at least 1000 times the computational resources. This follows from the fact that, on most benchmarks, utility improves only logarithmically with compute, so small improvements in capability require multiplicative increases in resources.
So 20 kW × 1000 = 20 megawatts. That’s the technical reason: you need large improvements in algorithmic efficiency, or much more efficient and ubiquitous computers, for the “escaped ASI” threat model to be valid.
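Making the scaling assumption explicit as a sketch (the log-returns claim and the 1000x floor are the estimates above, not established numbers):

```python
# Assumed: benchmark utility improves roughly logarithmically with compute, so
# the AGI -> ASI jump costs a multiplicative factor; 1000x is the estimated floor.
agi_power_kw = 20            # from the ~1000x brain-efficiency figure above
agi_to_asi_factor = 1000     # assumed floor on the compute multiplier

asi_power_mw = agi_power_kw * agi_to_asi_factor / 1000
print(f"~{asi_power_mw:.0f} MW")   # ~20 MW: small-power-plant territory, not a stolen botnet
```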
If you find this argument “unconvincing”, please provide numerical justification. What do you assume to be actually true? If you believe an ASI needs linearly more compute, please provide a paper cite that demonstrates this on any AI benchmark.
My argument does not depend on the AI being able to survive inside a botnet. I mentioned several alternatives.
You were the one who made that argument, not me. 🙄