I am not confident that GDP is a useful abstraction across the whole space of potential futures.
Suppose someone uses GPT5 to generate code, and then throws lots of compute at the generated code. GPT5 has generalized from the specific AI techniques humans have invented, treating them as just a random sample from the broader space of AI techniques. When it samples from that space, it sometimes happens to pull out a technique more powerful than anything humans have invented. Given plenty of compute, it rapidly self-improves. The humans are happy to keep throwing compute at it. (Maybe the AI is doing some moderately useful task for them; maybe they think it's still training.) Neither the AI's actions nor the amount of compute used is economically significant. (The AI can't yet gain much more compute without revealing how smart it is and prompting humans to try to stop it.) After a month of this, the AI hacks some lab equipment over the internet and sends a few carefully chosen emails to a biotech company. A week later, nanobots escape the lab. A week after that, grey goo has extinguished all life on Earth.
Alternate scenario: the AI thinks its most reliable route to takeover runs through economic power. It makes loads of money performing various services (on the order of 50% of GDP). It uses this money to buy up all the compute, and to pay people to make nanobots. Grey goo as before.
(Does grey goo count towards GDP? What about the various technologies the AI develops, which would be enormously valuable if they were under meaningful human control?)
So in this set of circumstances, whether there is explosive economic growth depends on whether "do everything and make loads of money" or "stay quiet and hack lab equipment" offers the faster, more reliable path to nanobots.