This is what I would expect an AGI takeoff to look like if we are in fact in a “hardware overshoot”. I actually think a hardware-bound “slow takeoff” is more likely, but I’d put a scenario like this at >5%.
I should have known that AGI was near the moment BetaStar was released. Unlike AlphaStar, which was trained with more compute than any previous system and still fell short of human-expert performance, BetaStar was trained by a researcher on a single TPU in under a month and could beat the world’s best player even when limited to half the actions per minute of human players. And unlike AlphaStar, which relied on a swarm of reinforcement learners to find a strategy, BetaStar used a much more elegant algorithm: roughly, a combination of Transformers (of GPT-3 fame) and good old-fashioned alpha-beta pruning (the same algorithm Deep Blue used 30 years ago).
The trick was finding a way to combine the two without triggering a combinatorial explosion. Not only did the trick work, but because Transformers were already known to work across a wide variety of domains (text, images, audio, video, gestures, ...), it was immediately obvious how to apply BetaStar to practically every domain: motion planning for robots, resume writing, beating the stock market.
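The story never spells out how the combination actually works, but one plausible shape is a transformer policy/value network used to rank and prune candidate moves inside a classic alpha-beta search, so the branching factor stays tiny no matter how large the raw move space is. The sketch below is purely illustrative: `ToyModel`, `ToyGame`, and every other name in it are my assumptions, not details from the scenario.

```python
"""Illustrative sketch: a transformer-style policy/value model guiding
alpha-beta search. The model and game here are toy stand-ins."""

import random


class ToyModel:
    """Stand-in for a trained transformer with policy and value heads."""

    def value(self, state):
        # Placeholder leaf evaluation; a real system would run the network.
        return random.uniform(-1.0, 1.0)

    def policy(self, state, move):
        # Placeholder move prior; a real system would score all moves in one pass.
        return random.uniform(0.0, 1.0)


class ToyGame:
    """Stand-in game: states are integers, moves add 1..10, terminal at 30."""

    def legal_moves(self, state):
        return list(range(1, 11))

    def apply(self, state, move):
        return state + move

    def is_terminal(self, state):
        return state >= 30


def alpha_beta(state, depth, alpha, beta, model, game, top_k=3):
    """Negamax alpha-beta restricted to the model's top-k candidate moves.

    Pruning by the policy prior is what keeps the branching factor (and
    hence the compute bill) small enough for deep search to stay tractable.
    """
    if depth == 0 or game.is_terminal(state):
        return model.value(state)
    # Rank moves by the policy prior and search only the most promising few.
    moves = sorted(game.legal_moves(state),
                   key=lambda m: model.policy(state, m),
                   reverse=True)[:top_k]
    best = float("-inf")
    for move in moves:
        score = -alpha_beta(game.apply(state, move), depth - 1,
                            -beta, -alpha, model, game, top_k)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # beta cutoff: the opponent will never allow this line
    return best


if __name__ == "__main__":
    print(alpha_beta(0, depth=6, alpha=float("-inf"), beta=float("inf"),
                     model=ToyModel(), game=ToyGame()))
```

The design point is the division of labor: the learned prior cuts the tree’s breadth, and the exhaustive-within-the-cut search supplies the depth, which is why such a hybrid could plausibly run on a single TPU.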
Even if I didn’t see it coming, the experts at Google, OpenAI, and all of the world’s major governments did. A worldwide arms race was immediately launched to see who could scale BetaStar up the fastest. First place meant ruling the world. Second place meant the barest chance at survival. Third place meant extinction.
OpenAI was the first to announce that it had trained a version of BetaStar that appeared to have the intelligence of a five-year-old child. A week later, Google announced that its version was the equivalent of a PhD student. The NSA never said how smart its version was. Instead, the President of the United States announced that every supercomputer and nuclear weapon in China, Russia, Iran, North Korea, and Syria had been destroyed.
A few weeks later, every single American received a check for $10,000 and a letter explaining that the checks would keep coming every month thereafter. A few riots broke out around the world in protest of “American Imperialism,” but once checks started arriving in other countries, most people stopped complaining.
Nobody really knows what the AI is up to these days, but life on Earth is good so far and we try not to worry about it. Space, however, belongs to the AI, much to Elon Musk’s disappointment.
Signs we are in hardware overshoot:
- A novel algorithm achieves state-of-the-art performance on a well-studied problem using 2-3 orders of magnitude less compute.
- It is apparent to experts how the algorithm generalizes to other real-world problems.
- Major institutions undertake a “Manhattan Project”-style arms race to scale up a general-purpose algorithm.
Caveats
I gave this story a “happy ending”. Hardware overshoot (and other forms of fast AGI takeoff) is the most dangerous version of AGI because it can quickly surpass all human beings. It’s easy to imagine a version of the story where the winner of the arms race is not benevolent, or where there is an alignment failure and humans lose control of the AGI entirely.
> It’s easy to imagine a version of the story where the winner of the arms race is not benevolent, or where there is an alignment failure and humans lose control of the AGI entirely.
I would frame it a bit differently: currently, we haven’t solved the alignment problem, so in this scenario the AI would be unaligned, and it would kill us all (or do something similarly bad) as soon as it suited it. We can imagine versions of this scenario where a ton of progress is made on the alignment problem, or where it surprisingly turns out that “alignment by default” is true and there never was a problem to begin with. But both of those would be very unusual, distinct scenarios, and would require more text to tell.