The first super-intelligent AGI will be built by a team of 1m Von-Neumann level AGIs
Or how about: a few iterations from now, a team of AutoGPTs makes a strongly superhuman AI, which then makes the million Von Neumanns, which take over the world on its behalf.
So the timeline goes something like:
Dumb human (this was GPT-3.5)
Average-ish human but book smart (GPT-4/AutoGPT)
Actually intelligent human (smart grad student-ish)
Von Neumann (smartest human ever)
Super human (but not yet super-intelligent)
Super-intelligent
Dyson sphere of computronium???
By the time we get the first Von-Neumann-level AGI, every human on earth is going to have a team of thousands of AutoGPTs working for them. The person who builds the first Von-Neumann-level AGI doesn't get to take over the world because they're outnumbered roughly 70 trillion to one (on the order of 7 billion people times 10,000 AutoGPTs each).
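For what it's worth, here's the back-of-the-envelope arithmetic behind that ratio. The 10,000-agents-per-person figure is an illustrative assumption ("thousands" per person), not something pinned down above:

```python
# Back-of-the-envelope check on the "70 trillion to one" figure.
# Assumed, illustrative inputs: ~7 billion humans, ~10,000 AutoGPTs each.
humans = 7_000_000_000
agents_per_human = 10_000

total_agents = humans * agents_per_human
print(f"{total_agents:,} cheap agents already deployed")
# -> 70,000,000,000,000 (70 trillion) vs. one new Von-Neumann-level AGI
```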
The ratio is a direct consequence of the fact that it is much cheaper to run an AI than to train one. There are also ecological reasons why weaker agents will outnumber stronger ones: big models are expensive to run, and there's simply no reason to use an AI that costs $100/hour for most tasks when one that costs literally pennies can do 90% as good a job. This is the same reason why bacteria >> insects >> people. There's no method by which humans could kill every insect on earth without killing ourselves as well.
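To make the economics concrete, here's a hedged sketch of that comparison. The $100/hour and 90%-as-good figures come from the paragraph above; the "pennies" cost is an illustrative assumption:

```python
# Cost-effectiveness of a cheap model vs. an expensive one.
big_cost_per_hour = 100.00    # $/hour, frontier model (figure from the text)
small_cost_per_hour = 0.05    # $/hour, "pennies" model (assumed)

big_quality = 1.00            # normalized task quality
small_quality = 0.90          # "90% as good a job"

# Quality purchased per dollar:
big_value = big_quality / big_cost_per_hour        # 0.01 quality/$
small_value = small_quality / small_cost_per_hour  # 18.0 quality/$

print(f"cheap model: {small_value / big_value:,.0f}x more quality per dollar")
# -> 1,800x, so for most tasks you deploy swarms of weak agents instead
```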
See also: why AI X-risk stories always postulate magic like “nano-technology” or “instantly hack every computer on earth”.
How many requests does OpenAI handle per day? What happens when you have several copies of an LLM talking to each other at that rate, with a team of AutoGPTs helping to curate the dialogue and perform other auxiliary tasks? It’s a recipe for an intelligence singularity.
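For concreteness, a toy sketch of that setup. Everything here is hypothetical: query_llm is a stand-in for a real completion API, and the AutoGPT curation step is reduced to a simple window over the transcript:

```python
# Sketch: n copies of an LLM in a round-robin dialogue, with a
# curation step standing in for the auxiliary AutoGPT team.

def query_llm(prompt: str) -> str:
    # Hypothetical placeholder; swap in an actual completion call here.
    return f"(reply to {len(prompt)} chars of context)"

def curate(transcript: list[str]) -> list[str]:
    # Placeholder for the AutoGPT curators: drop low-value turns,
    # keep the running dialogue inside the context window, etc.
    return transcript[-20:]

def self_dialogue(task: str, n_agents: int = 3, rounds: int = 10) -> list[str]:
    transcript = [f"Task: {task}"]
    for _ in range(rounds):
        for i in range(n_agents):
            context = "\n".join(curate(transcript))
            transcript.append(f"agent {i}: " + query_llm(context))
    return transcript

print("\n".join(self_dialogue("design a better LLM")))
```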
How much being outnumbered counts depends on the type of conflict. A chess grandmaster can easily defeat ten mediocre players in a simultaneous game.