“Hack every AI on the planet” sounds like a big ask for an AI that will have a tiny fraction (<1%) of the world’s total computing power at its disposal.
Furthermore, it has to do that retroactively. The first super-intelligent AGI will be built by a team of one million von Neumann-level AGIs who are working their hardest to prevent exactly that from happening.
The AI doomer logic all hinges on at least one of the following unlikely things being true:
1. Nanotechnology is easy to develop.
2. Species-killer bioweapons are easy to develop.
3. Hacking other computers is easy for a superintelligence.
4. Greater intelligence gains capabilities we can’t model or predict, able to do things we cannot defend against.
5. The intelligence in (4) can be supported on a human-buildable compute substrate, not small planetoids of 3D computronium.
6. Humans will build a few very powerful agentic ASIs, task them with some long-running wish, and let them operate for a long period to work on the goal without human input.
7. Robotics is easy.
8. Real-world data is high-resolution / the aggregate set of human papers can be mined for high-resolution information about the world.
9. Humans can be easily modeled and socially manipulated.
Note that just one or a few of the above would let an ASI conquer the planet; however, by current knowledge, each of the above is unlikely (less than 10 percent probability). Many doomers will state that they don’t believe this, and that an ASI could take over the planet in hours or weeks.
This is unlikely, but I am partly just stating their assumptions. They could be correct in the end; see the “least dignified timeline” meme.
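For what it’s worth, here is a minimal sketch of how those per-item probabilities aggregate, under the strong (and probably wrong) assumption that the nine claims are independent and each sits exactly at the 10 percent ceiling:

```python
# Aggregation sketch (my illustration, not the commenter's math).
# Assumes the nine claims are independent and each has a 10% chance
# of being true -- a strong assumption, since the claims are correlated.

p_each = 0.10    # assumed per-claim ceiling, per the comment
n_claims = 9     # items 1-9 in the list above

p_at_least_one = 1 - (1 - p_each) ** n_claims
print(f"P(at least one holds) = {p_at_least_one:.2f}")  # ~0.61
```

Since the claims are surely correlated in practice, the real aggregate could be much lower; the sketch only illustrates that “each item is unlikely” and “at least one item holds” are different quantities.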
While I’m generally a doom sceptic, I don’t see the problem with (3): hacking computers is possible for smart humans, so it should be easy for IQ-200 AIs.
While I see a lot of concern about the big one, I think the whole AI environment being unaligned is the more likely, though not any better, outcome: a society that is doing really well by some metrics that just happen to be the wrong ones. I’m thinking of the idea of freedom of contract that was popular at the beginning of the 20th century, and of how hard it was to dig ourselves out of that hole.
/s Yeah, the 20th century was really a disaster for humanity. It would be terrible if capitalism and economic development were to keep going like this.
The first super-intelligent AGI will be built by a team of one million von Neumann-level AGIs
Or how about: a few iterations from now, a team of AutoGPTs makes a strongly superhuman AI, which then makes the million von Neumanns, which take over the world on its behalf.
So the timeline goes something like:
Dumb human (this was GPT-3.5)
Average-ish human but book smart (GPT-4/AutoGPT)
Actually intelligent human (smart grad student-ish)
Von Neumann (smartest human ever)
Super human (but not yet super-intelligent)
Super-intelligent
Dyson sphere of computronium???
By the time we get the first von Neumann, every human on earth is going to have a team of thousands of AutoGPTs working for them. The person who builds the first von Neumann-level AGI doesn’t get to take over the world because they’re outnumbered 70 trillion to one.
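A back-of-the-envelope sketch of where a figure like 70 trillion could come from; the ~8 billion humans and the per-human team size are my assumptions (the comment only says “thousands”):

```python
# Rough check on the "70 trillion to one" figure.
# Assumed inputs (not stated in the comment): ~8 billion humans,
# each with a team of roughly 9,000 AutoGPT-class agents.

humans = 8_000_000_000
agents_per_human = 9_000   # "a team of thousands" -- order-of-magnitude guess

total_agents = humans * agents_per_human
print(f"total agents: {total_agents:.1e}")  # ~7.2e+13, i.e. ~70 trillion
```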
The ratio is a direct consequence of the fact that it is much cheaper to run an AI than to train one. There are also ecological reasons why weaker agents will out-compute stronger ones: big models are expensive to run, and there’s simply no reason to use an AI that costs $100/hour for most tasks when one that costs literally pennies can do 90% as good a job. This is the same reason why bacteria >> insects >> people. There’s no method by which humans could kill every insect on earth without killing ourselves as well.
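To make the economics concrete, a minimal sketch of the quality-per-dollar comparison; the $100/hour and “90% as good” figures are the comment’s, while the exact value of “pennies” is my assumption:

```python
# Quality-per-dollar comparison between a frontier model and a cheap one.
# $100/hour and the 90% quality figure come from the comment above;
# the 5-cents-per-hour figure is an assumed stand-in for "pennies".

big_cost = 100.00   # $/hour, expensive frontier model
small_cost = 0.05   # $/hour, "literally pennies"
big_quality = 1.00
small_quality = 0.90

big_value = big_quality / big_cost        # 0.01 quality units per dollar
small_value = small_quality / small_cost  # 18.0 quality units per dollar
print(f"cheap model is {small_value / big_value:,.0f}x more cost-effective")  # ~1,800x
```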
See also: why AI X-risk stories always postulate magic like “nano-technology” or “instantly hack every computer on earth”.
By the time we get the first von Neumann, every human on earth is going to have a team of thousands of AutoGPTs working for them.
How many requests does OpenAI handle per day? What happens when you have several copies of an LLM talking to each other at that rate, with a team of AutoGPTs helping to curate the dialogue and perform other auxiliary tasks? It’s a recipe for an intelligence singularity.
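For a rough sense of that rate, a sketch with an assumed request volume (OpenAI does not publish the real figure):

```python
# Throughput sketch for LLM self-dialogue at API scale.
# The requests/day number is an assumed order of magnitude, not a real figure.

requests_per_day = 1_000_000_000   # assumption: ~1e9 requests/day
seconds_per_day = 86_400

exchanges_per_second = requests_per_day / seconds_per_day
print(f"{exchanges_per_second:,.0f} exchanges/sec")  # ~11,574

# A fast human conversation runs at well under one exchange per second,
# so this is four to five orders of magnitude faster than human dialogue.
```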
How much being outnumbered counts depends on the type of conflict. A chess grandmaster can easily defeat ten mediocre players in a simultaneous game.