The AI doomer logic all hinges on at least one of a number of unlikely things being true:

1. Nanotechnology is easy to develop
2. Species-killer bioweapons are easy to develop
3. Hacking other computers is easy for a superintelligence
4. Greater intelligence gains capabilities we can't model or predict, able to do things we cannot defend against
5. It requires only human-buildable compute substrate to support the intelligence in (4), not small planetoids of 3D computronium
6. Humans will build a few very powerful agentic ASIs, task them with some long-running wish, and let them operate for a long period of time to work on the goal without human input required
7. Robotics is easy
8. Real-world data is high resolution / the aggregate set of human papers can be mined to get high-resolution information about the world
9. Humans can be easily modeled and socially manipulated
Note that just one or a few of the above would let an ASI conquer the planet; however, by current knowledge each of them is unlikely (less than 10 percent probability). Many doomers will say they don't believe this, and hold that an ASI could take over the planet in hours or weeks.
That is unlikely, but I am partly just stating their assumptions. They could be correct in the end; see the "least dignified timeline" meme.
While I'm generally a doom sceptic, I don't see the problem with (3). Hacking computers is possible for smart humans, so it should be easy for IQ-200 AIs.
While I see a lot of concern about the big one, I think the whole AI environment being unaligned is the more likely, but not any better, outcome: a society that is doing really well by some metrics that just happen to be the wrong ones. I'm thinking of the idea of freedom of contract that was popular at the beginning of the 20th century, and how hard it was to dig ourselves out of that hole.
/s Yeah the 20th century was really a disaster for humanity. It would be terrible if capitalism and economic development were to keep going like this.