On doom through normal means: “Persuasion, hacking, and warfare” aren’t by themselves doom, but they can be used to accumulate lots of power, and then that power can be used to cause doom. Imagine a world in which humans are completely economically, militarily, and politically obsolete, thanks to armies of robots directed by superintelligent AIs. Such a world could and would do very nasty things to humans (e.g. let them all starve to death) unless the superintelligent AIs managing everything specifically cared about keeping humans alive and in good living conditions. After all, keeping humans alive & in good living conditions would, ex hypothesi, not be instrumentally valuable to the economy, the military, etc.
How could such a world arise? Well, if we have superintelligent AIs, they can do some hacking, persuasion, and maybe some warfare, and create that world.
How long would this process take? IDK, maybe years? Could be much less. But I wouldn’t be surprised if it takes several years, maybe even five.
I’m not conflating those things. We have ambitious goals and are trying to get our AIs to have ambitious goals—specifically we are trying to get them to have our ambitious goals. It’s not much of a stretch to imagine this going wrong, and them ending up with ambitious goals that are different from ours in various ways (even if somewhat overlapping).
Thanks to you likewise!