Unlike other existential risks, an AI could really “finish the job”: an AI bent on eliminating humanity would be able to eradicate the last remaining members of our species.
Most worrying aspect: likely to cause total (not partial) human extinction
I agree that AI risk, given that it is at least catastrophic, is more likely to be existential than the other risks you have mentioned. This is especially true from the astronomical-waste point of view, in the sense that most of the accessible universe gets used in ways that fall far short of its potential.
However, see this discussion of “the AI will keep some humans around” arguments (or will record data about humans and recreate some of them in experiments and the like).
All solutions proposed so far have turned out to be very inadequate.
Well, none have been tested. Potential problems have been found or suggested, but depending on technological and social factors, many of them might still work.
If you agree that a superhuman AI is capable of posing an existential risk, that makes the system that keeps it from running amok the most safety-critical piece of technology in history. There is no room for hope, optimism, or wishful thinking in a project like that. If you can’t prove with a high degree of certainty that it will work perfectly, you shouldn’t turn it on.
Or, to put it another way, the engineering team should act as if they were working with antimatter instead of software. The AI is actually a lot more dangerous than that, but giant explosions are a lot easier for human minds to visualize than UFAI outcomes...