Your first alternative hypothesis (there’s ALREADY a path to extinction) is clear to me, and it is unclear what sign or magnitude of change in that risk that AI will bring. Which makes your title a bit suspect—AI doesn’t bring a risk of extinction, it “merely” changes the likelihood, and perhaps the severity, of possible extinction paths.
The title, previously 'Is taking extinction risk reasonable?', has been changed to 'On extinction risk over time and AI'. I appreciate the correction.
I agree that AI changes the likelihood of extinction rather than bringing a risk where there was none before. In that sense, the right question could be 'Is increasing the probability of extinction reasonable?'.
Assuming that by the last sentence you mean that AI does not bring new extinction paths, I would like to counter that AI could well bring new paths to extinction; that is, there are probably paths to human extinction that open up when a machine intelligence surpasses human intelligence. Just as chess engines can apply strategies that humans have not thought of, some machine intelligence could find ways to wipe out humanity that have not yet been imagined. Furthermore, there might be ways to cause human extinction that can only be executed by a superior intelligence. An example could be a path that starts with hacking into many well-defended servers in quick succession, at a speed that even the best group of human hackers could not match, in order to shut down a large part of the internet.