Even without taking future lives into account, a 2% extinction risk is equivalent to around 160 million casualties in expectation (2% of a world population of roughly 8 billion), or about four times the population of Canada. It’s difficult to say whether the potential benefits of powerful AI systems would justify taking that relatively high risk.
One thing to consider is that around 67 million people currently die every year. If one assumes that the ability to “learn the answers to all of humanity’s greatest questions” would also entail the ability to rapidly cut this rate at least 10-fold, then the benefits look formidable: one could argue that inaction costs roughly 60 million lives per year.
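For concreteness, here is the arithmetic behind both headline figures as a minimal Python sketch. The inputs are the rough numbers quoted above; the break-even comparison at the end is just one possible framing, not something from the original discussion.

```python
# Back-of-the-envelope comparison of the two headline figures above.
# All inputs are the rough numbers quoted in the discussion.

WORLD_POPULATION = 8e9   # ~8 billion people
EXTINCTION_RISK = 0.02   # the 2% risk discussed above
ANNUAL_DEATHS = 67e6     # ~67 million deaths per year
MORTALITY_CUT = 10       # hypothesized 10-fold cut in the death rate

expected_casualties = EXTINCTION_RISK * WORLD_POPULATION
deaths_averted_per_year = ANNUAL_DEATHS * (1 - 1 / MORTALITY_CUT)
breakeven_years = expected_casualties / deaths_averted_per_year

print(f"Expected casualties from a 2% extinction risk: {expected_casualties:,.0f}")
print(f"Deaths averted per year by a 10-fold cut:      {deaths_averted_per_year:,.0f}")
print(f"Years of averted deaths to offset that risk:   {breakeven_years:.1f}")
```

Under these assumptions, fewer than three years of averted deaths would offset the expected casualties from the 2% risk, which is the quantitative core of the argument.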
In this sense, AI is different from all those other cases, which do not seem to be associated with this kind of benefit.
Of course, one important factor here is how likely we are to make really rapid progress in anti-aging and rejuvenation research, in the ability to reliably cure various cancers, and so on in the absence of “strong AI” (or, at least, rapid progress in the ability to reliably freeze and revive a mouse). So far, my main hopes for progress here are all AI-related, as these problems all seem too complicated for humans to solve unaided. But I might easily be wrong.
In any case, in addition to comparing P(doom for humanity) conditional on the presence and on the absence of “strong AI” (“strong AI” is not the only existential risk we are facing, and many people hope that it would be protective against the others), one can also consider P(doom for all currently existing people) under the same two conditions, as well as various life expectancy measures for various sets of people under those conditions.
For example, the question of whether one’s P(doom for all currently existing people) conditional on the absence of “strong AI” is much smaller than 1 is quite legitimate, given that all people currently seem to be mortal and that “escape velocity” in life expectancy remains merely a dream so far (some sort of revolution is needed if people are not to die eventually with 100% probability).
But I do feel that life expectancy estimates conditional on the presence and on the absence of “strong AI” might be more relevant and fair. I don’t know whether anyone is trying to compute those.
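As a very rough illustration of what such an estimate might look like, here is a toy Python sketch. Every parameter below (the baseline remaining life expectancy, the average years lived in the doom branch, the lifespan in the good branch) is an illustrative assumption of mine, not a number from the discussion.

```python
# Toy model of expected remaining life-years for a currently living person,
# conditional on the presence or absence of "strong AI". All parameters are
# illustrative assumptions chosen only to show the shape of the calculation.

BASELINE_REMAINING_YEARS = 40   # rough average remaining life expectancy today
P_DOOM_GIVEN_AI = 0.02          # the 2% risk discussed above
YEARS_IF_AI_GOES_WELL = 1000    # stand-in for radical life extension
YEARS_IF_DOOM = 10              # assumed average years lived before a catastrophe

def expected_years_without_ai() -> float:
    # Without a longevity revolution, each currently living person dies with
    # probability ~1, so remaining life expectancy stays near today's baseline.
    return BASELINE_REMAINING_YEARS

def expected_years_with_ai() -> float:
    # Mixture over the doom and non-doom branches.
    return (P_DOOM_GIVEN_AI * YEARS_IF_DOOM
            + (1 - P_DOOM_GIVEN_AI) * YEARS_IF_AI_GOES_WELL)

print(f'Without "strong AI": {expected_years_without_ai():.0f} expected years')
print(f'With "strong AI":    {expected_years_with_ai():.0f} expected years')
```

In this toy parameterization the mixture is dominated by the non-doom branch even at a 2% doom probability, which is exactly why the choice of parameters, especially the assumed lifespan in the good branch, would dominate any real estimate.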
The Manhattan Project had benefits potentially in the millions of lives if the counterfactual was broader Nazi domination. So while AI is different in the size of the benefit, the difference is quantitative rather than qualitative. I agree it would be interesting to compute QALYs with and without AI, and to do the same for some of the other examples in the list.