Thanks, but I am not convinced that the first AI that turns against humans and wins must necessarily be one that is extremely powerful in all dimensions. Skynet may be cartoonish, but why shouldn’t the first AI that moves against humankind be one that controls a large part of the US nukes while not being able to manipulate germs?
It seems likely that for strategic reasons an AGI will not act in a hostile manner until it is essentially certain to permanently win. It also seems likely that any means by which it can permanently win with near-certainty will kill most of the population relatively quickly.
Keep in mind that this should be measured in comparison with the end-of-life scenarios that most people would face otherwise: typically dementia, cancer, chronic lung or cardiovascular disease. It seems unlikely that most of the people alive at the start of an AI doom scenario will suffer much worse than that for much longer.
If it truly is worse than not being alive at all, suicide will be an option in most scenarios.
I think the comparison to cancer etc. is helpful, thanks.
The suicide option is a somewhat strange but maybe helpful perspective, as it simplifies the original question by splitting it:
1. Do you consider a life worth living that ends in a situation in which suicide is the best option?
2. How likely is this to be the case for most people in the relatively near future (including because of AI)?
Because, so the argument goes, if the AI is powerful enough to pose any threat at all, then it is surely powerful enough to improve itself (in the slowest case by coercing or bribing human researchers, until it is eventually able to self-modify). Unlike humans, the AI has no skill ceiling, so the recursive feedback loop of improvement will go FOOM in a relatively short amount of time, though exactly how short is a matter of debate.
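To make the foom vs. slow-takeoff intuition concrete, here is a minimal toy model (my own illustrative sketch, not anything from this thread or the linked post; the growth law and all parameter values are assumptions): if capability C improves itself at rate dC/dt = k·C^alpha, then alpha > 1 gives a finite-time blow-up, alpha = 1 gives “merely” exponential growth, and alpha < 1 gives a slow, polynomial takeoff.

```python
# Toy model of recursive self-improvement (illustrative sketch only).
# Capability C reinvests itself: dC/dt = k * C**alpha.
#   alpha > 1  -> finite-time blow-up ("foom")
#   alpha = 1  -> exponential growth (fast, but no singularity)
#   alpha < 1  -> polynomial growth (slow takeoff)
# All parameter values below are arbitrary assumptions chosen for illustration.

def time_to_threshold(alpha, k=0.05, c0=1.0, dt=0.01, t_max=500.0, cap=1e9):
    """Euler-integrate dC/dt = k * C**alpha; return the time at which
    capability first exceeds `cap`, or None if it stays below within t_max."""
    c, t = c0, 0.0
    while t < t_max:
        c += k * (c ** alpha) * dt
        t += dt
        if c >= cap:
            return t
    return None

if __name__ == "__main__":
    for alpha in (0.5, 1.0, 1.5):
        hit = time_to_threshold(alpha)
        when = f"t = {hit:.1f}" if hit is not None else "not within t_max"
        print(f"alpha = {alpha}: capability exceeds 1e9 at {when}")
```

With these made-up numbers, only the alpha = 1.5 run crosses the threshold quickly; alpha = 1 takes roughly an order of magnitude longer, and alpha = 0.5 never gets there in the simulated window. That is roughly the shape of the disagreement: not whether improvement compounds, but how strongly.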
Isn’t there a fair amount of disagreement about whether FOOM necessarily happens?
People also talk about a slow takeoff being risky. See the “Why Does This Matter” section from here.
I don’t doubt that slow takeoff is risky. I rather meant that FOOM is not guaranteed, and that the risk from a not-immediately-omnipotent AI may look more like a catastrophic, painful war.