It seems likely that for strategic reasons an AGI will not act in a hostile manner until it is essentially certain to permanently win. It also seems likely that any means by which it can permanently win with near-certainty will kill most of the population relatively quickly.
Keep in mind that this should be measured in comparison with the end-of-life scenarios that most people would face otherwise: typically dementia, cancer, chronic lung or cardiovascular disease. It seems unlikely that most of the people alive at the start of an AI doom scenario will suffer much worse than that for much longer.
If life in such a scenario truly is worse than not being alive at all, suicide will remain an option in most scenarios.
I think the comparison to cancer etc. is helpful, thanks.
The suicide option is a somewhat strange but maybe helpful perspective, as it simplifies the original question by splitting it:
Do you consider a life worth living if it ends in a situation where suicide is the best option?
How likely is it that this will be the case for most people in the relatively near future? (Including because of AI.)