To an individual human, death by AI (or by climate catastrophe) is worse than a "natural" death from old age only to the extent that it comes sooner, and perhaps in being more violent.
I would expect death by AI to be very swift but not violent, e.g. nanites releasing neurotoxin into the bloodstream of every human on the planet, as Yudkowsky has suggested.
To someone who cares about the species, or who cares about the quantity of sentient individuals, AI is likely to reduce total utility substantially.
As I said above, I expect the human species to be doomed by default due to many other existential threats, so in the long term superintelligent AI has only upsides.