To an individual human, death by AI (or by climate catastrophe) is worse than old age “natural” death only to the extent that it comes sooner, and perhaps in being more violent. To someone who cares about others, the large number of looming deaths is pretty bad. To someone who cares about the species, or who cares about quantity of sentient individuals, AI is likely to reduce total utility by quite a bit.
To someone who loves only abstract intelligence and quantifies by some metric I don’t quite get, AI may be just as good as (or better than) people.
To an individual human, death by AI (or by climate catastrophe) is worse than old age “natural” death only to the extent that it comes sooner, and perhaps in being more violent.
I would expect death by AI to be very swift but not violent, e.g. nanites releasing neurotoxin into the bloodstream of every human on the planet, as Yudkowsky suggested.
To someone who cares about the species, or who cares about quantity of sentient individuals, AI is likely to reduce total utility by quite a bit.
Like I said above, I expect the human species to be doomed by default due to lots of other existential threats, so in the long term superintelligent AI has only upsides.