Is death by AI really any more dire than the default outcome, i.e. the slow and agonizing decay of the body until cancer/Alzheimer’s delivers the final blow?
Senescence doesn’t kill the world.
And it doesn’t expand into the universe to wipe out all other life.
How strange for us to achieve superintelligence where every other lifeform in the universe has failed, don’t you think?
Well, that’s just a variation of the Fermi paradox, isn’t it? What’s strange is that we don’t observe any sign of alien sentience, superintelligence or not. I guess, if we’re in the zoo hypothesis, then the aliens will probably step in and stop us from developing a rogue AI (anytime now). But I wouldn’t pin my hopes for life in the universe on it.
It was a rhetorical question, there is nothing strange about not observing aliens. I’m an avid critic of the Fermi paradox. You simply update towards their nonexistence and, to a lesser extent, whatever other hypothesis fits that observation. You don’t start out with the romantic idea that aliens ought to be out there, living their parallel lives, and then call the lack of evidence thereof a “paradox”.
The probability that all sentient life in the observable universe just so happens to invariably reside in the limbo state between nonexistence and total dominance is vanishingly small, to a comical degree (rough numbers sketched below). Even on our own Earth, sentient life only occupies a small fragment of our evolutionary history, and intelligent life even more so. Either we’re alone, or we’re in a zoo/simulation.
Either way, Clippy doesn’t kill anything beyond us.
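To put rough (and admittedly made-up) numbers on the “limbo” point: suppose a sentient species spends about 10^6 years between emerging and either going extinct or visibly dominating its surroundings, out of roughly 4×10^9 years of evolutionary history per world, in line with the Earth observation above. The chance that several independent worlds all happen to sit inside that narrow window right now shrinks geometrically with the number of worlds. A minimal sketch, assuming those illustrative figures:

```python
# Illustrative only: probability that every one of k independent worlds is
# currently in the narrow "sentient but not yet dominant" window.
# Assumed (made-up) figures: the window lasts ~1e6 years out of ~4e9 years
# of evolutionary history per world.
WINDOW_YEARS = 1e6
HISTORY_YEARS = 4e9
p_in_window = WINDOW_YEARS / HISTORY_YEARS  # ~2.5e-4 per world

for k in (1, 2, 5, 10):
    print(f"{k} worlds all in limbo: ~{p_in_window ** k:.1e}")

# Prints roughly 2.5e-04, 6e-08, 1e-18, 1e-36: vanishing fast.
```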
But it would be surprising if life could only appear on our planet, since Earth doesn’t seem to have any unique features. If we’re alone, that probably just means we’re first. If we just blow ourselves up, another sentient species will probably appear someday, somewhere else, with a chance not to mess things up. But an expanding unaligned AI would wipe out all chance of life appearing in the future. That’s a big difference.
What does “could appear” mean here? 1 in 10? 1 in a trillion? 1 in 10^50?
Remember, we live in a tiny universe with only ~10^23 stars.
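For what it’s worth, the expected number of independent origins of life is just (number of stars) × (per-star probability that life ever appears), so which of those figures you pick swings the whole picture. A minimal sketch, with purely illustrative probabilities (nobody knows the real one):

```python
# Illustrative only: expected number of independent origins of life,
# assuming ~1e23 stars and a range of made-up per-star probabilities
# that life ever appears around a given star.
STARS = 1e23

for p in (1e-1, 1e-12, 1e-23, 1e-50):
    print(f"p = {p:.0e} -> expected origins ~ {STARS * p:.0e}")

# Roughly: 1e22 origins at p=1e-1, 1e11 at p=1e-12,
# about 1 at p=1e-23, and ~1e-27 at p=1e-50.
```

At “1 in 10” or even “1 in a trillion” per star, the universe should be teeming; somewhere around 1 in 10^23 we’d expect to be alone; at 1 in 10^50, our own existence becomes the anomaly.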
Moloch is to the world what senescence is to a person. It, too, dies by default.
To an individual human, death by AI (or by climate catastrophe) is worse than a “natural” death from old age only to the extent that it comes sooner, and perhaps in being more violent. To someone who cares about others, the large number of looming deaths is pretty bad. To someone who cares about the species, or who cares about the quantity of sentient individuals, AI is likely to reduce total utility by quite a bit.
To someone who loves only abstract intelligence and quantifies by some metric I don’t quite get, AI may be just as good as (or better than) people.
I would expect death by AI to be very swift but not violent, e.g. nanites releasing neurotoxin into the bloodstream of every human on the planet, as Yudkowsky has suggested.
Like I said above, I expect the human species to be doomed by default due to lots of other existential threats, so in the long term superintelligent AI has only upsides.