If doomsday is inevitable, it is interesting to ask which we would prefer: to die from UFAI or from a biological weapons catastrophe. I prefer UFAI, because:
It would still be an intelligence that will explore the universe.
Even if it destroys humanity, it will share some of our values.
It could resurrect humanity in simulation, and most likely would do so many times, in order to study the chances of its own existence and the frequency of AI in the universe.
It could kill us quickly and without needless pain.
:(
This would be like the passenger pigeon and the dodo rooting for humanity in a war against space aliens.
I feel the same way. I see FAI as an attempt to cheat evolution. But I would still root for the uAI from our planet to win over the other uAIs, in the same sense that I root for my daughter’s volleyball team and refer to their opponents du jour as “the bad guys.”
Machine intelligence would still be evolution. Evolution, as usually defined, is change in the frequency of heritable information over time. It would be a genetic takeover, but the planet has probably seen those before.
I dunno, if we all die from superAIDS, intelligent life will evolve on Earth again and share more of our values than a catastrophic AI would.
Is it probable that intelligent life would evolve again?
If we assume primates and other intelligent social mammals continue to exist, then yes: the transition from their level to the human level is minor compared to the steps needed to get that far.
How do you know?
http://www.youtube.com/watch?v=hOLAGYmUQV0
Why would it care to avoid inflicting pain? If it finds that extreme mental anguish and/or physical distress makes humans flail and screech in curious ways, it would have no reason not to repeat the observations over and over.
There are many FAI failure modes that don’t involve gratuitous torture. I think ‘could’ is justified here, especially in comparison to a bioweapon catastrophe.
Right, there are plenty of failure modes, some less unpleasant than others, some are probably horrific beyond our worst nightmares. I suspect that any particular set of scenarios that we find comforting would have measure zero in the space of possible outcomes. If so, preferring death by AI over death by a bioweapon is but a failure of imagination.
It doesn’t take much comfort to beat a bioweapon that actually succeeds in killing everyone.
Simply using our atoms to make paperclips, and being quick about it, wins.