It could kill us quickly and without needless pain.
Why would it care to avoid inflicting pain? If it finds that extreme mental anguish and/or physical distress makes humans flail and screech in curious ways, it would have no reason not to repeat the observations over and over.
There are many FAI failure modes that don’t involve gratuitous torture. I think ‘could’ is justified here, especially in comparison to a bioweapon catastrophe.
Right, there are plenty of failure modes, some less unpleasant than others, and some probably horrific beyond our worst nightmares. I suspect that any particular set of scenarios we find comforting would have measure zero in the space of possible outcomes. If so, preferring death by AI over death by a bioweapon is just a failure of imagination.
It doesn’t take much comfort to beat a bioweapon that actually succeeds in killing everyone.
Simply using our atoms to make paperclips, and being quick about it, wins.