Someone concerned about this possibility has posted to this site and used the term “s-risk”.
It is approximately as difficult to create an AI that wants people to suffer as it is to create one that wants people to flourish, and humanity is IMO very far from being able to do the latter, so my main worry is that an AGI will kill us all painlessly.
An advanced intelligence that takes over doesn’t have to be directed toward human suffering for s-risk to happen. Suffering could arise as a byproduct of whatever unimaginable things the advanced intelligence might want or do as it goes about its own business, completely heedless of us. In that case we’d be suffering the way some nameless species in some niche of the world suffers because humans, unaware that the species even exists, are encroaching on and destroying its natural domain in the course of going about our own comparatively unimaginable business.
I think there are two confusions here. This comment appears to conflate the “suffering” of a species with the suffering of individuals within it, and also the temporary suffering of the dying with suffering that is protracted indefinitely.
The term s-risk usually refers to indefinitely extended suffering much greater than has been normal in human history; centrally, to scenarios in which most people would prefer to die but can’t.
Thanks for the helpful clarification!