I was a negative utilitarian for two weeks because of a math error
So I was like,
If the neuroscience of human hedonics is such that we experience pleasure at a valence of about 1 and suffering at a valence of about 2.5,
And therefore an AI building a glorious transhuman utopia would get us to 1 gigapleasure, and an endless S-risk hellscape would get us to 2.5 gigapain,
And we don’t know what our future holds,
And, although the most likely AI outcome is still overwhelmingly “paperclips”,
If our odds are 1:1 between ending up in Friendship Is Optimal heaven and UNSONG hell,
You should kill yourself (and everyone else) swiftly to avoid that EV-negative bet.
(noting the mistake is left as an exercise to the reader)
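(Spelled out, the bet I thought I was facing — taking the 1:1 odds at face value, the giga-numbers above literally, and treating swift nonexistence as a flat 0, which is an assumption the chain of reasoning leans on:

EV(keep existing) = 0.5 × (+1) + 0.5 × (−2.5) = −0.75 on the same giga-scale
EV(swift oblivion) = 0

Hence "EV-negative.")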