Was my “An important caveat” parenthetical paragraph sufficient, or do you think I should have made it scarier?
Should have made it much scarier. “Superhappies” caring about humans “not in the specific way that the humans wanted to be cared for” sounds better or at least no worse than death, whereas I’m concerned about s-risks, i.e., risks of worse than death scenarios.
This is a difficult topic (in more ways than one). I’ll try to do a better job of addressing it in a future post.
To clarify, I don’t actually want you to scare people this way, because I don’t know if people can psychologically handle it or if it’s worth the emotional cost. I only bring it up myself to counteract people saying things like “AIs will care a little about humans and therefore keep them alive” or when discussing technical solutions/ideas, etc.