I have a disturbing feeling that arguing to a future AI that it should "preserve humanity for Pascal's-mugging-type reasons" trades X-risk off against S-risk. I'm not sure any of the cases mentioned above encourage the AI to maintain lives worth living.
Because you’re imagining AGI keeping us in a box? Or that there’s a substantial probability on P(humans are deliberately tortured | AGI) that this post increases?
Yeah, something along those lines. Preserving humanity =/= humans living lives worth living.