I didn’t upvote or downvote this post. Although I do find the spirit of this message interesting, I have a disturbing feeling that arguing to future AI to “preserve humanity for Pascal’s-mugging-type reasons” trades off X-risk for S-risk. I’m not sure that any of the aforementioned cases encourages AI to maintain lives worth living. I’m not confident that this meaningfully changes S-risk or X-risk positively or negatively, but I’m also not confident that it doesn’t.
Because you’re imagining AGI keeping us in a box? Or because you think this post substantially increases P(humans are deliberately tortured | AGI)?
Yeah, something along those lines. Preserving humanity =/= humans living lives worth living.