Let’s say that H is the set of all worlds that are viewed as “hell” by all existing human minds (with reflection, AI tools, etc.). I think what you’re saying is that it is not just practically impossible, but logically impossible, for a mind (M’) to exist that is only slightly different from an existing human and also views some world in H as heaven.
I’m not convinced of this. Imagine that people’s moral views about internal human simulations (what you conjure when you imagine a conversation with a friend or a fictional character) diverge upon reflection. Some people conclude these simulations have moral value, and therefore that human minds need to be altered so they can no longer create them (S-); others conclude they are morally irrelevant (S+) and that the S- alteration is morally repugnant. Now imagine that this opinion is caused entirely by a gene producing a tiny difference in serotonin reuptake in the cerebellum, and that there are two alternate universes, each populated entirely by one group. Any S- heaven would be viewed as hell by an S+, and vice versa.
Human utility functions don’t have to be continuous: it is entirely possible for a small difference in the starting conditions of a human mind to result in extreme differences in how a world is evaluated morally after reflection. I don’t think consensus among all current human minds is of much comfort, since we make up such a tiny dot in the space of all human minds that have ever existed, which is itself a tiny part of the space of all possible human minds, etc. Your hypothesis relies heavily on the diversity of moral evaluations among human minds, and I’m just not convinced of it.
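As a minimal sketch of what such a discontinuity could look like (the symbols here are illustrative, not anything from your post): let $\theta$ parameterize the relevant starting condition (e.g., the serotonin-reuptake gene above), and let $V_\theta(w)$ be the post-reflection evaluation of world $w$ by a mind with parameter $\theta$. Nothing rules out something like

$$
V_\theta(w_{S^-}) =
\begin{cases}
+1 \;(\text{heaven}) & \text{if } \theta < \theta^{*} \\
-1 \;(\text{hell}) & \text{if } \theta \ge \theta^{*}
\end{cases}
$$

where an arbitrarily small change in $\theta$ across the threshold $\theta^{*}$ flips the evaluation of the very same world from heaven to hell.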