We could certainly make agents for whom pleasure and pain would use equal resources per util. The question is whether human preferences today (or their extrapolation) would sympathize with such agents to the point of giving them the universe. Their decision-making could look very inhuman to us. And if we value such agents with a discount factor, we’re back at square one.
That’s what the congenital deafness discussion was about.
You have preferences over pain and pleasure intensities that you haven’t experienced, and over new durations of experiences you do know. Otherwise you wouldn’t have anything to worry about regarding torture, since you haven’t experienced it.
Consider people with pain asymbolia:

Pain asymbolia is a condition in which pain is perceived, but with an absence of the suffering that is normally associated with the pain experience. Individuals with pain asymbolia still identify the stimulus as painful but do not display the behavioral or affective reactions that usually accompany pain; no sense of threat and/or danger is precipitated by pain.
Suppose you currently had pain asymbolia. Would that mean you wouldn’t object to pain and suffering in non-asymbolics? What if you personally had only happened to experience extremely mild discomfort while having lots of great positive experiences? What about pain for yourself? If you knew you were going to get a cure for your pain asymbolia tomorrow, would you object to subsequent torture as intrinsically bad?
We can go through similar stories for major depression and positive mood.
Seems it’s the character of the experience that matters.
Likewise, if you’ve never experienced skiing, chocolate, favorite films, sex, victory in sports, and similar things, that doesn’t mean you should act as though they have no moral value. The same holds for enhanced experiences and experiences your brain is currently unable to have, as in the case of congenital deafness followed by a procedure to grant hearing and then listening to music.
Music and chocolate are known to be mostly safe. I guess I’m more cautious about new self-modifications that can change my decisions massively, including decisions about further self-modifications. It seems like, if I’m not careful, someone could devise a sequence that will turn me into a paperclipper. That’s why I discount such agents for now, until I understand better what CEV means.