The actual scenario is full of distractions, but I’ll try to ignore them (1).
The thing is, I think the pain in this scenario is a distraction as well. The relevant property in this sort of scenario, which drives my impulse to prevent it and causes me to experience guilt if I don’t, is my inference of suffering (2).
So the question becomes, how do I characterize the nature of suffering?
Which is perhaps a mere semantic substitution, but it certainly doesn’t feel that way from the inside. I can feel pain without suffering, and suffering without pain, which strongly suggests that there are two different things under discussion, even if I don’t clearly understand either of them.
I’ll probably play a second round against the GLUT, since if there is any suffering involved there it has already happened and I might as well get some benefit from it (3).
The others, I am less certain about.
Thinking about it more, I lean towards saying that my intuitions about guilt and shame and moral obligation to reduce suffering are all kind of worthless in this scenario, and I do better to frame the question differently.
For example, given #3, perhaps the right question is not “are those uploads experiencing suffering I ought to alleviate” but rather “ought I cooperate with those uploads, or ought I defect?”
Not that that helps: I’m still left with the question of how to calibrate their cost/benefit equation against my own, which is to say, how significant a term their utility is in my utility function. And sure, I can dodge the question by saying I need more data to be certain, but one can fairly ask what data I’d want… which is really the same question we started with, just stated more generally.
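(To put “how significant a term” a bit more precisely, here’s a minimal sketch in my own ad hoc notation; none of these symbols come from the original scenario. What I’m trying to pin down is roughly the weight w in

    U_me = U_self + w * U_them,  0 ≤ w ≤ 1

where w ≈ 0 corresponds to defecting without a second thought and w ≈ 1 to treating their suffering as fully commensurate with my own. The “more data” I’d want is exactly whatever would let me set w.)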
So… dunno. I’m spinning my wheels, here.
==
(1) For example, I suspect that my actual response in that scenario is either to keep all the money, under the expectation that what I’m seeing is an actor or otherwise something not experiencing pain (in a non-exotic way, as in the Milgram experiments), or to immediately quit the experiment and leave the room, under the expectation that to do anything else is to reinforce the sadistic monster running this “experiment.”
I cannot imagine why I’d ever actually press the button.
But of course that’s beside the point here.
(2) That is, if the person wired up to the chair informs me that yes, they are experiencing pain, but it’s no big deal, then I don’t feel the same impulse to spend $100 to prevent it.
Conversely, if the neurologists monitoring the person’s condition credibly assure me that there’s no pain, but the person is intensely suffering for some other reason (e.g., they will be separated from their family, whom they love, unless I return the $100), I feel the same impulse to spend the $100 to prevent it.
The pain is neither necessary nor sufficient for my reaction.
Note that I’m expressing all this in terms of what I perceive and what that impels me to do, rather than in terms of the moral superiority of one condition over another, because I have a clearer understanding of what I’m talking about with the former. I don’t mean to suggest that the latter doesn’t exist, nor that the two are equivalent, nor that they aren’t. I’m just not talking about the latter yet.
(3) I say “probably” because there are acausal decision issues that arise here that might make me decide otherwise, but I think those issues are also beside your point.
Also, incidentally, if there is any suffering involved, the creation of the GLUT was an act of monstrous cruelty on a scale I can’t begin to conceive.