The Suffering Golem is no thought experiment. There are actual people who live with great suffering. Some of them wish to die, but some do not. Should you kill someone who is in untreatable pain, against their definitively expressed, compos mentis wishes? Should such an act be legally not murder but justifiable homicide, justified by the amount of suffering thereby prevented?
I say no. What do others say?
What you describe does not want to be a thought experiment, because it doesn’t abstract away relevant confounders (the moral value of human life). The setup in the post is better at being a thought experiment for the distinctions being discussed (the moral value of the golem’s life more clearly depends on one’s moral framework). In this context, it’s misleading to ask whether something should be done, rather than asking whether the action is the hedonistic-utilitarian one, the preference-utilitarian one, or the autonomy-preserving one.
Leave legality out of it; laws and enforcement are about really generic social behaviors, and are always going to encode a different set of expectations than a nuanced morality. I assert that it’s perfectly moral (noble even) to be legally punished for making a correct moral choice.
Also, separate “correct action in the face of high uncertainty” from “correct action if you can read the source code / detect and measure the experiences”. I bias strongly against killing when there’s significant uncertainty about current or future preferences/experiences. I think it’s probably right to kill if you can somehow know the remainder of their life is negative value to them.
In fact, I’m not sure that any human in constant (or even frequent) deep pain can be considered compos mentis on this topic. By the time the pain is known to be constant, the reaction to and anticipation of the pain have altered the person’s cognitive approach.
That said, I try to remain humble in my demands of others. I won’t kill a sentient being for pure altruism[1], and will in fact put barriers in the way of suicide, so that someone needs to maintain the desire and expend thought and endure further pain to achieve it. I don’t actually judge suicide as wrong, or even as a mistake, but I don’t understand the universe or others’ experiences well enough to want to make it easy.
Really, human experience is so short already (a century at most, less for most of us), and it’s going to end regardless of my or the sufferer’s intent. Exactly when it ends is far less important than what I can do to make the remaining time slightly less unpleasant.
[1] Meaning I don’t think I’ll ever have sufficient evidence that killing them would benefit them more than other actions I can take. There are other utilitarian reasons I might be willing to kill, such as preventing 3^^3 dust specks. That’s not what this post is about, though; it’s altruistic, but not toward the killed victim.