That seems like a very counter-intuitive position to take, at least according to my intuitions. Do you think the sensation of pleasure in and of itself has moral value? If so, why the asymmetry? If not, what actually does have moral value?
Have you written more about your position elsewhere, or is it a standard one that I can look up?
FWIW my position is basically the same as AlephNeil’s (and I’m puzzled at the two downvotes: the response is informative, in good faith, and not incoherent).
If you (say, you-on-the-other-side-of-the-camera-link) hurt me, the most important effects from that pain are on my plans and desires: the pain will cause me to do (or avoid doing) certain things in the future, and I might have preferred otherwise. Maybe I’ll flinch when I come into contact with you again, or with people who look like you, and I would have preferred to look happy and trusting.
It’s not clear that the ECPs as posited are “feeling” pain in the same sense; if I refrain from pushing the button, so that I can pocket the $100, I have no reason to believe that this will cause the ECP to do (or avoid doing) some things in future when it would have preferred otherwise, or cause it to feel ill-disposed toward me.
As for pleasure, I think pleasure you have not chosen and that has the same effect on you as a pain would have (derailing your current plans and desires) also has moral disvalue; only freely chosen pleasure, that reinforces things you already value or want to value, is a true benefit. (For a fictional example of “inflicted pleasure” consider Larry Niven’s tasp weapon.)
That position implies we should be indifferent between torturing a person for six hours in a way that leaves no permanent damage and just making them sit in a room unable to engage in their normal activities for six hours (or, if you try to escape by saying all pain causes permanent psychological damage, then we should still be indifferent between killing a person quickly and torturing em for six hours and then killing em).
I think this is a controversial enough position that even if you’re willing to bite that bullet, before you state it you should at least say you understand the implication, are willing to bite the bullet, and maybe provide a brief explanation of why.
First I want to note that “the sensation of pain, considered in and of itself” is, after “the redness of red”, the second most standard example of “qualia”. So if, like good Dennettians, we’re going to deny that qualia exist then we’d better deny that “the sensation of pain in and of itself” has moral disvalue!
Instead we should be considering: “the sensation of pain in and of how-it-relates-to-other-stuff”. So how does pain relate to other stuff? It comes down to the fact that pain is the body’s “damage alarm system”, whose immediate purpose is to limit the extent of injury by preventing a person from continuing with whatever action was beginning to cause damage.
So if you want to deny qualia while still holding that pain is morally awful (albeit not “in and of itself”) then I think you’re forced at least some of the way towards my position. ‘Pain’ is an arrow pointing towards ‘damage’ but not always succeeding—a bit like how ‘sweetness’ points towards ‘sugar’. This is an oversimplification, but one could almost say that I get the rest of the way by looking at where the arrow is pointing rather than at the arrow itself. (Similarly, what’s “bad” is not when the smoke alarm sounds but when the house burns down.)
That position implies we should be indifferent between torturing … (or if you try to escape by saying all pain causes permanent psychological damage
Well, it’s manifestly not true that all pain causes permanent psychological damage (e.g. the pain from exercising hard, from the kind of ‘play-fighting’ that young boys do, or from spicy food) but it seems plausible that ‘torture’ does.
then we should still be indifferent between killing a person quickly, or torturing em for six hours and then killing em).
I admit this gave me pause.
There’s a horrible true story on the internet about a woman who was lobotomised by an evil psychologist, in such a way that she was left as a ‘zombie’ afterwards (no, not the philosophers’ kind of zombie). I’ve rot13ed a phrase that will help you google it, if you really want, but please don’t feel obliged to: wbhearl vagb znqarff.
Let it be granted that this woman felt no pain during her ‘operation’ and that she wasn’t told or was too confused to figure out what exactly was happening, or why the doctor kept asking her simple things—e.g. her name, or to hum her favourite tune. (The real reason, as best I can tell, was “to see how well a person with that amount of brain damage could still do so and so”.)
What I want to say is that even with these elaborations, the story is just as repulsive—it offends our moral sense just as much—as any story of torture. This would still be true even if the victim had died shortly after the lobotomy. More to the point, this story is massively more repulsive than, say, a story where Paul Atreides has his hand thrust into a ‘pain box’ for half an hour before being executed (presumably by a gom jabbar). (And that in turn feels somewhat more repulsive than a story where Paul Atreides is tricked into voluntarily holding his hand in the pain box and then executed, despite the fact that the pain is just as bad.)
Torture isn’t just a synonym for “excruciating pain”—it’s more complicated than that.
Here’s a more appetising bullet: We should be indifferent between “Paul Atreides executed at time 0” and “Paul Atreides tricked into voluntarily holding his hand in the pain box for half an hour and then executed”. (I’d be lying if I said I was 100% happy about biting it, but neither am I 100% sure that my position is inconsistent otherwise.)
First I want to note that “the sensation of pain, considered in and of itself” is, after “the redness of red”, the second most standard example of “qualia”. So if, like good Dennettians, we’re going to deny that qualia exist then we’d better deny that “the sensation of pain in and of itself” has moral disvalue!
I know Dennett’s usually right about this sort of thing and so there must be something to that argument, but I’ve never been able to understand it no matter how hard I try. It looks too much like wishful thinking—“these qualia things are really confusing, so screw that.” Certainly it’s not the sort of capital-“R” Reduction that has left me genuinely satisfied about the nonexistence of things like Free Will or Good or Essence-Of-Chair-Ness.
I would be hesitant to say the sensation of pain in and of itself has moral disvalue; I would say that people have preferences against pain and that the violation of those preferences causes moral disvalue in the same sense as the violation of any other preference. I would have no trouble with inflicting pain on a masochist, a person with pain asymbolia, or a person voluntarily undergoing some kind of conditioning.
Damage can also be something people have a preference against, but it’s not necessarily more important than pain. There are amounts of torture such that I would prefer permanently losing a finger to undergoing that torture, and I suspect it’s the same for most other people.
What I want to say is that even with these elaborations, the story is just as repulsive—it offends our moral sense just as much—as any story of torture. This would still be true even if the victim had died shortly after the lobotomy.
You seem to be arguing from “Things other than pain are bad” to “Pain is not bad”, which is not valid.
I admit your Paul Atreides example doesn’t disgust me so much, but I think that’s because number one I have no mental imagery associated with gom jabbars, and number two I feel like he’s a legendary Messiah figure so he should be able to take it.
If we start talking about non-Kwisatz Haderach people, like say your little sister, and we start talking about them being whipped to death instead of an invisible and inscrutable gom jabbar, I find my intuition shifts pretty far the other direction.
I’d be lying if I said I was 100% happy about biting it, but neither am I 100% sure that my position is inconsistent otherwise.
So I’m reading about your moral system in your other post, and I don’t want to get into debating it fully here. But surely you can recognize that just as some things and systems are beautiful and fascinating and complex, there are other systems that are especially and uniquely horrible, and that it is a moral credit to remove them from the world. Sometimes I read about the more horrible atrocities perpetrated in the Nazi camps and North Korea, and I feel physically sick that there is no way I can just kill everyone involved, the torturers and victims both, and relieve them of their suffering, and that this is the strongest moral imperative imaginable, much more important than the part where we make sure there are lots of rainforests and interesting buildings and such. Have you never felt this emotion? And if so, have you ever read a really good fictional dystopian work?
There are amounts of torture such that I would prefer permanently losing a finger to undergoing that torture, and I suspect it’s the same for most other people.
What if you could be assured that you would have no bad memories of it? (That is, what if you can recall it but doing so doesn’t evoke any negative emotions?)
If I could be assured that I would be genuinely undamaged afterwards, then an interval of pain, no matter how intense, doesn’t seem like a big deal. (As I recall, Dennett makes this same point somewhere in Darwin’s Dangerous Idea, as an illustration of what’s wrong with the kind of utilitarianism that scores everything in terms of pain and pleasure.)
You seem to be arguing from “Things other than pain are bad” to “Pain is not bad”, which is not valid.
You keep talking about torture rather than just pain. The point of my bringing up the ‘lobotomy story’ was to suggest that what makes it so awful has a good deal in common with what makes torture so awful. Something about a person idly and ‘cruelly’ doing something ‘horrible’ and ‘disgusting’ to a victim over whom they have complete power. Using another human as an ‘instrument’ rather than as an end in themselves. Pain is not an essential ingredient here.
If we start talking about non-Kwisatz Haderach people, like say your little sister, and we start talking about them being whipped to death instead of an invisible and inscrutable gom jabbar, I find my intuition shifts pretty far the other direction.
Yeah, but this is reintroducing some of the ‘extra ingredients’, besides pain alone, that make torture awful.
So I’m reading about your moral system in your other post, and I don’t want to get into debating it fully here. But surely you can recognize that just as some things and systems are beautiful and fascinating and complex, there are other systems that are especially and uniquely horrible, and that it is a moral credit to remove them from the world. Sometimes I read about the more horrible atrocities perpetrated in the Nazi camps and North Korea, and I feel physically sick that there is no way I can just kill everyone involved, the torturers and victims both, and relieve them of their suffering, and that this is the strongest moral imperative imaginable, much more important than the part where we make sure there are lots of rainforests and interesting buildings and such. Have you never felt this emotion? And if so, have you ever read a really good fictional dystopian work?
You keep assuming that somehow I have to make the inference “if pain has no moral disvalue in itself, then neither does torture”. I do not. If I can say that “the lobotomy story” is an abomination even if no pain was caused, then I think I can quite easily judge that the Nazi atrocities were loathsome without having to bring in ‘the intrinsic awfulness of pain’. The Nazi and North Korean atrocities were ‘ugly’; in fact they are among the ‘ugliest’ things that humans have ever accomplished.
Conclusion: “It all adds up to normality”. Ethical reasoning involves a complex network of related concepts, one of which is pain. Pain—the pain of conscious creatures—is often taken to be a (or even ‘the’) terminal disvalue. Perhaps the best way of looking at my approach is to regard it as a demonstration that if you kill the ‘pain node’ then actually, the rest of the network does the job just fine (with maybe one or two slightly problematic cases at the fringe, but then there are always problematic cases in any ethical system.)
(The advantage of taking out the ‘pain node’ is that it sidesteps unproductive philosophical debates about qualia.)
saying all pain causes permanent psychological damage
Not all pain, but certainly that’s a factor.
we should still be indifferent between killing a person quickly, or torturing em for six hours and then killing em
I don’t see how that follows. Killing someone quickly leaves them no time to contemplate the fact that all their plans and desires have come to a dead end; what is awful about torture is the knowledge of one’s plans and desires being thwarted—even more awful than not allowing that person to carry out their plans and fulfill their desires. (Also, in many cases other people have plans and desires for us: they prefer us to be alive and well, to enjoy pleasure and avoid pain, and so on. Torture thwarts those desires as well, over and above killing.)
Note that you’re saying that not only is the thwarting of someone’s plans a disvalue, but having them contemplate the thwarting is an additional disvalue.
Also, since being tortured makes contemplation harder, you should prefer torturing someone for six hours and then killing them to letting them contemplate their imminent death in comfort for six hours and then killing them.
you should prefer torturing someone for six hours and then killing them to letting them contemplate their imminent death in comfort for six hours and then killing them
When you’re being tortured you have no choice but to attend to the pain: you are not cognitively free to contemplate anything other than your own destruction. In comfort you could at least aim for a more pleasant state of mind—you can make your own plans for those six hours instead of following the torturer’s, and if you have the strength, refuse to contemplate your own death.
Also, in many cases other people have plans and desires for us: they prefer us to be alive and well, to enjoy pleasure and avoid pain, and so on.
But why do other people prefer for you to avoid pain, if pain is not a moral disvalue? And what exactly do they mean by “pain” (which is what the post asked in the first place)?
I liked this comment on Alicorn’s post: “(pain) makes you want to pull away; it’s a flinch, abstracted”. What seems to matter about pain, when I think about scenarios such as the one you proposed, is its permanent aversive effect, something not present in simulated pain.
Trying to frame this in terms of anticipated experiences, the question I would want to ask about the posited ECP is, “if I meet this ECP again, will they hold it against me that I failed to press the button, because of negative reinforcement in our first encounter?” The way you’ve framed the thought experiment suggests that they won’t have a memory of the encounter, in fact that I’m not even likely to think of them as an entity I might “meet”.
I didn’t downvote AlephNeil, but I think a good rule is that if you say something that is likely to be highly counterintuitive to your audience, to give or link to some explanation of why you believe that (even if it’s just “according to my intuition”). Otherwise it seems very hard to know what to do with the information provided.
Do you think the sensation of pleasure in and of itself has moral value?
No.
Have you written more about your position elsewhere
I have, but not in a convenient form. I’ll just paste some stuff into this comment box:
Hitherto I had been some kind of utilitarian: The purest essence of wrongness is causing suffering to a sentient being, and the amount of wrongness increases with the amount of suffering. Something similar is true concerning virtue and happiness, though I realized even then that one has to be very careful in how ‘happiness’ is formulated. After all, we don’t want to end up concluding that synthesizing Huxley’s drug “soma” is humanity’s highest ethical goal. If pressed to refine my concept of happiness, I had two avenues open: (i) Try to prise apart “animal happiness”—a meaningless and capricious flood of neurochemicals—from a higher “rational happiness” which can only be derived from recognition of truth or beauty (ii) Retreat to the view that “in any case, morality is just a bunch of intuitions that helped our ancestors to survive. There’s no reason to assume that our moral intuitions are a ‘window’ onto any larger and more fundamental domain of moral truth.”
(Actually, I still regard a weaker version of (ii) as the ‘ultimate truth of the matter’: On the one hand, it’s not hard to believe that in any community of competing intelligent agents, more similar to each other than different, who have evolved by natural selection, moral precepts such as ‘the golden rule’ are almost guaranteed to arise. On the other, it remains the case that the spectrum of ‘ethical dilemmas’ that could reasonably arise in our evolutionary history is narrow, and it is easy for ethicists to devise strange situations that escape its confines. I see no reason at all to expect that the principles by which we evaluate the morality of real-world decisions can be refined and systematised to give verdicts on all possible decisions.)
[i.e. “don’t take the following too seriously.”]
I believe moral value is inherent in those systems and entities that we describe as ‘fascinating’, ‘richly structured’ and ‘beautiful’. A snappy way of characterising this view is “value-as-profundity”. On the other hand, I regard pain and pleasure as having no value at all in themselves.
In the context of interpersonal affairs, then, to do good is ultimately to make the people around you more profound, more interesting, more beautiful—their happiness is irrelevant. To do evil, on the other hand, is to damage and degrade something, shutting down its higher features, closing off its possibilities. Note that feelings of joy usually accompany activities I classify as ‘good’ (e.g. learning, teaching, creating things, improving fitness) and conversely, pain and suffering tend to accompany damage and degradation. However, in those situations where value-as-profundity diverges from utilitarian value, notice that our moral intuitions tend to favour the former. For instance:
Drug abuse: Taking drugs such as heroin produces feelings of euphoria but only at the cost of degrading and constraining our future behaviour, and damaging our bodies. It is the erosion of profundity that makes heroin abuse wrong, not the withdrawal symptoms, or the fact that the addict’s behaviour tends to make others in his community less happy. The latter are both incidental—we can hypothetically imagine that the withdrawal symptoms do not exist and that the addict is all alone in a post-apocalyptic world, and we are still dismayed by the degradation of behaviour that drug addiction produces (just as we would be dismayed by a giraffe with brain damage, irrespective of whether the giraffe felt happy).
The truth hurts: We accept that there are situations where the best way to help someone is to criticise them in a way that we know they will find upsetting. We do this because we want our friend to grow into a better (more profound) version of themselves, which cannot happen until she sees her flaws as flaws rather than lovable idiosyncrasies. On the utilitarian view, the rightness of this harsh criticism cannot be accounted for except in respect of its remote consequences—the greater happiness of our improved friend and of those with whom she interacts—yet there is no necessary reason why the end result of a successful self-improvement must be increased happiness, and if it is not then the initial upset will force us to say that our actions were immoral. However, surely it is preferable for our ethical theory to place value in the improvements themselves rather than their contingent psychological effects.
Nature red in tooth and claw (see Q6): Consider the long and eventful story of life on earth. Consider that before the arrival of humankind, almost all animals spent almost all of their lives perched on the edge, struggling against starvation, predators and disease. In a state of nature, suffering is far more prevalent than happiness. Yet suppose we were given a planet like the young earth, and that we knew life could evolve there with a degree of richness comparable to our own, but that the probability of technological, language-using creatures like us evolving is very remote. Sadly, this planet lies in a solar system on a collision course with a black hole, and may be swallowed up before life even appears. Suppose it is within our power to ‘deflect’ the solar system away from the black hole—should we do so? On the utilitarian view, to save the planet would be to bring a vast amount of unnecessary suffering into being, and (almost certainly) a relatively tiny quantity of joy. However, saving the planet increases the profundity and beauty of the universe, and obviously is in line with our ethical intuitions.
...
is it a standard one that I can look up?
I’m not sure, but it’s vaguely Nietzschean. For instance, here’s a quote from Thus Spoke Zarathustra:
Man is a rope, fastened between animal and Superman—a rope over an abyss.
A dangerous going-across, a dangerous wayfaring, a dangerous looking-back, a dangerous shuddering and staying-still.
What is great in man is that he is a bridge and not a goal; what can be loved in man is that he is a going-across and a down-going.
Actually, that one quote doesn’t really suffice, but if you’re interested, please read sections 4 and 5 of “Zarathustra’s Prologue”.
If pressed to refine my concept of happiness, I had two avenues open
What about Eliezer’s position, which you don’t seem to address, that happiness is just one value among many? Why jump to the (again, highly counterintuitive) conclusion that happiness is not a value at all?
What about Eliezer’s position, which you don’t seem to address, that happiness is just one value among many? Why jump to the (again, highly counterintuitive) conclusion that happiness is not a value at all?
To me it doesn’t seem so counterintuitive. I actually came to this view through thinking about tourism, and it struck me that (a) beautiful undisturbed planet is morally preferable to (b) beautiful planet disturbed by sightseers who are passively impressed by its beauty, which they spoil ever so slightly, and contribute nothing (i.e. it doesn’t inspire them to create anything beautiful themselves).
In other words, even the “higher happiness” of aesthetic appreciation doesn’t necessarily have value. If there’s ‘intrinsic value’ anywhere in the system, it’s in nature (or art) itself, not the person appreciating it.
But again, I don’t take this ‘cold’, ‘ascetic’ concept of morality to be the ‘final truth’. I don’t think there is such a thing.
That seems like a very counter-intuitive position to take, at least according to my intuitions. Do you think the sensation of pleasure in and of itself has moral value? If so, why the asymmetry? If not, what actually does have moral value?
Have you written more about your position elsewhere, or is it a standard one that I can look up?
FWIW I’m basically in the same position as AlephNeil’s (and I’m puzzled at the two downvotes: the response is informative, in good faith, and not incoherent).
If you (say, you-on-the-other-side-of-the-camera-link) hurt me, the most important effects from that pain are on my plans and desires: the pain will cause me to do (or avoid doing) certain things in the future, and I might have preferred otherwise. Maybe I’ll flinch when I come into contact with you again, or with people who look like you, and I would have preferred to look happy and trusting.
It’s not clear that the ECPs as posited are “feeling” in pain in the same sense; if I refrain from pushing the button, so that I can pocket the $100, I have no reason to believe that this will cause the ECP to do (or avoid doing) some things in future when it would have preferred otherwise, or cause it to feel ill-disposed toward me.
As for pleasure, I think pleasure you have not chosen and that has the same effect on you as a pain would have (derailing your current plans and desires) also has moral disvalue; only freely chosen pleasure, that reinforces things you already value or want to value, is a true benefit. (For a fictional example of “inflicted pleasure” consider Larry Niven’s tasp weapon.)
That position implies we should be indifferent between torturing a person for six hours in a way that leaves no permanent damage, or just making them sit in a room unable to engage in their normal activities for six hours (or if you try to escape by saying all pain causes permanent psychological damage, then we should still be indifferent between killing a person quickly, or torturing em for six hours and then killing em).
I think this is a controversial enough position that even if you’re willing to bite that bullet, before you state it you should at least say you understand the implication, are willing to bite the bullet, and maybe provide a brief explanation of why.
First I want to note that “the sensation of pain, considered in and of itself” is, after “the redness of red”, the second most standard example of “qualia”. So if, like good Dennettians, we’re going to deny that qualia exist then we’d better deny that “the sensation of pain in and of itself” has moral disvalue!
Instead we should be considering: “the sensation of pain in and of how-it-relates-to-other-stuff”. So how does pain relate to other stuff? It comes down to the fact that pain is the body’s “damage alarm system”, whose immediate purpose is to limit the extent of injury by preventing a person from continuing with whatever action was beginning to cause damage.
So if you want to deny qualia while still holding that pain is morally awful (albeit not “in and of itself”) then I think you’re forced at least some of the way towards my position. ‘Pain’ is an arrow pointing towards ‘damage’ but not always succeeding—a bit like how ‘sweetness’ points towards ‘sugar’. This is an oversimplification, but one could almost say that I get the rest of the way by looking at where the arrow is pointing rather than at the arrow itself. (Similarly, what’s “bad” is not when the smoke alarm sounds but when the house burns down.)
Well, it’s manifestly not true that all pain causes permanent psychological damage (e.g. the pain from exercising hard, from the kind of ‘play-fighting’ that young boys do, or from spicy food) but it seems plausible that ‘torture’ does.
I admit this gave me pause.
There’s a horrible true story on the internet about a woman who was lobotomised by an evil psychologist, in such a way that she was left as a ‘zombie’ afterwards (no, not the philosophers’ kind of zombie). I’ve rot13ed a phrase that will help you google it, if you really want, but please don’t feel obliged to: wbhearl vagb znqarff.
Let it be granted that this woman felt no pain during her ‘operation’ and that she wasn’t told or was too confused to figure out what exactly was happening, or why the doctor kept asking her simple things—e.g. her name, or to hum her favourite tune. (The real reason, as best I can tell, was “to see how well a person with that amount of brain damage could still do so and so”)
What I want to say is that even with these elaborations, the story is just as repulsive—it offends our moral sense just as much—as any story of torture. This would still be true even if the victim had died shortly after the lobotomy. More to the point, this story is massively more repulsive than, say, a story where Paul Atreides has his hand thrust into a ‘pain box’ for half an hour before being executed (presumably by a gom jabbar). (And that in turn feels somewhat more repulsive than a story where Paul Atreides is tricked into voluntarily holding his hand in the pain box and then executed, despite the fact that the pain is just as bad.)
Torture isn’t just a synonym for “excruciating pain”—it’s more complicated than that.
Here’s a more appetising bullet: We should be indifferent between “Paul Atreides executed at time 0” and “Paul Atreides tricked into voluntarily holding his hand in the pain box for half an hour and then executed”. (I’d be lying if I said I was 100% happy about biting it, but neither am I 100% sure that my position is inconsistent otherwise.)
I know Dennett’s usually right about this sort of thing and so there must be something to that argument, but I’ve never been able to understand it no matter how hard I try. It looks too much like wishful thinking—“these qualia things are really confusing, so screw that.” Certainly it’s not the sort of Reduction with a capital “R” I’ve heard that’s left me genuinely satisfied about the nonexistence of things like Free Will or Good or Essence-Of-Chair-Ness.
I would be hesitant to say the sensation of pain in and of itself has moral disvalue; I would say that people have preferences against pain and that the violation of those preferences causes moral disvalue in the same sense as the violation of any other preference. I would have no trouble with inflicting pain on a masochist, a person with pain asymbolia, or a person voluntarily undergoing some kind of conditioning.
Damage can also be something people have a preference against, but it’s not necessarily more important than pain. There are amounts of torture such that I would prefer permanently losing a finger to undergoing that torture, and I suspect it’s the same for most other people.
You seem to be arguing from “Things other than pain are bad” to “Pain is not bad”, which is not valid.
I admit your Paul Atreides example doesn’t disgust me so much, but I think that’s because number one I have no mental imagery associated with gom jabbars, and number two I feel like he’s a legendary Messiah figure so he should be able to take it.
If we start talking about non-Kwisatz Haderach people, like say your little sister, and we start talking about them being whipped to death instead of an invisible and inscrutable gom jabbar, I find my intuition shifts pretty far the other direction.
So I’m reading about your moral system in your other post, and I don’t want to get into debating it fully here. But surely you can recognize that just as some things and systems are beautiful and fascinating and complex, there are other systems that are especially and uniquely horrible, and that it is a moral credit to remove them from the world. Sometimes I read about the more horrible atrocities perpetrated in the Nazi camps and North Korea, and I feel physically sick that there is no way I can’t just kill everyone involved, the torturers and victims both, and relieve them of their suffering, and that this is the strongest moral imperative imaginable, much more important than the part where we make sure there are lots of rainforests and interesting buildings and such. Have you never felt this emotion? And if so, have you ever read a really good fictional dystopian work?
What if you could be assured that you would have no bad memories of it? (That is, what if you can recall it but doing so doesn’t evoke any negative emotions?)
If I could be assured that I would be genuinely undamaged afterwards, then an interval of intense pain no matter how intense doesn’t seem like a big deal. (As I recall, Dennett makes this same point somewhere in Darwin’s Dangerous Idea, as an illustration of what’s wrong with the kind of utilitarianism that scores everything in terms of pain and pleasure.)
You keep talking about torture rather than just pain. The point of my bringing up the ‘lobotomy story’ was to suggest that what makes it so awful has a good deal in common with what makes torture so awful. Something about a person idly and ‘cruelly’ doing something ‘horrible’ and ‘disgusting’ to a victim over whom they have complete power. Using another human as an ‘instrument’ rather than as an end in themselves. Pain is not an essential ingredient here.
Yeah, but this is reintroducing some of the ‘extra ingredients’, besides pain alone, that make torture awful.
You keep assuming that somehow I have to make the inference “if pain has no moral disvalue in itself, then neither does torture”. I do not. If I can say that “the lobotomy story” is an abomination even if no pain was caused, then I think I can quite easily judge that the Nazi atrocities were loathsome without having to bring in ‘the intrinsic awfulness of pain’. The Nazi and North Korean atrocities were ‘ugly’ - in fact they are among the ‘ugliest’ things that humans have ever accomplished.
Conclusion: “It all adds up to normality”. Ethical reasoning involves a complex network of related concepts, one of which is pain. Pain—the pain of conscious creatures—is often taken to be a (or even ‘the’) terminal disvalue. Perhaps the best way of looking at my approach is to regard it as a demonstration that if you kill the ‘pain node’ then actually, the rest of the network does the job just fine (with maybe one or two slightly problematic cases at the fringe, but then there are always problematic cases in any ethical system).
(The advantage of taking out the ‘pain node’ is that it sidesteps unproductive philosophical debates about qualia.)
Not all pain, but certainly that’s a factor.
I don’t see how that follows. Killing someone quickly leaves them no time to contemplate the fact that all their plans and desires have come to a dead end; what is awful about torture is the knowledge of one’s plans and desires being thwarted—even more awful than not allowing that person to carry out their plans and fulfill their desires. (Also, in many cases other people have plans and desires for us: they prefer us to be alive and well, to enjoy pleasure and avoid pain, and so on. Torture thwarts those desires as well, over and above killing.)
I don’t think that this is as awful as the degree to which torture hurts.
Note that you’re saying that not only is the thwarting of someone’s plans a disvalue, but having them contemplate the thwarting is an additional disvalue.
Also, since being tortured makes contemplation harder, you should prefer torturing someone for six hours and then killing them to letting them contemplate their imminent death in comfort for six hours and then killing them.
When you’re being tortured you have no choice but to attend to the pain: you are not cognitively free to contemplate anything other than your own destruction. In comfort you could at least aim for a more pleasant state of mind—you can make your own plans for those six hours instead of following the torturer’s, and if you have the strength, refuse to contemplate your own death.
But why do other people prefer for you to avoid pain, if pain is not a moral disvalue? And what exactly do they mean by “pain” (which is what the post asked in the first place)?
I liked this comment on Alicorn’s post: “(pain) makes you want to pull away; it’s a flinch, abstracted”. What seems to matter about pain, when I think about scenarios such as the one you proposed, is its permanent aversive effect, something not present in simulated pain.
Trying to frame this in terms of anticipated experiences, the question I would want to ask about the posited ECP is, “if I meet this ECP again, will they hold it against me that I failed to press the button, because of negative reinforcement in our first encounter?” The way you’ve framed the thought experiment suggests that they won’t have a memory of the encounter, in fact that I’m not even likely to think of them as an entity I might “meet”.
I didn’t downvote AlephNeil, but I think a good rule is that if you say something that is likely to be highly counterintuitive to your audience, you should give or link to some explanation of why you believe it (even if it’s just “according to my intuition”). Otherwise it seems very hard to know what to do with the information provided.
No.
I have, but not in a convenient form. I’ll just paste some stuff into this comment box:
[i.e. “don’t take the following too seriously.”]
...
I’m not sure, but it’s vaguely Nietzschean. For instance, here’s a quote from Thus Spoke Zarathustra:
Actually, that one quote doesn’t really suffice, but if you’re interested, please read sections 4 and 5 of “Zarathustra’s Prologue”.
What about Eliezer’s position, which you don’t seem to address, that happiness is just one value among many? Why jump to the (again, highly counterintuitive) conclusion that happiness is not a value at all?
To me it doesn’t seem so counterintuitive. I actually came to this view through thinking about tourism, and it struck me that (a) a beautiful undisturbed planet is morally preferable to (b) a beautiful planet disturbed by sightseers who are passively impressed by its beauty, which they spoil ever so slightly, and who contribute nothing (i.e. it doesn’t inspire them to create anything beautiful themselves).
In other words, even the “higher happiness” of aesthetic appreciation doesn’t necessarily have value. If there’s ‘intrinsic value’ anywhere in the system, it’s in nature (or art) itself, not the person appreciating it.
But again, I don’t take this ‘cold’, ‘ascetic’ concept of morality to be the ‘final truth’. I don’t think there is such a thing.