However, you asked me what I think, so here it is...
The wording of your first post in this thread seems telling. You say that “Refusing to become orgasmium is a hedonistic utilitarian mistake, full stop.”
Do you want to become orgasmium?
Perhaps you do. In that case, I direct the question to myself, and my answer is no: I don’t want to become orgasmium.
That having been established, what could it mean to say that my judgment is a “mistake”? That seems to be a category error. One can’t be mistaken in wanting something. One can be mistaken about wanting something (“I thought I wanted X, but upon reflection and consideration of my mental state, it turns out I actually don’t want X”), or one can be mistaken about some property of the thing in question, which affects the preference (“I thought I wanted X, but then I found out more about X, and now I don’t want X”); but if you’re aware of all relevant facts about the way the world is, and you’re not mistaken about what your own mental states are, and you still want something… labeling that a “mistake” seems simply meaningless.
On to your analogy:
If someone wants to “keep her body natural”, then conditional on that even being a coherent desire[1], what’s wrong with it? If it harms other people somehow, then that’s a problem… otherwise, I see no issue. I don’t think it makes this person “kind of dumb” unless you mean that she’s actually got other values that are being harmed by this value, or that she’s being irrational in some other way; but values in and of themselves cannot be irrational.
[Eliezer] can’t stand the idea that scientific discovery is only an instrument to increase happiness, so he makes it a terminal value just because he can.
This construal is incorrect. Say rather: Eliezer does not agree that scientific discovery is only an instrument to increase happiness. Eliezer isn’t making scientific discovery a terminal value, it is a terminal value for him. Terminal values are given.
In this discussion we are, of course, ignoring external effects altogether.
Why are we doing that...? If it’s only about happiness, then external effects should be irrelevant. You shouldn’t need to ignore them; they shouldn’t affect your point.
[1] Coherence matters: the difference between your hypothetical hippie and Eliezer the potential-scientific-discoverer is that the hippie, upon reflection, would realize (or so we would like to hope) that “natural” is not a very meaningful category, that her body is almost certainly already “not natural” in at least some important sense, and that “keeping her body natural” is just not a state of affairs that can be described in any consistent and intuitively correct way, much less one that can be implemented. That, if anything, is what makes her preference “dumb”. There are no analogous failures of reasoning behind Eliezer’s preference to actually discover things instead of just pretend-discovering, or my preference to not become orgasmium.
That having been established, what could it mean to say that my judgment is a “mistake”? That seems to be a category error. One can’t be mistaken in wanting something.
I have never used the word “mistake” by itself. I did say that refusing to become orgasmium is a hedonistic utilitarian mistake, which is mathematically true, unless you disagree with me on the definition of “hedonistic utilitarian mistake” (= an action which demonstrably results in less hedonic utility than some other action) or of “orgasmium” (= a state of maximum personal hedonic utility).[1]
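Spelled out under those two definitions, the claim is just an inequality; here is a minimal sketch in my own notation (the symbol U and the action labels are my shorthand, not anything from the discussion above):

```latex
% Toy formalization; U and the action labels are my own shorthand.
% Let U(a) denote the hedonic utility (to the agent) of action a,
% and define orgasmium as the action attaining the maximum:
\[
  U(\mathrm{orgasmium}) = \max_a U(a).
\]
% Then any refusal r with U(r) < U(\mathrm{orgasmium}) "demonstrably
% results in less hedonic utility than some other action", i.e. it is
% a hedonistic utilitarian mistake by the stated definition.
\[
  U(r) < U(\mathrm{orgasmium}) \implies r \ \text{is a mistake (in this sense).}
\]
```

Note that this is true by construction: the conclusion is packed into the definitions, which is the whole point of the paragraph above.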
I point this out because I think you are quite right: it doesn’t make sense to tell somebody that they are mistaken in “wanting” something.
Indeed, I never argued that the dying hippie was mistaken. In fact I made exactly the same point that you’re making, when I said:
And [the hippie’s] choice is — well, it isn’t wrong, choices can’t be “wrong”
What I said was that she is misguided.
The argument I was trying to make was: look, this hippie is using some suspect reasoning to make her decisions, and Eliezer’s reasoning looks a lot like hers, so we should doubt Eliezer’s conclusions. There are two perfectly reasonable ways to refute this argument: you can (1) deny that the hippie’s reasoning is suspect, or (2) deny that Eliezer’s reasoning is similar to hers.
These are both perfectly fine things to do, since I never elaborated on either point. (You seem to be trying option 1.) My comment can only possibly convince people who feel instinctively that both of these points are true.
All that said, I think that I am meaningfully right — in the sense that, if we debated this forever, we would both end up much closer to my (current) view than to your (current) view. Maybe I’ll write an article about this stuff and see if I can make my case more strongly.
[1] Please note that I am ignoring the external effects of becoming orgasmium. If we take those into account, my statement stops being mathematically true.
I don’t think those are the only two ways to refute the argument. I can think of at least two more:
(3) Deny the third step of the argument’s structure — the “so we should doubt Eliezer’s conclusions” part. Analogical reasoning applied to surface features of arguments is not reliable. There’s really no substitute for actually examining an argument.
(4) Disagree that construing the hippie’s position as constituting any sort of “reasoning” that may or may not be “suspect” is a meaningful description of what’s going on in your hypothetical (or at least, the interesting aspect of what’s going on, the part we’re concerned with). The point I was making is this: what’s relevant in that scenario is that the hippie has “keeping her body natural” as a terminal value. If that’s a coherent value, then the rest of the reasoning (“and therefore I shouldn’t take this pill”) is trivial and of no interest to us. Now it may not be a coherent value, as I said; but if it is — well, arguing with terminal values is not a matter of poking holes in someone’s logic. Terminal values are given.
As for your other points:
It’s true, you didn’t say “mistake” on its own. What I am wondering is this: ok, refusing to become orgasmium fails to satisfy the mathematical requirements of hedonistic utilitarianism.
But why should anyone care about that?
I don’t mean this as a general, out-of-hand dismissal; I am asking, specifically, why such a requirement would override a person’s desires:
Person A: If you become orgasmium, you would feel more pleasure than you otherwise would.
Person B: But I don’t want to become orgasmium.
Person A: But if you want to feel as much pleasure as possible, then you should become orgasmium!
Person B: But… I don’t want to become orgasmium.
I see Person B’s position as being the final word on the matter (especially if, as you say, we’re ignoring external consequences). Person A may be entirely right — but so what? Why should that affect Person B’s judgments? Why should the mathematical requirements behind Person A’s framework have any relevance to Person B’s decisions? In other words, why should we be hedonistic utilitarians, if we don’t want to be?
(If we imagine the above argument continuing, it would develop that Person B doesn’t want to feel as much pleasure as possible; or, at the least, wants other things too, and even the pleasure thing he wants only given certain conditions; in other words, we’d arrive at conclusions along the lines outlined in the “Complexity of value” wiki entry.)
(As an aside, I’m still not sure why you’re ignoring external effects in your arguments.)
Person A: If you become orgasmium, you would feel more pleasure than you otherwise would.
Person B: But I don’t want to become orgasmium.
If I become orgasmium, then I would cease to exist, and the orgasmium, which is not me in any meaningful sense, will have more pleasure than I otherwise would have. But I don’t care about the pleasure of this orgasmium, and certainly would not pay my existence for it.
Person A: If you become orgasmium, you would feel more pleasure than you otherwise would.
Person B: But I don’t want to become orgasmium.
Person A: But if you want to feel as much pleasure as possible, then you should become orgasmium!
Person B: But… I don’t want to become orgasmium.
I see Person B’s position as being the final word on the matter (especially if, as you say, we’re ignoring external consequences). Person A may be entirely right — but so what? Why should that affect Person B’s judgments? Why should the mathematical requirements behind Person A’s framework have any relevance to Person B’s decisions? In other words, why should we be hedonistic utilitarians, if we don’t want to be?
The difficulty here, of course, is that Person B is using a cached heuristic that outputs “no” for “become orgasmium”; and we cannot be certain that this heuristic is correct in this case. Just as Person A is using the (almost certainly flawed) heuristic “feel as much pleasure as possible”, which outputs “yes” for “become orgasmium”.
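The clash between the two heuristics can be caricatured in code; the option list, the pleasure numbers, and the function names below are all invented for illustration (a toy sketch, not a claim about how anyone actually decides):

```python
# Toy model: two decision heuristics disagreeing about the same option.

OPTIONS = [
    {"name": "become orgasmium", "pleasure": 100},
    {"name": "stay as I am", "pleasure": 10},
]

def person_a_says_yes(option):
    # Person A's heuristic: "feel as much pleasure as possible" --
    # endorse whichever option maximizes pleasure.
    return option["pleasure"] == max(o["pleasure"] for o in OPTIONS)

def person_b_says_yes(option):
    # Person B's heuristic: a cached table of prior yes/no verdicts,
    # consulted without recomputing anything.
    cached_verdicts = {"become orgasmium": False, "stay as I am": True}
    return cached_verdicts[option["name"]]

orgasmium = OPTIONS[0]
print(person_a_says_yes(orgasmium))  # prints True
print(person_b_says_yes(orgasmium))  # prints False
```

Neither function contains any argument for its own correctness; that is the difficulty in both directions.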
The difficulty here, of course, is that Person B is using a cached heuristic that outputs “no” for “become orgasmium”
Why do you think so?
we cannot be certain that this heuristic is correct in this case.
What do you mean by “correct”?
Edit: I think it would be useful for any participants in discussions like this to read Eliezer’s Three Worlds Collide. Not as fictional evidence, but as an examination of the issues, which I think it does quite well. A relevant quote, from chapter 4, “Interlude with the Confessor”:
A sigh came from that hood. “Well… would you prefer a life entirely free of pain and sorrow, having sex all day long?”
“Not… really,” Akon said.
The shoulders of the robe shrugged. “You have judged. What else is there?”
I give a decent probability to the optimal order of things containing absolutely zero pleasure. I assign a lower, but still significant, probability to it containing an infinite amount of pain in any given subjective interval.
… why? Humans definitely appear to want to avoid pain and enjoy pleasure. I suppose I can see pleasure being replaced with “better” emotions, but I’m really baffled regarding the pain. Is it to do with punishment? Challenge? Something I haven’t thought of?
Agreed, pretty much. I said significant probability, not big. I’m not good at translating anticipations into numbers, but no more than 5%. Mostly based on extreme outside view, as in “something I haven’t thought of”.
I surmise from your comments that you may not be aware that Eliezer’s written quite a bit on this matter; http://wiki.lesswrong.com/wiki/Complexity_of_value is a good summary/index (http://lesswrong.com/lw/l3/thou_art_godshatter/ is one of my favorites). There’s a lot of stuff in there that is relevant to your points.
Why do you think so?
Humans are not perfect reasoners.
[Edited for clarity.]
I give a decent probability to the optimal order of things containing absolutely zero pleasure.
Is this intended as a reply to my comment?
Reply to the entire thread, really.
Fair enough.
I’m not good at translating anticipations into numbers, but no more than 5%.
Oh, right. “Significance” is subjective, I guess. I assumed it meant, I don’t know, >10% or whatever.