This may be a minor nit, but… is this forum collectively anti-orgasmium, now?
Because being orgasmium is by definition more pleasant than not being orgasmium. Refusing to become orgasmium is a hedonistic utilitarian mistake, full stop.[1] (Well, that’s not actually true, since as a human you can make other people happier, and as orgasmium you presumably cannot. But it is at least on average a mistake to refuse to become orgasmium; I would argue that it is virtually always a mistake.)
That article is arguing that it’s all right to value things that aren’t mental states over a net gain in mental utility.[1] If, for instance, you’re given the choice between feeling like you’ve made lots of scientific discoveries and actually making just a few scientific discoveries, it’s reasonable to prefer the latter.[2]
Well, that example doesn’t sound all that ridiculous.
But the logic that Eliezer is using is exactly the same logic that drives somebody who’s dying of a horrible disease to refuse antibiotics, because she wants to keep her body natural. And this choice is — well, it isn’t wrong, choices can’t be “wrong” — but it reflects a very fundamental sort of human bias. It’s misguided.
And I think that Eliezer’s argument is misguided, too. He can’t stand the idea that scientific discovery is only an instrument to increase happiness, so he makes it a terminal value just because he can. This is less horrible than the hippie who thinks that maintaining her “naturalness” is more important than avoiding a painful death, but it’s not much less dumb.
[1] Or a net gain in “happiness,” if we don’t mind using that word as a catchall for “whatever it is that makes good mental states good.”
[2] In this discussion we are, of course, ignoring external effects altogether. And we’re assuming that the person who gets to experience lots of scientific discoveries really is happier than the person who doesn’t, otherwise there’s nothing to debate. Let me note that in the real world, it is obviously possible to make yourself less happy by taking joy-inducing drugs — for instance if doing so devalues the rest of your life. This fact makes Eliezer’s stance seem a lot more reasonable than it actually is.
But the logic that Eliezer is using is exactly the same logic that drives somebody who’s dying of a horrible disease to refuse antibiotics, because she wants to keep her body natural. And this choice is — well, it isn’t wrong, choices can’t be “wrong” — but it reflects a very fundamental sort of human bias. It’s misguided.
Very well, let’s back up Eliezer’s argument with some hard evidence. Fortunately, lukeprog has already written a brief review of the neuroscience on this topic. The verdict? Eliezer is right. People value things other than happiness and pleasure. The idea that pleasant feelings are the sole good is an illusion created by the fact that the signals for wanting something and getting pleasure from it are commingled on the same neurons.
So no, Eliezer is not misguided. On the contrary, the evidence is on his side. People really do value more things than just happiness. If you want more evidence, consider this thought experiment Alonzo Fyfe cooked up:
Assume that you and somebody you care about (e.g., your child) are kidnapped by a mad scientist. This scientist gives you two options:
Option 1: Your child will be taken away and tortured. However, you will be made to believe that your child is living a happy and healthy life. You will receive regular reports and even correspondence explaining how great your child’s life is. Except, they will all be fake. In fact, we will take your child to another location and spend every day peeling off his skin while soaking him in a vat of salt water, among other things.
Option 2: Your child will be taken away, provided with paid medical insurance, an endowment to complete an education, will be hired into a good job, and will be caused to live a healthy and happy life. However, you will be made to believe that your child is suffering excruciating torture. You will be able to hear what you think are your child’s screams coming down the hallway. We will show you video of the torture. It will all be fake, of course, but you will be convinced it is real.
Of course, after you make your choice, we will make you forget that you even had these options presented to you.
What do you choose?
Now, we are not going to kidnap people and make them choose. However, both theories need to explain the fact that the vast majority of parents, for example, report that, in such a situation, they would choose Option 2.
Happiness theory seems to suggest that the agent should choose Option 1. After all, the agent will be happier receiving news (that she believes) that says that her child is living a happy and healthy life. So, if happiness is what she is after, and Option 1 delivers more happiness, then Option 1 is the rational choice.
Why do people choose Option 2?
Because happiness theory is wrong. In fact, people do not choose happiness. They choose “making or keeping true the propositions that are the objects of our desires.” In this case, the desire in question is the desire that one’s child be healthy and happy. A person with a desire that “my child is healthy and happy” will select that option that will make or keep the proposition, “my child is healthy and happy” true. That is Option 2.
Choices can be wrong, and that one is. The hippie is simply mistaken about the kinds of differences that exist between “natural” and “non-natural” things, and about how much she would care about those differences if she knew more chemistry and physics (and, presumably, if she were less mistaken in her expectations of what happens “after you die”).
As for relating this to Eliezer’s argument, a few examples of wrong non-subjective-happiness values are no demonstration that subjective happiness is the only human terminal value, especially given the introspective and experimental evidence that people care about certain things that aren’t subjective happiness.
But the logic that Eliezer is using is exactly the same logic that drives somebody who’s dying of a horrible disease to refuse antibiotics, because she wants to keep her body natural.
I see absolutely no reason that people shouldn’t be allowed to decide this. (Where I firmly draw the line is people making decisions for other people on this kind of basis.)
So if I wanted to respond to the person dying of a horrible disease who is refusing antibiotics, I might say something like “you are confused about what you actually value and about the meaning of the word ‘natural.’ If you understood more about science and medicine and successfully resolved the relevant confusions, you would no longer want to make this decision.” (I might also say something like “however, I respect your right to determine what kind of substances enter your body.”)
I suppose you want me to say that Eliezer is also confused about what he actually values, namely that he thinks he values science but he only values the ability of science to increase human happiness. (I don’t think he’s confused about the meaning of any of the relevant words.)
I disagree. One reason to value science, even from a purely hedonistic point of view, is that science corrects itself over time, and in particular gives you better ideas about how to be a hedonist over time. If you wanted to actually design a process that turned people into orgasmium, you’d have to science a lot, and at the end of all that sciencing there’s no guarantee that the process you’ve come up with is hedonistically optimal. Maybe you could increase the capacity of the orgasmium to experience happiness further if you’d scienced more. Once you turn everyone into orgasmium, nobody’s around to science anymore, so nobody’s around to find better processes for turning people into orgasmium (or, science forbid, find better ethical arguments against hedonistic utilitarianism).
In short, the capacity for self-improvement is lost, and that would be terrible regardless of what direction you’re trying to improve towards.
However, you asked me what I think, so here it is...
The wording of your first post in this thread seems telling. You say that “Refusing to become orgasmium is a hedonistic utilitarian mistake, full stop.”
Do you want to become orgasmium?
Perhaps you do. In that case, I direct the question to myself, and my answer is no: I don’t want to become orgasmium.
That having been established, what could it mean to say that my judgment is a “mistake”? That seems to be a category error. One can’t be mistaken in wanting something. One can be mistaken about wanting something (“I thought I wanted X, but upon reflection and consideration of my mental state, it turns out I actually don’t want X”), or one can be mistaken about some property of the thing in question, which affects the preference (“I thought I wanted X, but then I found out more about X, and now I don’t want X”); but if you’re aware of all relevant facts about the way the world is, and you’re not mistaken about what your own mental states are, and you still want something… labeling that a “mistake” seems simply meaningless.
On to your analogy:
If someone wants to “keep her body natural”, then conditional on that even being a coherent desire[1], what’s wrong with it? If it harms other people somehow, then that’s a problem… otherwise, I see no issue. I don’t think it makes this person “kind of dumb” unless you mean that she’s actually got other values that are being harmed by this value, or is being irrational in some other ways; but values in and of themselves cannot be irrational.
[Eliezer] can’t stand the idea that scientific discovery is only an instrument to increase happiness, so he makes it a terminal value just because he can.
This construal is incorrect. Say rather: Eliezer does not agree that scientific discovery is only an instrument to increase happiness. Eliezer isn’t making scientific discovery a terminal value; it is a terminal value for him. Terminal values are given.
In this discussion we are, of course, ignoring external effects altogether.
Why are we doing that...? If it’s only about happiness, then external effects should be irrelevant. You shouldn’t need to ignore them; they shouldn’t affect your point.
[1] Coherence matters: the difference between your hypothetical hippie and Eliezer the potential-scientific-discoverer is that the hippie, upon reflection, would realize (or so we would like to hope) that “natural” is not a very meaningful category, that her body is almost certainly already “not natural” in at least some important sense, and that “keeping her body natural” is just not a state of affairs that can be described in any consistent and intuitively correct way, much less one that can be implemented. That, if anything, is what makes her preference “dumb”. There are no analogous failures of reasoning behind Eliezer’s preference to actually discover things instead of just pretend-discovering, or my preference to not become orgasmium.
That having been established, what could it mean to say that my judgment is a “mistake”? That seems to be a category error. One can’t be mistaken in wanting something.
I have never used the word “mistake” by itself. I did say that refusing to become orgasmium is a hedonistic utilitarian mistake, which is mathematically true, unless you disagree with me on the definition of “hedonistic utilitarian mistake” (= an action which demonstrably results in less hedonic utility than some other action) or of “orgasmium” (= a state of maximum personal hedonic utility).[1]
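(To spell the “mathematically true” part out in symbols, still ignoring external effects and using notation introduced here purely for illustration: write h(s) for an agent’s personal hedonic utility in state s, and let orgasmium be a state o at which that utility is maximal. Then

$$h(o) = \max_{s} h(s) \;\Longrightarrow\; h(a) \le h(o) \ \text{for every alternative state } a,$$

so whenever the inequality is strict, refusing demonstrably results in less hedonic utility than becoming orgasmium, which is a hedonistic utilitarian mistake in exactly the sense defined above.)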
I point this out because I think you are quite right: it doesn’t make sense to tell somebody that they are mistaken in “wanting” something.
Indeed, I never argued that the dying hippie was mistaken. In fact I made exactly the same point that you’re making, when I said:
And [the hippie’s] choice is — well, it isn’t wrong, choices can’t be “wrong”
What I said was that she is misguided.
The argument I was trying to make was, look, this hippie is using some suspect reasoning to make her decisions, and Eliezer’s reasoning looks a lot like hers, so we should doubt Eliezer’s conclusions. There are two perfectly reasonable ways to refute this argument: you can (1) deny that the hippie’s reasoning is suspect, or (2) deny that Eliezer’s reasoning is similar to hers.
These are both perfectly fine things to do, since I never elaborated on either point. (You seem to be trying option 1.) My comment can only possibly convince people who feel instinctively that both of these points are true.
All that said, I think that I am meaningfully right — in the sense that, if we debated this forever, we would both end up much closer to my (current) view than to your (current) view. Maybe I’ll write an article about this stuff and see if I can make my case more strongly.
[1] Please note that I am ignoring the external effects of becoming orgasmium. If we take those into account, my statement stops being mathematically true.
I don’t think those are the only two ways to refute the argument. I can think of at least two more:
(3) Deny the third step of the argument’s structure — the “so we should doubt Eliezer’s conclusions” part. Analogical reasoning applied to surface features of arguments is not reliable. There’s really no substitute for actually examining an argument.
(4) Disagree that construing the hippie’s position as constituting any sort of “reasoning” that may or may not be “suspect” is a meaningful description of what’s going on in your hypothetical (or at least, the interesting aspect of what’s going on, the part we’re concerned with). The point I was making is this: what’s relevant in that scenario is that the hippie has “keeping her body natural” as a terminal value. If that’s a coherent value, then the rest of the reasoning (“and therefore I shouldn’t take this pill”) is trivial and of no interest to us. Now it may not be a coherent value, as I said; but if it is — well, arguing with terminal values is not a matter of poking holes in someone’s logic. Terminal values are given.
As for your other points:
It’s true, you didn’t say “mistake” on its own. What I am wondering is this: ok, refusing to become orgasmium fails to satisfy the mathematical requirements of hedonistic utilitarianism.
But why should anyone care about that?
I don’t mean this as a general, out-of-hand dismissal; I am asking, specifically, why such a requirement would override a person’s desires:
Person A: If you become orgasmium, you would feel more pleasure than you otherwise would.
Person B: But I don’t want to become orgasmium.
Person A: But if you want to feel as much pleasure as possible, then you should become orgasmium!
Person B: But… I don’t want to become orgasmium.
I see Person B’s position as being the final word on the matter (especially if, as you say, we’re ignoring external consequences). Person A may be entirely right — but so what? Why should that affect Person B’s judgments? Why should the mathematical requirements behind Person A’s framework have any relevance to Person B’s decisions? In other words, why should we be hedonistic utilitarians, if we don’t want to be?
(If we imagine the above argument continuing, it would develop that Person B doesn’t want to feel as much pleasure as possible; or, at the least, wants other things too, and even the pleasure thing he wants only given certain conditions; in other words, we’d arrive at conclusions along the lines outlined in the “Complexity of value” wiki entry.)
(As an aside, I’m still not sure why you’re ignoring external effects in your arguments.)
Person A: If you become orgasmium, you would feel more pleasure than you otherwise would.
Person B: But I don’t want to become orgasmium.
If I become orgasmium, then I would cease to exist, and the orgasmium, which is not me in any meaningful sense, will have more pleasure than I otherwise would have. But I don’t care about the pleasure of this orgasmium, and certainly would not pay my existence for it.
Person A: If you become orgasmium, you would feel more pleasure than you otherwise would.
Person B: But I don’t want to become orgasmium.
Person A: But if you want to feel as much pleasure as possible, then you should become orgasmium!
Person B: But… I don’t want to become orgasmium.
I see Person B’s position as being the final word on the matter (especially if, as you say, we’re ignoring external consequences). Person A may be entirely right — but so what? Why should that affect Person B’s judgments? Why should the mathematical requirements behind Person A’s framework have any relevance to Person B’s decisions? In other words, why should we be hedonistic utilitarians, if we don’t want to be?
The difficulty here, of course, is that Person B is using a cached heuristic that outputs “no” for “become orgasmium”; and we cannot be certain that this heuristic is correct in this case. Just as Person A is using the (almost certainly flawed) heuristic “feel as much pleasure as possible”, which outputs “yes” for “become orgasmium”.
The difficulty here, of course, is that Person B is using a cached heuristic that outputs “no” for “become orgasmium”
Why do you think so?
we cannot be certain that this heuristic is correct in this case.
What do you mean by “correct”?
Edit: I think it would be useful for any participants in discussions like this to read Eliezer’s Three Worlds Collide. Not as fictional evidence, but as an examination of the issues, which I think it does quite well. A relevant quote, from chapter 4, “Interlude with the Confessor”:
A sigh came from that hood. “Well… would you prefer a life entirely free of pain and sorrow, having sex all day long?”
“Not… really,” Akon said.
The shoulders of the robe shrugged. “You have judged. What else is there?”
I give a decent probability to the optimal order of things containing absolutely zero pleasure. I assign a lower, but still significant, probability to it containing an infinite amount of pain in any given subjective interval.
… why? Humans definitely appear to want to avoid pain and enjoy pleasure. I suppose I can see pleasure being replaced with “better” emotions, but I’m really baffled regarding the pain. Is it to do with punishment? Challenge? Something I haven’t thought of?
Agreed, pretty much. I said significant probability, not big. I’m not good at translating anticipations into numbers, but no more than 5%. Mostly based on extreme outside view, as in “something I haven’t thought of”.
For as long as I’ve been here, which admittedly isn’t all that long.
Because being orgasmium is by definition more pleasant than not being orgasmium. Refusing to become orgasmium is a hedonistic utilitarian mistake, full stop.[1]
I’m anti-orgasmium, but not necessarily anti-experience-machine. I’m approximately a median-preference utilitarian. (This is more descriptive than normative)
I can’t bring myself to see the creation of an awesomeness pill as the one problem of such huge complexity that even a superintelligent agent can’t solve it.
I have no doubt that you could make a pill that would convince someone that they were living an awesome life, complete with hallucinations of rocket-powered tyrannosaurs, and black leather lab coats.
The trouble is that merely hallucinating those things, or merely feeling awesome is not enough.
The average optimizer probably has no code for experiencing utility; it only evaluates the utility of the actions under consideration. The concept of valuing (or even having) internal experience is particular to humans, and is in fact only one of the many things that we care about. Is there a good argument for why internal experience ought to be the only thing we care about? Why should we forget all the other things that we like and focus solely on internal experience (and possibly altruism)?
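(A minimal sketch of what “no code for experiencing utility” could look like, with entirely made-up actions and scores: a bare optimizer just computes a score for each candidate action and outputs the highest-scoring one; nothing in it stores, revisits, or “enjoys” the score of the action it ends up taking.)

```python
# Toy optimizer, purely illustrative (invented actions and numbers).
# It evaluates candidate actions and returns the best one; there is no
# variable anywhere representing utility "experienced" after acting.

def utility(action):
    return {"tile the universe with orgasmium": 3.0,
            "do more science": 2.0,
            "do nothing": 0.0}[action]

def choose(actions):
    # Compare scores, output an action, and stop; the winning score itself
    # is never stored or "felt" afterwards.
    return max(actions, key=utility)

print(choose(["tile the universe with orgasmium", "do more science", "do nothing"]))
```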
Can’t I simulate everything I care about? And if I can, why would I care about what is going on outside of the simulation, any more than I care now about a hypothetical asteroid on which the “true” purpose of the universe is written? Hell, if I can delete the fact from my memory that my utility function is being deceived, I’d gladly do so—yes, it will bring some momentary negative utility, but it would be greatly offset by the gains, especially stretched over a huge amount of time.
Now that I think about it...if, without an awesomeness pill, my decision would be to go and do battle in an eternal Valhalla where I polish my skills and have fun, and an awesomeness pill brings me that, except maybe better in some way I wouldn’t normally have thought of...what exactly is the problem here? The image of a brain with the utility slider moved to the max is disturbing, but I myself can avoid caring about that particular asteroid. An image of a universe tiled with brains storing infinite integers is disturbing; one of a universe tiled with humans riding rocket-powered tyrannosaurs is great—and yet, they’re one and the same; we just can’t intuitively penetrate the black box that is the brain storing the integer. I’d gladly tile the universe with awesome.
If I could take an awesomeness pill and be whisked off somewhere where my body would be taken care of indefinitely, leaving everything else as it is, maybe I would decline; probably I wouldn’t. Luckily, once awesomeness pills become available, there probably won’t be starving children, so that point seems moot.
[PS.] In any case, if my space fleet flies by some billboard saying that all this is an illusion, I’d probably smirk, I’d maybe blow it up with my rainbow lasers, and I’d definitely feel bad about all those other fellas whose space fleets are a bit less awesome and significantly more energy-consuming than mine (provided our AI is still limited by entropy, at the very least, and therefore limited in its ability to tile the world to infinity; if it can create as many real giant robots as awesomeness pills, it doesn’t matter which option is taken), all just because they’re bothered by silly billboards like this. If I’m allowed to have that knowledge and the resulting negative utility, that is.
[PPS.] I can’t imagine how an awesomeness pill would max my sliders for self-improvement, accomplishment, etc. without actually giving me the illusion of doing those things. As in, I can imagine feeling intense pleasure; I can’t imagine feeling intense achievement separated from actually flying (or imagining that I’m flying) a spaceship—it wouldn’t feel as fulfilling, and it makes no sense that an awesomeness pill would separate them if it’s possible not to. It probably wouldn’t have me go through the roundabout process of doing all the stuff, and it probably would max my sliders even if I can’t imagine it, to an effect much different from the roundabout way, and by definition superior. As long as it doesn’t modify my utility function (as long as I value flying spaceships), I don’t mind.
Luckily, once awesomeness pills become available, there probably won’t be starving children, so that point seems moot.
This is a key assumption. Sure, if I assume that the universe is such that no choice I make affects the chances that a child I care about will starve—and, more generally, if I assume that no choice I make affects the chances that people will gain good stuff or bad stuff—then sure, why not wirehead? It’s not like there’s anything useful I could be doing instead.
But some people would, in that scenario, object to the state of the world. Some people actually want to be able to affect the total amount of good and bad stuff that people get.
And, sure, the rest of us could get together and lie to them (e.g., by creating a simulation in which they believe that’s the case), though it’s not entirely clear why we ought to. We could also alter them (e.g., by removing their desire to actually do good) but it’s not clear why we ought to do that, either.
I can’t imagine feeling intense achievement separated from actually flying (or imagining that I’m flying) a spaceship
Do you mean to distinguish this from believing that you have flown a spaceship?
Don’t we have to do it (lying to people) because we value other people being happy? I’d rather trick them (or rather, let the AI do so without my knowledge) than have them spend a lot of time angsting about not being able to help anyone because everyone was already helped. (If there are people who can use your help, I’m not about to wirehead you though)
Do you mean to distinguish this from believing that you have flown a spaceship?
Yes. Thinking about simulating achievement got me confused about it. I can imagine intense pleasure or pain. I can’t imagine intense achievement; if I just got the surge of warmth I normally get, it would feel wrong, removed from flying a spaceship. Yet, that doesn’t mean that I don’t have an achievement slider to max; it just means I can’t imagine what maxing it indefinitely would feel like. Maxing the slider leading to hallucinations about performing activities related to achievement seems too roundabout—really, that’s the only thing I can say; it feels like it won’t work that way. Can the pill satisfy terminal values without making me think I satisfied them? I think this question shows that the sentence before it is just me being confused. Yet I can’t imagine how an awesomeness pill would feel, hence I can’t dispel this annoying confusion.
[EDIT] Maybe a pill that simply maxes the sliders would make me feel achievement, but without flying a spaceship, hence making it incomplete, hence forcing the AI to include a spaceship hallucinator. I think I am/was making it needlessly complicated. In any case, the general idea is that if we are all opposed to just feeling intense pleasure without all the other stuff we value, then a pill that gives us only intense pleasure is flawed and would not even be given as an option.
Regarding the first bit… well, we have a few basic choices:
Change the world so that reality makes them happy
Change them so that reality makes them happy
Lie to them about reality, so that they’re happy
Accept that they aren’t happy
If I’m understanding your scenario properly, we don’t want to do the first because it leaves more people worse off, and we don’t want to do the last because it leaves us worse off. (Why our valuing other people being happy should be more important than their valuing actually helping people, I don’t know, but I’ll accept that it is.)
But why, on your view, ought we lie to them, rather than change them?
I attach negative utility to getting my utility function changed—I wouldn’t change myself to maximize paperclips. I also attach negative utility to getting my memory modified—I don’t like the normal decay that is happening even now, but far worse is getting a large swath of my memory wiped. I also dislike being fed false information, but that is by far the least bad of the three, provided no negative consequences arise from the false belief. Hence, I’d prefer being fed false information to having my memory modified, and having my memory modified to being made to stop caring about other people altogether. There is an especially big gap between the last one and the former two.
Thanks for summarizing my argument. I guess I need to work on expressing myself so I don’t force other people to work through my roundaboutness :)
Fair enough. If you have any insight into why your preferences rank in this way, I’d be interested, but I accept that they are what they are.
However, I’m now confused about your claim.
Are you saying that we ought to treat other people in accordance with your preferences of how to be treated (e.g., lied to in the present rather than having their values changed or their memories altered)? Or are you just talking about how you’d like us to treat you? Or are you assuming that other people have the same preferences you do?
For the preference ranking, I guess I can mathematically express it by saying that any priority change leads to me doing stuff that would be utility+ at the time, but utility- or utility-neutral now (and since I could be spending the time generating utility+ instead, even neutral is bad). For example, if I could change my utility function to eating babies, and babies were plentiful, this option would result in a huge source of utility+ after the change. Which doesn’t change the fact that it also means I’d eat a ton of babies, which makes the option a huge source of utility- currently—I wouldn’t want to do something that would lead to me eating a ton of babies. If I greatly valued generating as much utility+ for myself at any moment as possible, I would take the plunge; however, I look at the future, decide not to take what is currently utility- for me, and move on.
Or maybe I’m just making up excuses to refuse to take a momentary discomfort for eternal utility+; after all, I bet someone having the time of his life eating babies would laugh at me and have more fun than me. The inconsistency here is that I avoid the utility- choice when it comes to changing my terminal values, but I have no issue taking the utility- choice when I decide I want to be in a simulation. Guess I don’t value truth that much.
I find that changing my memories leads to similar results as changing my utility function, but on a much, much smaller scale—after all, they are what make up my beliefs, my preferences, myself as a person. Changing them at all changes my belief system and preferences; but that’s happening all the time. Changing them on a large scale is significantly worse in terms of affecting my utility function—it can’t change my terminal values, so it is still far less bad than directly making me interested in eating babies, but still negative. Getting lied to is just bad, with no relation to the above two, and weakest in importance.
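(A toy version of the baby-eating example, with invented numbers, just to show the shape of the reasoning: the decision about whether to change my utility function gets scored by my current function, so the switch comes out negative now, even though my post-change self would score its own behavior as a huge positive.)

```python
# Purely illustrative numbers. Whatever utility function I end up with, my
# future self acts to maximize it; but the choice of whether to switch is
# evaluated with the function I have *now*.

def current_utility(outcome):
    return {"live normally": 1.0, "eat lots of babies": -1000.0}[outcome]

def baby_eating_utility(outcome):
    return {"live normally": 0.0, "eat lots of babies": 1000.0}[outcome]

def outcome_under(utility_fn):
    # The future self simply does whatever its own function rates highest.
    return max(["live normally", "eat lots of babies"], key=utility_fn)

keep = current_utility(outcome_under(current_utility))         # 1.0
switch = current_utility(outcome_under(baby_eating_utility))   # -1000.0
print("change my utility function?", switch > keep)            # False
```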
My gut says that I should treat others as I want them to treat me. Provided a simulation is a bit more awesome, or comparably awesome but more efficient, I’d rather take that than the real thing. Hence, I’d want to give others what I myself prefer (in terms of ways to achieve preferences), not because they are certain to agree that being lied to is better than angsting about not helping people, but because my way is either better or worse than theirs, and I wouldn’t believe in my way unless I thought it better. Of course, I am also assuming that truth isn’t a terminal value to them. In the same way, since I don’t want my utility function changed, I’d rather not do it to them.
Hell, if I can delete the fact from my memory that my utility function is being deceived, I’d gladly do so—yes, it will bring some momentary negative utility, but it would be greatly offset by the gains, especially stretched over a huge amount of time.
I don’t understand this. If your utility function is being deceived, then you don’t value the true state of affairs, right? Unless you value “my future self feeling utility” as a terminal value, and this outweighs the value of everything else …
No, this is more about deleting a tiny discomfort—say, the fact that I know that all of it is an illusion. I attach a big value to my memory and especially disagree with sweeping changes to it, but I’ll rely on the pill, and thereby the AI, to decide what shouldn’t be deleted (because deleting it would interfere with the fulfillment of my terminal values) and what can be deleted (because it brings negative utility that isn’t necessary).
Intellectually, I wouldn’t care whether I’m the only drugged brain in a world where everyone else is flying real spaceships. I probably can’t fully deal with the intuition telling me I’m drugged, though. It’s not highly important—just a passing discomfort when I think about the particular topic (passing and tiny, unless there are starving children). Whether it’s worth keeping around so I can feel in control and totally not drugged and imprisoned...I guess that depends on the circumstances.
One time my roommate ate shrooms, and then he spent about 2 hours repeatedly knocking over an orange juice jug, and then picking it up again. It was bizarre. He said “this is the best thing ever” and was pretty sincere. It looked pretty silly from the outside though.
This may be a minor nit, but… is this forum collectively anti-orgasmium, now?
Because being orgasmium is by definition more pleasant than not being orgasmium. Refusing to become orgasmium is a hedonistic utilitarian mistake, full stop.[1] (Well, that’s not actually true, since as a human you can make other people happier, and as orgasmium you presumably cannot. But it is at least on average a mistake to refuse to become orgasmium; I would argue that it is virtually always a mistake.)
[1] We’re all hedonistic utilitarians, right?
… no?
http://lesswrong.com/lw/lb/not_for_the_sake_of_happiness_alone/
Interesting stuff. Very interesting.
Do you buy it?
That article is arguing that it’s all right to value things that aren’t mental states over a net gain in mental utility.[1] If, for instance, you’re given the choice between feeling like you’ve made lots of scientific discoveries and actually making just a few scientific discoveries, it’s reasonable to prefer the latter.[2]
Well, that example doesn’t sound all that ridiculous.
But the logic that Eliezer is using is exactly the same logic that drives somebody who’s dying of a horrible disease to refuse antibiotics, because she wants to keep her body natural. And this choice is — well, it isn’t wrong, choices can’t be “wrong” — but it reflects a very fundamental sort of human bias. It’s misguided.
And I think that Eliezer’s argument is misguided, too. He can’t stand the idea that scientific discovery is only an instrument to increase happiness, so he makes it a terminal value just because he can. This is less horrible than the hippie who thinks that maintaining her “naturalness” is more important than avoiding a painful death, but it’s not much less dumb.
[1] Or a net gain in “happiness,” if we don’t mind using that word as a catchall for “whatever it is that makes good mental states good.”
[2] In this discussion we are, of course, ignoring external effects altogether. And we’re assuming that the person who gets to experience lots of scientific discoveries really is happier than the person who doesn’t, otherwise there’s nothing to debate. Let me note that in the real world, it is obviously possible to make yourself less happy by taking joy-inducing drugs — for instance if doing so devalues the rest of your life. This fact makes Eliezer’s stance seem a lot more reasonable than it actually is.
Very well, let’s back up Eliezer’s argument with some hard evidence. Fortunately, lukeprog has already written a brief review of the neuroscience on this topic. The verdict? Eliezer is right. People value things other than happiness and pleasure. The idea that pleasant feelings are the sole good is an illusion created by the fact that the signals for wanting something and getting pleasure from it are commingled on the same neurons.
So no, Eliezer is not misguided. On the contrary, the evidence is on his side. People really do value more things than just happiness. If you want more evidence, consider this thought experiment Alonzo Fyfe cooked up:
Damn but that’s a good example. Is it too long to submit to the Rationality Quotes thread?
You can argue that having values other than hedonistic utility is mistaken in certain cases. But that doesn’t imply that it’s mistaken in all cases.
Choices can be wrong, and that one is. The hippie is simply mistaken about the kinds of differences that exist between “natural” and “non-natural” things, and about how much she would care about those differences if she knew more chemistry and physics (and, presumably, if she were less mistaken in her expectations of what happens “after you die”).
As for relating this to Eliezer’s argument, a few examples of wrong non-subjective-happiness values are no demonstration that subjective happiness is the only human terminal value, especially given the introspective and experimental evidence that people care about certain things that aren’t subjective happiness.
I see absolutely no reason that people shouldn’t be allowed to decide this. (Where I firmly draw the line is people making decisions for other people on this kind of basis.)
I’m not arguing that people shouldn’t decide that. I’m not arguing any kind of “should.”
I’m just saying, if you do decide that, you’re kind of dumb. And by analogy Eliezer was being kind of dumb in his article.
Okay. What do you mean by “dumb”?
In this case: letting bias and/or intellectual laziness dominate your decision-making process.
So if I wanted to respond to the person dying of a horrible disease who is refusing antibiotics, I might say something like “you are confused about what you actually value and about the meaning of the word ‘natural.’ If you understood more about science and medicine and successfully resolved the relevant confusions, you would no longer want to make this decision.” (I might also say something like “however, I respect your right to determine what kind of substances enter your body.”)
I suppose you want me to say that Eliezer is also confused about what he actually values, namely that he thinks he values science but he only values the ability of science to increase human happiness. (I don’t think he’s confused about the meaning of any of the relevant words.)
I disagree. One reason to value science, even from a purely hedonistic point of view, is that science corrects itself over time, and in particular gives you better ideas about how to be a hedonist over time. If you wanted to actually design a process that turned people into orgasmium, you’d have to science a lot, and at the end of all that sciencing there’s no guarantee that the process you’ve come up with is hedonistically optimal. Maybe you could increase the capacity of the orgasmium to experience happiness further if you’d scienced more. Once you turn everyone into orgasmium, nobody’s around to science anymore, so nobody’s around to find better processes for turning people into orgasmium (or, science forbid, find better ethical arguments against hedonistic utilitarianism).
In short, the capacity for self-improvement is lost, and that would be terrible regardless of what direction you’re trying to improve towards.
I surmise from your comments that you may not be aware that Eliezer’s written quite a bit on this matter; http://wiki.lesswrong.com/wiki/Complexity_of_value is a good summary/index (http://lesswrong.com/lw/l3/thou_art_godshatter/ is one of my favorites). There’s a lot of stuff in there that is relevant to your points.
However, you asked me what I think, so here it is...
The wording of your first post in this thread seems telling. You say that “Refusing to become orgasmium is a hedonistic utilitarian mistake, full stop.”
Do you want to become orgasmium?
Perhaps you do. In that case, I direct the question to myself, and my answer is no: I don’t want to become orgasmium.
That having been established, what could it mean to say that my judgment is a “mistake”? That seems to be a category error. One can’t be mistaken in wanting something. One can be mistaken about wanting something (“I thought I wanted X, but upon reflection and consideration of my mental state, it turns out I actually don’t want X”), or one can be mistaken about some property of the thing in question, which affects the preference (“I thought I wanted X, but then I found out more about X, and now I don’t want X”); but if you’re aware of all relevant facts about the way the world is, and you’re not mistaken about what your own mental states are, and you still want something… labeling that a “mistake” seems simply meaningless.
On to your analogy:
If someone wants to “keep her body natural”, then conditional on that even being a coherent desire[1], what’s wrong with it? If it harms other people somehow, then that’s a problem… otherwise, I see no issue. I don’t think it makes this person “kind of dumb” unless you mean that she’s actually got other values that are being harmed by this value, or is being irrational in some other ways; but values in and of themselves cannot be irrational.
This construal is incorrect. Say rather: Eliezer does not agree that scientific discovery is only an instrument to increase happiness. Eliezer isn’t making scientific discovery a terminal value; it is a terminal value for him. Terminal values are given.
Why are we doing that...? If it’s only about happiness, then external effects should be irrelevant. You shouldn’t need to ignore them; they shouldn’t affect your point.
[1] Coherence matters: the difference between your hypothetical hippie and Eliezer the potential-scientific-discoverer is that the hippie, upon reflection, would realize (or so we would like to hope) that “natural” is not a very meaningful category, that her body is almost certainly already “not natural” in at least some important sense, and that “keeping her body natural” is just not a state of affairs that can be described in any consistent and intuitively correct way, much less one that can be implemented. That, if anything, is what makes her preference “dumb”. There are no analogous failures of reasoning behind Eliezer’s preference to actually discover things instead of just pretend-discovering, or my preference to not become orgasmium.
I have never used the word “mistake” by itself. I did say that refusing to become orgasmium is a hedonistic utilitarian mistake, which is mathematically true, unless you disagree with me on the definition of “hedonistic utilitarian mistake” (= an action which demonstrably results in less hedonic utility than some other action) or of “orgasmium” (= a state of maximum personal hedonic utility).[1]
I point this out because I think you are quite right: it doesn’t make sense to tell somebody that they are mistaken in “wanting” something.
Indeed, I never argued that the dying hippie was mistaken. In fact I made exactly the same point that you’re making, when I said:
What I said was that she is misguided.
The argument I was trying to make was, look, this hippie is using some suspect reasoning to make her decisions, and Eliezer’s reasoning looks a lot like hers, so we should doubt Eliezer’s conclusions. There are two perfectly reasonable ways to refute this argument: you can (1) deny that the hippie’s reasoning is suspect, or (2) deny that Eliezer’s reasoning is similar to hers.
These are both perfectly fine things to do, since I never elaborated on either point. (You seem to be trying option 1.) My comment can only possibly convince people who feel instinctively that both of these points are true.
All that said, I think that I am meaningfully right — in the sense that, if we debated this forever, we would both end up much closer to my (current) view than to your (current) view. Maybe I’ll write an article about this stuff and see if I can make my case more strongly.
[1] Please note that I am ignoring the external effects of becoming orgasmium. If we take those into account, my statement stops being mathematically true.
I don’t think those are the only two ways to refute the argument. I can think of at least two more:
(3) Deny the third step of the argument’s structure — the “so we should doubt Eliezer’s conclusions” part. Analogical reasoning applied to surface features of arguments is not reliable. There’s really no substitute for actually examining an argument.
(4) Disagree that construing the hippie’s position as constituting any sort of “reasoning” that may or may not be “suspect” is a meaningful description of what’s going on in your hypothetical (or at least, the interesting aspect of what’s going on, the part we’re concerned with). The point I was making is this: what’s relevant in that scenario is that the hippie has “keeping her body natural” as a terminal value. If that’s a coherent value, then the rest of the reasoning (“and therefore I shouldn’t take this pill”) is trivial and of no interest to us. Now it may not be a coherent value, as I said; but if it is — well, arguing with terminal values is not a matter of poking holes in someone’s logic. Terminal values are given.
As for your other points:
It’s true, you didn’t say “mistake” on its own. What I am wondering is this: ok, refusing to become orgasmium fails to satisfy the mathematical requirements of hedonistic utilitarianism.
But why should anyone care about that?
I don’t mean this as a general, out-of-hand dismissal; I am asking, specifically, why such a requirement would override a person’s desires:
Person A: If you become orgasmium, you would feel more pleasure than you otherwise would.
Person B: But I don’t want to become orgasmium.
Person A: But if you want to feel as much pleasure as possible, then you should become orgasmium!
Person B: But… I don’t want to become orgasmium.
I see Person B’s position as being the final word on the matter (especially if, as you say, we’re ignoring external consequences). Person A may be entirely right — but so what? Why should that affect Person B’s judgments? Why should the mathematical requirements behind Person A’s framework have any relevance to Person B’s decisions? In other words, why should we be hedonistic utilitarians, if we don’t want to be?
(If we imagine the above argument continuing, it would develop that Person B doesn’t want to feel as much pleasure as possible; or, at the least, wants other things too, and even the pleasure thing he wants only given certain conditions; in other words, we’d arrive at conclusions along the lines outlined in the “Complexity of value” wiki entry.)
(As an aside, I’m still not sure why you’re ignoring external effects in your arguments.)
If I become orgasmium, then I would cease to exist, and the orgasmium, which is not me in any meaningful sense, will have more pleasure than I otherwise would have. But I don’t care about the pleasure of this orgasmium, and certainly would not pay my existence for it.
The difficulty here, of course, is that Person B is using a cached heuristic that outputs “no” for “become orgasmium”; and we cannot be certain that this heuristic is correct in this case. Just as Person A is using the (almost certainly flawed) heuristic “feel as much pleasure as possible”, which outputs “yes” for “become orgasmium”.
Why do you think so?
What do you mean by “correct”?
Edit: I think it would be useful for any participants in discussions like this to read Eliezer’s Three Worlds Collide. Not as fictional evidence, but as an examination of the issues, which I think it does quite well. A relevant quote, from chapter 4, “Interlude with the Confessor”:
Humans are not perfect reasoners.
[Edited for clarity.]
I give a decent probability to the optimal order of things containing absolutely zero pleasure. I assign a lower, but still significant, probability to it containing an infinite amount of pain in any given subjective interval.
Is this intended as a reply to my comment?
Reply to the entire thread, really.
Fair enough.
Is this intended as a reply to my comment?
… why? Humans definitely appear to want to avoid pain and enjoy pleasure. I suppose I can see pleasure being replaced with “better” emotions, but I’m really baffled regarding the pain. Is it to do with punishment? Challenge? Something I haven’t thought of?
Agreed, pretty much. I said significant probability, not big. I’m not good at translating anticipations into numbers, but no more than 5%. Mostly based on extreme outside view, as in “something I haven’t thought of”.
Oh, right. “Significance” is subjective, I guess. I assumed it meant, I don’t know, >10% or whatever.
No. Most of us are preferentists or similar. Some of us are not consequentialists at all.
For as long as I’ve been here, which admittedly isn’t all that long.
Here’s your problem.
I’m anti-orgasmium, but not necessarily anti-experience-machine. I’m approximately a median-preference utilitarian. (This is more descriptive than normative)
No thanks. Awesomeness is more complex than can be achieved with wireheading.
I can’t bring myself to see the creation of an awesomeness pill as the one problem of such huge complexity that even a superintelligent agent can’t solve it.
I have no doubt that you could make a pill that would convince someone that they were living an awesome life, complete with hallucinations of rocket-powered tyrannosaurs, and black leather lab coats.
The trouble is that merely hallucinating those things, or merely feeling awesome is not enough.
The average optimizer probably has no code for experiencing utility; it only evaluates the utility of the actions under consideration. The concept of valuing (or even having) internal experience is particular to humans, and is in fact only one of the many things that we care about. Is there a good argument for why internal experience ought to be the only thing we care about? Why should we forget all the other things that we like and focus solely on internal experience (and possibly altruism)?
Can’t I simulate everything I care about? And if I can, why would I care about what is going on outside of the simulation, any more than I care now about a hypothetical asteroid on which the “true” purpose of the universe is written? Hell, if I can delete the fact from my memory that my utility function is being deceived, I’d gladly do so—yes, it will bring some momentary negative utility, but it would be greatly offset by the gains, especially stretched over a huge amount of time.
Now that I think about it...if, without an awesomeness pill, my decision would be to go and do battle in an eternal Valhalla where I polish my skills and have fun, and an awesomeness pill brings me that, except maybe better in some way I wouldn’t normally have thought of...what exactly is the problem here? The image of a brain with the utility slider moved to the max is disturbing, but I myself can avoid caring about that particular asteroid. An image of a universe tiled with brains storing infinite integers is disturbing; one of a universe tiled with humans riding rocket-powered tyrannosaurs is great—and yet, they’re one and the same; we just can’t intuitively penetrate the black box that is the brain storing the integer. I’d gladly tile the universe with awesome.
If I could take an awesomeness pill and be whisked off somewhere where my body would be taken care of indefinitely, leaving everything else as it is, maybe I would decline; probably I wouldn’t. Luckily, once awesomeness pills become available, there probably won’t be starving children, so that point seems moot.
[PS.] In any case, if my space fleet flies by some billboard saying that all this is an illusion, I’d probably smirk, I’d maybe blow it up with my rainbow lasers, and I’d definitely feel bad about all those other fellas whose space fleets are a bit less awesome and significantly more energy-consuming than mine (provided our AI is still limited by entropy, at the very least, and therefore limited in its ability to tile the world to infinity; if it can create as many real giant robots as awesomeness pills, it doesn’t matter which option is taken), all just because they’re bothered by silly billboards like this. If I’m allowed to have that knowledge and the resulting negative utility, that is.
[PPS.] I can’t imagine how an awesomeness pill would max my sliders for self-improvement, accomplishment, etc. without actually giving me the illusion of doing those things. As in, I can imagine feeling intense pleasure; I can’t imagine feeling intense achievement separated from actually flying (or imagining that I’m flying) a spaceship—it wouldn’t feel as fulfilling, and it makes no sense that an awesomeness pill would separate them if it’s possible not to. It probably wouldn’t have me go through the roundabout process of doing all the stuff, and it probably would max my sliders even if I can’t imagine it, to an effect much different from the roundabout way, and by definition superior. As long as it doesn’t modify my utility function (as long as I value flying spaceships), I don’t mind.
This is a key assumption. Sure, if I assume that the universe is such that no choice I make affects the chances that a child I care about will starve—and, more generally, if I assume that no choice I make affects the chances that people will gain good stuff or bad stuff—then sure, why not wirehead? It’s not like there’s anything useful I could be doing instead.
But some people would, in that scenario, object to the state of the world. Some people actually want to be able to affect the total amount of good and bad stuff that people get.
And, sure, the rest of us could get together and lie to them (e.g., by creating a simulation in which they believe that’s the case), though it’s not entirely clear why we ought to. We could also alter them (e.g., by removing their desire to actually do good) but it’s not clear why we ought to do that, either.
Do you mean to distinguish this from believing that you have flown a spaceship?
Don’t we have to do it (lying to people) because we value other people being happy? I’d rather trick them (or rather, let the AI do so without my knowledge) than have them spend a lot of time angsting about not being able to help anyone because everyone was already helped. (If there are people who can use your help, I’m not about to wirehead you though)
Yes. Thinking about simulating achievement got me confused about it. I can imagine intense pleasure or pain. I can’t imagine intense achievement; if I just got the surge of warmth I normally get, it would feel wrong, removed from flying a spaceship. Yet, that doesn’t mean that I don’t have an achievement slider to max; it just means I can’t imagine what maxing it indefinitely would feel like. Maxing the slider leading to hallucinations about performing activities related to achievement seems too roundabout—really, that’s the only thing I can say; it feels like it won’t work that way. Can the pill satisfy terminal values without making me think I satisfied them? I think this question shows that the sentence before it is just me being confused. Yet I can’t imagine how an awesomeness pill would feel, hence I can’t dispel this annoying confusion.
[EDIT] Maybe a pill that simply maxes the sliders would make me feel achievement, but without flying a spaceship, hence making it incomplete, hence forcing the AI to include a spaceship hallucinator. I think I am/was making it needlessly complicated. In any case, the general idea is that if we are all opposed to just feeling intense pleasure without all the other stuff we value, then a pill that gives us only intense pleasure is flawed and would not even be given as an option.
Regarding the first bit… well, we have a few basic choices:
Change the world so that reality makes them happy
Change them so that reality makes them happy
Lie to them about reality, so that they’re happy
Accept that they aren’t happy
If I’m understanding your scenario properly, we don’t want to do the first because it leaves more people worse off, and we don’t want to do the last because it leaves us worse off. (Why our valuing other people being happy should be more important than their valuing actually helping people, I don’t know, but I’ll accept that it is.)
But why, on your view, ought we lie to them, rather than change them?
I attach negative utility to getting my utility function changed—I wouldn’t change myself to maximize paperclips. I also attach negative utility to getting my memory modified—I don’t like the normal decay that is happening even now, but far worse is getting a large swath of my memory wiped. I also dislike being fed false information, but that is by far the least bad of the three, provided no negative consequences arise from the false belief. Hence, I’d prefer being fed false information to having my memory modified, and having my memory modified to being made to stop caring about other people altogether. There is an especially big gap between the last one and the former two.
Thanks for summarizing my argument. I guess I need to work on expressing myself so I don’t force other people to work through my roundaboutness :)
Fair enough. If you have any insight into why your preferences rank in this way, I’d be interested, but I accept that they are what they are.
However, I’m now confused about your claim.
Are you saying that we ought to treat other people in accordance with your preferences of how to be treated (e.g., lied to in the present rather than having their values changed or their memories altered)? Or are you just talking about how you’d like us to treat you? Or are you assuming that other people have the same preferences you do?
For the preference ranking, I guess I can mathematically express it by saying that any priority change leads to me doing stuff that would be utility+ at the time, but utility- or utility-neutral now (and since I could be spending the time generating utility+ instead, even neutral is bad). For example, if I could change my utility function to eating babies, and babies were plentiful, this option would result in a huge source of utility+ after the change. Which doesn’t change the fact that it also means I’d eat a ton of babies, which makes the option a huge source of utility- currently—I wouldn’t want to do something that would lead to me eating a ton of babies. If I greatly valued generating as much utility+ for myself at any moment as possible, I would take the plunge; however, I look at the future, decide not to take what is currently utility- for me, and move on.
Or maybe I’m just making up excuses to refuse to take a momentary discomfort for eternal utility+; after all, I bet someone having the time of his life eating babies would laugh at me and have more fun than me. The inconsistency here is that I avoid the utility- choice when it comes to changing my terminal values, but I have no issue taking the utility- choice when I decide I want to be in a simulation. Guess I don’t value truth that much.
I find that changing my memories leads to similar results as changing my utility function, but on a much, much smaller scale—after all, they are what make up my beliefs, my preferences, myself as a person. Changing them at all changes my belief system and preferences; but that’s happening all the time. Changing them on a large scale is significantly worse in terms of affecting my utility function—it can’t change my terminal values, so it is still far less bad than directly making me interested in eating babies, but still negative. Getting lied to is just bad, with no relation to the above two, and weakest in importance.
My gut says that I should treat others as I want them to treat me. Provided a simulation is a bit more awesome, or comparably awesome but more efficient, I’d rather take that than the real thing. Hence, I’d want to give others what I myself prefer (in terms of ways to achieve preferences), not because they are certain to agree that being lied to is better than angsting about not helping people, but because my way is either better or worse than theirs, and I wouldn’t believe in my way unless I thought it better. Of course, I am also assuming that truth isn’t a terminal value to them. In the same way, since I don’t want my utility function changed, I’d rather not do it to them.
I don’t understand this. If your utility function is being deceived, then you don’t value the true state of affairs, right? Unless you value “my future self feeling utility” as a terminal value, and this outweighs the value of everything else …
No, this is more about deleting a tiny discomfort—say, the fact that I know that all of it is an illusion. I attach a big value to my memory and especially disagree with sweeping changes to it, but I’ll rely on the pill, and thereby the AI, to decide what shouldn’t be deleted (because deleting it would interfere with the fulfillment of my terminal values) and what can be deleted (because it brings negative utility that isn’t necessary).
Intellectually, I wouldn’t care whether I’m the only drugged brain in a world where everyone else is flying real spaceships. I probably can’t fully deal with the intuition telling me I’m drugged, though. It’s not highly important—just a passing discomfort when I think about the particular topic (passing and tiny, unless there are starving children). Whether it’s worth keeping around so I can feel in control and totally not drugged and imprisoned...I guess that depends on the circumstances.
So you’re saying that your utility function is fine with the world-as-it-is, but you don’t like the sensation of knowing you’re in a vat. Fair enough.
My first thought was that an awesomeness pill would be a pill that makes ordinary experience awesome. Things fall down. Reliably. That’s awesome!
And in fact, that’s a major element of popular science writing, though I don’t know how well it works.
Psychedelic drugs already exist...
One time my roommate ate shrooms, and then he spent about 2 hours repeatedly knocking over an orange juice jug, and then picking it up again. It was bizarre. He said “this is the best thing ever” and was pretty sincere. It looked pretty silly from the outside though.