Going for what you “want” is merely going for what you like the thought of. To like the thought of something is to like something (in this case the “something” you like is the thought of something; a thought is also something). This means that wanting cannot happen unless there is liking that creates the wanting. So, of wanting and liking, liking is the only one that can ever independently drive any choice we make. Wanting which is not entirely contingent on liking never makes us make any decision, because there is no such thing as wanting which is not entirely contingent on liking.
Suppose you can save mankind, but only by taking a drug that makes you forget that you have saved mankind, makes you suffer horribly for two minutes, and then kills you. The fact that you can reasonably choose to take such a drug may seem to suggest that you can make a choice which you know will lead to a situation that you know you will not like being in. But there too, you actually just go for what you like: you like the thought of saving mankind, so you do whatever action seems associated with that thought. You may intellectually understand that you will suffer and feel no pleasure from the very moment after your decision is made, but this is hard for your subconscious to fully believe if you actually feel pleasure at the thought of that future (or at least less pain than you feel at the thought of the alternative), so your subconscious continues assuming that what you like thinking about is what will create situations that you will like. And the subconscious may be the one making the decision for you, even if it feels like you are making a conscious decision. So your decision may be a function exclusively of what you like, not of what you “want but don’t like”.
To merely like the thought of doing something can be motivating enough, and this is what makes so many people overeat, smoke, drink, take drugs, skip physical exercise, et cetera. After the point when you know you have already eaten enough, you couldn’t want to eat more unless you in some sense liked the thought of eating more. Our wanting something always implies an expectation of a future which we at least like thinking of. Wanting may sometimes appear to point in a different direction than liking does, but wanting is always merely liking the thought of something (more than one likes the thought of the alternatives).
Going for what you “want” (that is, going for what you merely like the thought of having) may be a very dumb and extremely short-sighted way of going for what you like, but it’s still a way of going for what you like.
Isn’t this just a way of saying that people like the thought of getting what they want? Indeed, it would be rather odd if expecting to get what we want made us unhappy. See also here, I guess.
No, I didn’t just try to say that “people like the thought of getting what they want”. The title of the article says “not for the sake of pleasure alone”. I tried to show that that is false. Everything we do, we do for pleasure alone, or to avoid or decrease suffering. We never make a decision based on a want that is not in turn based on a like/dislike. All “wants” are servile consequences of “likes”/”dislikes”, so I think “wants” should be treated as mere transitional steps, not as initial causes of our decisions.
You’ve just shown that wanting and liking go together, and asserted that one of them is more fundamental. Nothing which you have written appears to show that it’s impossible or even unlikely that people try to get things they want (which sometimes include pleasure, and which sometimes include saving the world), and that successful planning just feels good.
And nevertheless, people still don’t just optimize for pleasure, since they would take the drug mentioned, despite the fact that doing so is far less pleasurable than the alternative, even if the “pleasure involved in deciding to do so” is taken into account.
Sure, you can say that only the “pleasure involved in deciding” or “liking the thought of” is relevant, upon which your account of decision making reduces to

(something about X) --> (I like the thought of X) --> (I take action X)

where (I like the thought of X) would seem to be an unnecessary step where the same result would be obtained by eliminating it, and of course you still haven’t looked inside the black box (something about X).
Or you can suggest that people are just mistaken about how pleasurable the results will be of any action they take that doesn’t maximise pleasure. But at that point you’re trying to construct sensible preferences from a mind that appears to be wrong about almost everything including the blatantly obvious, and I have to wonder exactly what evidence in this mind points toward the “true” preferences being “maximal pleasure”.
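To make that reduction concrete, here is a minimal sketch in Python (the option names and the scoring function are invented for illustration, not anything from the original discussion): whether or not the chooser routes through an explicit “I like the thought of X” value, the selected action comes out the same, which is why the middle step looks dispensable.

```python
# Illustrative sketch only; option names and scores are invented.

def appeal(option):
    """Stand-in for 'something about X': whatever makes an option attractive."""
    scores = {"save mankind": 10.0, "do nothing": 1.0}
    return scores[option]

def choose_via_liking(options):
    # Step two made explicit: attach an "I like the thought of X" value to each
    # option, then act on the best-liked thought.
    liking = {x: appeal(x) for x in options}
    return max(liking, key=liking.get)

def choose_directly(options):
    # Same outcome with the middle step folded away: act on "something about X".
    return max(options, key=appeal)

options = ["save mankind", "do nothing"]
assert choose_via_liking(options) == choose_directly(options)
```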
> Nothing which you have written appears to show that it’s impossible or even unlikely that people try to get things they want (which sometimes include pleasure, and which sometimes include saving the world), and that successful planning just feels good.
I’m not trying to show that. I agree that people try to get things they want, as long as by “things they want” we mean “things that they are tempted to go for because the thought of going for those things is so pleasurable”.
> (something about X) --> (I like the thought of X) --> (I take action X), where (I like the thought of X) would seem to be an unnecessary step where the same result would be obtained by eliminating it,
Why would you want to eliminate the pleasure involved in decision processes? Don’t you feel pleasure has intrinsic value? If you eliminate pleasure from decision processes, why not eliminate it altogether from life, for the same reasons that made you consider pleasure “unnecessary” in decision processes?
This, I think, is one thing that makes many people so reluctant to accept the idea of human-level and super-human AI: they notice that many advocates of the AI revolution seem to want to ignore the subjective part of being human and seem interested merely in how to give machines the objective abilities of humans (i.e. abilities to manipulate the outer environment rather than “intangibles” like love and happiness). This seems as backward as spending your whole life earning millions of dollars, having no fun doing it, and never doing anything fun or good with the money. For most people, at first at least, the purpose of earning money is to increase pleasure. The same should be true of the purpose of building human-level or super-human AI. If you start to think that step two (the pleasure) is an unnecessary part of our decision processes and can be omitted, you are thinking like the money-hunter who has lost track of why money is important; by thinking that pleasure may as well be omitted from decision processes, you throw away the whole reason for having any decision processes at all.
It’s the second step of your three steps above, the step which is always “I like the thought of...”, i.e. our striving to maximize pleasure, that determines our values and choices about whatever there is in the first step (“X” or “something about X”, the thing we happen to like the thought of). So, to the extent that the first step (“something about X”) is incompatible with pleasure-maximizing (the decisive second step), what happens in step two seems to be a misinterpretation of what is there in step one. It seems reasonable to get rid of any misinterpretation. For example: fast food tastes good and produces short-term pleasure, but that pleasure is a misinterpretation in that it makes our organism take fast food for something more nutritious and long-term good for us than it actually is. We should go for pleasure, but not necessarily by eating fast food. We should let ourselves be motivated by the phenomenon in “step two” (“I like the thought of...”), but we should be careful about which “step ones” (“X” or “something about X”) we let “step two” lead us to decisions about. The pleasure derived from eating fast food is, in and of itself, intrinsically good (all other things being equal), but its source, fast food, is not. Step two is always a good thing as long as step one is a good thing, but step one is sometimes not a good thing even when step two, in and of itself, is a good thing. Whether the goal is to get to step three or just to enjoy the happiness in step two, step one is dispensable and replaceable, whereas step two is always necessary. So, it seems reasonable to found all ethics exclusively on what happens in step two.
> Or you can suggest that people are just mistaken about how pleasurable the results will be of any action they take that doesn’t maximise pleasure.
Even if they are not too mistaken about that, they may still be shortsighted enough that, when trying to choose between decision A and decision B, they’ll prefer the brief but immediate pleasure of making decision A (regardless of its expected later consequences) to the much larger amount of pleasure that they know would eventually follow after the less immediately pleasurable decision B. Many of us are this shortsighted. Our reward mechanism needs fixing.
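A hedged arithmetic illustration of that kind of shortsightedness (the numbers and the hyperbolic-discounting form are assumptions made up for the example, not anything claimed in the thread): with a steep enough discount, a small pleasure available now outweighs a much larger pleasure the chooser knows will arrive later.

```python
# Invented numbers, for illustration only.

def discounted_value(reward, delay, k=1.0):
    """Hyperbolic discounting: subjective value = reward / (1 + k * delay)."""
    return reward / (1 + k * delay)

value_a = discounted_value(reward=5, delay=0)     # decision A: small, immediate -> 5.0
value_b = discounted_value(reward=100, delay=30)  # decision B: large, delayed  -> ~3.2

print(value_a > value_b)  # True: the shortsighted chooser takes A, knowing B pays more.
```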
You’re missing the point, or perhaps I’m missing your point. A paperclip maximiser implemented by having the program experience subjective pleasure when considering an action that results in lots of paperclips, and which decides by taking the action with the highest associated subjective pleasure, is still a paperclip maximiser.
So, I think you’re confusing levels. On the decision-making level, you can hypothesise that decisions are made by attaching a “pleasure” feeling to each option and taking the one with the highest pleasure. Sure, fine. But this doesn’t mean it’s wrong for an option which predictably results in less physical pleasure later to feel less pleasurable during decision making. The decision system could have been implemented equally well by associating options with colors and picking the brightest or something, without that meaning the agent is irrational to take an action that physically darkens the environment. This is just a way of implementing the algorithm, which is not about the brightness of the environment or the light levels observed by the agent.
This is what I mean by “(I like the thought of X) would seem to be an unnecessary step”. The implementation is not particularly relevant to the values. Noticing that pleasure is there at a step in the decision process doesn’t tell you what should feel pleasurable and what shouldn’t, it just tells you a bit about the mechanisms.
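A small sketch of that level distinction (hypothetical Python with invented outcome numbers; it is not anyone’s actual agent): the maximiser’s internal comparison signal can be labelled “pleasure” or “brightness” without changing which action it takes, so the label tells you about the mechanism, not about the values.

```python
# Hypothetical paperclip maximiser; the name of its internal signal does no work.

def expected_paperclips(action):
    outcomes = {"build factory": 1_000_000, "brighten the room": 3, "darken the room": 2}
    return outcomes[action]

def decide(actions, signal_name="pleasure"):
    # signal_name is deliberately unused: renaming the signal changes nothing.
    # Attach a "signal" to each option and take the option with the highest signal.
    signal = {a: expected_paperclips(a) for a in actions}
    return max(signal, key=signal.get)

actions = ["build factory", "brighten the room", "darken the room"]
assert decide(actions, "pleasure") == decide(actions, "brightness") == "build factory"
```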
Of course I believe that pleasure has intrinsic value. We value fun; pleasure can be fun. But I can’t believe pleasure is the only thing with intrinsic value. We don’t use Nozick’s experience machine, we don’t choose to be turned into orgasmium, and we are willing to be hurt for higher benefits. I don’t think any of those things are mistakes.