Emotion dressed up as pseudo-intellectualism. How do I know that? Because the answer is so supremely obvious.
Others: happiness as a terminal value.
You: (apparently) maximizing potential as a terminal value.
… What exactly is so baffling? People want to be happy for its own sake. You can say "cuz it feels good" or whatever, but in the end you're going to be faced with the fact that it's just a terminal value and you're going to have a hard time "explaining" it.
P.S. The specific use of the word contemptible is what tipped me off to the fact that you’re not emotionally prepared to ask a good question here.
… Is there maybe some other manner in which I could explain that I was revealing my emotional biases in order to get them out of the way of the dialogue that would be more effective for you?
Why? The position is alien to me.
Handwaving is non-explanation. You might as well say “because magic”. I was opening myself up to the dialogue of perhaps exploring mindspace alien to me—and giving others the opportunity to do so in kind.
:-|
That’s all I got. Maybe someone better suited to the dialogue will come along. Thanks for your time I guess?
The only useful answer here seems to be giving a causal explanation for why humans have the preferences that they have. Something to do with the folks who have different preferences not living long enough to get laid a lot. There isn’t any reasoning that people need to execute based on more fundamental principles of what to desire. This is just what they happen to do.
It is rather common to your fellow humans.
This is nonsensical. Do we always conform to patterns merely because they’re the patterns we always adhered to, unquestioningly? The question is being asked now.
If there is no decent answer—then what justifies this article?
(Apologies for my earlier comment. I’ve been in a negative emotional funk lately, and at least I am more self-aware because of that comment and its response. Anyway---)
I’m a little confused about why you think it’s hand-waving or question-begging to call happiness a ‘terminal value’. Here’s why.
Your utility function is just a representation of what you want to increase. So if happiness is what you want to increase, it ought to be represented (heavily) in your utility function, whatever that is. As far as I know, "utility" is a (philosophical) explication: a more precise and formalized way of talking about something. This is a concept I learned from some metalogic: logic is an "explication" that hopes to improve on intuitive reasoning based on intuitively valid inferences. Logical tautologies like P-->P aren't themselves considered to be true; they're only considered to represent truths, like 'if snow is white then snow is white'. All of which is supposed to remind you that just because you dress something up in formalism doesn't mean you don't have to refer back to what you're trying to represent in the first place, which may be mundane in the sense of 'everyday'.
So of course you can call happiness a terminal value. Your values aren’t constructed by your utility function; they’re described by it. In my opinion you’re taking things backward. Or, if maximal utility doesn’t include increased happiness, you’re doing it wrong, assuming happiness is what you value.
[Note: was edited.]
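To make the "described, not constructed" point concrete, here is a minimal sketch. Everything in it (the value names, the weights, the scoring convention) is hypothetical and purely illustrative; nothing here is claimed by anyone in the thread. It just shows a utility function written down as a weighted aggregate of things an agent already cares about, with happiness weighted heavily.

```python
# Minimal sketch: a utility function as a *description* of what an agent
# already values, not a construction of those values. Names, weights, and
# the [0, 1] scoring convention are all hypothetical, for illustration only.

values = {
    "happiness": 0.6,    # heavily represented, per the comment above
    "achievement": 0.3,
    "curiosity": 0.1,
}

def utility(state):
    """Aggregate utility: a weighted sum of how well a state of the world
    scores on each thing the agent cares about (scores assumed in [0, 1])."""
    return sum(weight * state.get(name, 0.0) for name, weight in values.items())

# A state scoring high on happiness contributes a lot of utility, simply
# because happiness is weighted heavily in the description of this agent.
print(utility({"happiness": 0.9, "achievement": 0.2, "curiosity": 0.5}))  # 0.65
```

Note that, on this toy picture, more aggregate utility is always better by construction, while more of any single component need not be; that distinction is what the next comments argue over.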
I can call my cat a Harrier Jet, too. Doesn’t mean she’s going to start launching missiles from under the wings she doesn’t have.
You’re confusing abitrary with intrinsic. To qualify as a terminal value a thing must be self-evidently valuable; that is, it must be inherently ‘good’. More of it must always be ‘better’ than less of it. I know this to be false of happiness/pleasure; I know that there is such a thing as “too happy”.
I know of no such thing as “too useful”.
I think you’re still missing the point. You can call happiness a terminal value, because you decide what those are.
I think you are confused here; what do you mean by inherently 'good'? Why must more of X always be better than less of it? Does this resolve it?: yes, happiness =/= utility. I never claimed it was and I don't think anyone did. But among all the things that, when aggregated, make up your 'utility function', happiness (for most people) is a pretty big thing. And it is aggregate utility, and only aggregate utility, that would always be better with more.
Then, I suppose happiness isn't a terminal value. I think I was wrong. The only "terminal" value would be total utility… but happiness is so fundamental and "self-evidently valuable" to most people that it seems useful to call such a thing a "terminal value" if indeed you're going to call anything that.
P.S. I think you think you’re saying something meaningful when you say “useful”. I don’t think you are. I think you’re just expressing the idea of an aggregate utility, the sum. If not, what do you mean?
EDIT: This threw me for a loop: “I know this to be false of happiness/pleasure; I know that there is such a thing as “too happy”.” Obviously, if happiness is a terminal value, you’re right you can’t be too happy. I think I’m either confused or tired or both. And if it so happens that in reality people don’t desire, upon reflection, to maximize happiness because there’s a point at which it’s bad, then I understand you; such a person would be inconsistent to call happiness a terminal value in such a case.
Why do you think you can have too much happiness? (Think of some situation.) Presumably there's some trade-off.
Now consider someone else in that same situation. Would someone else also think they have too much happiness in that situation? Because if that's not the case, you just have different terminal values. Ultimately, someone may judge their happiness to be most important to them. You can say, 'no, they should care more about (whatever trade-off you think there is, maybe decreased ambition)'; they can simply respond, 'no, I disagree, I don't care more about that, I care more about happiness', because that's what it means for a value to be "terminal": it means for that value to be the ultimate basis of judgment.
To make it clearer: you seem to think that different people’s terminal values must be the same. Why do you think this?
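One toy way to picture the disagreement above (a sketch only; the numbers and the "ambition" trade-off are invented, not anyone's actual psychology): two agents evaluate the same range of happiness levels, and only the one whose utility trades happiness against something else finds a point past which more happiness is worse overall.

```python
# Toy sketch of "too happy". The specific curves and weights are invented
# purely for illustration; they are not claims about real preferences.

def utility_pure_hedonist(happiness):
    # For this agent happiness is terminal: more is always at least as good.
    return happiness

def utility_with_tradeoff(happiness):
    # Hypothetical agent for whom chasing ever more happiness erodes ambition,
    # so total utility peaks (here around happiness = 0.5) and then falls.
    ambition = 1.0 - happiness ** 2
    return 0.5 * happiness + 0.5 * ambition

for h in (0.2, 0.5, 0.8, 1.0):
    print(h, utility_pure_hedonist(h), round(utility_with_tradeoff(h), 3))
# utility_pure_hedonist only ever rises; utility_with_tradeoff rises to 0.625
# at happiness = 0.5 and then drops.
```

Whether either function is the "right" one is exactly the point of contention: on the view in the comment above, each agent's own function is simply what "terminal" means for that agent.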
Which is funny, because I am increasingly coming to the same conclusion with regard to your integration of my statements: you respond to me with essentially the same "talking points", in a manner that shows you haven't considered that merely repeating yourself isn't going to make your old point any more relevant than it was the first time I offered a rebuttal with new informational value.
At some point, I have learned that in such dialogues the only productive thing left for me to do is to simply drop the microphone and walk off the stage. I don’t think this dialogue has quite reached that stage just yet. :)
A rose by any other name. My pet; a Harrier Jet.
Declaring a thing to be another thing does not make it that thing. Brute fiat is insufficient to normative evaluations of intrinsic worth.
You somehow read the exact opposite of my meaning from my statement.
Also—if you accept the notion of wireheading as an existential failure, then you acknowledge that happiness is not an intrinsic value.
Good point. And I think I’ll have to exit too, because I have the feeling that I’m doing something wrong, and I’m frankly too tired to figure that out right now.
Just one question. “Declaring a thing to be another thing does not make it that thing. Brute fiat is insufficient to normative evaluations of intrinsic worth.” Among other things I may be confused about, I’m (still) confused what intrinsic worth might be. Since I don’t (currently) think ‘intrinsic worth’ is a thing, it seems to me that it is just the nature of a terminal value that it’s something you choose, so I don’t see the violation.
EDIT: Edited statement edited out.
intrinsic values are values that a thing has merely by being what it is.
My question from the outset was "what's the use of happiness?" Responding to that with "its own sake" doesn't answer my question. To say that 'being useful is useful for its own sake' is to make a statement of intrinsic utility.
We—or rather I—framed this question in terms of utility from the outset.
Now, hedonism is the default consensus view here on LessWrong.com. (Obviously I am a dissenter. My personal history of being clinically anhedonic maaaaay have something to do with this.) The argument is made by hedonistic utilitarians that pleasure is the "measure of utility". That is: utility is pleasure; pleasure is utility.
But of course it’s trivially easy to demonstrate the paucity of this reasoning—we need only look to the wireheading existential failure mode, and other variations of it, to acknowledge that pleasure for pleasure’s own sake is not an intrinsic value.
Without having intrinsic value, a value cannot be a terminal value; the terms are synonymous.
The position you are here aligning with is called “intrinsic nihilism”. It claims that there are zero terminal/intrinsic goods. Now—there’s nothing wrong with that, from the outside view.
But it does leave us at something of an impasse; how could you then justify seeking happiness? If there are no intrinsic goods then your goals are entirely arbitrary. Which means that you must have reasoning for continuing to seek them out—otherwise you would not continue to retain those arbitrary goals.
Absolutely, our evolutionary history plays into this. But then, our evolutionary history includes rapine and slaughter. And we curtail that in the interest of creating a better society. So why does 'happiness' get a free pass from this inspection?
What? Really?
(I’m thinking of this and this.)
No, not really.
Or, at least, not obviously.
I can see making an argument that most LW users implicitly adopt a hedonistic model when thinking about stuff-people-value, even if they would explicitly reject such a model. I’m not sure that’s true, but I’m not sure it’s false either; certainly I find myself doing that sometimes when I don’t pay attention. I don’t think that’s sufficient justification to declare hedonism a local consensus, but I suppose one could probably make that argument as well.
Eudaimonic hedonism is still a form of hedonism.
(EDIT: Specifically, it's Epicurean as compared to Cyrenaic.)
That seems entirely wrong. In fact, I think “eudaimonic hedonism” is just a contradiction in terms. Normally eudaimonic well-being is contrasted with hedonistic well-being.
ETA: Maybe you were thinking, "Eudaimonist utilitarianism is still a form of utilitarianism"?
I meant what I said. Eudaimonic hedonism is still a form of hedonism. Eudaimonia is simply redefined happiness.
It is contrasted with “traditional” hedonism in common usage, but the relationship is quite clear. Eudaimonia is not a rejection of traditional hedonism but a modification.
Definitely just mincing words here, but...
Hedonism and eudaimonia can both be considered types of ‘happiness’ - thus we talk about “hedonic well-being” and “eudaimonic well-being”, and we can construe both as ways of talking about ‘happiness’. But it’s a misconstrual of eudaimonia to think it reduces to pleasure, and a misuse of ‘hedonism’ to refer to goals other than pleasure.
This is simply not true. Eudaimonia is essentially Epicurean hedonism, as contrasted with Cyrenaic.
Looking only at the wiki page, Epicurean moral thought doesn't look like what I remember from reading Aristotle's Ethics. But it's been a while.
I think we’re better to follow Aristotle than Epicurus in defining eudaimonia. It’s at least the primary way the word is used now. Being a good human is just not a sort of pleasure.
I see. Then you do not mean that
pleasure is the "measure of utility". That is: utility is pleasure; pleasure is utility.
is the consensus view here at LW. Since after all, the consensus view here is that wireheading is a bad idea.
Eudaimonic pleasure—happiness—is of a nature that wireheading would not qualify as valid happiness/pleasure. It would be like ‘empty calories’; tasty but unfulfilling.
So no, I do not not mean that ‘pleasure is the “measure of utility”’ is the mainstream consensus view on LessWrong. I do mean that, and I believe it to be so. “Hedons” and “utilons” are used interchangeably here.
So you do not mean that LWers hold that pleasure (by which I mean the standard definition) is the measure of utility, and that these people would wirehead and are therefore wrong.
My answer to this would be that happiness doesn’t necessarily have any value outside of human brains. But that doesn’t matter. For most people, it’s one of those facets of life that is so basic, so integrated into everything, that it’s impossible not to base a lot of decisions on “what makes me happy.” (And variants: what makes me satisfied with myself, what makes me able to be proud of myself...I would consider these metrics to be happiness-based even if they don’t measure just pleasure in the moment.)
You can try to make general unified theories about what should be true, but in the end, what is true is that human brains experience a state called happiness, and it’s a state most people like and want, and that doesn’t change no matter what your theory is.
Thanks for the link. Of course I should have checked that....
I’d like to point out that you find this in the second paragraph: “For an eudaemonist, happiness has intrinsic value”
Given the rest of what you've said, and my attachment to happiness as self-evidently valuable, a broader conception of "happiness" (as in eudaimonia above) may avoid adverse outcomes like wireheading (assuming it is one). As other commenters here have noted, there is no single definition anyway. You might say the broader it becomes, the less useful. Sure, but any measure would probably have to be really broad, like "utility". When I said I don't think 'intrinsic worth' is a thing, it's because I was identifying it with utility, and… I guess I wasn't thinking of (overall) utility as a 'thing' because to me, the concept is really vague and I just think of it as an aggregate. An aggregate of things like happiness that contribute to utility.
I mentioned how if you're going to call anything a terminal value, happiness seems like a good one. Now I don't think so: you seem to be saying that you shouldn't consider (edit: aren't justified in considering) anything a terminal value other than utility itself, which seems reasonable. Is that right?
More to the point:
So why does 'happiness' get a free pass from this inspection?
I'm not sure; it now seems to me it oughtn't to. Maybe another Less Wronger can contribute more, though not me.
I’m just done. (I think I’m being stupid above.) Thanks.
Right now some people prefer happiness. Many of the people who prefer happiness also endorse desiring to be happy, and as such they right now prefer not to self-modify away from desiring happiness.
No justification is required for preferring one's preferences. That's the default. If you keep asking "Why?" enough you are bound to end up at the bottom-level terminal goals from which other instrumental goals may be derived. Agents don't need to justify having terminal goals to you.
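A tiny sketch of that "keep asking why" regress (the goal chain and its names are invented for illustration; no particular agent is being described): each instrumental goal points at the goal it serves, and following "why?" upward always bottoms out at a goal with no parent.

```python
# Hypothetical goal chain for illustration only. Instrumental goals point at
# the goal they serve; a terminal goal points at nothing further.

serves = {
    "earn money": "buy good food",
    "buy good food": "enjoy meals",
    "enjoy meals": "be happy",
    "be happy": None,  # terminal: asking "why?" again yields no further answer
}

def why_chain(goal):
    """Follow 'why?' until a goal with no parent (a terminal goal) is reached."""
    chain = [goal]
    while serves.get(goal) is not None:
        goal = serves[goal]
        chain.append(goal)
    return chain

print(" -> ".join(why_chain("earn money")))
# earn money -> buy good food -> enjoy meals -> be happy
```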
This is handwaving. That is, you use a description to fulfill the role of an explanation.
This is also a description, not an explanation.
… I cannot help but find this to be a silly assertion. “That’s the default”? That’s just… not true.
Absolutely. And those terminal goals are those which are intrinsic in nature.
If you are claiming that happiness is an intrinsic good—please, explain why. Because I for one just don’t see it.
It seems to me as if you view terminal goals as universal, not mind-specific. Is this correct, or have I misunderstood?
The point, as I understand it, is that some humans seem to have happiness as a terminal goal. If you truly do not share this goal, then there is nothing left to explain. Value is in the mind, not inherent in the object it is evaluating. If one person values a thing for its own sake but another does not, this is a fact about their minds, not a disagreement about the properties of the thing.
Was this helpful?