“Happiness” is an arbitrary choice for a utility function when the universe has no intrinsic purpose.
It’s not arbitrary. Evolutionarily speaking, the species that persist are the ones whose utility functions lead them to thrive and reproduce. Happiness is a utility function that covers a lot of basic activities like eating, health, bonding with other people, morality, and self-improvement, all of which directly impact our individual and collective fitness as a species. That’s why most people agree on it and not something else.
There are two salient senses in which something can be non-arbitrary: something you prefer for a clear moral reason, and something understood as resulting from a clear physical cause. You are elaborating the sense that is irrelevant in this context: that happiness as a phenomenon appeared because of evolutionary factors (although the explanatory power of this argument is debatable). But this is not at all a moral argument for preferring happiness, which is the context of Rain’s confusion.
Since my simple and straightforward question regarding this statement was voted down, I’m assuming that people believe me to be some kind of troll or otherwise flippant in the current discussion.
I assure you that nothing is further from the truth, and that I want to know the answers to my questions, since I consider them very important. Rather than engagement, I find only frustration.
For a thread on strategies to thwart nihilism, “downvote it” does not seem like an optimal action.
I agree that there’s a semantic distinction between those two senses, but I’d point out that when you’re talking about humans, there’s not much practical distinction. It’s not as if someone comes up to us at the age of 18 and says “OK, you’re an adult, so if you don’t find happiness to be moral, you may now modify your preferences at will to match your morality.” That’s really hard to do! So I don’t think it’s irrelevant.
You fully understand morality?
Edit: I apologize if this seems inflammatory. I did not mean it that way.
From another comment:
I assure you that nothing is further from the truth, and that I want to know the answers to my questions, since I consider them very important. Rather than engagement, I find only frustration.
On with the question:
You fully understand morality?
The answer is obviously, “No.” There!
I understand your comment as meaning to state, “Vladimir, keep in mind that you don’t understand morality enough to claim that.” But you dressed that up as a blend of the fallacy of gray, offense through implying that I boast more knowledge than I actually possess, and a rhetorical question.
Going with the charitable interpretation, please be more specific: “you don’t understand morality enough to claim that” doesn’t point out the specific problem. An even more charitable interpretation would be, “Your argument doesn’t convince me.” I’ll accept that, but I don’t want to work on a new one.
I admit that I am confused about morality. I posit that others are also confused, and that claiming my statements are confused is in no way helpful or meaningful when the entire topic is left in such a sorry state. Specifically, I wanted to know if you were, like me, also confused, which you’ve said that you are, though by your reference to the fallacy of gray, you appear to be claiming you are less confused than me. I apologize for my original phrasing, and seek further clarification.
I’d very much like to find someone who isn’t confused (or even less confused) and who would be willing to explain morality, since I very much want to understand the topic. If you are such a person, please do so.
Sorry; pointing out an error in your conduct was easy, but “explaining morality” would probably be very hard, and I’m not willing to make that commitment.
Well, I am totally confused. Not so much about morality as about why this subthread, since your first comment, has turned into such a train wreck. I have read through it several times looking for the cause, and I now have some hypotheses; allow me to present them.
Things started going wrong with your first comment:
“Happiness” is an arbitrary choice for a utility function when the universe has no intrinsic purpose.
I think what you meant was “Happiness is an arbitrary choice for a utility function when the chooser believes he has no intrinsic purpose.” See the difference? Mentioning a hypothetical intrinsic purpose for the universe seems to drag in the notion of a deity. And in any case, it is the hypothetical purpose of the agent which is relevant here.
I’m guessing that this (accidental?) reference to a deity is what put RobinZ and Nesov on edge. The next collision began with cata suggesting that evolutionary psychology privileges happiness as a choice of utility function for an evolved agent. This strikes me as a reasonable contribution to the discussion, and I thought it was offered in a non-inflammatory manner. Reasonable, but not quite relevant, as Nesov then pointed out. But then he picked horrible language to do this. Instead of writing that cata’s evopsych justification of happiness is not a good argument in the context of your question or your comment, he wrote “the context of Rain’s confusion.”
I am not sure whether he intended that as an insult. I also don’t think you were unreasonable in interpreting it as one. But here you made your second mistake—one I’m pretty sure you already understand. Rather than “You fully understand morality?”, it might have been better and more direct to ask “Why do you suggest that I am confused?”.
So much for post-mortem. Do I have anything useful to say regarding nihilism, existentialism, happiness, and morality? Only this. I think that morality arises from self-interest, once an agent takes a long term view on self-interest and takes into account the opinions of other agents and how one’s own reputation contributes to one’s self-interest in the long term.
It is all a question of game theory—especially bargaining and coalition formation. And as such, the interests of others are important—you can further your own interests by furthering the interests of your coalition partners. What are those interests? Well, you can ask them, but if you don’t yet know them well enough to ask, you can make a pretty good guess that their utility functions will be pretty close to the “human nature standard”—the utility function which natural selection “tried” to install in each and every one of us. Evopsych-generated “happiness” is relevant at least to that extent. You can and should apply it as a Bayesian prior in generating your best guess about other people’s utility functions.
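To make the “Bayesian prior” point concrete, here is a minimal sketch in Python. Everything in it (the candidate preference profiles, the numbers, the names) is invented for illustration rather than taken from the discussion; it just shows how a default “human nature standard” profile can serve as a strong prior over a stranger’s utility function, to be updated as you observe their choices.

```python
# Candidate hypotheses about what a given person values, expressed as the
# probability that they pick each option when all three are on offer.
# The profiles and numbers are made up for illustration.
CANDIDATE_PROFILES = {
    "standard_human": {"good_meal": 0.5, "risky_thrill": 0.2, "quiet_solitude": 0.3},
    "thrill_seeker":  {"good_meal": 0.2, "risky_thrill": 0.6, "quiet_solitude": 0.2},
    "ascetic":        {"good_meal": 0.1, "risky_thrill": 0.1, "quiet_solitude": 0.8},
}

# Prior: absent other evidence, bet heavily on the evolution-installed default.
PRIOR = {"standard_human": 0.8, "thrill_seeker": 0.1, "ascetic": 0.1}

def posterior_after(observed_choices, prior):
    """Apply Bayes' rule over the candidate profiles given observed choices."""
    unnormalized = {}
    for hypothesis, p in prior.items():
        likelihood = 1.0
        for choice in observed_choices:
            likelihood *= CANDIDATE_PROFILES[hypothesis][choice]
        unnormalized[hypothesis] = p * likelihood
    total = sum(unnormalized.values())
    return {h: v / total for h, v in unnormalized.items()}

# With no observations the guess is just the prior; a few observed choices
# are enough to move the estimate away from the default.
print(posterior_after([], PRIOR))
print(posterior_after(["risky_thrill", "risky_thrill"], PRIOR))
```

A few lines of evidence can overturn the default, but until you have them, “human nature standard” remains the best guess, which is the sense in which evopsych-generated happiness stays relevant.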
But what about your own utility function? Is there any particular reason for seeking conventional happiness for yourself? Well, the discipline of theoretical economics is fairly scrupulous about leaving that completely up to you. You don’t have to have children if you don’t want. You don’t have to eat nutritiously. You don’t have to show concern for your personal safety. You can be sinner or saint. Selfish or altruistic. It is up to you.
But there is one argument in favor of not being too creative, which is that everyone else is going to assume that you are interested in conventional happiness until you make it very explicit that you are not interested in that stuff. So, when you do someone a good turn by giving them what they want, they will probably reciprocate by giving you something that you don’t really want, because they don’t know any better.
I think that morality arises from self-interest, once an agent takes a long term view on self-interest and takes into account the opinions of other agents and how one’s own reputation contributes to one’s self-interest in the long term.
Based on this, I will revise my previous estimate of the path morality is taking: it seems more probable that self-interest and future prediction would be the drivers. Agents who approximately implement my utility function would then receive more empathy (cooperation, consideration) as a second-order effect.
I’m guessing that this (accidental?) reference to a deity is what put RobinZ and Nesov on edge.
Approximately—had Rain said “Happiness is an arbitrary choice for a utility function when the chooser believes he has no intrinsic purpose”, my answer would probably have been more along the lines of cata’s.
Yes. But why survive?
If the only justification for wanting to survive is that that’s what most people want, and that you personally want it, and want to have fun, and be happy, then I don’t understand why you can’t also let people who do not want those things do what they want, even if that’s [unthinkable].
What if “pathology” (nihilism, depression) is an alteration of terminal values away from human norms?
Empirically speaking, nihilism and depression are usually temporary conditions; given time, or if conditions change, most people will revert to having more normal human values. So if you want to help someone else maximize utility over time, it’s usually reasonable to help prevent them from making decisions in that state which they will find extremely regrettable if and when they are no longer nihilistic and depressed.
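A toy expected-utility calculation may make the “maximize utility over time” reasoning concrete. This is not from the original comment; the time spans and per-year utilities are invented, and the point is only the shape of the comparison: an irreversible choice made during a temporary low-value episode forfeits all the value of the post-recovery years.

```python
# All numbers are assumptions chosen for illustration.
years_remaining = 50        # years of life left
episode_years = 2           # assumed length of the nihilistic/depressed phase

value_during_episode = 0.0  # per-year utility while nothing seems worth anything
value_after_recovery = 1.0  # per-year utility once ordinary valuation returns

# Option A: make the irreversible, life-foreclosing decision during the episode.
# It locks in the episode's valuation for all remaining years.
utility_irreversible = years_remaining * value_during_episode

# Option B: be prevented from acting until the episode passes.
utility_wait = (episode_years * value_during_episode
                + (years_remaining - episode_years) * value_after_recovery)

print(utility_irreversible, utility_wait)  # 0.0 vs 48.0: waiting dominates
```

The asymmetry only holds if the episode really is temporary, which is exactly the empirical claim being leaned on above.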
I hope you see the parallel between this and wireheading: each involves altering someone’s terminal values to achieve greater utility over the allotted time span. The major difference is that one is labeled normal and the other abnormal.
But one can go back to being nihilistic if one chooses to; that does not seem to be the case for wireheading.
It seems like less of a choice than one might think. I’m starting to believe terminal values can have natural or provoked drift. Or perhaps they’re conflicting and incompatible, gaining and losing strength over time. Or both.
“Arbitrary” is a somewhat meaningless adjective to invoke regarding an entity with no intrinsic goals.
I never did like “Arbitrary”. Or this one, either. Really, the whole meta-ethics sequence is pretty useless [to me].
This may not provide much satisfaction to someone inquiring into morals. But then someone inquiring into morals may well do better to just think moral thoughts, rather than thinking about metaethics or reductionism.
Yes. Luckily, we’re not in the business of choosing our own utility functions, but of implementing and analyzing them.
If the utility function is self-referential at any point, then implementing it will necessarily involve choosing (portions of) it.
Tentatively disagree. With the ‘necessarily’ part. A broad class of self references will be such that there is only one unique solution to the utility function that fits. In such cases the process would be one of mathematical analysis or of calculation. It isn’t out of the question that the letter ‘e’ would appear in a written transcription of some of them.
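For what it’s worth, here is a minimal sketch (mine, not wedrifid’s) of a self-referential utility definition with exactly one solution, so that implementing it is calculation rather than choice. The definition U = reward + discount * U refers to U on both sides, yet for any discount below 1 it pins down a unique value.

```python
def solve_self_referential_utility(reward=1.0, discount=0.9, iterations=1000):
    """Fixed-point iteration on U = reward + discount * U.

    For discount < 1 this map is a contraction, so iterating it from any
    starting value converges to the single solution; no choosing involved.
    """
    u = 0.0
    for _ in range(iterations):
        u = reward + discount * u
    return u

print(solve_self_referential_utility())  # converges to ~10.0
print(1.0 / (1.0 - 0.9))                 # the unique analytic solution: 10.0
```

Of course, nothing guarantees that every self-referential clause in a human-scale utility function is this well-behaved; the sketch only shows that self-reference and a unique answer are compatible.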