I admit that I am confused about morality. I posit that others are also confused, and that calling my statements confused is neither helpful nor meaningful when the entire topic is left in such a sorry state. Specifically, I wanted to know whether you were, like me, also confused, which you’ve said you are, though by your reference to the fallacy of gray you appear to be claiming you are less confused than I am. I apologize for my original phrasing, and I seek further clarification.
I’d very much like to find someone who isn’t confused (or even less confused) and who would be willing to explain morality, since I very much want to understand the topic. If you are such a person, please do so.
Well, I am totally confused. Not so much about morality as about why this subthread has turned into such a train wreck ever since your first comment. I have read through it several times looking for the cause, and I have some hypotheses now; allow me to present them.
Things started going wrong with your first comment:
“Happiness” is an arbitrary choice for a utility function when the universe has no intrinsic purpose.
I think what you meant was “Happiness is an arbitrary choice for a utility function when the chooser believes he has no intrinsic purpose.” See the difference? Mentioning a hypothetical intrinsic purpose for the universe seems to drag in the notion of a deity. And in any case, it is the hypothetical purpose of the agent which is relevant here.
I’m guessing that this (accidental?) reference to a deity is what put RobinZ and Nesov on edge. The next collision began with cata suggesting that evolutionary psychology privileges happiness as a choice of utility function for an evolved agent. This strikes me as a reasonable contribution to the discussion, and I thought it was offered in a non-inflammatory manner. Reasonable, but not quite relevant, as Nesov then pointed out. Unfortunately, he picked horrible language to do so: instead of writing that cata’s evopsych justification of happiness is not a good argument in the context of your question or your comment, he wrote “the context of Rain’s confusion.”
I am not sure whether he intended that as an insult. I also don’t think you were unreasonable in interpreting it as one. But here you made your second mistake—one I’m pretty sure you already understand. Rather than “You fully understand morality?”, it might have been better and more direct to ask “Why do you suggest that I am confused?”.
So much for the post-mortem. Do I have anything useful to say regarding nihilism, existentialism, happiness, and morality? Only this: I think that morality arises from self-interest, once an agent takes a long-term view of self-interest and takes into account the opinions of other agents and how one’s own reputation contributes to one’s self-interest in the long term.
It is all a question of game theory—especially bargaining and coalition formation. And as such, the interests of others are important—you can further your own interests by furthering the interests of your coalition partners. What are those interests? Well, you can ask them, but if you don’t yet know them well enough to ask, you can make a pretty good guess that their utility functions will be pretty close to the “human nature standard”—the utility function which natural selection “tried” to install in each and every one of us. Evopsych-generated “happiness” is relevant at least to that extent. You can and should apply it as a Bayesian prior in generating your best guess about other people’s utility functions.
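To make that last point concrete, here is a minimal sketch of the Bayesian-prior idea, purely my own illustration rather than anything from the thread: a stranger’s preferences are modeled as a distribution over a few invented categories of goods, and the “human nature standard” supplies the prior pseudo-counts you start from.

```python
# A sketch only: the categories, pseudo-counts, and observations below are
# invented for illustration, not taken from anyone in this thread.
from collections import Counter

# Hypothetical "human nature standard": pseudo-counts saying how strongly a
# typical human is assumed to value each broad category of goods.
STANDARD_PRIOR = {"comfort": 6.0, "status": 3.0, "novelty": 2.0, "solitude": 1.0}

def guess_preferences(observed_choices):
    """Dirichlet-multinomial update: add observed choice counts to the prior
    pseudo-counts, then normalise into a best-guess preference distribution."""
    counts = Counter(observed_choices)
    weights = {k: STANDARD_PRIOR[k] + counts.get(k, 0) for k in STANDARD_PRIOR}
    total = sum(weights.values())
    return {k: round(w / total, 3) for k, w in weights.items()}

# With no observations, the best guess is just the normalised standard profile.
print(guess_preferences([]))
# After watching someone repeatedly choose solitude, the estimate shifts toward
# them, though the prior still pulls it back toward the standard profile.
print(guess_preferences(["solitude"] * 5 + ["novelty"]))
```

The more behavior you observe, the more the data outweighs the prior; with little or no evidence, the estimate stays close to the standard human profile.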
But what about your own utility function? Is there any particular reason for seeking conventional happiness for yourself? Well, the discipline of theoretical economics is fairly scrupulous about leaving that completely up to you. You don’t have to have children if you don’t want to. You don’t have to eat nutritiously. You don’t have to show concern for your personal safety. You can be sinner or saint, selfish or altruistic. It is up to you.
But there is one argument in favor of not being too creative: everyone else is going to assume that you are interested in conventional happiness until you make it very explicit that you are not interested in that stuff. So, when you do someone a good turn by giving them what they want, they will probably reciprocate by giving you something that you don’t really want, because they don’t know any better.
I think that morality arises from self-interest, once an agent takes a long-term view of self-interest and takes into account the opinions of other agents and how one’s own reputation contributes to one’s self-interest in the long term.
Based on this, I will revise my previous estimate of the path morality is taking: it seems more probable that self-interest and future prediction would be the drivers. Agents who approximately implement my utility function would then receive more empathy (cooperation, consideration) as a second-order effect.
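A minimal sketch of that revised estimate, using standard textbook prisoner’s dilemma payoffs of my own choosing rather than anything from the thread: a purely selfish agent that predicts future rounds with a reputation-tracking partner keeps cooperating once it weighs the future heavily enough.

```python
# A sketch only: payoffs are the conventional textbook values (temptation,
# reward for mutual cooperation, punishment for mutual defection).
T, R, P = 5.0, 3.0, 1.0

def discounted(payoff_at, delta, horizon=200):
    """Approximate discounted sum of a payoff stream over many future rounds."""
    return sum(payoff_at(t) * delta**t for t in range(horizon))

def cooperate_forever(delta):
    # Keep cooperating with a partner who keeps cooperating back.
    return discounted(lambda t: R, delta)

def defect_once_then_punished(delta):
    # Grab the temptation payoff now, then face mutual punishment from a
    # partner whose future cooperation is conditioned on your reputation.
    return discounted(lambda t: T if t == 0 else P, delta)

for delta in (0.3, 0.7):
    best = ("cooperate" if cooperate_forever(delta) > defect_once_then_punished(delta)
            else "defect")
    print(f"discount factor {delta}: the selfish best response is to {best}")
# Analytically, cooperation wins whenever delta >= (T - R) / (T - P) = 0.5,
# i.e. whenever the agent cares enough about future interactions.
```

Cooperation is nowhere assumed as a goal here; it falls out of payoff maximization plus the prediction that defection today costs cooperation tomorrow, which matches the “second-order effect” framing above.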
I’m guessing that this (accidental?) reference to a deity is what put RobinZ and Nesov on edge.
Approximately—had Rain said “Happiness is an arbitrary choice for a utility function when the chooser believes he has no intrinsic purpose”, my answer would probably have been more along the lines of cata’s.
I’d very much like to find someone who isn’t confused (or even less confused) and who would be willing to explain morality, since I very much want to understand the topic. If you are such a person, please do so.
Sorry, but pointing out an error in your conduct was easy; “explaining morality” would probably be very hard, and I’m not willing to make that commitment.