This line of discussion says nothing on the object level. The words “altruistic” and “selfish” in this conversation have ceased to mean anything that anyone could use to meaningfully alter his or her real world behavior.
Altruistic behavior is usually understood as behavior motivated by compassion or caring for others, so I think you are wrong. If anything, you are the one arguing about definitions in order to trivialize my point.
I rejected the utility function, and this argument along with it, because I judged them useless.
What would you recommend people do, in general? I think this is a question that is actually valuable. At the least I would benefit from considering other people’s answers to this question.
I don’t understand how your reply is responsive.
I recommend that people act in accordance with their own (selfish) values, because no other values are situated so as to motivate them. Motivation and values are brute facts, chemical processes that happen in individual brains, but that is exactly what gives them an influence beyond that of mere reason, which could never produce obligations. My system also offers a solution to the paralysis brought on by infinitarian ethics: what matters is not the aggregate amount of well-being, only mine.
Because I believe this, recognizing that altruism is a subset of egoism is important to my system of ethics. I still believe in altruistic behavior, but only the kind motivated by empathy, as opposed to some abstract sense of duty or fear of God's wrath.
Does my position make more sense now?
Do you disagree with any matter of fact that I have asserted or implied? When you frame the discussion in terms of "logical necessity" and so on, you are just arguing about words. What do you predict about the world that differs from what I predict?
I think it is important to recognize the relationships between our thought processes, because a well-organized mind lets us change our minds more efficiently, which improves the quality of our predictions. So long as you recognize that all moral behavior is motivated by internal experiences and values, I don't really care what you call it.