It’s not an argument about definitions; it’s an argument about logical priority. Altruistic impulses are logically a subset of selfish ones, because all impulses are experienced only internally and are therefore selfish. (I’m using “impulse” as roughly synonymous with “an action taken because of one’s values”.) Altruism is only relevant to your morality insofar as you value altruistic actions, so altruism can only be justified on somewhat selfish grounds. (To clarify, it can be justified on other grounds, but I don’t think those grounds make sense.)
I think defining “selfish” as “anything experienced internally” is a very limiting definition that makes it a pretty useless word. The concept of ‘selfishness’ can only be applied to human behaviour and motivations; physical-world phenomena like storms can’t be selfish or unselfish, since it’s a mind-level concept. Thus, if you pre-define all human behaviour and motivations as selfish, you’re ruling out the possibility of the opposite of selfishness existing at all. Which means you might as well not bother using the word “selfish” at all, since there’s nothing that isn’t selfish.
There’s also the argument from common usage: it doesn’t matter how you define a word in your head, because communication is with other people, who have their own definitions of that word in their heads, and most people’s definitions are likely to match the common usage of the word; how else would they have learned what it means? Most people define “selfishness” such that some impulses are selfish (e.g. Sally taking the last piece of cake because she likes cake) and some are not selfish (Sally giving Jack the last piece of cake, even though she wants it, because Jack hasn’t had any cake yet and she has already had a piece). Obviously both of those reactions are the result of impulses bouncing around between neurons, but since we don’t have introspective access to our neurons firing, it’s meaningful for most people to use “selfish” and “unselfish” as labels.
To comment on the linguistic issue: yes, this particular argument is silly, but I do think it is legitimate to define a word and then later discover that it points at something trivial or nonexistent. For instance, if we discovered that everyone would wirehead rather than actually help other people in every case, then we might say “welp, guess all drives are selfish” or something.
Sally doesn’t give Jack the cake because Jack hasn’t had any; rather, Sally gives Jack the cake because she wants to. That’s why explicitly calling the motivation selfish is useful: it clarifies that obligations are still subjective and rooted in individual values (and that obligations don’t mandate sacrifice, asceticism, or any other similar nonsense). You say it’s obvious that all actions arise from internally motivated states as a result of neurons firing, but it’s not obvious to most people, which is why pointing out that the action stems from Sally’s internal desires is still useful.
Why not just specify to people that motivations or obligations are “subjective and rooted in individual values”? Then you don’t have to bring in the word “selfish”, with all its common-usage connotations.
I want those common-usage connotations brought in because I want to eradicate the taboo around those common-usage connotations, I guess. I think that people are vilified for being selfish in lots of situations where being selfish is a good thing, at least from that person’s perspective. I don’t think that people should ever get mad at defectors in Prisoner’s Dilemmas, for example, and I think that saying that all of morality is selfish is a good way to fix this kind of problem.
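To make the Prisoner’s Dilemma point concrete, here is a minimal sketch in Python (the specific payoff numbers are my assumption; any payoffs ordered temptation > reward > punishment > sucker give the same result) showing that defection maximizes a player’s own payoff no matter what the other player does, so a defector is simply acting on their own values:

```python
# Standard Prisoner's Dilemma payoffs for the row player (higher is better).
# These exact numbers are an assumption; any payoffs with T > R > P > S
# produce the same conclusion.
PAYOFFS = {
    ("C", "C"): 3,  # reward for mutual cooperation (R)
    ("C", "D"): 0,  # sucker's payoff (S)
    ("D", "C"): 5,  # temptation to defect (T)
    ("D", "D"): 1,  # punishment for mutual defection (P)
}

def best_response(opponent_move):
    """Return the move that maximizes the row player's own payoff
    against a fixed opponent move."""
    return max(["C", "D"], key=lambda my_move: PAYOFFS[(my_move, opponent_move)])

# Defecting is the best response whether the opponent cooperates or defects,
# i.e. it is the dominant strategy for a player who counts only their own payoff.
assert best_response("C") == "D"
assert best_response("D") == "D"
```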
This line of discussion says nothing on the object level. The words “altruistic” and “selfish” in this conversation have ceased to mean anything that anyone could use to meaningfully alter his or her real-world behavior.
Altruistic behavior is usually thought of as motivated by compassion or caring for others, so I think you are wrong. If anything, you are the one arguing about definitions in order to trivialize my point.
The reason I rejected the utility function, and the reason I rejected this argument, is that I judged them both useless.
What would you recommend people do, in general? I think this question is actually valuable; at the least, I would benefit from considering other people’s answers to it.
I don’t understand how your reply is responsive.
I recommend that people act in accordance with their (selfish) values, because no other values are situated so as to be motivational. Motivation and values are brute facts, chemical processes that happen in individual brains, but that actually gives them an influence beyond that of mere reason, which could never produce obligations. My system also offers a solution to the paralysis brought on by infinitarian ethics: it’s not the aggregate amount of well-being that matters, only mine.
Because I believe this, recognizing that altruism is a subset of egoism is important for my system of ethics. I still believe in altruistic behavior, but only the kind motivated by empathy, as opposed to some abstract sense of duty or fear of God’s wrath or something.
Does my position make more sense now?
Do you disagree with any matters of fact that I have asserted or implied? When you try to have the kind of discussion you are attempting here, about “logical necessity” and so on, you are just arguing about words. What do you predict about the world that is different from what I predict?
I think it is important to recognize the relationships between thought processes, because having a well-organized mind allows us to change our minds more efficiently, which improves the quality of our predictions. So long as you recognize that all moral behavior is motivated by internal experiences and values, I don’t really care what you call it.