Longer? Probably not. Happier? Possibly, depending on that person’s baseline: we don’t know our own desires well, and acquiring these skills might help, but given the hedonic treadmill effect, it seems unlikely. Achieving more of their interim goals? Possible, if not probable. There are a lot of possible goals aside from living longer and being happier.
I have decided that maximizing the integral of happiness with respect to time is my selfish supergoal and that maximizing the double integral of happiness with respect to time and with respect to number of people is my altruistic supergoal. All other goals are only relevant insofar as they affect the supergoals. I have yet to be convinced this is a bad system, though previous experience suggests I probably will make modifications at some point. I also need to decide what weight to place on the selfish/altruistic components.
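Spelled out, the two supergoals and their combination look roughly like this (just a sketch; h(p, t) is person p’s momentary happiness at time t, h_me(t) is mine, and w, between 0 and 1, is the weight I still need to decide on):

\[
U_{\text{selfish}} = \int h_{\text{me}}(t)\, dt, \qquad
U_{\text{altruistic}} = \iint h(p, t)\, dp\, dt, \qquad
U = w\, U_{\text{selfish}} + (1 - w)\, U_{\text{altruistic}}.
\]

Since people come in discrete units, the integral over p is really a sum, but the double-integral phrasing captures the same idea.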
But despite my finding such an abstract way of characterizing my actions interesting, the weights and the actual function I’m maximizing are just determined by what I actually end up doing. In fact, constructing this abstract system does not seem to convincingly help me further its purported goal, and I therefore cease all serious conversation about it.
I think this is a common problem. That doesn’t mean you have to give up on having your second-order desires agree with your first-order desires. It is possible to use your abstract models to change your day-to-day behaviour, and it’s definitely possible to build a more accurate model of yourself and then use that model to make yourself do the things you endorse yourself doing (i.e. avoiding having to use willpower by making what you want to want to do the “default”).
As for me, I’ve decided that happiness is too elusive a goal: I’m bad at predicting what will make me happier-than-baseline, the process of explicitly pursuing happiness seems to make it harder to achieve, and the hedonic treadmill effect means that even if I succeeded, I would have to keep working at it constantly just to stay in the same place. Instead, I default to a number of proxy measures: I want to be physically fit, so I endorse myself exercising and preferably enjoying exercise; I want to have enough money to satisfy my needs; I want to finish school with good grades; I want to read interesting books; I want to have a social life; I want to be a good friend. Taken all together, these are at least the building blocks of happiness, which happens by itself unless my brain chemistry gets too whacked out.
So the normal chain of events here would just be that I argue those are still all subgoals of increasing happiness and we would go back and forth about that. But this is just arguing by definition, so I won’t continue along that line.
To the extent that I understand what the first paragraph actually says at the level of real-world experience, I have never seen evidence supporting it. The second paragraph seems to say what I intended the second paragraph of my previous comment to mean. So really it doesn’t seem that we disagree about anything important.
Agreed. I find it practical to define my goals as all of those subgoals and not make happiness an explicit node, because it’s easy to evaluate my subgoals and measure how well I’m achieving them. But maybe you find it simpler to have only one mental construct, “happiness”, instead of lots.
I guess I explicitly don’t allow myself to have abstract systems with no measurable components and/or clear practical implications: my concrete goals take up enough mental space. So my automatic reaction was “you’re doing it wrong,” but it’s possible that having an unconnected mental system doesn’t sabotage your motivation the same way it does mine. Also, “what I actually end up doing” doesn’t, to me, have the connotation of “choosing and achieving subgoals”; it has the connotation of not having goals. But it sounds like that’s not what it means to you.
I would argue that the altruism should be part of the selfish utility function. The reason you care about other people is that you value other people. If you did not value other people, there would be no reason for them to be in your utility function.
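To put that a little more formally (just a sketch, with made-up weights w and well-being terms h): the altruistic part shows up as terms inside your own utility function,

\[
U_{\text{you}} = w_{\text{you}}\, h_{\text{you}} + \sum_{p \neq \text{you}} w_p\, h_p,
\]

where each h_p is person p’s well-being and each weight w_p measures how much you value that person. If every w_p for other people were zero, those terms, and those people, would simply drop out of the function.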
Excellent! This nuance of what “selfish” means is something I find myself reiterating all too frequently. (Where the latter means I’ve done it at least three times that I can recall.)
This is reaching the point of just arguing about definitions, so I reject this line of discussion as well.
It’s not an argument about definitions; it’s an argument about logical priority. Altruistic impulses are logically a subset of selfish ones, because all impulses are selfish: they’re only experienced internally. (I’m using “impulse” as roughly synonymous with an action taken because of values.) Altruism is only relevant to your morality insofar as you value altruistic actions. Altruism can only be justified on somewhat selfish grounds. (To clarify, it can be justified on other grounds, but I don’t think those grounds make sense.)
I think defining “selfish” as “anything experienced internally” is a very limiting definition that makes it a pretty useless word. The concept of ‘selfishness’ can only be applied to human behaviour/motivations; physical-world phenomena like storms can’t be selfish or unselfish, because it’s a mind-level concept. Thus, if you pre-define all human behaviour/motivations as selfish, you’re ruling out the possibility of the opposite of selfishness existing at all. That means you might as well not bother using the word “selfish” at all, since there’s nothing that isn’t selfish.
There’s also the argument from common usage: it doesn’t matter how you define a word in your head, because communication is with other people, who have their own definitions of that word in their heads, and most people’s definitions are likely to match the common usage of the word; how else would they have learned what the word means? Most people define “selfishness” such that some impulses are selfish (e.g. Sally taking the last piece of cake because she likes cake) and some are not selfish (Sally giving Jack the last piece of cake, even though she wants it, because Jack hasn’t had any cake yet and she already had a piece). Obviously both of those reactions are the result of impulses bouncing around between neurons, but since we don’t have introspective access to our neurons firing, it’s meaningful for most people to use selfishness or unselfishness as labels.
To comment on the linguistic issue, yes this particular argument is silly, but I do think it is legitimate to define a word and then later discover it points out something trivial or nonexistent. Like if we discovered that everyone would wirehead rather than actually help other people in every case, then we might say “welp, guess all drives are selfish” or something.
Sally doesn’t give Jack the cake because Jack hasn’t had any; rather, Sally gives Jack the cake because she wants to. That’s why explicitly calling the motivation selfish is useful: it clarifies that obligations are still subjective and rooted in individual values (it also clarifies that obligations don’t mandate sacrifice or asceticism or any other similar nonsense). You say that it’s obvious that all actions arise from internally motivated states as a result of neurons firing, but it’s not obvious to most people, which is why pointing out that the action stems from Sally’s internal desires is still useful.
Why not just specify to people that motivations or obligations are “subjective and rooted in individual values”? Then you don’t have to bring in the word “selfish”, with all its common-usage connotations.
I want those common-usage connotations brought in because I want to eradicate the taboo around those common-usage connotations, I guess. I think that people are vilified for being selfish in lots of situations where being selfish is a good thing, at least from that person’s perspective. I don’t think that people should ever get mad at defectors in Prisoner’s Dilemmas, for example, and I think that saying that all of morality is selfish is a good way to fix this kind of problem.
This line of discussion says nothing on the object level. The words “altruistic” and “selfish” in this conversation have ceased to mean anything that anyone could use to meaningfully alter his or her real-world behavior.
Altruistic behavior is usually thought of as motivated by compassion or caring for others, so I think you are wrong. You are the one arguing about definitions in order to trivialize my point, if anything.
The reason I rejected the utility function, and the reason I rejected this argument, is that I judged them both useless.
What would you recommend people do, in general? I think this is a question that is actually valuable. At the least I would benefit from considering other people’s answers to this question.
I don’t understand how your reply is responsive.
I recommend that people act in accordance with their (selfish) values, because no other values are situated so as to be motivational. Motivation and values are brute facts, chemical processes that happen in individual brains, but that actually gives them an influence beyond that of mere reason, which could never produce obligations. My system also offers a solution to the paralysis brought on by infinitarian ethics: it’s not the aggregate amount of well-being that matters, only mine.
Because I believe this, recognizing that altruism is a subset of egoism is important for my system of ethics. I still believe in altruistic behavior, but only that which is motivated by empathy as opposed to some abstract sense of duty or fear of God’s wrath or something.
Does my position make more sense now?
Do you disagree with any matters of fact that I have asserted or implied? When you try to have a discussion like you are trying to have, about “logical priority” and so on, you are just arguing about words. What do you predict about the world that is different from what I predict?
I think that it is important to recognize the relationship between thought processes, because having a well-organized mind allows us to change our minds more efficiently, which improves the quality of our predictions. So long as you recognize that all moral behavior is motivated by internal experiences and values, I don’t really care what you call it.