> This seems egoistic rather than altruistic because you’d be valuing your own preference for tradition more than you value the well-being of others for their own sake.
If you’re a moral realist, you’re not letting others suffer for the sake of your preference for tradition, you’re letting them suffer for the sake of the moral value of tradition.
Otherwise, one could equally accuse the utilitarian of selfishly valuing their own preference for hedonism more than they value tradition for its own sake.
> If you’re a moral realist, you’re not letting others suffer for the sake of your preference for tradition, you’re letting them suffer for the sake of the moral value of tradition.
This would only be an argument for being “moral” (whatever that may mean) rather than altruistic; it doesn’t address my point that utilitarianism is systematized altruism. Utilitarianism is what results if you apply veil of ignorance type of reasoning without being risk-averse.
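To make that last claim concrete, here is a toy numerical sketch (the welfare numbers and the square-root transform are invented purely for illustration). Behind the veil you face a uniform lottery over everyone’s position; a risk-neutral chooser maximizes expected welfare, which is exactly average utilitarianism, while a risk-averse chooser applying a concave transform can rank the very same worlds the other way:

```python
import math

# Two candidate social arrangements; each list gives the welfare of one
# person's position. (All numbers are invented purely for illustration.)
world_a = [10, 10, 10]   # equal, moderate welfare for everyone
world_b = [1, 14, 18]    # higher total welfare, but one person is badly off

def veil_value(world, utility=lambda w: w):
    """Expected utility of occupying a uniformly random position."""
    return sum(utility(w) for w in world) / len(world)

# Risk-neutral chooser: expected welfare = average utilitarianism.
print(veil_value(world_a))            # 10.0
print(veil_value(world_b))            # 11.0  -> picks world_b

# Risk-averse chooser: a concave transform penalizes bad positions more.
print(veil_value(world_a, math.sqrt)) # ~3.16 -> picks world_a
print(veil_value(world_b, math.sqrt)) # ~2.99
```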
> Otherwise, one could equally accuse the utilitarian of selfishly valuing their own preference for hedonism more than they value tradition for its own sake.
As someone with negative utilitarian inclinations, I sympathize with the “self-centered preference for hedonism” objection against classical utilitarianism.
> it doesn’t address my point that utilitarianism is systematized altruism. Utilitarianism is what results if you apply veil of ignorance type of reasoning without being risk-averse.
Preference Utilitarianism, or Parfit’s Success Theory, might be considered systematized altruism. But classical utilitarianism isn’t altruistic at all. It doesn’t care about anything or anyone. It only cares about certain types of feelings. It ignores all other personal goals people have in a monstrously sociopathic fashion.
> As someone with negative utilitarian inclinations, I sympathize with the “self-centered preference for hedonism” objection against classical utilitarianism.
I occasionally desire to do activities that make me suffer because I value the end result of that activity more than I value not suffering. If you try to stop me you’re at least as selfish as a hedonistic utilitarian who makes people suffer in order to generate a large amount of pleasure. (This is, of course, assuming you’re a hedonistic negative utilitarian. If you’re a negative preference utilitarian I presume you’d want me to do that activity to prevent one of my preferences from going unsatisfied.)
In my view, both classical and negative (hedonistic) utilitarianism are sort of “selfish” because being unselfish implies you respect other people’s desires for how their lives should go. If you make someone feel pleasure when they don’t want to, or not feel pain when they do want to, you are harming them. You are making their lives worse.
In fact, I think that classical and negative (hedonistic) utilitarianism are misnamed, because “utilitarian” is derived from the word “utility,” meaning “usefulness.” Something is “utilitarian” if it is useful to people in achieving their goals. But classical and negative utilitarians do not consider people’s goals to be valuable at all. All they value is Pleasure and NotPain. People are not creatures with lives and goals of their own; they are merely receptacles for containing Pleasure and NotPain.
Preference utilitarianism, Parfit’s Success Theory, and other similar theories do deserve the title of “utilitarian.” They do place value on the lives and desires of others. So I would say that they are “unselfish.” But all purely pleasure-maximizing/pain-minimizing theories of value are selfish.
> Preference Utilitarianism, or Parfit’s Success Theory, might be considered systematized altruism.
Yes, certainly a strong case can be made if you have negative population ethics (i.e. don’t intend to bring about new preferences just to satisfy them). However, I also have sympathies for those who criticize the moral relevance of preferences.
> I occasionally desire to do activities that make me suffer because I value the end result of that activity more than I value not suffering. If you try to stop me you’re at least as selfish as a hedonistic utilitarian who makes people suffer in order to generate a large amount of pleasure.
As you note, a negative preference utilitarian would let you go on. I think this view is plausible. I’m leaning towards a hedonistic view, though, and one reason for this has to do with my view on personal identity. I don’t think the concept makes any sense. I don’t think my present self has any privileged (normative) authority over my future selves, because when I just think in terms of consciousness-moments, I find it counterintuitive that preferences (as opposed to suffering) would be what is relevant.
> I’m leaning towards a hedonistic view, though, and one reason for this has to do with my view on personal identity. I don’t think the concept makes any sense.
I consider nearly all arguments of the form “X is not a coherent concept, therefore we ought not to care about it” to be invalid. I don’t mean to give offense, but such arguments seem to me to be a form of pretending to be wise. This is especially true if X has predictive power: if knowing something is X can cause you to correctly anticipate your experiences. And you have to admit, knowing someone is the same person as someone you’ve encountered before makes you more likely to be able to predict their behavior.
Arguments that challenge the coherence of a concept typically function by asking a number of questions that our intuitions about the concept cannot answer readily, creating a sense of dumbfounding. They then do not bother to think further about the questions and try to answer them, instead taking the inability to answer the questions readily as evidence of incoherence. These arguments also frequently appeal to the fallacy of the gray, assuming that because there is no clear-cut border between two concepts or things, no distinction between them can exist.
This fact was brought home to me when I came across discussions of racism that argued that racism was wrong because “race” was not a coherent concept. The argument initially appealed to me because it hit a few applause lights, such as “racism is bad,” and “racists are morons.” However, I became increasingly bothered because I was knowledgeable about biology and genetics, and could easily see several simple ways to modify the concept of “race” into something coherent. It also seemed to me that the reason racism was wrong was that preference violation and suffering were bad regardless of the race of the person experiencing them, not because racists were guilty of incoherent reasoning. I realized that the argument was a terrible one, and could only be persuasive if one was predisposed to hate racism for other reasons.
The concept of personal identity makes plenty of sense. I’ve read Parfit too, and read all the questions about the nature of personal identity. I then proceeded to actually answer those questions and developed a much better understanding of what exactly it is I value when I say I value personal identity. To put it (extremely) shortly:
There are entities that have preferences about the future. These preferences include preferences about how the entity itself will change in the future (Parfit makes a similar point when he discusses “global preferences”). These preferences constitute a “personal identity.” If an entity changes we don’t need to make any references to the changed entity being the “same person” as a past entity. We simply take into account whether the change is desirable or not. I write much more about the subject here.
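To restate that in pseudo-formal terms, here is a minimal sketch of the view (all the preference names, weights, and state fields are invented for illustration). The entity just is, among other things, a bundle of preferences, some of them about its own future, and a proposed change is scored against those preferences directly; note that no “same person” predicate appears anywhere:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Preference:
    description: str
    weight: float
    satisfied_by: Callable[[dict], bool]  # does a future state satisfy it?

# The entity's "identity" is just this bundle of preferences, including
# global preferences about how it changes. (Names/weights are invented.)
preferences = [
    Preference("still values honesty", 5.0,
               lambda s: "honesty" in s["values"]),
    Preference("keeps memories of friends", 3.0,
               lambda s: s["memories_kept"] > 0.9),
    Preference("becomes more patient", 1.0,
               lambda s: s["patience"] > 0.5),
]

def desirability(future_state: dict) -> float:
    """Score a proposed change by the present preferences it satisfies.
    No "same person" test occurs; we only ask if the change is desired."""
    return sum(p.weight for p in preferences if p.satisfied_by(future_state))

gradual_growth = {"values": {"honesty", "curiosity"},
                  "memories_kept": 0.95, "patience": 0.7}
value_erasure = {"values": set(), "memories_kept": 0.2, "patience": 0.9}

print(desirability(gradual_growth))  # 9.0 -> a desirable way to change
print(desirability(value_erasure))   # 1.0 -> an undesirable way to change
```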
> I don’t think my present self has any privileged (normative) authority over my future selves
I don’t necessarily think that either. That’s why I want to make sure that the future self that I turn into remains similar to my present self in certain ways, especially in his preferences. That way the issue won’t ever come up.
> because when I just think in terms of consciousness-moments
This might be your first mistake. We aren’t just consciousness-moments. We’re utility functions, memories, personalities, and sets of values. Our consciousness-moments are just the tip of the iceberg. That’s one reason why it’s still immoral to violate a person’s preferences when they’re unconscious: their values still exist somewhere in their brain even when they’re not conscious.
> I find it counterintuitive that preferences (as opposed to suffering) would be what is relevant.
It seems obvious to me that they’re both relevant.
> I consider nearly all arguments of the form “X is not a coherent concept, therefore we ought not to care about it” to be invalid.
I agree, I’m not saying you ought not care about it. My reasoning is different: I claim that people’s intuitive notion of personal identity is nonsense, in much the same way that the concept of free will is nonsense. There is no numerically identical thing existing over time, because there is no way such a notion could make sense in the first place.
Now, once someone realises this, he/she can either choose to group all the consciousness-moments together that trigger an intuitive notion of “same person” and care about that, even though it is now different from what they thought it was, or they can conclude that actually, now that they know it is something else, they don’t really care about it at all.
I think your view is entirely coherent, by the way. I agree that a reductionist account of personal identity still leaves room for preferences, and if you care about preferences as opposed to experience-moments, you can keep a meaningful and morally important notion of personal identity via preferences (although this would be an empirical issue—you could imagine beings without future-related preferences).
I guess the relevance of personal identity to the question of hedonism versus preferences, for me, comes from the boost in intuitiveness the hedonistic view receives once one has internalized empty individualism.
> It seems obvious to me that they’re both relevant.
I’m 100% sure that there is something I mean by “suffering”, and that it matters. I’m only maybe 10-20% sure that I’d also want to care about preferences if I knew everything there is to know.
> Now, once someone realises this, he/she can either choose to group all the consciousness-moments together that trigger an intuitive notion of “same person” and care about that, even though it is now different from what they thought it was
I don’t know if your analysis is right or not, but I can tell you that that isn’t what it felt like I was doing when I was developing my concepts of personal identity and preferences. What it felt like I was doing was elucidating a concept I already cared about, and figuring out exactly what I meant when I said “same person” and “personal identity.” When I thought about what such concepts mean I felt a thrill of discovery, like I was learning something new about myself I had never articulated before.
It might be that you are right and that my feelings are illusory, that what I was really doing was realizing a concept I cared about was incoherent and casting about until I found a concept that was similar, but coherent. But I can tell you that’s not what it felt like.
EDIT: Let me make an analogy. Ancient people had some weird ideas about the concept of “strength.” They thought that it was somehow separate from the body of a person, and could be transferred by magic, or by eating a strong person or animal. Now, of course, we understand that that is not how strength works. It is caused by the complex interaction of a system of muscles, bones, tendons, and nerves, and you can’t transfer that complex system from one entity to another without changing many of the properties of the entity you’re sending it to.
Now, considering that fact, would you say that ancient people didn’t want anything coherent when they said they wanted to be strong? I don’t think so. They were mistaken about some aspects of how strength works, but they were working from a coherent concept. Once they understood strength better, they didn’t consider their previous desire for it to be wrong.
I see personal identity as somewhat analogous to that. We had some weird ideas about it in the past, like the idea that it was detached from physical matter. But I think that people have always cared about how they are going to change from one moment to the next, and have had concrete preferences about it. And I think when I refined my concepts of personal identity I was making preferences I already had more explicit, not swapping out some incoherent preferences and replacing them with similar coherent ones.
> I’m 100% sure that there is something I mean by “suffering”, and that it matters. I’m only maybe 10-20% sure that I’d also want to care about preferences if I knew everything there is to know.
I am 100% certain that there are things I want to do that will make me suffer (learning unpleasant truths for instance), but that I want to do anyway, because that is what I prefer to do.
Suffering seems relevant to me too. But I have to admit, sometimes when something is making me suffer, what dominates my thoughts is not a desire for it to stop, but rather annoyance that the suffering is disrupting my train of thought and making it hard for me to think and accomplish the goals I have set for myself. And I’m not talking about mild suffering: the example in particular that I am thinking of is throwing up two days after having my entire abdomen cut open and sewn back together.
This is interesting. I wonder what a CEV-implementing AI would do with such cases. There seems to be a point where the extrapolation inevitably bottoms out. And in a way, this is at the same time going to be a self-fulfilling prophecy, because once you start identifying with this new image/goal of yours, it becomes your terminal value. Maybe you’d have to do separate evaluations of the preferences of all agent-moments and then formalise a distinction between “changing view based on valid input” and “changing view because of a failure of goal-preservation”. I’m not entirely sure whether such a distinction will hold up in the end.
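To gesture at what such a formalisation might look like, here is a toy sketch (every field and rule in it is invented; it illustrates the proposed distinction, not a serious implementation of it):

```python
from dataclasses import dataclass

@dataclass
class AgentMoment:
    terminal_values: frozenset
    beliefs: frozenset
    new_evidence: tuple = ()  # evidence received since the prior moment

def classify_transition(before: AgentMoment, after: AgentMoment) -> str:
    """Toy rule: value changes without new evidence are flagged as drift;
    belief changes backed by evidence count as valid input."""
    if after.terminal_values != before.terminal_values and not after.new_evidence:
        return "failure of goal-preservation"
    if after.beliefs != before.beliefs and after.new_evidence:
        return "changed view based on valid input"
    return "no relevant change"

m0 = AgentMoment(frozenset({"reduce suffering"}), frozenset({"X is harmless"}))
m1 = AgentMoment(frozenset({"reduce suffering"}), frozenset({"X causes pain"}),
                 new_evidence=("saw X hurt someone",))
m2 = AgentMoment(frozenset({"maximize tradition"}), frozenset({"X causes pain"}))

print(classify_transition(m0, m1))  # changed view based on valid input
print(classify_transition(m1, m2))  # failure of goal-preservation
```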
> I wonder what a CEV-implementing AI would do with such cases.
Even if it does turn out that my current conception of personal identity isn’t the same as my old one, but is rather a similar concept I adopted after realizing my values were incoherent, the AI might still find that the CEVs of my past and present selves concur. This is because, if I truly did adopt a new concept of identity because of its similarity to my old one, this suggests I possess some sort of meta-value that values taking my incoherent values and replacing them with coherent ones that are as similar as possible to the original. If this is the case the AI would extrapolate that meta-value and give me a nice new coherent sense of personal identity, like the one I currently possess.
Of course, if I am right and my current conception of personal identity is based on my simply figuring out what I meant all along by “identity,” then the AI would just extrapolate that.
> This is because, if I truly did adopt a new concept of identity because of its similarity to my old one, this suggests I possess some sort of meta-value that values taking my incoherent values and replacing them with coherent ones that are as similar as possible to the original. If this is the case the AI would extrapolate that meta-value and give me a nice new coherent sense of personal identity, like the one I currently possess.
Maybe, but I doubt whether “as similar as possible” is (or can be made) uniquely denoting in all specific cases. This might sink it.
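A tiny illustration of the worry (the value sets and the distance metric are invented): if you repair an incoherent value set by picking the nearest coherent candidate, “nearest” can tie, leaving no unique extrapolation:

```python
# Invented example: the agent holds A and B fully, but suppose coherence
# forbids holding both, so any repair must drop one of them.
incoherent = {"A": 1.0, "B": 1.0}

coherent_candidates = [
    {"A": 1.0, "B": 0.0},  # keep A, drop B
    {"A": 0.0, "B": 1.0},  # keep B, drop A
]

def distance(v, w):
    """L1 distance between value assignments (an arbitrary choice)."""
    return sum(abs(v[k] - w[k]) for k in v)

best = min(distance(incoherent, c) for c in coherent_candidates)
nearest = [c for c in coherent_candidates if distance(incoherent, c) == best]
print(nearest)  # both candidates tie at distance 1.0 -> no unique repair
```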