Every non-sentientist value that you add to your pool of intrinsic values needs an exchange rate (which can be non-linear and complex and whatever) that implies you’d be willing to let people suffer in exchange for said value....If other people value tradition intrinsically, then preference utilitarianism will output that tradition counts to the extent that it satisfies people’s preferences for it. This would be the utilitarian way to include “complexity of value”.
I am proceeding along this line of thought as well. I believe in something similar to G. E. Moore’s “Ideal Utilitarianism.” I believe that we should maximize certain values like Truth, Beauty, Curiosity, Freedom, etc. However, I also believe that these values are meaningless when divorced from the existence of sentient creatures to appreciate them. Unlike Moore, I would not place any value at all on a piece of lovely artwork no one ever sees. There would need to be a creature to appreciate it for it to have any value.
So basically, I would shift maximizing complex values from regular ethics to population ethics. I would give “extra points” to the creation of creatures who place intrinsic value on these ideals, and “negative points” to the creation of creatures who don’t value them.
Now, you might argue that this does create scenarios where I am willing to create suffering to promote an ideal. Suppose I have the option of creating a wireheaded person who never suffers, or a person who appreciates ideals and suffers a little (but not so much that their life is not worth living). I would gladly choose the idealistic person over the wirehead.
I do not consider this to be “biting a bullet” because that usually implies accepting a somewhat counterintuitive implication, and I don’t find this implication to be counterintuitive at all. As long as the idealistic person’s life is not so terrible that they wish they had never been born, I cannot truly be said to have hurt them.
So would you accept the very repugnant conclusion for total preference utilitarianism? If you value the creation of new preferences (of a certain kind), would this allow for trade-off situations where you had to frustrate all the currently existing preferences, create some more beings with completely frustrated preferences, and then create a huge number of beings living lives just slightly above the boundary where the satisfaction-percentage becomes “valuable” in order to make up for all the suffering (and overall improve the situation)? This conclusion seems to be hard to block if you consider it morally urgent to bring new satisfied preferences into existence. And it, too, seems to be selfish in a way, although this would have to be argued for further.
So would you accept the very repugnant conclusion for total preference utilitarianism?
I did not mention it because I didn’t want to belabor my view, but no, I wouldn’t. I think that one of the important Ideals that people seem to value is that a smaller population of people with highly satisfied preferences is better than a larger population with lives barely worth living, even if the total amount of preference satisfaction is higher in the large population. That’s one reason why the repugnant conclusion is repugnant. This means that sometimes it is good to add people, and at other times it is bad.
Of course, this view needs some qualifiers. First of all, once someone is added to a population, they count as being part of it even after they are dead, so you can’t arrive at an ideal population size by killing people. This also entails accepting the Sadistic Conclusion, but that is an unavoidable part of all types of Negative Utilitarianism, whether they are of the normal variety, or the weird “sometimes negative sometimes positive depending on the context” variety I employ.
I think a helpful analogy would be Parfit’s concept of “global preferences,” which he discusses on page 3 of this article. Parfit argues that we have “Global Preferences,” which are meta-preferences about what sort of life we should live and what sort of desires we should have. He argues that these Global Preferences determine whether it is good for us to develop a new preference.
For instance, Parfit argues, imagine someone gets you addicted to a drug and gives you a lifetime supply of it. You now have a strong desire to get more of the drug, which is satisfied by your lifetime supply. Parfit argues that this does not make your life better, because you have a global meta-preference not to get addicted to drugs, which has been violated. By contrast (my example, not Parfit’s), if I enter into a romantic relationship with someone, it will create a strong desire to spend time with that person, a desire much stronger than my initial desire to enter the relationship. However, this is a good thing, because I do have a global meta-preference to be in romantic relationships.
We can easily scale this up to population ethics. I have Global Moral Principles about the type and number of people who should exist in the world. Adding people who fulfill these principles makes the world better. Adding people who do not fulfill these principles makes the world worse, and should be stopped.
And it, too, seems to be selfish in a way, although this would have to be argued for further.
Reading and responding to your exchange with other people about this sort of “moral selfishness” has gotten me thinking about what people mean and what concepts they refer to when they use that word. I’ve come to the conclusion that “selfish” isn’t a proper word to use in these contexts. Now, obviously this is something of a case of “disputing definitions”, but the word “selfish” and the concepts it refers to are extremely “loaded” and bring a lot of emotional and intuitive baggage with them, so I think it’s good mental hygiene to be clear about what they mean.
To me what’s become clear is that the word “selfish” doesn’t refer to any instance where someone puts some value of theirs ahead of something else. Selfishness is when someone puts their preferences about their own life and about their own happiness and suffering ahead of the preferences, happiness, and suffering of others.
To illustrate this, imagine the case of a racial supremacist who sacrifices his life in order to enable others of his race to continue oppressing different races. He is certainly a very bad person. But it seems absurd to call him “selfish.” In my view this is because, while he has certainly put some of his preferences ahead of the preferences of others, none of those preferences were preferences about his own life. They were preferences about the overall state of the world.
Now, obviously what makes a preference “about your own life” is a fairly complicated concept (Parfit discusses it in some detail here). But I don’t see that as inherently problematic. Most concepts are extremely complex once we unpack them.
I did not mention it because I didn’t want to belabor my view, but no, I wouldn’t. I think that one of the important Ideals that people seem to value is that a smaller population of people with highly satisfied preferences is better than a larger population with lives barely worth living, even if the total amount of preference satisfaction is higher in the large population.
It seems to me like your view is underdetermined in regard to population ethics. You introduce empirical considerations about which types of preferences people happen to have in order to block normative conclusions. What if people actually did want to bite the bullet? Would that make it okay to do it? Suppose there were ten people, and they would be okay with getting tortured, adding a billion tortured people, plus adding a sufficiently large number of people with preferences more-satisfied-than-not. Would this ever be okay according to your view? If not, you seem to not intrinsically value the creation of satisfied preferences.
I agree with your analysis of “selfish”.
If not, you seem to not intrinsically value the creation of satisfied preferences.
You’re right that I do not intrinsically value the creation of all satisfied preferences. This is where my version of Moore’s Ideal Utilitarianism comes in. What I value is the creation of people with satisfied preferences if doing so also fulfills certain moral ideals I (and most other people, I think) have about how the world ought to be. In cases where the creation of a person with satisfied preferences would not fulfill those ideals, I am essentially a negative preference utilitarian: I treat the creation of a person who doesn’t fulfill those ideals the same way a negative preference utilitarian would.
I differ from Moore in that I think the only way to fulfill an ideal is to create (or not create) a person with certain preferences and satisfy those preferences. I don’t think, like he did, that you can (for example) increase the beauty in the world by creating pretty objects no one ever sees.
I think a good analogy would again be Parfit’s concept of global preferences. If I read a book, and am filled with a mild preference to read more books with the same characters, such a desire is in line with my global preferences, so it is good for it to be created. By contrast, being addicted to heroin would fill me with a strong preference to use heroin. This preference is not in line with my global preferences, so I would be willing to hurt myself to avoid creating it.
Suppose there were ten people, and they would be okay with getting tortured, adding a billion tortured people, plus adding a sufficiently large number of people with preferences more-satisfied-than-not.
I have moral ideals about many things, which include how many people there should be, their overall level of welfare, and most importantly, what sort of preferences they ought to have. It seems likely to me that the torture-plus-new-people scenario would violate those ideals, so I probably wouldn’t go along with it.
To give an example where creating the wrong type of preference would be a negative, I would oppose the creation of a sociopath or a paperclip maximizer, even if their life would have more satisfied preferences than not. Such a creature would not be in line with my ideals about what sort of creatures should exist. I would even be willing to harm myself or others, to some extent, to prevent their creation.
This brings up a major question I have about negative preference utilitarianism, which I wonder if you could answer, since you seem to have thought more about the subject of negative utilitarianism than I have. How much harm should a negative preference utilitarian be willing to inflict on existing people to prevent a new person from being born? For instance, suppose you had a choice between torturing every person on Earth for the rest of their lives and creating one new person who will live the life of a rich first-world person with a high happiness set point. Surely you wouldn’t torture everyone on Earth? A hedonist negative utilitarian wouldn’t, of course, but we’re talking about negative preference utilitarianism.
A similar question I have is, if a creature with an unbounded utility function is created, does that mean that infinite wrong has been done, since such a creature essentially has infinite unsatisfied preferences? How does negative preference utilitarianism address this?
The best thing I can come up with is to give the creation of such a creature a utility penalty equal to “however much utility the creature accumulates over its lifetime, minus x,” where x is a moderately sized number. However, it occurs to me that someone who’s thought more about the subject than I have might have figured out something better.
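To make that tentative proposal concrete, here is a minimal sketch of the penalty rule, read literally; the function name and the example numbers are my own illustrative placeholders, not anything settled above:

```python
# A literal rendering of the tentative "lifetime utility minus x" penalty idea above.
# The name `creation_penalty`, and all numbers, are illustrative placeholders.

def creation_penalty(lifetime_accumulated_utility: float, x: float) -> float:
    """Disutility charged for creating a creature with an unbounded utility
    function: however much utility it accumulates over its lifetime, minus x."""
    return lifetime_accumulated_utility - x

# Example: a creature that accumulates 100 units of utility, with x = 30,
# is charged a penalty of 70 at creation.
print(creation_penalty(100.0, x=30.0))  # 70.0
```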
This also entails accepting the Sadistic Conclusion, but that is an unavoidable part of all types of Negative Utilitarianism, whether they are of the normal variety, or the weird “sometimes negative sometimes positive depending on the context” variety I employ.
I don’t think so; neither negative preference nor negative hedonistic utilitarianism implies the Sadistic Conclusion. Granted, negative utilitarians would prefer to add a small population of beings with terrible lives over a very large population of beings with lives that are almost ideal, but this would not be a proper instance of the Sadistic Conclusion. See the formulation:
The Sadistic Conclusion: In some circumstances, it would be better with respect to utility to add some unhappy people to the world (people with negative utility), rather than creating a larger number of happy people (people with positive utility).
Now, according to classical utilitarianism, the large number of happy beings would each be of “positive utility”. However, given the evaluation function of the negative view, their utility is neutral if their lives are perfect, and worse than neutral if their lives contain suffering. The Sadistic Conclusion is avoided, although only persuasively so if you find the axiology of the negative view convincing. Otherwise, you’re still left with an outcome that seems counterintuitive, but this seems to be much less worrisome than having something that seems to be messed up even on the theoretical level. You say you’re okay with the Sadistic Conclusion because there are no alternatives, but I would assume that, if you did not yet know that there are no alternatives (that you’d want to go with), then you would have a strong inclination to count it as a serious deficiency of your stated view.
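To illustrate the difference between the two evaluation functions, here is a small sketch; the scoring functions and all numbers are my own illustrative assumptions about how the two views are described above:

```python
# Toy comparison of the two evaluations discussed above. Each life is a
# (happiness, suffering) pair; all numbers are illustrative placeholders.

def classical_value(lives):
    # Classical total utilitarianism: happiness counts positively, suffering negatively.
    return sum(h - s for h, s in lives)

def negative_value(lives):
    # The negative view as described above: a perfect life is neutral (0),
    # and any suffering pushes a life below neutral; happiness adds nothing.
    return sum(-s for _, s in lives)

terrible_few      = [(0, 10)] * 10        # a small population of terrible lives
almost_ideal_many = [(100, 1)] * 10_000   # a huge population of almost ideal lives

print(classical_value(terrible_few), classical_value(almost_ideal_many))  # -100, 990000
print(negative_value(terrible_few), negative_value(almost_ideal_many))    # -100, -10000

# By the negative view's own lights, the "almost ideal" lives are not of positive
# utility, so choosing the terrible few is not an instance of the Sadistic
# Conclusion as formulated, even though it looks like one under the classical scoring.
```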
Addressing the comment right above now:
How much harm should a negative preference utilitarian be willing to inflict on existing people to prevent a new person from being born?
Negative utilitarians try to minimize the total amount of preference-frustrations, or suffering. Whether this is going to happen to a new person that you’ll bring into existence, or whether it is going to happen to a person that already exists, does not make a difference. (No presence-bias, as I said above.) So a negative preference utilitarian should be indifferent between killing an existing person and bringing a new person (fully developed, with memories and life-goals) into existence if this latter person is going to die / be killed soon as well. (Also note that being killed is only a problem if you have a preference to go on living, and that even then, it might not be the thing considered worst that could happen to someone.)
This implies that the preferences of existing people may actually lead to it being the best action to bring new people into existence. If humans have a terminal value of having children, then these preferences of course count as well, and if the children are guaranteed perfect lives, you should bring them all into existence. You should even bring them into existence if some of them are going to suffer horribly, as long as the existing people’s preferences would otherwise, taken together, contain even more frustrations.
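As a toy illustration of that comparison (the quantities are mine, chosen purely for the sake of the arithmetic), the calculation would look something like this:

```python
# Toy version of the comparison described above: negative preference utilitarianism
# tallies total preference-frustration under each option, with no extra weight
# for people who already exist. All quantities are illustrative placeholders.

existing_parents = 1000
frustration_per_childless_parent = 5   # frustration if the wish for children goes unmet
children = 800
frustration_per_child = 2              # frustration each child would experience

total_frustration = {
    "no children":   existing_parents * frustration_per_childless_parent,  # 5000
    "have children": children * frustration_per_child,                     # 1600
}

# The option with less total frustration wins, so here the children should be
# brought into existence even though some of them will suffer.
print(min(total_frustration, key=total_frustration.get))  # "have children"
```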
A similar question I have is, if a creature with an unbounded utility function is created, does that mean that infinite wrong has been done, since such a creature essentially has infinite unsatisfied preferences? How does negative preference utilitarianism address this?
You will need some way of normalizing all preferences, setting the difference between “everything fulfilled” and “everything frustrated” equal for beings of the same “type”. Then the question is whether all sentient beings fall under the same type, or whether you want to discount according to intensity of sentience, or some measure of agency or something like that. I have not yet defined my intuitions here, but I think I’d go for something having to do with sentience.
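One way to read the normalization idea (my formalization, not a settled proposal, and the sentience weights below are placeholders) is as rescaling each being’s preferences so that “everything fulfilled” and “everything frustrated” span the same interval:

```python
# Sketch of the normalization idea above: each being's preference frustration is
# rescaled to [0, 1] ("everything fulfilled" = 0, "everything frustrated" = 1),
# then optionally weighted by something like intensity of sentience. The weights
# below are placeholders, not claims about real creatures.

def normalized_frustration(frustrated: float, total: float, sentience_weight: float = 1.0) -> float:
    """Fraction of this being's preference 'budget' that is frustrated, scaled by type."""
    if total == 0:
        return 0.0
    return sentience_weight * (frustrated / total)

# A chimpanzee with a handful of preferences vs. an AGI with very many: after
# normalization each contributes at most its own weight, however many preferences it has.
print(normalized_frustration(frustrated=3, total=10, sentience_weight=0.6))         # 0.18
print(normalized_frustration(frustrated=4000, total=10_000, sentience_weight=1.0))  # 0.4
```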
Granted, negative utilitarians would prefer to add a small population of beings with terrible lives over a very large population of beings with lives that are almost ideal, but this would not be a proper instance of the Sadistic Conclusion. See the formulation:
When I read the formulation of the Sadistic Conclusion I interpreted “people with positive utility” to mean either a person whose life contained no suffering, or a person whose satisfied preferences/happiness outweighed their suffering. So I would consider adding a small population of terrible lives instead of a large population of almost ideal lives to be the Sadistic Conclusion.
If I understand you correctly, you are saying that negative utilitarianism technically avoids the Sadistic Conclusion because it considers a life with any suffering at all to be a life of negative utility, regardless of how many positive things that life also contains. In other words, it avoids the SC because its criteria for what makes a life positive or negative are different from the criteria Arrhenius used when he first formulated the SC. I suppose that is true. However, NU does not avoid the (allegedly) unpleasant scenario Arrhenius wanted to avoid (adding a tortured life instead of a large number of very positive lives).
Negative utilitarians try to minimize the total amount of preference-frustrations, or suffering....(Also note that being killed is only a problem if you have a preference to go on living, and that even then, it might not be the thing considered worst that could happen to someone.)
Right, but if someone has a preference to live forever does that mean that infinite harm has been done if they die? In that case you might as well do whatever afterwards, since infinite harm has already occurred. Should you torture everyone on Earth for decades to prevent such a person from being added? That seems weird.
The best solution I can currently think of is to compare different alternatives, rather than try to measure things in absolute terms. So if a person who would have lived to 80 dies at 75, that generates 5 years of unsatisfied preferences, not infinity, even if the person would have preferred to live forever. But that doesn’t solve the problem of adding people who wouldn’t have existed otherwise.
What I’m trying to say is, people have an awful lot of preferences, and generally only manage to satisfy a small fraction of them before they die. So how many unsatisfied preferences should adding a new person count as creating? How big a disutility is it compared to other disutilities, like thwarting existing preferences and inflicting pain on people?
A couple of possibilities occur to me off the top of my head. One would be to find the difference in satisfaction between the new people and the old people, and then compare it to the difference in satisfaction between the old people and the counterfactual old people in the universe where the new people were never added.
Another possibility would be to set some sort of critical level based on the maximum level of utility it is possible to give the new people, given our society’s current level of resources, without inflicting greater disutilities on others than the utility you give to the new people. Then weigh the difference between the new people’s actual utility and their “critical possible utility” and compare that to the dissatisfaction the existing people would suffer if the new people are not added.
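Spelled out as bare arithmetic (with hypothetical names, and without committing to a decision rule, which the two proposals above leave open), the quantities being compared would be something like:

```python
# Sketch of the quantities named in the two possibilities above. The function and
# variable names are hypothetical, and the decision rules are deliberately left open.

def possibility_one(new_satisfaction, old_satisfaction_with_new, old_satisfaction_without_new):
    """Gap between the new and old people's satisfaction, versus what adding the
    new people cost (or gained) the old people relative to the counterfactual."""
    gap_new_vs_old = new_satisfaction - old_satisfaction_with_new
    effect_on_old = old_satisfaction_with_new - old_satisfaction_without_new
    return gap_new_vs_old, effect_on_old

def possibility_two(new_actual_utility, critical_possible_utility, old_dissatisfaction_if_not_added):
    """Shortfall of the new people below their 'critical possible utility', versus
    the dissatisfaction existing people would suffer if the new people are not added."""
    shortfall = critical_possible_utility - new_actual_utility
    return shortfall, old_dissatisfaction_if_not_added

print(possibility_one(60, 70, 75))  # (-10, -5): illustrative numbers only
print(possibility_two(60, 80, 30))  # (20, 30): illustrative numbers only
```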
Do either of these possibilities sound plausible to you, or do you have another idea?
I agree with your points on the Sadistic Conclusion issue. Arrhenius acknowledges that his analysis depends on the (to him trivial) assumption that there are “positive” welfare levels. I don’t think this axiom is trivial, because it interestingly implies that non-consciousness somehow becomes “tarnished” and non-optimal. Under a Buddhist view of value, this would be different.
Right, but if someone has a preference to live forever does that mean that infinite harm has been done if they die?
If all one person cared about was to live for at least 1,000 years, and all a second person cared about was to live for at least 1,000,000 years (and after their desired duration they would become completely indifferent), would the death of the first person at age 500 be less tragic than the death of the second person at age 500,000? I don’t think so, because assuming that they value partial progress on their ultimate goal the same way, they both ended up reaching “half” of their true and only goal. I don’t think the first person would somehow care less in overall terms about achieving her goal than the second person.
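In other words (my formalization of the comparison above, using the example’s numbers), the frustration is measured as a fraction of the goal rather than as an absolute shortfall:

```python
# The relative (fraction-of-goal) comparison described above, contrasted with an
# absolute-shortfall measure. The numbers come from the example; the function
# names are mine.

def fractional_frustration(achieved: float, goal: float) -> float:
    # Share of the person's single goal that went unfulfilled.
    return 1 - achieved / goal

def absolute_shortfall(achieved: float, goal: float) -> float:
    return goal - achieved

print(fractional_frustration(500, 1_000), fractional_frustration(500_000, 1_000_000))  # 0.5, 0.5
print(absolute_shortfall(500, 1_000), absolute_shortfall(500_000, 1_000_000))          # 500, 500000
```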
To what extent would this way of comparing preferences change things?
What I’m trying to say is, people have an awful lot of preferences, and generally only manage to satisfy a small fraction of them before they die.
I think the point you make here is important. It seems like there should be a difference between beings who have only one preference and beings who have an awful lot of preferences. Imagine a chimpanzee with a few preferences and compare him to a sentient AGI, say. Would both count equally? If not, how would we determine how much their total preference (dis)satisfaction is worth? The example I gave above seems intuitive because we were talking about humans who are (as specified by the unwritten rules of thought experiments) equal in all relevant respects. With chimps vs. AI it seems different.
I’m actually not sure how I would proceed here, and this is of course a problem. Since I’d (in my preference-utilitarianism mode) only count the preferences of sentient beings and not e.g. the revealed preferences of a tree, I would maybe weight the overall value by something like “intensity of sentience”. However, I suspect that I’m inclined to do this because I have strong leanings towards hedonistic views, so it would not necessarily fit elegantly with a purely preference-based view on what matters. And that would be a problem because I don’t like ad hoc moves.
Or maybe a better way to deal with it would be the following:
Preferences ought to be somewhat specific. If people just say “infinity”, they still aren’t capable of envisioning what this would actually mean. So maybe a chimpanzee could only envision a certain number of things because of some limit on brain complexity, while typical humans could envision slightly more, but nothing close to infinity. In order for someone to have, at a given moment, the preference to live forever, that person would in this case need an infinitely complex brain to properly envision all that this implies. So you’d get an upper bound that prevents the problems you mentioned from arising.
You could argue that humans actually want to live for infinity by making use of personal identity and transitivity (e.g. “if I ask in ten years, the person will want to live for the next ten years and be able to give you detailed plans,” and keep repeating that every ten years), but here I’d say we should just try to minimize the preference-dissatisfaction of all consciousness-moments, not of persons. I might be talking nonsense with the word “envision”, but something along these lines seems plausible to me too.
The two possibilities you propose don’t seem plausible to me. I have a general aversion to things you’d only come up with in order to fix a specific problem and that wouldn’t seem intuitive from the beginning / from a top-down perspective. I need to think about this further.