> …does have a significant effect on their morally significant property.
But not in any absolute sense; only because this is consistent with your moral intuition.
> Last I checked, never having existed had a large effect on your ability to have preferences, and your ability to feel pleasure and pain.
Not relevant, because we are considering bringing these people into existence, at which point they will be able to experience pain and pleasure.
> I do not care in the slightest that the heroin-addicted me would have a strong desire for heroin.
Imagine you know that one week from now someone will force you to take heroin and you will become addicted. At that point you will be able to have an OK life if given a regular supply of the drug, but will live in permanent torment if you never get any more of it. Would you pay $1 today for the ability to consume heroin in the future?
> Not relevant, because we are considering bringing these people into existence, at which point they will be able to experience pain and pleasure.
Yes, but I would argue that the fact that they can’t actually do that yet makes a difference.
> Imagine you know that one week from now someone will force you to take heroin and you will become addicted. At that point you will be able to have an OK life if given a regular supply of the drug, but will live in permanent torment if you never get any more of it. Would you pay $1 today for the ability to consume heroin in the future?
Yes, if I was actually going to be addicted. But it was a bad thing that I was addicted in the first place, not a good thing. What I meant when I said I “do not care in the slightest” was that the strength of that desire was not a good reason to get addicted to heroin. I didn’t mean that I wouldn’t try to satisfy that desire if I had no choice but to create it.
Similarly, in the case of adding lots of people with short lives, the fact that they would have desires and experience pain and pleasure if they existed is not a good reason to create them. But it is a good reason to try to help them extend their lives, and lead better ones, if you have no choice but to create them.
Thinking about it further, I realized that you were wrong in your initial assertion that “we have to introduce a fudge factor that favors people (such as us) who are or were alive.” The types of “fudge factors” being discussed here do not, in fact, do that.
To illustrate this, imagine Omega presents you with the following two choices:
1. Everyone who currently exists receives a small amount of additional utility. Also, in the future the number of births in the world will vastly increase, and the lifespan and level of utility per person will vastly decrease. The end result will be the Repugnant Conclusion for all future people, but existing people will not be harmed; in fact, they will benefit.
2. Everyone who currently exists loses a small amount of their utility. In the future far fewer people will be born than in Option 1, but they will live immensely long lifespans full of happiness. Total utility is somewhat smaller than in Option 1, but concentrated in a smaller number of people.
Someone using the fudge factor Kaj proposes in the OP would choose Option 2, even though it harms every single existing person in order to benefit people who don’t exist yet. It is not biased towards existing persons.
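To make this concrete, here is a toy calculation; it is a sketch under stated assumptions, since every number is invented and the OP’s exact fudge factor isn’t reproduced in this thread, so I substitute one plausible form (a steep discount on lives barely worth living):

```python
# Toy model of Omega's two options. Every number here is invented, and the
# "fudge factor" is one assumed form (a steep discount on lives barely
# worth living), not the exact factor from Kaj's OP.

# Each group: (number of people, lifetime utility per person, exists_now)
option_1 = [
    (10**9,  51, True),    # existing people: baseline 50 utils plus a small bonus
    (10**12,  1, False),   # vast future population, lives barely worth living
]
option_2 = [
    (10**9,  49, True),    # existing people pay a small cost
    (10**9, 500, False),   # far fewer future people, immensely long happy lives
]

def total_utility(world):
    """Plain total utilitarianism: everyone counts fully."""
    return sum(n * u for n, u, _ in world)

def fudged_utility(world, threshold=10, discount=0.01):
    """Assumed fudge factor: lives below a eudaimonic threshold count at a
    steep discount. It never reads the exists_now flag, so it cannot be
    biased towards existing persons."""
    return sum(n * u * (1 if u >= threshold else discount) for n, u, _ in world)

print(total_utility(option_1) > total_utility(option_2))    # True: raw totals favor Option 1
print(fudged_utility(option_2) > fudged_utility(option_1))  # True: the fudge factor picks Option 2
```

On these numbers the fudge factor comes out in favor of Option 2 despite the small cost to every existing person, precisely because it never looks at who already exists.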
I basically view adding people to the world in the same light as I view adding desires to my brain. If a desire is ego-syntonic (e.g. a desire to read a particularly good book) then I want it to be added and will pay to make sure it is. If a desire is ego-dystonic (like using heroin) I want it not to be added and will pay to make sure it isn’t. Similarly, if adding a person makes the world more like my ideal world (e.g. a world full of people with long eudaemonic lives) then I want that person to be added. If it makes it less like my ideal world (e.g. the Repugnant Conclusion) I don’t want that person to be added and will make sacrifices to stop it (for instance, I will spend money on contraceptives instead of candy).
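A minimal sketch of this decision rule, assuming a toy scoring function (every name here is a hypothetical placeholder; the thread specifies no formalism):

```python
# Hypothetical sketch of the "add it iff it moves the world toward my
# ideal world" rule. The scoring function is an assumed toy metric,
# not anything specified in the thread.

def should_add(world, candidate, score):
    """Add the candidate exactly when the world scores better with them in it."""
    return score(world + [candidate]) > score(world)

def toy_score(world):
    # Assumed ideal: people with long eudaemonic lives. Long lives add to
    # the score; short (Repugnant-Conclusion-style) lives subtract from it.
    long_lives = sum(1 for p in world if p["lifespan"] >= 80)
    return long_lives - 2 * (len(world) - long_lives)

population = [{"lifespan": 85}]
print(should_add(population, {"lifespan": 90}, toy_score))  # True: more like the ideal
print(should_add(population, {"lifespan": 30}, toy_score))  # False: buy contraceptives instead of candy
```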
As long as the people we are considering adding are prevented from ever having existed, I don’t think they have been harmed in the same way that discriminating against an existing person for some reason like skin color or gender harms someone, and I see nothing wrong with stopping people from being created if it makes the world more ideal.
Needless to say, if we fail and these people are created anyway, we have just as much moral obligation towards them as we would towards any preexisting person.
> I basically view adding people to the world in the same light as I view adding desires to my brain.
Interesting way to view it. I guess I see a set of all possible types of sentient minds, with my goal being to make the universe as nice as possible for some weighted average of the set.
> I guess I see a set of all possible types of sentient minds, with my goal being to make the universe as nice as possible for some weighted average of the set.
I used to think that way, but it resulted in what I considered to be too many counterintuitive conclusions. The biggest one, which I absolutely refuse to accept, is that we ought to kill the entire human race and use the resources doing so would free up to replace them with creatures whose desires are easier to satisfy: paperclip maximizers or wireheads, for instance. Humans have such picky, complicated goals, after all… I consider this conclusion roughly a trillion times more repugnant than the original Repugnant Conclusion.
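A back-of-the-envelope sketch of why that view generates the replacement conclusion, with entirely made-up numbers (nothing here comes from the thread; it just prices out “desires that are easier to satisfy”):

```python
# Invented numbers illustrating why maximizing niceness over all possible
# minds, with no privilege for the minds that already exist, recommends
# replacement: given a fixed resource budget, easily-satisfied minds
# deliver far more fully-satisfied lives per unit of resources.

RESOURCES = 1_000_000            # total resource budget (assumed units)

COST_PER_SATISFIED_LIFE = {      # resources to fully satisfy one mind (assumed)
    "human":    1000,            # picky, complicated goals
    "wirehead":    1,            # trivially satisfied
}

for kind, cost in COST_PER_SATISFIED_LIFE.items():
    print(kind, RESOURCES // cost, "fully satisfied minds")
# human 1000 fully satisfied minds
# wirehead 1000000 fully satisfied minds
```

On these numbers the weighted-average view trades a thousand satisfied humans for a million satisfied wireheads, unless the weighting itself privileges the minds that already exist.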
Naturally, I also reject the individual form of this conclusion, which is that we should kill people who want to read great books, climb mountains, run marathons, etc. and replace them with people who just want to laze around. If I were given a choice between having an ambitious child with a good life and an unambitious child with a great life, I would pick the ambitious one, even though the total amount of welfare in the world would be smaller for it. And as long as the unambitious child doesn’t exist, never existed, and never will exist, I see nothing wrong with this type of favoritism.