There seems to be some struggle here with defining u0 in a way that intuitively represents morality, so I’ll take a jab at it. As I see it, u0 can be viewed as “the utility debt of creating a life”, that is to say, the amount of utility a person has to generate over the course of their lifetime in order to morally justify their existence (with all of its effects on the UDNTs of others already accounted for in their calculation), reasonably assuming they get to live a full t0 life.
An intuitive (albeit extreme) proof of concept: imagine a fetus (i.e. a pre-person at a stage where, hypothetically, everyone agrees they are not a moral patient) that is reliably predicted never to develop limbs (Tetra-Amelia Syndrome) and to age rapidly (Progeria). It is widely morally agreed that this pre-person should be terminated, as their life would be short and full of misery. The suggested model reflects this intuition: this person’s expected UDNT is significantly lower than u0, since their expected Tdeath is low and their expected average h is mostly below h0.
As such, subtracting u0 from the formula makes sure that every life added to the population is worth living. Does this click with people’s intuitions?
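To make that concrete, here is how I’m reading the criterion (the exact form of UDNT is the post’s, not mine, so take the integral below as a stand-in for whatever the actual aggregation is):

$$\text{creating the life is justified} \iff \mathbb{E}[\mathrm{UDNT}] - u_0 \ge 0, \qquad \mathrm{UDNT} \approx \int_0^{T_{\mathrm{death}}} \big(h(t) - h_0\big)\,dt .$$

In the Tetra-Amelia/Progeria case above, a small $T_{\mathrm{death}}$ together with $h(t)$ mostly below $h_0$ makes the left-hand side clearly negative, so the model says “don’t create this life,” matching the intuition.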
P.S.
I slightly worry that a development of this concept dabbles in eugenics in a way that may defeat the purpose of modeling ethics. The balance that should prevent eugenics is essentially another one of the many aspects reflected in h0, which is where we tend to dump most of the unsolved part of this model.
I think another good way to look at u0, one that complements yours, is to look at it as the “penalty for dying with many preferences left unsatisfied.” Pretty much everyone dies with some things that they wanted to do left undone. I think most people have a strong moral intuition that being unable to fulfill major life desires and projects is tragic, and I think a major reason death is bad is that it makes us unable to do even more of what we want to do with our lives. I think we could have u0 represent that intuition.
If we go back to Peter Singer’s original formulation of this topic, we can think of unsatisfied preferences as “debts” that go unpaid. So if we have a choice between creating two people who each live x years, or one person who lives 2x years, assuming their total lifetime happiness is otherwise the same, we should prefer the one person living 2x years. This is because the two people living x years generate the same amount of happiness, but twice the amount of “debt” from unfulfilled preferences. Everyone will die with some unfulfilled preferences because everyone will always want more, and that’s fine and part of being human.
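In symbols (writing H for the fixed total lifetime happiness of each option, and treating u0 as a flat per-person debt, which is my simplification):

$$\underbrace{H - 2u_0}_{\text{two people, } x \text{ years each}} \;<\; \underbrace{H - u_0}_{\text{one person, } 2x \text{ years}} \qquad \text{for any } u_0 > 0 ,$$

so the single longer life wins purely on the “unpaid debt” term.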
Obviously we need to calibrate this idea delicately in order to avoid any counterintuitive conclusions. If we treat creating a preference as a “debt” and satisfying it as merely “paying the debt” to “break even” then we get anti-natalism. We need to treat the “debt” that creating a preference generates as an “investment” that can “pay off” by creating tremendous happiness/life satisfaction when it is satisfied, but occasionally fails to “pay off” if its satisfaction is thwarted by death or something else.
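A toy sketch of the difference between the two accounting schemes (the linear bookkeeping, the function names, and the numbers are all hypothetical illustrations, not part of the model):

```python
# Two hypothetical ways to score a life by its preferences (illustrative only).
# c = cost of creating a preference, s = payoff of satisfying one.

def life_value_debt_only(satisfied: int, thwarted: int, c: float = 1.0) -> float:
    # "Debt" scheme: satisfying a preference merely cancels the debt of creating it,
    # so the best any life can do is break even -> anti-natalism.
    return satisfied * (c - c) + thwarted * (-c)

def life_value_investment(satisfied: int, thwarted: int,
                          c: float = 1.0, s: float = 3.0) -> float:
    # "Investment" scheme: a satisfied preference pays off beyond its creation cost;
    # only thwarted preferences remain as losses.
    return satisfied * (s - c) + thwarted * (-c)

print(life_value_debt_only(satisfied=50, thwarted=5))   # -5.0: never positive
print(life_value_investment(satisfied=50, thwarted=5))  # 95.0: a good life nets out positive
```

The point is just that under the first scheme no possible life scores above zero, while under the second a life whose preferences are mostly satisfied comes out positive.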
I think that this approach could also address Isnasene’s question below of figuring out the -u0 penalty for nonhuman animals. Drawing from Singer again, since nonhuman animals are not mentally capable of having complex preferences for the future, they generate a smaller u0 penalty. The preferences that they die without having satisfied are not as strong or complex. This fits nicely with the human intuition that animals are more “replaceable” than humans and are of lesser (although nonzero) moral value. It also fits the intuition that animals with more advanced, human-like minds are of greater moral value.
Using that approach for animals also underscores the importance of treating the creation of preferences as an “investment” that can “pay off.” Otherwise it generates the counterintuitive conclusion that we should often favor creating animals over humans, since they carry a lower u0 penalty. Treating complex preference creation as an “investment” means that humans are capable of generating far greater happiness/satisfaction than animals, which more than outweighs our greater u0 penalty, as the inequality below spells out.
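Spelled out (with S standing for expected lifetime satisfaction generated, a symbol I’m introducing just for this comment):

$$S_{\text{human}} - u_0^{\text{human}} \;>\; S_{\text{animal}} - u_0^{\text{animal}} \quad\text{precisely when}\quad S_{\text{human}} - S_{\text{animal}} \;>\; u_0^{\text{human}} - u_0^{\text{animal}},$$

i.e. the claim is that the gap in achievable satisfaction exceeds the gap in penalties.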
We would also need some sort of way to avoid incentivizing the creation of intelligent creatures with weird preferences that are extremely easy to satisfy, or a strong preference for living a short life as an end in itself. This is a problem pretty much all forms of utilitarianism suffer from. I’m comfortable with just adding some kind of hack massively penalizing the creation of creatures with preferences that do not somehow fit with some broad human idea of eudaimonia.