In my view, population ethics failed at the start by making a false assumption, namely: “Personal identity does not matter; all that matters is the total amount of whatever makes life worth living (i.e. utility).” I believe this assumption is wrong.
Derek Parfit first made this assumption when discussing the Nonidentity Problem. He believed it was the most plausible solution, but was disturbed by its other implications, like the Repugnant Conclusion. His work is what spawned most of the further debate on population ethics and its disturbing conclusions.
After meditating on the Nonidentity Problem for a while I realized Parfit’s proposed solution had a major problem. In the traditional form of the NIP you are given a choice between two individuals who have different capabilities for utility generation (one is injured in utero, the other is not). However, there is another way to change the amount of utility someone gets out of life besides increasing or reducing their capabilities. You could also change the content of their preferences, so that a person has more ambitious preferences that are harder to achieve.
I reframed the NIP as a choice between having two children with equal capabilities (intelligence, able-bodiedness, etc.) but different ambitions: one wanted to be a great scientist or artist, while the other just wanted to do heroin all day. It seemed obvious to me, and to most of the people I discussed this with, that it was better to have the ambitious child, even if the druggie had a greater level of lifetime utility.
In my view, the primary thing that determines whether someone’s creation is good or not is their identity (i.e., what sort of preferences they have, their personality, etc.). What constitutes a “morally right” identity is really complicated and fragile, but generally it means that the person has the sort of rich, complex values that humans have, and that they are (in certain ways) unique and different from the people who have come before. In addition to their internal desires, their relationship to other people is also important. (Of course, this only applies if their total lifetime utility is positive; if it’s negative, it’s bad to create them no matter what their identity is.)
We can now use this to patch Singer’s “Moral Ledger” in a way that fits Eliezer’s views. Creating someone with the “wrong” identity is a debt, but creating a person with a “right” identity is not. So we shouldn’t create a utility monster (if “utility monster” is a “wrong” identity), because that would create a debt. But killing the monster wouldn’t solve anything; it would just make it impossible to pay the debt.
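If it helps, here is a rough toy sketch of how I picture that ledger rule. (The function name and the debt magnitude are placeholders I’m inventing purely for illustration, not a worked-out theory.)

```python
def ledger_entry(lifetime_utility: float, has_right_identity: bool) -> float:
    """Toy version of the patched Moral Ledger: what does creating this person add?"""
    if lifetime_utility <= 0:
        # A net-negative life is a debt regardless of the person's identity.
        return lifetime_utility
    if not has_right_identity:
        # A "wrong" identity (e.g. a utility monster) is a debt even with positive utility.
        return -1.0  # placeholder magnitude; the model doesn't say how large this debt is
    # A "right" identity with a positive life incurs no debt.
    return 0.0

# Note that killing the person afterwards never cancels an existing debt in this picture;
# it only removes the possibility of ever repaying it.
```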
My “Identity Matters” model also helps explain our intuitions about our duties to have children. In the total and average views, the identity of the child is unimportant; in my model it is. If someone doesn’t want to have children, having an unwanted child is a “debt” regardless of the child’s personal utility. A child born to parents who want one, by contrast, may be “right” to have, even if its utility is lower than that of the aforementioned unwanted child. (Of course, this model needs to be flexible about what makes someone “your child” in order to regard things like sterile parents adopting unwanted children as positive, but I don’t see this as a major problem.)
In addition to identity mattering, we also seem to have ideals about how utility should be concentrated. Most people intuitively reject things like Replaceability and the Repugnant Conclusion, and I think they’re right to. We seem to have an ideal that a small population with high per-person utility is better than a large one with low per-person utility, even if the latter’s total utility is higher. I’m not suggesting Average Utilitarianism; as I said in another comment, I think AU is a disastrously bad attempt to mathematize that ideal. But I do think the ideal is worthwhile; we just need a less awful way to fit it into our ethical system.
A third reason for our belief that having children is optional is that most people seem to believe in some sort of Critical Level Utilitarianism, with the critical level changing depending on what our capabilities for increasing people’s utility are. Most people in the modern world would consider it unthinkable to have a child whose level of utility would have been considered normal in Medieval Europe. And I think this belief isn’t just status quo bias; I would also consider it unconscionable to have a child with normal Modern World levels of utility in a transhuman future.
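To gesture at what I mean a little more precisely (this is just the standard critical-level value function with the critical level made capability-dependent, not a formalism I’m claiming as settled):

$$V = \sum_i \big(u_i - c(K)\big)$$

where $u_i$ is person $i$’s lifetime utility and $c(K)$ is the critical level given our current capabilities $K$. Creating a person only counts as good when $u_i > c(K)$, and as capabilities grow, so does the level a new life has to clear.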
It seemed obvious to me, and to most of the people I discussed this with, that it was better to have the ambitious child, even if the druggie had a greater level of lifetime utility.
Oh? Yes, it is true that it is better to have the ambitious child. I agree, and I think most others will too. But I don’t think that’s because of some fundamental preference; rather, it’s because the ambitious child has a far greater chance of causing good in the world. (Say, becoming an artist and painting masterpieces that will be admired for centuries to come, or becoming a scientist and developing our understanding of the fundamental nature of the universe.) The druggie will not provide these positive externalities, and may even provide negative ones. (Say, turning to crime in order to feed his addiction, as some druggies do.)
I think this adequately explains this reaction, and I do not see a need to posit a fundamental term in our utility functions to explain it.
I think this adequately explains this reaction, and I do not see a need to posit a fundamental term in our utility functions to explain it.
I disagree. I have come to realize that morality isn’t just about maximizing utility; it’s also about protecting fragile human* values. Creating creatures whose values are fundamentally opposed to those values, such as paperclip maximizers, orgasmium, or sociopaths, seems to me a morally wrong thing to do.
This was driven home to me by a common criticism of utilitarianism, namely that it advocates that, if possible, we should kill everyone and replace them with creatures whose preferences are easier to satisfy, or who are easier to make happy. I believe this is a bug, not a feature, and that valuing the identity of created creatures is the solution. Eliezer’s essays on the fragility and complexity of human values also helped me realize this.
*When I say “human” I mean any creature with a sufficiently humanlike mind, regardless of whether it is biologically human or not.
Perhaps I was unclear. I used utilitarian terminology, but utilitarianism is not necessary for my point. To restate: If I could choose between an ambitious child being born, or a druggie child being born, I (and you, according to your above comment) would choose the ambitious child, all else being equal. Why would we choose that? Well, there are several possible explanations, including the one which you gave. However, yours was complicated and far from trivially true, and so I point out that such massive suppositions are unnecessary, as we already have a certain well known human desire to explain that choice. (Call that desire what you will, perhaps “altruism”, or “bettering the world”. It’s the desire that on the margin, more art, knowledge, and other things-considered-valuable-to-us are created.)
I agree that externalities are the first reason that comes to mind. But when I try to modify the thought experiments to control for this my preferences remain the same.
For instance, if I imagine someone with rather introverted ambitions (say, someone who wants to collect and modify cars, or beat lots of difficult videogames) versus someone with unambitious but harmless preferences (such as looking at porn all day), I still prefer the ambitious person. Incidentally, I’m not saying it’s bad that there are people who want to look at porn (or who want to use recreational drugs, for that matter); I’m just saying it’s bad that there are people who want to devote their entire life to it and do nothing more ambitious.
To test my ideals even further (and to make sure my intuitions were not biased by the fact that porn and drugs are low-status activities), I imagined two people who both wanted to just look at porn all day. The difference was that one wanted to compare and contrast the porn he watched and develop theories about the patterns he found, while the other just wanted to passively absorb it without really thinking. I preferred the Intellectual Porn Watcher to the Absorber.
Call that desire what you will, perhaps “altruism”, or “bettering the world”. It’s the desire that on the margin, more art, knowledge, and other things-considered-valuable-to-us are created.
I think the strongest reason to value certain identities over others is that otherwise, the most efficient way to create things-considered-valuable-to-us is to change who “us” is. Once we get good at AI or genetics, the most efficient move would be to kill everyone and replace them with creatures who value things that are easier to manufacture than art and knowledge. Or, if we have an aversion to killing, to just sterilize everyone and make sure all future creatures born are of this type. The fact that this seems absurdly evil indicates to me that we do value identity over utility to some extent.
Hm. That’s actually a pretty good answer. I too find I would prefer the Intellectual Porn Watcher to the Absorber. I will note, however, that the preference is rather weak. If you would give me $10 (or however much) in exchange for letting the Absorber exist rather than the Intellectual Porn Watcher, I’d take that, even for relatively low values of money. (I’m not quite sure what the cutoff is, but it’s low.) On the other hand, I think I’d be willing to give up a fair bit of money to have the Ambitious Intellectual exist rather than the Druggie.
Thinking about it in these terms is by no means perfect, but it allows me to solidify my view of my preferences. In any case, I’ll admit this is a good point.
I think the strongest reason to value certain identities over others is that otherwise, the most efficient way to create things-considered-valuable-to-us is to change who “us” is. Once we get good at AI or genetics, the most efficient move would be to kill everyone and replace them with creatures who value things that are easier to manufacture than art and knowledge. Or, if we have an aversion to killing, to just sterilize everyone and make sure all future creatures born are of this type. The fact that this seems absurdly evil indicates to me that we do value identity over utility to some extent.
See, “valuable” is a two-place word: it takes as arguments both an object or state and a valuer. Now, when I talk about this, I say “us” as the valuer (and you could argue that I really should only be saying “me”, as our goal-systems are not necessarily aligned, but we’ll put that aside), but that specifically means the “us” that is having this conversation. Or to put it another way, if you ask me “How much do you value thing X?”, you can model it as me going to a black box inside my head and getting an answer. Of course, if you take out that black box and replace it with another one, the answer may be different. But even if I know that tomorrow someone will come and do surgery to swap those “boxes”, that doesn’t change my answer today.
Sorry for rambling a bit. I’m not sure how best to explain it all. But I value art and knowledge (to use your example). If you replace me with someone who values paperclips, then that other person will go and do the things he values, like making paperclips and not art and knowledge, and I will hate him for that. I don’t like the world where he does that, as my utility function does not include terms for paperclips. He would value that world, and would fight tooth and claw to get to that worldstate. Nothing says we have to agree on what the best worldstate is, and nothing says I am obliged to bring about arbitrary worldstates others want.
… Oh. Actually, on reading what you wrote over again, I think we are arguing over different things, and are more in agreement than we thought (in the last section, at least; the points about ambition still stand). You say you value “identity over utility” (to some extent). I think I interpreted that to mean something subtly different from what you meant.
By “utility”, you meant the total utility of everyone (or maybe the average utility of everyone?). Realizing that, of course we value lots of things over “utility”, when “utility” is used in that sense. (I will call it ToAU, for “Total or Average Utility”, to avoid confusing it with what I will call MPU, “My Personal Utility”.)
Yes, you make a good point that ToAU is not what we should be maximizing. I agree. I was arguing that it is nonsensical to not value utility, as by definition, MPU is what we should be maximizing. (OK, put aside for now, as before, that you and I may have slightly different goal systems and so I should be using a different pronoun: either “you”, if I am talking about what you are maximizing, or “me”, if we are talking about me.)
Now, MPU is quite the complex function, and for us, at least, it includes terms for art and science existing, for humans not being killed, and for minimizing not only our (mine, your) personal suffering but also global suffering. Altruism is a major part of MPU; make no mistake, I am not arguing that others’ opinions do not matter, at least for some value of “others”: definitely including all humans, and likely including many non-humans. MPU does include a term for the enjoyment, happiness, identity, non-suffering, and so forth of those in this category, but (as you have shown) this category cannot be completely universal.
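To make the distinction concrete, here is a toy sketch (the term names and weights are completely made up, just to show the shape of the thing):

```python
from statistics import mean

def toau(everyones_utility: list[float], average: bool = False) -> float:
    """ToAU: aggregate everyone's welfare, and nothing else."""
    return mean(everyones_utility) if average else sum(everyones_utility)

def mpu(world: dict) -> float:
    """MPU: my own (complex) utility function. Others' welfare is one term among many."""
    return (
        1.0 * world["others_welfare"]       # altruism is a major term...
        + 0.5 * world["art_and_knowledge"]  # ...but so are art and knowledge existing,
        + 2.0 * world["no_one_killed"]      # humans not being killed or replaced,
        - 1.5 * world["global_suffering"]   # and global suffering being minimized.
    )
```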
In fact, in the end, what all this boils down to is that you were arguing against utilitarianism, while I was arguing for consequentialism: two ethical systems that look very similar, but are profoundly different.
I was arguing that it is nonsensical to not value utility, as by definition, MPU is what we should be maximizing.
Sorry, I tend to carelessly use the word “utility” to mean “the stuff utilitarians want to maximize,” forgetting that many people will read it as “von Neumann-Morgenstern utility.” You actually aren’t the first person on Less Wrong I’ve done this to.
In fact, in the end, what all this boils down to is that you were arguing against utilitarianism, while I was arguing for consequentialism: two ethical systems that look very similar, but are profoundly different.
I agree entirely.