If you prefer a happy monster to no monster and no monster to a sad monster, then you prefer a happy monster to a sad monster, and TsviBT’s point applies.
Whereas if you prefer no monster to a happy monster to a sad monster, why don’t you kill the monster?
...sometimes I wonder about the people who find it unintuitive to consider that “Killing X, once X is alive and asking not to be killed” and “Preferring that X not be born, if we have that option in advance” could have widely different utility to me. The converse perspective implies that we should either (1) be spawning as many babies as possible, as fast as possible, or (2) anyone who disagrees with 1 should go on a murder spree, or at best consider such murder sprees ethically unimportant. After all, not spawning babies as fast as possible is as bad as murdering that many existent adults, apparently.
The crucial question is how we want to value the creation of new sentience (aka population ethics). It has been proven impossible to come up with intuitive solutions to it, i.e. solutions that fit some seemingly very conservative adequacy conditions.
The view you outline as an alternative to total hedonistic utilitarianism is often left underdetermined, which hides some underlying difficulties.
In Practical Ethics, Peter Singer advocated a position he called “prior-existence preference utilitarianism”. He considered it wrong to kill existing people, but not wrong to not create new people as long as their lives would be worth living. This position is awkward because it leaves you no way of saying that a very happy life (one where almost all preferences are going to be fulfilled) is better than a merely decent life that is worth living. If the very happy life were better, and if the merely decent life is equal in value to non-creation, then denying that the creation of the very happy life is preferable to non-existence would lead to intransitivity.
If I prefer, but only to a very tiny degree, having a child with a decent life over having one with an awesome life, would it be better if I had the child with the decent life?
In addition, nearly everyone would consider it bad to create lives that are miserable. But if the good parts of a decent life can make up for the bad parts in it, why doesn’t a life consisting solely of good parts constitute something that is important to create? (This point applies most forcefully for those who adhere to a reductionist/dissolved view on personal identity.)
One way out of the dilemma is what Singer called the “moral ledger model of preferences”. He proposed an analogy between preferences and debts. It is good if existing debts are paid, but there is nothing good about creating new debts just so they can be paid later. In fact, debts are potentially bad because they may remain unfulfilled, so all things being equal, we should try to avoid making debts. The creation of new sentience (in the form of “preference-bundles” or newly created utility functions) would, according to this view, be at most neutral (if all the preferences will be perfectly fulfilled), and otherwise negative to the extent that preferences get frustrated.
Singer himself rejected this view because it would imply that voluntary human extinction would be a good outcome. However, something about the “prior-existence” alternative he offered seems obviously flawed, which is arguably a much bigger problem than something being merely counterintuitive.
In my view population ethics failed at the start by making a false assumption, namely “Personal identity does not matter; all that matters is the total amount of whatever makes life worth living (i.e. utility).” I believe this assumption is wrong.
Derek Parfit first made this assumption when discussing the Nonidentity Problem. He believed it was the most plausible solution, but was disturbed by its other implications, like the Repugnant Conclusion. His work is what spawned most of the further debate on population ethics and its disturbing conclusions.
After meditating on the Nonidentity Problem for a while I realized Parfit’s proposed solution had a major problem. In the traditional form of the NIP you are given a choice between two individuals who have different capabilities for utility generation (one is injured in utero, the other is not). However, there is another way to change the amount of utility someone gets out of life besides increasing or reducing their capabilities. You could also change the content of their preferences, so that a person has more ambitious preferences that are harder to achieve.
I reframed the NIP as giving a choice between having two children with equal capabilities (intelligence, able-bodiedness, etc.) but with different ambitions: one wanted to be a great scientist or artist, while the other just wanted to do heroin all day. It seemed obvious to me, and to most of the people I discussed this with, that it was better to have the ambitious child, even if the druggie had a greater level of lifetime utility.
In my view the primary thing that determines whether someone’s creation is good or not is their identity (i.e., what sort of preferences they have, their personality, etc.). What constitutes someone having a “morally right” identity is really complicated and fragile, but generally it means that they have the sort of rich, complex values that humans have, and that they are (in certain ways) unique and different from the people who have come before. In addition to their internal desires, their relationship to other people is also important. (Of course, this only applies if their total lifetime utility is positive; if it’s negative it’s bad to create them no matter what their identity is.)
We can now use this to patch Singer’s “Moral Ledger” in a way that fits Eliezer’s views. Creating someone with the “wrong” identity is a debt, but creating a person with a “right” identity is not. So we shouldn’t create a utility monster (if “utility monster” is a “wrong” identity), because that would create a debt, but killing the monster wouldn’t solve anything; it would just make it impossible to pay the debt.
My “Identity Matters” model also helps explain our intuitions about our duties to have children. In the total and average views, the identity of the child is unimportant. In my model it is. If someone doesn’t want to have children, having an unwanted child is a “debt” regardless of the child’s personal utility. A child born to parents who want to have one, by contrast, may be “right” to have, even if its utility is lower than that of the aforementioned unwanted child. (Of course, this model needs to be flexible about what makes someone “your child” in order to regard things like sterile parents adopting unwanted children as positive, but I don’t see this as a major problem.)
In addition to identity mattering, we also seem to have ideals about how utility should be concentrated. Most people intuitively reject things like Replaceability and the Repugnant Conclusion, and I think they’re right to. We seem to have an ideal that a small population with high per-person utility is better than a large one with low per-person utility, even if its total utility is higher. I’m not suggesting Average Utilitarianism; as I said in another comment, I think that AU is a disastrously bad attempt to mathematize that ideal. But I do think that ideal is worthwhile; we just need a less awful way to fit it into our ethical system.
A third reason for our belief that having children is optional is that most people seem to believe in some sort of Critical Level Utilitarianism, with the critical level changing depending on what our capabilities for increasing people’s utility are. Most people in the modern world would consider it unthinkable to have a child whose level of utility would have been considered normal in Medieval Europe. And I think this belief isn’t just status quo bias; I would also consider it unconscionable to have a child with normal Modern World levels of utility in a transhuman future.
Oh? Yes, it’s true that it is better to have the ambitious child. I agree and I think most others will too. But I don’t think that’s because of some fundamental preference, but rather because the ambitious child has a far greater chance of causing good in the world. (Say, becoming an artist and painting masterpieces that will be admired for centuries to come, or becoming a scientist and developing our understanding of the fundamental nature of the universe.) The druggie will not provide these positive externalities, and may even provide negative ones. (Say, turning to crime in order to feed his addiction, as some druggies do.)
I think this adequately explains this reaction, and I do not see a need to posit a fundamental term in our utility functions to explain it.
I disagree. I have come to realize that morality isn’t just about maximizing utility; it’s also about protecting fragile human* values. Creating creatures that have values fundamentally opposed to those values, such as paperclip maximizers, orgasmium, or sociopaths, seems to me a morally wrong thing to do.
This was driven home to me by a common criticism of utilitarianism, namely that it advocates that, if possible, we should kill everyone and replace them with creatures whose preferences are easier to satisfy, or who are easier to make happy. I believe this is a bug, not a feature, and that valuing the identity of created creatures is the solution. Eliezer’s essays on the fragility and complexity of human values also helped me realize this.
*When I say “human” I mean any creature with a sufficiently humanlike mind, regardless of whether it is biologically human or not.
Perhaps I was unclear. I used utilitarian terminology, but utilitarianism is not necessary for my point. To restate: If I could choose between an ambitious child being born, or a druggie child being born, I (and you, according to your above comment) would choose the ambitious child, all else being equal. Why would we choose that? Well, there are several possible explanations, including the one which you gave. However, yours was complicated and far from trivially true, and so I point out that such massive suppositions are unnecessary, as we already have a certain well known human desire to explain that choice. (Call that desire what you will, perhaps “altruism”, or “bettering the world”. It’s the desire that on the margin, more art, knowledge, and other things-considered-valuable-to-us are created.)
I agree that externalities are the first reason that comes to mind. But when I try to modify the thought experiments to control for this, my preferences remain the same.
For instance, if I imagine someone with rather introverted ambitions (say, someone who wants to collect and modify cars, or beat lots of difficult videogames) versus someone with unambitious but harmless preferences (such as looking at porn all day), I still prefer the ambitious person. Incidentally, I’m not saying it’s bad that there are people who want to look at porn (or who want to use recreational drugs, for that matter); I’m just saying it’s bad that there are people who want to devote their entire life to it and do nothing more ambitious.
To test my ideals even further (and to make sure my intuitions were not biased by the fact that porn and drugs are low-status activities) I imagined two people who both wanted to just look at porn all day. The difference was that one wanted to compare and contrast the porn he watched and develop theories about the patterns he found, while the other just wanted to passively absorb it without really thinking. I preferred the Intellectual Porn Watcher to the Absorber.
I think the strongest reason to value certain identities over others is that otherwise, the most efficient way to create things-considered-valuable-to-us is to change who “us” is. Once we get good at AI or genetics, we could kill everyone and replace them with creatures who value things that are easier to manufacture than art and knowledge. Or, if we have an aversion to killing, just sterilize everyone and make sure all future creatures born are of this type. The fact that this seems absurdly evil indicates to me that we do value identity over utility to some extent.
Hm. That’s actually a pretty good answer. I too find I would prefer the Intellectual Porn Watcher to the Absorber. I will note, however, that the preference is rather weak. If you would give me $10 (or however much) in exchange for letting the Absorber exist rather than the Intellectual Porn Watcher, I’d take that, even for relatively low values of money. (I’m not quite sure what the cutoff is, but it’s low.) On the other hand, I think I’d be willing to give up a fair bit of money to have the Ambitious Intellectual exist rather than the Druggie.
Thinking about it in these terms is by no means perfect, but it allows me to solidify my view of my preferences. In any case, I’ll admit this is a good point.
See, “valuable” is a two-place word; it takes as arguments both an object or state, and a valuer. Now, when I talk about this, I say “us” as the valuer (and you can argue that I really should only be saying “me”, as our goal-systems are not necessarily aligned, but we’ll put that aside), but that specifically means the “us” that is having this conversation. Or to put it another way, if you ask me “How much do you value thing X?”, you can model it as me going to a black box inside my head and getting an answer. Of course, if you take out that black box and replace it with another one, the answer may be different. But, even if I know that tomorrow someone will come and do surgery to swap those “boxes”, that doesn’t change my answer today.
Sorry for rambling a bit. I’m not sure how best to explain it all. But I value art and knowledge. (To use your example.) If you replace me with someone who values paperclips, then that other person will go and do the things he values, like making paperclips and not art and knowledge, and I will hate him for that. I don’t like the world where he does that, as my utility function does not include terms for paperclips. He would value that world, and would fight tooth and claw to get to that worldstate. Nothing says we have to agree on what is the best worldstate, and nothing says I am obliged to bring about arbitrary world states others want.
… Oh. Actually, on reading what you wrote over again, I think (in the last section; the points about ambition still stand) we are arguing over different things, and are more in agreement than we thought. You say you value “identity over utility” (to some extent). I think I interpreted that to mean something subtly different from what you meant.
By utility, you meant the total utility of everyone (or maybe the average utility of everyone?). Realizing that, of course we value lots of things over “utility”, when “utility” is used in that sense. (I will call it ToAU, for “Total or Average Utility”, to avoid confusing it with what I will call MPU, “My Personal Utility”.)
Yes, you make a good point that ToAU is not what we should be maximizing. I agree. I was arguing that it is nonsensical not to value utility, as by definition, MPU is what we should be maximizing. (OK, put aside for now, as before, that you and I may have slightly different goal systems, and so I should be using a different pronoun: either “you”, if I am talking about what you are maximizing, or “me”, if we are talking about me.)
Now, MPU is quite the complex function, and for us, at least, it includes terms for art and science existing, for humans not being killed, and for minimizing not only our (mine, your) personal suffering but also global suffering. Altruism is a major part of MPU; make no mistake, I am not arguing that others’ opinions do not matter, at least for some value of “others”, definitely including all humans and likely including many non-humans. MPU does include a term for the enjoyment, happiness, identity, non-suffering, and so forth of those in this category, but (as you have shown) this category cannot be completely universal.
In fact, in the end, what all this boils down to is that you were arguing against utilitarianism, while I was arguing for consequentialism: two ethical systems that are very similar, but profoundly different.
Sorry, I tend to carelessly use the word “utility” to mean “the stuff utilitarians want to maximize,” forgetting that many people will read it as “von Neumann-Morgenstern utility.” You actually aren’t the first person on Less Wrong I’ve done this to.
I agree entirely.
Average utilitarianism (which can be either hedonistic or about preferences / utility functions) is another way to avoid the repugnant conclusion. However, average utilitarianism comes with its own conclusions that most consider to be unacceptable. If the average life in the universe turns out to be absolutely miserable, is it a good thing if I bring a child into existence that will have a slightly less miserable life? Or similarly, if the average life is free of suffering and full of the most intense happiness possible, would I be acting catastrophically wrong if I brought into existence a lot of beings that constantly experience the peak of current human happiness (without ever having any unfulfilled preferences, either), simply because it would lower the overall average?
Another point to bring up against average utilitarianism is that it seems odd that the value of creating a new life should depend on what the rest of the universe looks like. All the conscious experiences remain the same, after all, so where does this “let’s just take the average!” come from?
More repugnant than that is that naive average utilitarianism would seem to say that killing the least happy person in the world is a good thing, no matter how happy they are.
This can be resolved by taking a timeless view of the population, so that someone still counts as part of the average even after they die. This neatly resolves the question you asked Eliezer earlier in the thread, “If you prefer no monster to a happy monster why don’t you kill the monster.” The answer is that once the monster is created it always exists in a timeless sense. The only way for there to be “no monster” is for it to never exist in the first place.
That still leaves the most repugnant conclusion of naive average utilitarianism, namely that, if the average utility is ultranegative (i.e., everyone is tortured 24⁄7), creating someone with slightly less negative utility (i.e., they are tortured 23⁄7) is better than creating nobody.
In my view average utilitarianism is a failed attempt to capture a basic intuition, namely that a small population of high utility people is sometimes better than a large one of low utility people, even if the large population’s total utility is higher. “Take the average utility of the population” sounds like an easy and mathematically rigorous way to express that intuition at first, but runs into problems once you figure out “munchkin” ways to manipulate the average, like adding moderately miserable people to a super-miserable world.
In my view we should keep the basic intuition (especially the timeless interpretation of it), but figure out a way to express it that isn’t as horrible as AU.
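A minimal sketch of the arithmetic behind that kind of “munchkin” manipulation (the utility numbers are made up purely for illustration): under naive average utilitarianism, adding a moderately miserable person to a super-miserable world counts as an improvement, because it raises the average even while adding strictly more suffering.

```python
def average_utility(lifetime_utilities):
    """Naive average utilitarianism: value of a world = mean lifetime utility."""
    return sum(lifetime_utilities) / len(lifetime_utilities)

# Hypothetical numbers: ten people who are tortured 24/7 ...
super_miserable_world = [-100.0] * 10
# ... plus one extra person who is tortured "only" 23/7.
with_extra_sufferer = super_miserable_world + [-95.0]

print(average_utility(super_miserable_world))  # -100.0
print(average_utility(with_extra_sufferer))    # about -99.5

# Naive AU ranks the second world higher, even though it contains strictly
# more suffering -- the repugnant verdict described above.
assert average_utility(with_extra_sufferer) > average_utility(super_miserable_world)
```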
In that view, does someone already count as part of the average even before they are born?
I would think so. Of course, that’s not to say we know that they count… my confidence that someone who doesn’t exist once existed is likely much higher, all else being equal, than my confidence that someone who doesn’t exist is going to exist.
This should in no way be understood as endorsing the more general formulation.
Presumably, only if they get born. Although that’s tweakable.
Yes and no. Yes in that the timeless view is timeless in both directions. No in that for decision-making we can only take into account predictions of the future and not the future itself.
For intuitive purposes, consider the current political issues of climate change and economic bubbles. It might be the case that we who are now alive could have better quality of life if we used up the natural resources and if we had the government propagate a massive economic bubble that wouldn’t burst until after we died. If we don’t value the welfare of possible future generations, we should do those things. If we do value the welfare of possible future generations, we should not do those things.
For technical purposes, suppose we have an AIXI-bot with a utility function that values human welfare. Examination of the AIXI definition makes it clear that the utility function is evaluated over the (predicted) total future. (Entertaining speculation: If the utility function was additive, such an optimizer might kill off those of us using more than our share of resources to ensure we stay within Earth’s carrying capacity, making it able to support a billion years of humanity; or it might enslave us to build space colonies capable of supporting unimaginable throngs of future happier humans.)
For philosophical purposes, there’s an important sense in which my brainstates change so much over the years that I can meaningfully, if not literally, say “I’m not the same person I was a decade ago”, and expect that the same will be true a decade from now. So if I want to value my future self, there’s a sense in which I necessarily must value the welfare of some only-partly-known set of possible future persons.
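To make the “utility function evaluated over the (predicted) total future” point concrete, here is a toy sketch (ordinary expected-utility maximization over a hand-written model, nothing like real AIXI; the actions, numbers, and field names are invented for illustration). The point it illustrates is just that the welfare of people who don’t exist yet enters the evaluation, because it appears in the predicted future that the utility function scores.

```python
def utility(trajectory):
    # Sum welfare over everyone who appears anywhere in the predicted future,
    # including people who have not been born yet.
    return sum(step["present_welfare"] + step["future_welfare"] for step in trajectory)

def predict(action):
    # Hypothetical world-model for the resource example above:
    # returns (probability, predicted trajectory) pairs.
    if action == "use_up_resources":
        return [(1.0, [{"present_welfare": 10.0, "future_welfare": -50.0}])]
    return [(1.0, [{"present_welfare": 5.0, "future_welfare": 40.0}])]

def choose(actions):
    # Pick the action with the highest expected utility over predicted futures.
    return max(actions, key=lambda a: sum(p * utility(t) for p, t in predict(a)))

print(choose(["use_up_resources", "conserve"]))  # -> "conserve"
```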
If I kill someone in their sleep so they don’t experience death, and nobody else is affected by it (maybe it’s a hobo or something), is that okay under the timeless view because their prior utility still “counts”?
If we’re talking preference utilitarianism, in the “timeless sense” you have drastically reduced the utility of the person, since the person (while still living) would have preferred not to be so killed; and you went against that preference.
It’s because their prior utility (their preference not to be killed) counts that killing someone is drastically different from them not being born in the first place.
No, because they’ll be deprived of any future utility they might have otherwise received by remaining alive.
So if a person is born, has 50 utility of experiences and is then killed, the timeless view says the population had one person of 50 utility added to it by their birth.
By contrast, if they are born, have 50 utility of experiences, avoid being killed, and then have an additional 60 utility of experiences before they die of old age, the timeless view says the population had one person of 110 utility added to it by their birth.
Obviously, all other things being equal, adding someone with 110 utility is better than adding someone with 50, so killing is still bad.
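A minimal sketch of that bookkeeping, using the numbers above (the figures are illustrative, not a proposal for how to measure utility):

```python
def timeless_population_value(lifetime_utilities):
    """Timeless view: everyone who is ever born counts, whether or not they are still alive."""
    return sum(lifetime_utilities)

# One person, killed after accumulating 50 utility of experiences.
killed_early = timeless_population_value([50])

# The same person, not killed: 50 utility so far plus another 60 before dying of old age.
lives_on = timeless_population_value([50 + 60])

assert lives_on > killed_early  # on the timeless view, killing still comes out worse
```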
The obvious way to avoid this is to weight each person by their measure, e.g. the amount of time they spend alive.
I think total utilitarianism already does that.
Yes, that’s my point. (Maybe my tenses were wrong.) This answer (the weighting) was meant to be the answer to teageegeepea’s question of how exactly the timeless view considers the situation.
In real life, this [killing the least happy person] would tend to make the remaining people less happy.
Did you mean to write, “not wrong to create new people...” ?
No, that’s Singer’s position. He’s saying there is no obligation to create new people.
Then what’s the qualifier about their lives being worth living there for? Presumably he believes it’s also not wrong to not create people whose lives would not be worth living, right?
Huh. Rereading it, your interpretation might make more sense. I was thinking about that as ‘even if their lives would be worth living, you don’t have an obligation to create new people’, which is a position that Peter Singer holds, but so is the position expressed after your correction.
In the case of actual human children in an actual society, there are considerations that don’t necessarily apply to hypothetical alien five-dollar-bill-satisficers in a vacuum.
Perhaps you and they are just focusing on different stages of reasoning. The difference in utility that you’ve described is a temporal asymmetry that sure looks at first glance like a flaw. But that’s because it would be an unnecessary complexity to add it as a root principle when explaining morality up to now. Each of us desires not to be a victim of murder sprees (when there are too many people) or to have to care for dozens of babies (when there are too few people), and the simplest way for a group of people to organize to enforce satisfaction of that desire is for them to guarantee the state does not victimize any member of the group. So on desirist grounds I’d expect the temporal asymmetry to tend to emerge strategically as the conventional morality applying only among the ruling social class of a society: only humans and not animals in a modern democracy, only men when women lack suffrage, only whites when blacks are subjugated, only nobles in aristocratic society, and so on. (I can readily think of supporting examples, but I’m not confident in my inability to think of contrary examples, so I do not yet claim that history bears out desirism’s prediction on this matter.)
Of course, if you plan to build an AI capable of acquiring power over all current life, you may have strong reason to incorporate the temporal asymmetry as a root principle. It wouldn’t likely emerge out of unbalanced power relations. And similarly, if you plan on bootstrapping yourself as an em into a powerful optimizer, you have strong reason to precommit to the temporal asymmetry so the rest of us don’t fear you. :D
If the utility monster is so monstrously sad, why would it be asking not to be killed? Usually, a decent rule of thumb is that if someone doesn’t want to die there’s a good chance their lives are somewhat worth living.
This conclusion [that the converse perspective implies we should be spawning as many babies as possible, as fast as possible] is technically incorrect. For new babies, you don’t know in advance whether their lives will be worth living. Even if you go with positive expected value (and no negative externalities), you can still have better alternatives, e.g. do science now that makes many more and much better lives much later; “as fast as possible” is logically unnecessary.
Also, killing sprees have side-effects on society that omissions of reproduction don’t have, e.g. already-born people will take costly measures not to be killed (etc...)
It worries me how many people have come to exactly those conclusions. I mean, it’s not very many, but still …
Only if your preferences are transitive.
If you have any sort of coherent utility system at all, they will be.
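A minimal illustration of that claim (the numbers are arbitrary; only their ordering matters): if your preferences over outcomes are summarized by a single real-valued utility function, transitivity is inherited from the ordering of the real numbers.

```python
# Arbitrary illustrative values; only their ordering matters.
utility = {"happy monster": 1.0, "no monster": 0.0, "sad monster": -1.0}

def prefers(a, b):
    return utility[a] > utility[b]

assert prefers("happy monster", "no monster")
assert prefers("no monster", "sad monster")
# Because ">" on the reals is transitive, this follows automatically:
assert prefers("happy monster", "sad monster")
```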
A better point is that “no monster” just means you’re shunting the problem to poor Alternate You in another many-worlds branch, whereas killing a happy monster means actually decreasing the number of universes with the monster in it by one.