The birth penalty fixes a lot of unintuitive consequences of classic total utilitarianism (“total uti”). For example, if you treat every new person as having to catch up to the penalty (which they can only do by living at at least the minimal acceptable happiness, h0, for their entire life), then killing a person and replacing him with someone of equal happiness is bad, because the penalty that the killed person had not yet caught up with remains as a negative quantity in the total utility, a debt, if you will. In total uti there is no such penalty, so it follows that there’s nothing wrong with killing a person and replacing him with a new person of equal happiness, which is unintuitive.
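To make the “debt” bookkeeping concrete, here is one toy way to write it down (my own illustrative formalization for this thread, assuming a birth penalty of h0 per expected year of life; I’m not claiming it is the exact formula from the post):

$$
U \;=\; \sum_{i} \left( \sum_{t=1}^{\ell_i} h_i(t) \;-\; h_0 \, L_i \right)
$$

where $\ell_i$ is how many years person $i$ actually lives, $L_i$ is the lifespan they could expect at birth, and $h_0 L_i$ is the birth penalty, which is only fully repaid by living the whole expected life at or above $h_0$. Killing someone at year $\ell$ forfeits at least $h_0(L-\ell)$ of that repayment, and the shortfall is not cancelled by creating an equally happy replacement, who arrives carrying a penalty of their own. Drop the $-h_0 L_i$ term and you recover plain total uti, which is indifferent to the swap.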
“I’m also very unsure about the assertion that “happy to exist” and “prefer not to die now” is an important difference [...]”—this is important because there are people who feel they are not happy with existence and would rather not have been born at all, but don’t want to die now that they do in fact exist. If you don’t have this difference, you can’t capture that intuition. I’m not sure how the N unhappy years argument is relevant to this or how it renders the difference moot. In particular:
“prefer to continue to live from this point” is equal to “happy to come into existence at this point”
is in fact false for a significant number of people.
Huh. I guess my intuitions are different enough that we’re just going to disagree on this.
I don’t think it’s problematic to replace a being with an equally happy one (presuming it’s painless and not happiness-reducing for those around them). And I don’t understand how one can prefer not to die _AND_ not be happier to exist than not.
If you don’t think killing is bad in itself, then you’re out of step with the intuition of almost everybody. Which is legitimate.
I personally would rather never have been born, but I don’t want to commit suicide. There are numerous reasons: hurting the people who care about me (and who wouldn’t have been hurt had I not been born in the first place), fear of pain or of the act of suicide itself, fear of death (both are emotional axioms that a lot of people have; there’s no point in debating them rationally), and many others.
To be clear, I didn’t say anything about killing. I said “replace”. This isn’t possible with humans, but picture the emulation world, where an entity can be erased with no warning or sensation, and a fully developed one can be created at will. Even then, it would be practically impermissible to do a same-value replacement, both because of uncertainty and because of the negative effects on other lives.
In the human world, OF COURSE killing (and more generally, dying) is bad. My point is that the badness is fully encoded in the reduction in h of the victim, and the reduced levels of h of those who survive the victim. It doesn’t need to be double-counted with another term.
I personally would rather never have been born but don’t want to commit suicide.
I’m extremely saddened to know this. And it makes me feel mean to stick to my theme of “already included in h, no need for another term”. The fear of death, expectation of pain, and impact on others are _all_ differences in h which should not be double-counted.
Also, I very much hope that in a few years or decades, you’ll look back and realize you were mistaken in wishing you hadn’t been born, and are glad you persevered, and are overall glad you experienced life.
The “replace” in the original problem is ending one human and creating (in whatever way) another one. I don’t think you understand the scenario.
In total uti (in the human world), it is okay to:
kill someone, provided that by doing so you bring into the world another human with the same happiness. For the sake of argument, let’s assume happiness potential is genetically encoded. So if you kill someone, you can always say “that’s ok guys, my wife just got pregnant with a fetus bearing the same genetic code as the guy I just murdered”. In a model where all you do is sum up the happiness of every individual in the population, this is ok. In Vanessa’s model it isn’t, and what makes sure it isn’t is the penalty.
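A minimal numerical sketch of that scenario, under the same toy penalty rule as above (the numbers are made up, and charging h0 per expected year of life is my simplification, not necessarily Vanessa’s exact model):

```python
H0 = 1.0  # minimal "life worth living" yearly happiness (illustrative)

# Each person: (years actually lived, yearly happiness, expected lifespan at birth)
no_killing   = [(80, H0, 80)]                # the victim lives out a full life
kill_replace = [(40, H0, 80), (80, H0, 80)]  # killed at 40; a genetic copy is born
                                             # and lives an equally happy full life

def total_uti(world):
    """Classic total utilitarianism: just sum the happiness actually experienced."""
    return sum(lived * h for lived, h, _ in world)

def penalty_uti(world):
    """Toy birth-penalty variant: same sum, minus H0 per expected year of life,
    a debt that is only repaid by actually living that life at >= H0."""
    return sum(lived * h - H0 * expected for lived, h, expected in world)

print(total_uti(no_killing), total_uti(kill_replace))      # 80.0 120.0
print(penalty_uti(no_killing), penalty_uti(kill_replace))  # 0.0 -40.0
```

Plain summation registers no wrong in the kill-and-replace world (the total even goes up, since that world contains more life-years), while the penalty version comes out worse by exactly H0 times the 40 years the victim had left.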
“I’m extremely saddened to know this. And it makes me feel mean to stick to my theme of ‘already included in h, no need for another term’. The fear of death, expectation of pain, and impact on others are _all_ differences in h which should not be double-counted.”
It might be double-counted; that’s not what I was talking about when I said the model captures this intuition. The existence of h0 does that; it might be that other parts of the model do so as well (I don’t think so, though). Also, I’m always up for an intelligent discussion, and you were not being mean :)
“Also, I very much hope that in a few years or decades, you’ll look back and realize you were mistaken in wishing you hadn’t been born, and are glad you persevered, and are overall glad you experienced life.”
My prior for this is low, since I’ve been feeling this way for my entire adult life, but one can always hope. Plus, I’ve actually met and talked to many like-minded individuals, so I wouldn’t discount this intuition as “not worth capturing since it’s just some small anomaly”.
I just want to note here for readers that the following isn’t correct (but you’ve already made a clarifying comment, so I realise you know this):
In total uti (in the human world), it is okay to:
kill someone, provided that by doing so you bring into the world another human with the same happiness.
Total uti only says this is ok if you leave everything else equal (in terms of total utility). In almost all natural situations you don’t: killing someone influences the happiness of others too, generally negatively.
Is the intuition about killing someone and replacing them with someone who will experience equal total happiness assuming that killing someone directly causes a large drop in total happiness, but that the replacement only has total happiness equal to what the killed moral patient would have had without the killing?
Because my intuition is that if the first entity had expected future happiness of 100, but being killed changed that to −1000, then their replacement, in order to result in “equal happiness”, must have expected future happiness of 1100, not 100. Intuitively, the more it sucks to be killed, the more benefit is required for it to be not wrong to kill someone.
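Spelling out the arithmetic behind that intuition (using the same illustrative numbers):

$$
\underbrace{100}_{\text{happiness if not killed}} \;-\; \underbrace{(-1000)}_{\text{happiness if killed}} \;=\; 1100,
$$

i.e. the replacement has to fill the whole gap the killing opens up, not just the happiness the victim would otherwise have had.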
Being killed doesn’t change your expected happiness; knowing you will be killed does. That’s different. If you want to separate the variables properly, think about someone being gunned down at random with no earlier indication. Being killed just means ending you prematurely and denying you the happiness you would have had were you still alive. A good model will reflect why that’s bad even if you replace the killed person with someone who would compensate for the future loss in happiness.
Pragmatically speaking, killing people causes unhappiness because it hurts the people who lost them, but that is reflected in the happiness values of those individuals, and a good model will reflect that killing someone is bad even if no one knows about it.
Being killed changes your actual happiness, compared to not being killed. I should not have used ‘expected happiness’ to refer to h|”not killed”.
I’m counting “the act of being gunned down” as worth −1000 utility in itself, in addition to cancelling all happiness that would accumulate afterwards, and assuming that the replacement person would compensate for all of the negative happiness that the killing caused.
Basically, I’m saying that I expect bleeding out after a gunshot wound to suck, a lot. The replacement compensating for loss in happiness starts from a hole the size of the killing.
I’m assuming that whatever heuristic you’re using survives the transporter paradox; killing Captain Kirk twice a day and replacing him with an absolutely identical copy (just in a different location) is not bad.
It doesn’t change your actual happiness, just your future happiness. If you are literally shot with a sniper rifle while walking down the street with no warning, there is no point at which you are saddened by your death. You just are, and then you aren’t. What is lost is all the happiness that you would otherwise have experienced. Assume the guy is shot in the head, so there’s no bleeding-out part.
I’m not sure where the −1000 number comes from. There is no point at which the shot person feels 1000 less happiness than before. Saying “the act itself is worth −1000” is adding a rule to the model, a hard-coded rule that killing someone is −1000. First of all, no such rule exists in total uti, and this model fixes that. Second of all, not all killings are equally bad, so now you have to come up with a model for that too. Instead, in this model, when someone is killed the total moral utility of the population is reduced by at least the minimal “life worth living” happiness for every year the killed man had left. That is pretty intuitive and solves things without hard-coded rules.
Plus, nobody said “an absolutely identical copy”. The problem in total uti is that it follows that it is ok to murder someone and replace him with someone of EQUAL HAPPINESS, not equal everything. The same heuristic won’t work there (because it runs into identity issues like “how do we define who is Captain Kirk”). In this model, this problem no longer occurs.
So it IS okay to kill someone and replace them with an absolutely identical copy, as long as the deceased feels no pain and nobody notices?
Is it okay for someone to change their mind about what they were going to do, and produce equal happiness doing something else?
Is it okay to kill someone and replace them with an absolutely identical copy, where nobody notices including the deceased, if the new person changes their mind about what they were going to do and ends up producing equal happiness doing something else?
“So it IS okay to kill someone and replace them with an absolutely identical copy, as long as the deceased feels no pain and nobody notices?”
In total uti it is ok. This is counterintuitive, so this model fixes it, and it’s no longer ok. Again, that’s the reason the penalty is there.
The absolutely identical copy trick might be ok, and might not be ok, but that is beside the point. If a completely identical copy is defined as being the same person, then you didn’t replace anybody and the entire question is moot. If it’s not, then you killed someone, which is bad, and that ought to be reflected in the model (which it is, as of now).
In order to penalize something that probably shouldn’t be explicitly punished, you’re requiring that identity be well-defined.
There’s still the open question of “how bad?”. Personally, I share the intuition that such replacement is undesirable, but I’m far from clear on how I’d want it quantified.
The key situation here isn’t “kill and replace with a person of equal happiness”, but rather “kill and replace with a person with more happiness”.
DNT is saying there’s a threshold of “more happiness” above which it’s morally permissible to make this replacement, and below which it is not. That seems plausible, but I don’t have a clear intuition about where I’d want to set that threshold.
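For what it’s worth, under the toy bookkeeping sketched earlier in this thread (again my simplification, not DNT’s actual formula), that threshold falls out as

$$
H_{\text{replacement}} \;>\; R_{\text{victim}} \;+\; h_0 \, L_{\text{replacement}},
$$

i.e. the new person’s lifetime happiness has to exceed the happiness the victim would still have had ($R_{\text{victim}}$) by at least the new person’s own birth penalty. Whether that is where the threshold should sit intuitively is exactly the open question.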