Are old humans better than new humans?
This seems to be a hidden assumption of cryonics / transhumanism / anti-deathism: we should do everything we can to prevent people from dying, rather than investing those resources into making more, or more productive, children.
The usual argument (which I agree with) is that “Death events have a negative utility”. Once a human already exists, it’s bad for them to stop existing.
So every human has a right to their continued existence. That’s a good argument. Thanks.
Complement it with the fact that it costs about $800,000 to raise a mind, and an adult mind might be able to create value at a rate high enough to justify its continued existence.
Macaulay Culkin and Haley Joel Osment notwithstanding, that is a good argument against children.
An adult, yes. But what about the elderly? Of course this is an argument for preventing the problems of old age.
Is it? It just says that you should value adults over children, not that you should value children over no children. To get one of these valuable adult minds you have to start with something.
How does that negative utility vary over time, though? If it stays the same (or increases), and we know it is impossible to live 3^^^3 years, then the disutility of dying sooner is counterbalanced (or outweighed) by the averted disutility of dying later. Decisions then come out basically the same as if you did not disvalue death at all (or even valued it).
I think that part of the badness of death is the destruction of that person’s accumulated experience. Thus the negative utility of death does indeed increase over time. However this is counterbalanced by the positive utility of their continued existence. If someone lives to 70 rather than 50 then we’re happy because the 20 extra years of life were worth more than the worsening of the death event.
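The shape of this argument can be sketched as a toy model. All numbers below are assumptions chosen purely for illustration: each year lived adds value, the death event destroys accumulated experience, and the former is assumed to grow faster than the latter.

```python
# Toy model of the argument above (all parameters are illustrative
# assumptions, not values from the discussion).

def life_utility(years, value_per_year=1.0, death_penalty_per_year=0.5):
    """Net utility of a life: value of the years lived, minus a death
    event whose badness grows with the experience destroyed."""
    return value_per_year * years - death_penalty_per_year * years

# Dying at 70 is a worse *event* than dying at 50 (more experience lost)...
assert 0.5 * 70 > 0.5 * 50
# ...but the 20 extra years more than pay for the worsened death event.
assert life_utility(70) > life_utility(50)
```

With these assumed rates the longer life always wins; the comparison only flips if the death penalty per year of accumulated experience exceeds the value of a year lived.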
In this case, it seems like the best policy is cryopreserving people, then letting them stay dead but extracting those experiences and inserting them into new minds.
Which sounds weird when you say it like that, but it is functionally equivalent to many of the scenarios you would intuitively expect and find good, like radically improving minds and linking them into bigger ones before waking them up. Anything else would leave them unable to meaningfully interact with anything anyway, and human-level minds are unlikely to qualify for informed consent.
So if Bob is cryopreserved, and I can res him for N dollars, or, for N − 1 dollars, create a simulation of a new person and run them quickly enough to catch up a number of years equal to Bob’s age at death, I should spend all available dollars on the latter?
Edit: to clarify why I think this is implied by your answer, what this is doing is trading such that you gain a death at Bob’s current age, but gain a life of experience up to Bob’s current age. If a life ending at Bob’s current age is net utility positive, this has to be net utility positive too.
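The accounting behind this clarification can be made explicit with a toy ledger; `bob_age`, `year_value`, and `death_cost` below are assumptions for illustration, not values from the discussion.

```python
# Illustrative ledger for the trade described above.
bob_age = 40        # Bob's age at death (assumed)
year_value = 1.0    # assumed utility per life-year
death_cost = 25.0   # assumed disutility of one death event

# Choosing the new simulated person over resurrecting Bob:
#   gains bob_age life-years of new experience,
#   leaves one extra death standing (Bob's death is not undone).
trade_delta = bob_age * year_value - death_cost

# That is exactly the net utility of a whole life ending at Bob's age,
# so the trade is positive precisely when such a life is net positive.
life_ending_at_bobs_age = bob_age * year_value - death_cost
assert trade_delta == life_ending_at_bobs_age
assert trade_delta > 0  # true under these assumed numbers
```

The point of the sketch is only the identity in the final comparison: whatever values you plug in, the trade's delta and the value of a life ending at Bob's age are the same quantity.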
Broadly: yes, though “all available dollars” here means all dollars available for making people, and you are ignoring considerations like keeping promises to people unable to enforce them, such as the cryopreserved, the asleep, or the unconscious.
Assuming Rawls’s veil of ignorance, I would prefer to be randomly born in a world where a trillion people lead billion-year lifespans than one in which a quadrillion people lead million-year lifespans.
I agree, but is this the right comparison? Isn’t this framing obscuring the fact that in the trillion-people world, you are much less likely to be born in the first place, in some sense?
Let us try this framing instead: Assume there are a very large number Z of possible different human “persons” (e.g. given by combinatorics on genes and formative experiences). There is a Rawlsian chance of 1/Z that a newly created human will be “you”. Behind the veil of ignorance, do you prefer the world to be one with X people living N years (where your chance of being born is X/Z) or the one with 10X people living N/10 years (where your chance of being born is 10X/Z)?
I am not sure this is the right intuition pump, but it seems to capture an aspect of the problem that yours leaves out.
Rawls’s veil of ignorance + self-sampling assumption = average utilitarianism, Rawls’s veil of ignorance + self-indication assumption = total utilitarianism (so to speak)? I had already kind-of noticed that, but hadn’t given much thought to it.
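The correspondence can be checked with a toy calculation. `X` and `N` are arbitrary illustrative values, and `Z` (the number of possible persons) cancels out of every comparison, so it is left implicit.

```python
# World A: X people living N years; world B: 10*X people living N // 10 years.
X, N = 1000, 100

# Self-sampling assumption: condition on having been born, so "your"
# expected lifespan is just the lifespan in that world. This recovers
# average utilitarianism.
ssa_A, ssa_B = N, N // 10
assert ssa_A > ssa_B   # SSA prefers the long-lived world A

# Self-indication assumption: weight each world by your chance of being
# born at all, which is proportional to its population. This recovers
# total utilitarianism.
sia_A = X * N                  # proportional to (X/Z) * N
sia_B = (10 * X) * (N // 10)   # proportional to (10X/Z) * (N/10)
assert sia_A == sia_B  # SIA is exactly indifferent between A and B
```

Total life-years are equal in the two worlds by construction, which is why the self-indication weighting comes out exactly indifferent while the self-sampling view strictly prefers the longer lives.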
Doesn’t Rawls’s veil of ignorance prove too much here though? If both worlds would exist anyway, I’d rather be born into a world where a million people lived 101 year lifetimes than a world where 3^^^3 people lived 100 year lifetimes.
So then, Rawls’s veil has to be modified such that you are randomly chosen to be one of a quadrillion people. In scenario A, you live a million years. In scenario B, one trillion people live for one billion years each, the rest are fertilized eggs which for some reason don’t develop.
I’d still choose B over A.
Would you? A million probably isn’t enough to sustain a modern economy, for example. (Although in the 3^^^3 case it depends on the assumed density since we can only fit a negligible fraction of that many people into our visible universe).
If the economies would be the same, then yes. Don’t fight the hypothetical.
I think “fighting the hypothetical” is justified in cases where the necessary assumptions are misleadingly inaccurate—which I think is the case here.
But compared to 3^^^3, it doesn’t matter whether it’s a million people, a billion, or a trillion. You can certainly find a number that is sufficient to sustain an economy and is still vastly smaller than 3^^^3, and you will end up preferring the smaller number for a single additional year of lifespan. Of course, for Rawls, this is a feature, not a bug.
Existing people take priority over theoretical people. Infinitely so. This should be obvious, as the reverse conclusion ends up with utter absurdities of the “Every sperm is sacred” variety.
Mad grin
Once a child is born, it has as much claim on our consideration as every other person in our light cone, but there is no obligation to have children. Not any specific child, nor any at all. Reject this axiom and you might as well commit suicide over the guilt of the billions of potential children you could have that are never going to be born. Right now.
Even if you stay pregnant till you die and never masturbate, this would effectively not help at all: each conception moves one potential from the space of “could be” to the space of “is”, but at the same time eliminates at least several hundred million other potential children from the possibility space. That is just how human reproduction works.
TL:DR; yes, yes they are. It is a silly question.
Does this mean that I am free to build a doomsday weapon that, 100 years from now, kills everyone born after September 4th, 2013, if that gets me a cookie?
Not necessarily. It would merely be your obligation to have as many children as possible, while still ensuring that they are healthy and well cared for. At some point, having an extra child will make all your children less well off.
Why is there a threshold at birth? I agree that it is a convenient point, but it is arbitrary.
Why should I commit suicide? That reduces the number of people. It would be much better to start having children. (Note that I am not saying that this is my utility function).
The “infinitely so” part seems wrong, but the idea is that 4D histories which include a sentient being coming into existence, and then dying, are dispreferred to 4D world-histories in which that sentient being continues. Since the latter type of such histories may not be available, we specify that continuing for a billion years and then halting is greatly preferable to continuing for 10 years then halting. Our degree of preference for such is substantially greater than the degree to which we feel morally obligated to create more people, especially people who shall themselves be doomed to short lives.
The switch from consequentialist language (“4D histories which include… are dispreferred”) to deontological language (“…the degree to which we feel morally obligated to create more people”) is confusing. I agree that saving the lives of existing people is a stronger moral imperative than creating new ones, at the level of deontological rules and virtuous conduct, which is a large part of everyday human moral reasoning. I am much less clear that, when evaluating 4D histories, I assign higher utility to one with few people living long lives than to one with more people living shorter lives. Actually, I tend towards the opposite intuition, preferring a world with more people who live less (as long as their lives are still well worth living, etc.).
Not sure what part of this comment tree this belongs so just posting it here where it’s likely to be seen:
It struck me that these tradeoffs may not actually be a thing at all once you dissolve the “person” abstraction. It’s possible that something like the following is optimal: half the universe is dedicated to searching the space of all experiences, in order, starting with the highest-utility / most meaningful / lowest-hanging fruit. These are then aggregated, metadata is added, and the result is sent to the other half, which is tiled with minimal context-experiencing units equivalent to individual people’s subjective whatever. In the end, you get the equivalent of half the number of individual people you would have had if that were your only priority, each having the utility of a single person with the entire future history of half the universe dedicated to it, including the context of history.
That’s the best-case scenario. It’s pretty certain SOME aspect or another of the fragile godshatter will disallow it, obviously.
Yeah, this was basically just tangential musing.
If by “old humans” you mean healthy adults, yes. If you mean this, no. (IMO—YMMV.)
Death isn’t just a negative for the dead person—it also causes paperwork and expenses, destruction of relationships, and grief among the living.
This is true, but in my experience it is usually used to massage models that don’t consider death a disutility into giving the right answers. In fact, I can’t think of ever hearing this argument used for any other reason in meatspace.
(Replying to this comment out of context on the Recent Comments.)
The context is someone asking whether it’s better to stop existing people from dying or just make new people.
Hmm. I guess I’m going to cautiously say “called it!”
Yes.
Because?
A level 5 character is more valuable than a level 1 character.
A person who is older has more to give the world and has had more invested in them than a baby; they’re a lot less replaceable.
Also, I like ’em more.