The obvious answer is “Everyone dies if and when they feel like it. If you want to die after 100 years, by all means; if you feel like living for a thousand years, that’s fine too; totally up to you.”
In any case that seems to me to be much more obvious than “we (for some value of ‘we’) decide, for all of humanity, how long everyone gets to live”.
In other words, I don’t think there’s a fact of the matter about “if people should die after 100 years, a thousand years, or longer or at all”. The question assumes that there’s some single answer that works for everyone. That seems unlikely. And the idea that it’s OK to impose a fixed lifespan on someone who doesn’t want it is abhorrent.
Additionally — this is re: shminux’s comment, but is related to the overall point — “Good for humanity as a whole” and “advantage for the species as a whole” seem like nonsensical concepts in this context. Humanity is just the set of all humans. There’s no such thing as a nebulous “good for humanity” that’s somehow divorced from what’s good for any or every individual human.
In other words, I don’t think there’s a fact of the matter about “if people should die after 100 years, a thousand years, or longer or at all”. The question assumes that there’s some single answer that works for everyone. That seems unlikely.
Not necessarily true. The question posits the existence of an optimal outcome. It just neglects to mention what, exactly, said outcome would be optimal with respect to. It would probably be necessary to determine the criteria a system that accounts for immortality has to meet in order to satisfy us, before we start coming up with solutions.
The obvious answer is “Everyone dies if and when they feel like it. If you want to die after 100 years, by all means; if you feel like living for a thousand years, that’s fine too; totally up to you.”
A limited distribution of resources somewhat complicates the issue, and even with nanotechnology and fusion power there would still be the problem of organizing a system that isn’t inherently self-destructive.
I think I agree with the spirit of your answer: “We can’t possibly figure out how to do that, and in any case doing so wouldn’t feel right, so we’ll let the people involved sort it out amongst themselves.” But there are a lot of problems that can arise from that. There would probably need to be some sort of system of checks and balances, but that would probably deteriorate over time, and has the potential to turn the whole thing upside down by itself. I doubt you’ll ever be able to really design a system for all of humanity.
And the idea that it’s OK to impose a fixed lifespan on someone who doesn’t want it is abhorrent.
To you, perhaps. Well, and me. Your intuitions on the matter are not universal, however. Far from it, as our friends’ comments show.
My main problems (read: ones that don’t rest entirely on feelings of moral sacredness) with such an idea would be the dangerous vulnerability of the system it describes to power grabs, its capacity to threaten my ambitions, and the fact that, if implemented, it would lead to a world that’s boring all around. (I mean, if you can fix the lifespans, then you already know the ending: the person dies. Why not just save yourself the trouble and leave them dead to begin with?)
If resources are limited and population has reached carrying capacity — even if those numbers are many orders of magnitude larger than today — then each living entity would get to have one full measure of participating in the creation of a new living entity, and then enough additional time such that the average age at which one participates in life-creation is the midpoint between birth and death. So with sexual reproduction, you’d get to have two kids, and then when your second kid is as old as you were when your first kid was born, it would be your turn to die. I suspect in that world I would decide to have my second kid eventually, and thus I’d end up dying when my age was somewhere in the 3 digits.
Obviously, that solution is “fair and stable”, not “optimal”. I’m not arguing that that’s how things should work — and I can easily imagine ways to change it that I’d view as improvements — but it’s a nice simple model of how things could be stable.
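To make the arithmetic of that rule concrete, here’s a minimal sketch (the ages are hypothetical, just for illustration): if you have your first child at age a1 and your second at age a2, you die when the second child reaches age a1, so your lifespan is a1 + a2, and your average age of life-creation is exactly half your lifespan — the midpoint of birth and death, as the model requires.

```python
def death_age(first_child_age, second_child_age):
    # Under the rule: you die when your second child is as old
    # as you were when your first child was born.
    return second_child_age + first_child_age

# Hypothetical example: first kid at 40, second at 90.
lifespan = death_age(40, 90)           # 90 + 40 = 130
mean_creation_age = (40 + 90) / 2      # 65

# The average age of life-creation is the midpoint of
# birth (0) and death (lifespan), as the model specifies.
assert mean_creation_age == lifespan / 2
```

Note that with two kids per person and a lifespan pinned to the ages of child-creation, each generation exactly replaces itself, which is what makes the model stable at carrying capacity.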
Well, that model may be stable (I haven’t actually thought it through sufficiently to judge, but let’s grant that it is) — but how exactly is it “fair”? I mean, you’re assuming a set of values which is nowhere near universal, even within humanity. I’m really not even sure what your criteria here are for fairness (or, for that matter, optimality).
My problem with what you describe is the same as my problem with what shminux says in some of his comments, and with a sort of comment that people often make in similar discussions about immortality and human lifespan. Someone will describe a set of rules, which, if they were descriptive of how the universe worked, would satisfy some criteria under discussion (e.g. stability), or lack some problem under discussion (e.g. overpopulation).
Ok. But:
Those rules are not, in fact, descriptive of how the universe works (or else we wouldn’t be having this discussion). Do you think they should be?
If so, how do we get from here to there? Are we modifying the physical laws of the universe somehow? Are we putting enforced restrictions in place?
Who enforces these restrictions? Who decides what they are in the first place? Why those people? What if I disagree? (i.e. are you just handwaving away all the sociopolitical issues inherent in attempts to institute a system?)
For instance, you say that “each living entity would get to have” so-and-so in terms of lifespan. What does that mean? Are you suggesting that the DNA of every human be modified to cause spontaneous death at some predetermined age? Aside from the scientific challenge, there are… a few… moral issues here. Perhaps we’ll just kill people at some age?
What I am getting at is that you can’t just specify a set of rules that would describe the ideal system when in reality, getting from our current situation to one where those rules are in place would require a) massive amounts of improbable scientific work and social engineering, and b) rewriting human terminal values. We might not be able to do the former, and I (and, I suspect, most people, at least in this community) would strongly object to the latter.