Thank you, but that post doesn’t seem to answer my question, since it doesn’t address how death interacts with our cognitive biases. I agree that if we were perfectly rational beings, immortality would be great; however, I don’t see how that implies that, given our current state, the choice to live forever (or for a really long time) would be in our best interest.
Similarly, I don’t see how that argument indicates that we should develop longevity technologies before we solve the problem of human irrationality and evil. For example, would a technology that let people live 150 years cause more benefit than harm, or would it cause wars over who gets to use it?
You keep using the words “we” and “our”, but “we” don’t have lifespans; individual humans do. So the relevant questions, it seems to me, are: is removing the current cap on lifespan in the interest of any given individual? And: is removing the current cap on lifespan, for all individuals who wish it removed, in the interests of other individuals in their (family, country, society, culture, world)?
Those are different questions. Likewise, the choice to make immortality available to anyone who wants it, and the choice to actually continue living, are two different choices. (Actually, the latter is an infinite sequence[1] of choices.)
“Similarly, I don’t see how that argument indicates that we should develop longevity technologies before we solve the problem of human irrationality and evil.”
No one is necessarily claiming that we should. Like I say in my top-level comment, this is a perfectly valid question, one which we would do well to consider in the process of solving the engineering challenge that is human lifespan.
[1] Maybe. Someone with a better-exercised grasp of calculus correct me if I’m wrong — if I’m potentially making the choice continuously at all times, can it still be represented as an infinite sequence?
“You keep using the words “we” and “our”, but “we” don’t have lifespans; individual humans do.”
Of course, but “we” is common shorthand for decisions made at the level of society, even though they are collections of individual decisions (e.g., should we build a bridge, or should we legalize marijuana). Do you think that using standard English expressions is problematic? (I agree that both the question of benefit to the self and the question of benefit to others are important, and I think the issue of cognitive biases is relevant to both.)
I just looked at your comment, and I agree with that argument, but that hasn’t been my impression of the views of many on this site (and it clearly isn’t the view of researchers like de Grey). However, I am relatively new here and may be mistaken about that. Thank you for clarifying.
I don’t think anyone’s willing to fight a war just to prevent another country’s life expectancy from increasing.
Maybe, but on the other hand there is inequity aversion: http://en.wikipedia.org/wiki/Inequity_aversion
Also, there is the possibility of fighting over the resources needed to use that technology (either within a society or between societies). Do you disagree with the general idea that, without greater rationality, extreme longevity will not necessarily be beneficial, or do you only disagree with the example?
That sounds more like something that would motivate the side that’s not already long-lived; they’d already have plenty of motivation. I’m saying the country that has access to the tech but wants to restrict it isn’t going to have the will to fight.
Well, “not necessarily be beneficial” strictly means “is not certain to be beneficial”, but connotationally means “is likely enough to prove non-beneficial that we shouldn’t do it”, so I ADBOC (agree denotationally, but object connotationally): it’s conceivable that this could go wrong, but I think a beneficial outcome is likely enough that we should do it anyway.
Yes, and that was the meaning of my initial comment. This is a concern in today’s world, where resources are limited enough that not everyone would be able to make use of such a technology. The country that has it (or the subset of people within a country who have it) will be motivated to defend the resources necessary to use it. This isn’t an argument against such research in a world without scarcity, but that isn’t our world.
I am still not sure whether it is likely to be beneficial on net for heavily emotional and biased humans like us.