We live in a special period of time when radical life extension is not far off. We just need to survive until the moment when all the necessary technologies have been created.
The positive scenario suggests this could happen by 2050 (plus or minus 20 years), when humanity will have created advanced and powerful AI, highly developed nanotechnology, and a cure for aging.
Many young people could reach the year 2050 without doing anything special.
But for many other people, an opportunity to extend their lives by just 10-20 years is the key to achieving radical life extension (for at least a thousand years, perhaps even more), because it would let them survive until strong life extension technologies are created.
I think this sort of transhumanism is part of the reason that a lot of people don’t take life extension seriously. I don’t know if it’s worse that concrete timelines are almost never well-supported and almost certainly wrong, or that they’re superfluous to the argument in favor of researching and applying life extension methods.
The only thing you have to argue is that continuous technological progress, molecular nanotechnology, cryonics, mind uploading, and biological immortality are in line with our current scientific generalizations, and that the invention of these technologies is far more probable than it has been historically, such that it’s relevant to decision-making. You don’t need to give exact years or even 20-year confidence intervals. It’s not that no one should ever try to come up with a timeline estimate, but it’s not necessary and it gives people who don’t take this seriously the opportunity to reduce your social status by poking holes in arguments that are noncentral to the main point, a point which is sound and life-or-death important.
If you do have something to support your timeline, then that seems like something worth making explicit.
You don’t need to give exact years or even 20-year confidence intervals. It’s not that no one should ever try to come up with a timeline estimate, but it’s not necessary and it gives people who don’t take this seriously the opportunity to reduce your social status by poking holes
By saying that these technologies are relevant to the decision-making of a currently alive human, you are implicitly giving a timeline estimate. It’s just hidden “so that people don’t lower my social status”. That sounds like the opposite of rational debate to me.
These arguments rely on timelines, which IMO should be explicitly stated in the appropriate probabilistic language.
Ok. I generally agree with your response. I ran into that nuance about implicit timelines before, when I was giving a list of which existential risks would be rendered negligible on an arbitrarily long timescale by the construction of a successful extraterrestrial colony. Molecular nanotechnology was an atypical example of a risk that would be mitigated in the short term but not in the long term. So I agree that saying, without qualification, that there is no need for any timeline estimates is misleading. You can’t talk about probability estimates without talking about time constraints, even if only implicitly.
My real objection is to saying things that implicitly give estimates like P(Invention of artificial general intelligence in 5 years) = 0.95. That is wildly overconfident. And to things like this:
A human who has the motivation to extend his life, a proper understanding of how to achieve it and the necessary skills to realize his plans, should be considered as almost a superman.
Yes, that was in the article. So I still think it’s valid to say that this article contains elements of what makes for bad transhumanism. You should be appropriately confident, you shouldn’t color the wrong parts with your values, and you shouldn’t say things that do nothing helpful and probably do something harmful, even if that harm is just lowering your social status in the eyes of the people you need to persuade.
It’s just hidden “so that people don’t lower my social status”. That sounds like the opposite of rational debate to me.
I do still think that actually saving as many people as possible might not look like your ideal of rational debate. My mind jumps to the people who make surface-level generalizations on the level of “This is weird, so it is wrong.” Since we want to actually save lives, we should ask ourselves how effective saying something like “My probability estimate for X in Y years is Z; my probability estimate for...” would actually be, and we should also be concerned with our social status, because it affects how effective we are at persuading other humans.
I think the OP would be much better if it were rephrased with probabilistic timelines, even if they were clearly wrong/overconfident.
A human who has the motivation to extend his life, a proper understanding of how to achieve it and the necessary skills to realize his plans, should be considered as almost a superman.
This could be deciphered to mean “there is a 95% chance that an average motivated individual of age 25 today will ride the life extension bandwagon to live to be >1000 years old”.
Which IMO is incorrect, but I like it much more now that it’s making itself maximally vulnerable to criticism.
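For instance, a maximally criticizable rephrasing would spell the claim out as a product of explicit conditional probabilities, so that each factor can be disputed on its own. The decomposition and the numbers below are purely illustrative placeholders, not figures taken from the OP or from anyone in this thread:

P(a motivated 25-year-old today lives to 1000)
= P(they survive to 2050)
× P(robust life extension technology exists by 2050 | they survive)
× P(the technology reaches them and nothing else kills them | it exists)
≈ 0.9 × 0.4 × 0.5 = 0.18

Whatever the right numbers are, stating the claim this way invites the useful kind of hole-poking: a critic has to say which factor is wrong and why, instead of dismissing the whole thing as weird.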
I like the map and have liked your other maps.