You don’t need to give exact years or even 20-year confidence intervals. It’s not that no one should ever try to come up with a timeline estimate, but it isn’t necessary, and it gives people who don’t take this seriously an opportunity to reduce your social status by poking holes in it.
By saying that these technologies are relevant to the decision-making of a human alive today, you are implicitly giving a timeline estimate. It’s just hidden “so that people don’t lower my social status”. That sounds like the opposite of rational debate to me.
These arguments rely on timelines, which IMO should be explicitly stated in the appropriate probabilistic language.
Ok. I generally agree with your response. I had noticed that nuance about implicit timelines before, when I was putting together a list of which existential risks would be rendered negligible, on an arbitrarily long timescale, by the construction of a successful extraterrestrial colony. Molecular nanotechnology was an atypical example: a risk that would be mitigated in the short-term future but not in the long term. So I agree that saying, without qualification, that there is no need for any timeline estimates is misleading. You can’t talk about probability estimates without talking about time constraints, even if only implicitly.
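To make that concrete, here is a minimal sketch with purely made-up numbers (not anyone’s actual estimate): the same per-year probability cashes out to very different cumulative probabilities depending on the horizon, which is why a claim only pins anything down once a time frame is attached.

```python
# Minimal sketch with purely illustrative numbers -- not anyone's actual estimate.
# The point: a probability claim only constrains expectations once a time
# horizon is attached, because the same per-year chance implies very
# different cumulative probabilities over different horizons.

def cumulative_probability(annual_p: float, years: int) -> float:
    """P(event occurs at least once within `years` years), assuming a
    constant, independent probability `annual_p` in each year."""
    return 1 - (1 - annual_p) ** years

annual_p = 0.02  # hypothetical 2% chance per year
for horizon in (5, 20, 50):
    print(f"P(event within {horizon} years) = {cumulative_probability(annual_p, horizon):.2f}")
# -> 0.10, 0.33, 0.64
```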
My real objection is to saying things that implicitly give estimates like P(invention of artificial general intelligence within 5 years) = 0.95. That is wildly overconfident. And to things like this:
A human who has the motivation to extend his life, a proper understanding of how to achieve it and the necessary skills to realize his plans, should be considered as almost a superman.
Yes, that was in the article. So I still think it’s fair to say that there are elements of what makes for bad transhumanism in this article. You should be appropriately confident, you shouldn’t color the wrong parts with your values, and you shouldn’t say things that do nothing helpful and probably do something harmful, even if the harm is just lowering your social status in the eyes of the people you need to persuade.
It’s just hidden “so that people don’t lower my social status”. That sounds like the opposite of rational debate to me.
I do still think that actually saving as many people as possible might not look like your ideal of rational debate. My mind jumps to the people making surface-level generalizations on the level of “This is weird, so it is wrong.” Since we want to actually save lives, we should ask ourselves how effective saying something like “My probability estimate for X in Y years is Z; my probability estimate for...” would actually be, and we should also be concerned with our social status, because it affects how effective we are at persuading other humans.
I think the OP would be much better if it were rephrased with probabilistic timelines, even if they were clearly wrong/overconfident.
A human who has the motivation to extend his life, a proper understanding of how to achieve it and the necessary skills to realize his plans, should be considered as almost a superman.
This could be read as meaning “there is a 95% chance that an average motivated 25-year-old today will ride the life-extension bandwagon to live to be >1000 years old”.
Which IMO is incorrect, but I like it much more now that it’s making itself maximally vulnerable to criticism.