Allow me to invent a distinction (or put a slight existing one under the microscope).
“Poorly stated”: not explicit, without fixed meaning; the words written may mean any of several things.
“Poorly worded”: worded so as to mean one thing which is wrong, perhaps even obviously wrong. In that case the writer may intend for people to assume he didn’t mean the obviously wrong thing, but instead meant the less literal, plausibly correct thing.
I have several times criticized the use of the words “immortal” and “immortality” by several people, including EY. I agree with the analysis by Robin Hanson here, in which he argues that the word “immortality” distracts from what people actually intend.
I characterize the use of “immortality” on this site as obviously wrong in many of the contexts in which it appears, where it is intended to mean the nearer thing: “living a very long time and not being as fragile as humans are now.” In other words, it is often a poor wording of a clear concept.
I’m not sure if you agree, or instead think that the goal of very long life is unclear, or poorly justified, or just wrong, or perhaps something else.
As far as I understand, EY believes that humans and/or AIs will be able to survive until at least the heat death of the Universe, which would render such entities effectively immortal (i.e., as immortal as it is possible to be). That said, I do agree with your assessment.
If someone believed that no human and/or AI will ever be able to last longer than 1,000 years—perhaps any mind goes mad at that age, or explodes due to a law of the universe dealing with mental entities, or whatever—that person would be lambasted for using “immortal” to mean beings “as immortal as it is possible to be in my opinion.”
It is unfortunate that we don’t have clearer single words for the more plausible, more limited alternatives, closer to “living a very long time and not being as fragile as humans are now.”
Come to think of it, if de Grey’s SENS program actually succeeded, we’d get the “living a very long time” part but not the “not being as fragile as humans are now” part, so it would be useful to have terms that distinguish the two.
And all of the variations on these are distinct from uploading/ems, which bring the possibility of distributed backups.
Unfortunately, I suspect that neither of these is very likely to ultimately happen. SENS has curing cancer as a subtask. Uploading/ems requires a scanning technology fast enough to scan a whole human brain and fine-grained enough to distinguish synapse types. I think other events will happen first.
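The scanning requirement can be made concrete with a rough back-of-envelope sketch. All the figures below are my own order-of-magnitude assumptions (common ballpark estimates, not established scan parameters), just to show the scale of the data involved:

```python
# Back-of-envelope estimate of the data volume implied by scanning a
# whole human brain at a resolution fine enough to classify synapses.
# Every constant here is an assumed order-of-magnitude figure.

NEURONS = 8.6e10           # ~86 billion neurons (common estimate)
SYNAPSES_PER_NEURON = 1e3  # assumed order of magnitude
VOXEL_NM = 10              # assumed voxel edge needed to tell synapse types apart
BRAIN_VOLUME_CM3 = 1.2e3   # ~1200 cm^3 brain volume

synapses = NEURONS * SYNAPSES_PER_NEURON
voxels = BRAIN_VOLUME_CM3 * (1e7 / VOXEL_NM) ** 3  # 1 cm = 1e7 nm

print(f"synapses to distinguish: {synapses:.1e}")
print(f"voxels at {VOXEL_NM} nm resolution: {voxels:.1e}")
print(f"raw data at 1 byte/voxel: {voxels / 1e18:.0f} exabytes")
```

Even under these charitable assumptions the raw data runs to thousands of exabytes, which gives a sense of why the scanning step, quite apart from interpretation, is a serious obstacle.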
Yeah, good point. That makes sense.
(Waves to Clippy)