It’s perhaps useful to compare this with the more general topic of mortality. Every generation assumes they’ll be immortal until sometime in mid-to-late adulthood, with various justifications from afterlife to medicine to (recently) upload or simulation. So far, it’s not worked out that way.
Life is 100% fatal within 120 years, and usually within 90. Lifespans may continue to slowly and asymptotically extend, for both median and tail cases, but it’s pure fantasy to assume that, for you, you’ll see the rate of extension exceed the rate that time passes (one year per year, inexorably).
The concept of “dignity points” does not resonate with me AT ALL. The concept of “conscious-experience-seconds” does (not perfectly, but it’s the right direction). Optimizing for my model of current humans’ near- to medium-term satisfaction/utility (again, no perfect descriptor found, but this gestures toward it) is all I have, and that’s enough. The far future (“far” in terms of emotional distance and predictability, not necessarily time; the far future could arrive next year) is so high-variance that I don’t think I have much ability, if any, to choose how I impact it.
Possibly irrelevant, but I do want to admit that this is NOT Utilitarianism, even if the underlying beliefs could be compatible—I recognize that I’m optimizing for my experiences (including my imagination of others’ experiences), not anyone’s actual utility or even those preferences which I don’t understand/agree with. I admit that I don’t actually have everyone’s best interest at the top of my priorities, and that I care VERY unequally about different people, for reasons that are accidental rather than objective.
So the question for you is, in what ways are “doing what you’re doing” not bringing you in-the-moment satisfaction or utility-experience-seconds? The long run is strictly the sum of all short runs—you need to do at least some local optimization to make global optimization relevant.
but it’s pure fantasy to assume that, for you, you’ll see the rate of extension exceed the rate that time passes (one year per year, inexorably).
It’s pure fantasy today, correct. That’s because the current rate of extension is essentially zero: there are no FDA-approved drugs that slow aging at all, and no treatment whatsoever.
However, your argument rings false, because it is like someone before the age of powered flight reasoning from balloon travel: “you’ll never make it to France faster than the wind.”
It’s not an interesting claim. Aging is ultimately a mechanism, and mechanisms can be manipulated. Other mammals (naked mole rats, among others) use longevity strategies that could potentially be copied. Newly developed tools almost never used in humans (somatic gene editing) could, once perfected, make arbitrary genetic changes. And AI can now predict protein folding and rationally design proteins and drugs at a scale that simply was not possible before.
It may be fantasy for all humans alive today, but at some point there will be living observers who see their projected lifespan stop being capped at roughly 120 and start growing by much more than one year per year. If you treated aging even imperfectly, your projected lifespan would hurtle away from you, gaining decades for each modest reduction in the rate of aging. AI/RL-based life support systems would yield similar gains.
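To see why even a partial slowdown compounds, here is a minimal sketch under the standard Gompertz mortality model (hazard growing exponentially with age). The parameter values are illustrative assumptions roughly in the range fitted to modern human mortality, not figures from any source:

```python
import math

def life_expectancy(a=1e-4, b=0.085, dt=0.1, horizon=200):
    """Expected lifespan under a Gompertz hazard mu(t) = a * exp(b * t).

    a (baseline hazard) and b (rate of aging) are illustrative
    assumptions, not fitted values. Integrates the survival curve
    numerically in steps of dt years.
    """
    survival, expectancy, t = 1.0, 0.0, 0.0
    while t < horizon and survival > 1e-9:
        hazard = a * math.exp(b * t)
        expectancy += survival * dt       # left Riemann sum of S(t)
        survival *= math.exp(-hazard * dt)
        t += dt
    return expectancy

baseline = life_expectancy()              # aging rate as-is
slowed = life_expectancy(b=0.085 * 0.9)   # aging rate cut by 10%
```

With these made-up parameters, a 10% cut in the aging rate `b` already adds several years of expected life, and larger cuts add decades; the point is the nonlinearity, not the exact numbers.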
If the mechanism were AGI-based surgical robotics, large-scale biolabs that test mockups of human bodies and grow organs in vitro, and AGI-controlled ICUs, the gain could easily run from 120 to ~10,000 years over about a decade: a “hard takeoff” in longevity.
That is, I predict that a system able to integrate all biological knowledge (because it has more memory than humans), with access to a large set of biolabs built by self-replicating robotics (millions of separate, isolated “cells,” each running experiments in parallel) and the right goal heuristics to support such an endeavor, would gain the knowledge sufficient to keep humans alive essentially indefinitely within about 10 years.
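As a back-of-envelope illustration of the parallelism claim (every number below is an invented assumption, not an estimate from any source):

```python
# Invented assumptions, for scale only.
cells = 1_000_000                 # isolated, automated experiment "cells"
experiments_per_cell_per_day = 4
years = 10

total_experiments = cells * experiments_per_cell_per_day * 365 * years
print(f"{total_experiments:.2e}")  # → 1.46e+10
```

For comparison, the entire indexed biomedical literature is on the order of 10^7 papers; several orders of magnitude more experiments, run in parallel under one integrated model, is the intuition behind the ten-year claim.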
Compared to the speed of compute systems, humans die rather slowly, and from detectable systemic failures. Human medicine can’t do anything about them, because there are thousands of possible failure modes and fixing them would require flawlessly transplanting de-aged organs regrown from scratch; but for a sufficiently capable system, that is not actually that hard a problem.
Patients under the supervision of a control system that can react quickly enough, with a model of biology deep enough that there are no edge cases, would be effectively immortal, in that no sequence of events could kill them faster than the system can react. Remember, if an unusual new failure appears, it can request new experiments from a large set of biolabs, some of which house living mockups of humans with genetics similar to the patient’s.
And unlike humans, the system could do all the steps of analyzing new scientific data and deriving the new optimal treatment policy in mere seconds, theoretically even while a patient is dying.
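The react-faster-than-failure loop described above can be sketched abstractly. Everything here is hypothetical structure (the injected callables stand in for subsystems that don’t exist), not a real API:

```python
def icu_control_loop(read_vitals, diagnose, known_policies,
                     intervene, request_experiments, ticks=5):
    """One monitor -> diagnose -> act cycle per tick.

    read_vitals/diagnose/intervene/request_experiments are injected
    placeholder callables; known_policies maps a diagnosed failure
    mode to its current best treatment policy.
    """
    log = []
    for _ in range(ticks):
        failure = diagnose(read_vitals())
        if failure is None:
            log.append("stable")
        elif failure in known_policies:
            intervene(known_policies[failure])
            log.append(f"treated:{failure}")
        else:
            # Novel failure mode: farm out parallel experiments on
            # biological mockups, then fold the result into policy.
            known_policies[failure] = request_experiments(failure)
            log.append(f"researched:{failure}")
    return log
```

The design point is the last branch: an unknown failure doesn’t stall the system, it widens the model, so each edge case is only an edge case once.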
The bottleneck in this scenario becomes brain health, as receiving a brain transplant is not very useful. I’m not sure how much of an obstacle this will be in practice.
Thanks for the reply. I realize I made a mistake: I did not ensure that everyone reading the question was aware that “die with dignity” decompresses to something specific.
I have amended the question to include that information, and just to be sure, here is the link to the post that gave rise to the phrase.
I have a slightly different situation (less experience; optimistic but not sure about security mindset; didn’t update on my own, but understood and accepted the arguments very easily; in Russia and can’t easily leave), but I’m interested in answers to the same question!