The way I see it, when we’re talking about non-me humans, the vast majority of them will be replaced with people I probably like roughly the same amount, so my preference for longevity in general is mild.
Am I reading this incorrectly or are you saying that you don’t care about your friends and loved ones dying?
There are at least two ongoing clinical trials with the explicit goal of slowing aging in humans (TAME and PEARL), and that’s just the most salient example. At some point I’ll definitely make a post with a detailed answer to the question of “what can I do”. As for the problem not being solvable in principle, I don’t believe I’ve ever seen an argument for this that didn’t involve a horrendous strawman or quasi-religion of some sort.
I care about my friends and loved ones. I even care about strangers. I’m a fan of life extension research. But I’m not dedicating much of my resources to it—in the big picture, one human’s about as good as another, and in the small picture I don’t expect to have much chance of success, and don’t want to reduce my enjoyment of my remaining time on a crazy longshot.
I have to say that neither of those trials looks particularly promising on the “ending aging” front. They may slightly delay some problems (and that’s GREAT—living longer is, in fact, better), but that’s not anywhere near solving it in principle. Mind uploading might be a solution eventually, but I think it’s more likely that bio-brains continue dying and the immortal are digital from birth.
but that’s not anywhere near solving it in principle
Of course they’re not; that’s not the point. The point is that they can add more time for us to discover more cures—on top of the few decades most rationalists already have, considering the age distribution. During that time new approaches will likely be discovered, hopefully adding even more time, until we get to mind uploading, or nanobots constantly repairing the body, or some other complete solution. The concept is called longevity escape velocity.
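Here’s a toy Python sketch of that arithmetic, with completely made-up numbers (the function and every parameter are invented for illustration, not a forecast): each calendar year costs a year, but research adds some expectancy back, and if that added amount ever exceeds one year per year, remaining life expectancy stops shrinking.

```python
# Toy illustration of longevity escape velocity (LEV). All numbers are made up;
# this only shows the shape of the argument, not a real projection.

def years_survived(remaining=30.0, gain=0.2, acceleration=0.1, horizon=300):
    """Calendar years until remaining life expectancy hits zero (capped at `horizon`)."""
    for year in range(horizon):
        if remaining <= 0:
            return year               # expectancy ran out before escape velocity
        remaining -= 1.0              # one calendar year passes
        remaining += gain             # research adds some expectancy back
        gain *= 1 + acceleration      # progress compounds over time
    return horizon                    # expectancy never ran out within the horizon

print(years_survived())                              # 300: gain passes 1 yr/yr, escape velocity
print(years_survived(gain=0.05, acceleration=0.0))   # 32: progress too slow, no escape
```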
but I think it’s more likely that bio-brains continue dying and the immortal are digital from birth
Why would you think that?
And another question. Imagine you’ve found yourself with an incurable disease and 3 years to live. Moreover, it’s infectious and it has infected everyone you love. Would you try experimental cures and encourage them to try as well, or would you just give up so as not to reduce your enjoyment of the remaining time?
Imagine you’ve found yourself with an incurable disease and 3 years to live.
This is an obvious and common enough analogy that you don’t need to frame it as a thought experiment. I understand that I have an incurable disease. It’s longer than 3 years, I hope, but not by much more than an order of magnitude, certainly nowhere near two orders of magnitude. I’m not even doing everything I could in terms of lifestyle, exercise, and nutrition to extend it, let alone “experimental” cures. It’s not infectious, fortunately—everyone already has it.
Friends I’ve lost to disease, accident, or suicide ALSO didn’t universally commit to “experimental cures”—in all cases I know of, the cost of the long shots (non-monetary cost of side effects more than pure money, but some of that too) outweighed their perceived chance of success.
As Pascal’s Wager options go, giving up significant resources or happiness over the next decade for a VERY TINY chance of living longer seems to be among the less compelling formulations.
Equating high-risk/high-reward strategies with Pascal’s Wager is a way too common failure mode, and putting numbers on your estimates helps avoid it. How much is VERY TINY, how much do you think the best available options really cost, and how much would you be willing to pay (assuming you have that kind of money) for a 50% chance of living to 300 years?
To be clear, I’m not so much trying to convince you personally as trying to get a better general sense of the inferential distances involved.
I’d actually like to be convinced, but I suspect our priors differ by enough that it’s unlikely. I currently assign less than a 0.05% chance that I’ll live another 50 years (which would put me over 100), and a chance three orders of magnitude lower that I’ll live to 300. These are small enough that I don’t have as much precision in my beliefs as that implies, of course.
Conditional on significant lifestyle changes, I can probably raise those chances by 10x, from vanishingly unlikely to … vanishingly unlikely. Conditional on more money than I’m likely to have (which is already in the top few percent of humanity), maybe another 3x.
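Written out, those same guesses look like this (no new estimates, just the arithmetic; the variable names are mine):

```python
# The same guesses as above, written out as arithmetic -- nothing new.
p_live_to_100 = 0.0005                      # < 0.05% chance of another 50 years
p_live_to_300 = p_live_to_100 / 1000        # three orders of magnitude lower: 5e-07

p_with_lifestyle = p_live_to_100 * 10       # ~0.5%: still vanishingly unlikely
p_with_lifestyle_and_money = p_with_lifestyle * 3   # ~1.5%: still nowhere near 50%
```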
I don’t believe there are any tradeoffs I can make which would give me a 50% chance to live to 300 years.
That’s, like, a 99.95% probability, one-in-two-thousand odds. You’d have two orders of magnitude better chances of survival if you were to literally shoot yourself with a literal gun. I’m not sure you can forecast anything at all (about humans or technologies) with this degree of certainty decades into the future, and definitely not that every single one of dozens of attempts in a technology you’re not an expert in will fail, and that every single one of hundreds of attempts in another technology you’re not an expert in (building aligned AGI) will fail too.
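Spelling out the conversion (these are just your own numbers restated, nothing added):

```python
# Restating the estimate above: a 0.05% chance of reaching 100.
p_live_to_100 = 0.0005
print(f"{1 - p_live_to_100:.2%}")    # 99.95% -- the implied probability of not making it
print(1 / p_live_to_100)             # 2000.0 -- i.e. one-in-two-thousand odds
print(f"{p_live_to_100 * 100:.0%}")  # 5% -- what "two orders of magnitude better" would mean
```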
I don’t believe there are any tradeoffs I can make which would give me a 50% chance to live to 300 years.
I don’t believe it either; it’s a thought experiment. I assumed that would be obvious, since it’s a very common technique for estimating how much one should value low probabilities.
I think we’ve found at least one important crux, so I’m going to bow out now. I realize I misspoke earlier—I don’t much care if I become convinced, but I very much hope you succeed in keeping me and you and others alive much longer.