I agree with the Statement. As strongly as I can agree with anything. I think the hope of current humans achieving… if not immortality, then very substantially increased longevity… without AI doing the work for us, is at most a rounding error. And an ASI that was even close to aligned, that found it worth reserving even a billionth part of the value of the universe for humans, would treat this as the obvious most urgent problem and pretty much solve death if there’s any physically possible way of doing so. And when I look inside, I find that I simply don’t care about a glorious transhumanist future that doesn’t include me or any of the particular other humans I care about. I do somewhat prefer being kind / helpful / beneficent to people I’ve never met, and very slightly prefer that even for people who don’t exist yet, but it’s far too weak a preference to trade off against any noticeable change to the odds of me and everyone I care about dying. If that makes me a “sociopath” in the view of someone or other, oh well.
I’ve been a supporter of MIRI, AI alignment, etc. for a long time, not because I share that much with EA in terms of values, but because the path to the future having any value has seemed for a long time to route through our building aligned ASI, which I consider as hard as MIRI does. But when the “pivotal act” framing started being discussed, rather than actually aligning ASI, I noticed a crack developing between my values and MIRI’s, and the past year of advocacy for “shut it all down” and so on has blown that crack wide open. I no longer feel like a future I value has any group trying to pursue it. Everyone outside of AI alignment is either just confused, flailing around with unpredictable effects, or badly mistaken and actively pushing towards turning us all into paperclips. Those in AI alignment are either extremely unrealistically optimistic about plans that I’m pretty sure, for reasons that MIRI has argued, won’t work; or, like current MIRI, they tell me to stake my personal presence in the glorious transhumanist future on cryonics (and what of my friends and family members whom I could never convince to sign up? What of the fact that, IMO, current cryonics practice probably doesn’t even prevent info-theoretic death, let alone give one a good shot at actually being revived at some point in the future?).
I happen to also think that most plans for preventing ASI from happening soon, other than “shut it all down” in a very indiscriminate way, just won’t work; that is, I think we’ll get ASI (and probably all die) pretty soon anyway. And I think “shut it all down” is very unlikely to be societally selected as our plan for how to deal with AI in the near term, let alone effectively implemented. There are forms of slowdown, in which particular actors choose to go slower on their paths to ASI, that I would support, but only if those actors are doing so specifically to attempt to solve alignment before ASI, and only if it won’t slow them down so much that someone else just makes unaligned ASI first anyway. And of course we should forcibly stop anyone who is on the path to making ASI without even trying to align it (because they’re mistaken about the default result of building ASI without aligning it, or because they think humanity’s extinction is good, actually), although I’m not sure how capable we are of stopping them. But I want an organization that is facing up to the real, tremendous difficulty of making the first ASI aligned, and trying to do that anyway, because no other option actually has a result that they (or I) find acceptable. (By the way, MIRI is right that “do your alignment homework for you” is probably the literal worst possible task to give to one’s newly developed AGI, so e.g. OpenAI’s alignment plan seems deeply delusional to me and thus OpenAI is not the org for which I’m looking.)
I’d like someone from MIRI to read this. If no one replies here, I may send them a copy, or something based on this.
Thank you for writing this. I usually struggle to find thoughts that resonate, but this indeed resonates. Not all of it, but many key points find a reflection in my own views, which I’m going to share:
Biological immortality (radical life extension) without ASI (and reasonably soon) hardly looks achievable. It’s a difficult topic, but for me even Michael Levin’s talks are not inspiring enough. (I would prefer to become a substrate-independent mind, but, again, imagine all the R&D without substantial super-human help.)
I’m a rational egoist (more or less), so I want to see the future and have pretty much nothing to say about a world without me. Enjoying not being alone on the planet is just a personal preference. (I mean, the origin system is good, nice planets and stuff, but what if I want to GTFO?) Also, I don’t trust imaginary agents (gods, evolution, future generations, AGIs), though creating some of them may be rational.
Let’s say that early Yudkowsky influenced my transhumanist views. To be honest, I feel somewhat betrayed. Here my position is close to what Max More says: basically, I value the opportunities, even if I don’t like all the risks.
I agree that AI progress is really hard to stop. The focus on scaling leaves possible algorithmic breakthroughs underexplored; there is so much still to be done, I believe. The tech world will keep working on it even with mediocre hardware. So we are headed to ASI anyway.
And all the alignment plans… Well, yes, they tend to be questionable. For me, creating human-like agency in AI (to negotiate with) is more about capabilities, but that’s a different story.
I respectfully disagree on the first point. I am a doctor myself, and given the observable increase in investment in life extension (largely in well-funded stealth startups or Google’s Calico), I have ~70% confidence that in the absence of superhuman AGI or other x-risks in the near term, we have a shot at getting to longevity escape velocity in 20 years.
While my p(doom) for AGI is about 30% now, down from a peak of 70% maybe two years ago, after the demonstration that it didn’t take complex or abstruse techniques to reasonably align our best AI (LLMs), I can’t fully endorse acceleration on that front, because I expect the tradeoff in life expectancy to be net negative.
YMMV; it’s not like I’m overly confident myself at 70% for life expectancy being uncapped, and we’re probably never going to find out either way. It just doesn’t look like a fundamentally intractable problem in isolation.
I also see multiple technological pathways that would get us to longevity escape velocity that seem plausible without AGI in that timeframe.
If nothing else, with advances in tissue engineering I expect we will be able to regrow and replace every organ in the body except the brain by mid-century.
But I also think a lot of what’s needed is culturally/politically/legally fraught in various ways. I think if we don’t get life-extending tech, it will be because we made rules inadvertently preventing it or pretending it’s better not to have it.
I have ~70% confidence that in the absence of superhuman AGI or other x-risks in the near term, we have a shot at getting to longevity escape velocity in 20 years.
Is the claim here a 70% chance of longevity escape velocity by 2043? It’s a bit hard to parse.
If that is indeed the claim, I find it very surprising, and I’m curious about what evidence you’re using to make that claim? Also, is that LEV for like, a billionaire, a middle class person in a developed nation, or everyone?
Yes, you can reformat it in that form if you prefer.
This is a gestalt impression based on my best read of the pace of ongoing research (significantly ramped up compared to where investment was 20 years ago), human neurology, synthetic organs, and finally non-biological alternatives like cybernetic enhancement. I will emphasize that LEV != actual biological immortality, but it leads to at least a cure for aging if nothing else.
Aging, while complicated and likely multifactorial, doesn’t seem intractable to analysis or mitigation. We have independent research projects tackling individual aspects, though, as I’ve stated, most of them are in stealth mode even if they’re well-funded, and solving any individual mechanism is insufficient because aging itself is an exponential process.
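(As a rough illustration of that exponential point, here is a toy Gompertz-style calculation, a sketch of my own rather than anything from the comment above; the hazard parameters are purely assumed for illustration, not measured values. It shows why removing even a sizable single cause of death buys only a modest gain in the odds of reaching old age, because the remaining hazard keeps growing exponentially.)

```python
import math

def survival_probability(age_from=30, age_to=90, hazard_at_30=1e-3,
                         doubling_years=8.0, hazard_scale=1.0):
    """P(surviving age_from -> age_to) under a Gompertz-style hazard that
    starts at hazard_scale * hazard_at_30 and doubles every doubling_years."""
    gamma = math.log(2) / doubling_years
    # Integral of the exponentially growing hazard from age_from to age_to
    cumulative_hazard = (hazard_scale * hazard_at_30 / gamma
                         * (math.exp(gamma * (age_to - age_from)) - 1))
    return math.exp(-cumulative_hazard)

# Removing a cause worth 20% of the hazard helps, but the exponential
# growth of everything else still dominates long-run survival.
print(f"baseline:            {survival_probability():.2f}")
print(f"20% hazard removed:  {survival_probability(hazard_scale=0.8):.2f}")
```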
To help, I’m going to tackle the top causes of death in the West:
Heart disease: This is highly amenable to outright replacement of the organ, be it with a cybernetic replacement or one grown in vitro. Obesity, which contributes heavily to cardiovascular disease and morbidity, is already being tackled thanks to the discovery of GLP-1 agonists like semaglutide, and I fully expect that the obesity epidemic that is dragging down life expectancy in the West will be over well before then.
Cancer: Another reason for optimism; CAR-T therapy is incredibly promising, as are other targeted therapies. So are vaccines against viruses like HPV that themselves cause cancer (said vaccine already exists; I’m talking more generally).
Unintentional injuries: The world has grown vastly safer, and will only continue to do so, especially as things get more automated.
Respiratory diseases: Once again, there’s reason for optimism that biological replacements will be cheap enough that we won’t have to rely on limited numbers of donors for transplants.
Stroke and cerebrovascular disease: I’ll discuss the brain separately, but while this is a harder subject to tackle, mitigating obesity helps immensely.
Alzheimer’s: Same disclaimer as above.
Diabetes: Our insulin pumps and formulations only get better and cheaper, and many of the drawbacks of artificial insulin supplementation will vanish (the pancreas is currently better at quickly and responsively adjusting blood sugar levels by releasing insulin than we are). Once again, a target for outright replacement of the organ.
These are ranked in descending order of mortality.
The brain remains incredibly difficult to regenerate, so if we run into something intractable to the hypothetical capabilities 20 years hence, this will likely be the biggest hurdle. Even then, I’m cautiously optimistic we’ll figure something out, or reduce the incidence of dementia.
Beyond organ replacement, I’m bullish on gene therapy: most hereditary disease will be eliminated, and eventually somatic gene therapy will be able to work on the scale of the entire body; I would be highly surprised if this weren’t possible in 20 years.
I expect regenerative medicine to be widely available, beyond our current limited attempts at arresting the progression of illness or settling for replacements from human donors. There’s a grab bag of individual therapies like thymic replacement that I won’t get into.
As for the costs associated with this, I claim no particular expertise, but in general most such treatments are amenable to economies of scale, and I don’t expect them to remain out of reach for long. Organ replacement will likely get a lot cheaper once organs are being vat-grown, and I put a decent amount of probability on ~universally acceptable organs being created through careful management of the expression of HLA antigens, such that they’re unlikely to be rejected outright. Worst case, patient tissue such as pluripotent stem cells will be used to fill out inert scaffolding, as we do today.
As a doctor, I can clearly see the premium people put on any additional extension of their lives when mortality is staring them in the face, and while price will likely be prohibitive for getting everyone on the globe to avail of such options, I expect even middle class Westerners with insurance to be able to keep up.
Like I said, this is a gestalt impression of a very broad field, and 70% isn’t an immense declaration of confidence. Besides, it’s mostly moot in the first place; we’re very likely getting AGI of some form by 2043.
To further put numbers on it, I think that in a world where AI is arrested at a level not significantly higher than GPT-4, I, being under the age of 30, have an ~80% chance of making it to LEV in my lifespan, with an approximately 5% drop for every additional decade older you are at present.
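(A minimal sketch of that rough scaling, my own illustration rather than the commenter’s model, assuming the ~5% drop is meant in absolute percentage points per additional decade of current age; the function name and default values are just placeholders for the stated numbers.)

```python
def p_reach_lev(current_age: float, base_age: float = 30.0,
                base_p: float = 0.80, drop_per_decade: float = 0.05) -> float:
    """Illustrative P(reaching LEV) in a no-AGI world: ~80% for someone
    under 30 today, dropping an assumed 5 percentage points per extra
    decade of current age, floored at zero."""
    decades_older = max(0.0, current_age - base_age) / 10.0
    return max(0.0, base_p - drop_per_decade * decades_older)

for age in (25, 40, 60, 80):
    print(f"age {age}: ~{p_reach_lev(age):.0%}")
```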
I, being under the age of 30, have an ~80% chance of making it to LEV in my lifespan, with an approximately 5% drop for every additional decade older you are at present.
You, being a relatively wealthy person in a modernized country? Do you think you’ll be able to afford LEV by that time, or only that some of the wealthiest people will?
I’m a doctor in India right now, and will likely be a doctor in the UK by then, assuming I’m not economically obsolete. And yes, I expect that if we do have therapies that provide LEV, they will be affordable in my specific circumstances, as well as for most LW readers, if not globally. UK doctors are far poorer than their US kin.
Most biological therapies are relatively amenable to economies of scale, and while there are others that might be too bespoke to manage the same, that won’t last indefinitely. I can’t imagine anything with as much demand as a therapy proven to delay aging nigh indefinitely. For an illustrative example, look at what Ozempic and co. are achieving already: every pharma industry leader and their dog wants to get in on the action, and the prices will keep dropping for a good while.
It might even make economic sense for countries to subsidize the treatment (IIRC, it wouldn’t take much more for GLP-1 drugs to reach the point where they’re a net savings for insurers or governments in terms of reduced obesity-related health expenditures). After all, aging is why we end up succumbing to so many diseases in our senescence, not the reverse.
Specifically, gene therapy will likely be the best bet for scaling, if a simple drug doesn’t come about (that seems unlikely to me; I doubt there’s such low-hanging fruit, even if the net result of LEV might rely on multiple different treatments in parallel, with none achieving it by itself).
Thanks for your reply. “70% confidence that… we have a shot” is slightly ambiguous—I’d say that most shots one has are missed, but I’m guessing that isn’t what you meant, and that you instead meant 70% chance of success.
70% feels way too high to me, but I do find it quite plausible that calling it a rounding error is wrong. However, with a 20-year timeline, a lot of people I care about will almost definitely still die who need not have died if death were Solved, a group which, with very much non-negligible probability, includes myself. And as you note downthread, the brain is a really deep problem for prosaic life extension. Overall I don’t see how anything along these lines can be fast enough and certain enough to be a crux on AI for me, but I’m glad people are working on it more than is immediately apparent to the casual observer. (I’m a type 1 diabetic and would have died at 8 years old if I’d lived before insulin was discovered and made medically available, so the value of prosaic life extension is very much not lost on me.)
T1DM is a nasty disease, and much like you, I’m more than glad to live in the present day, when we have tools to tackle it, even if other diseases still persist. There’s no other time I’d rather be alive; even if I die soon, it’s going to be interesting, and we’ll either solve ~all our problems or die trying.
However, with a 20-year timeline, a lot of people I care about will almost definitely still die who need not have died if death were Solved, a group which, with very much non-negligible probability, includes myself
I understand. My mother has chronic liver disease, and my grandpa is 95 years old, even if he’s healthy for his age (a low bar!). In the former case, I think she has a decent chance of making it to 2043 in the absence of a Singularity, even if it’s not as high as I would like. As for my grandfather, at that age just living to see the next birthday quickly becomes something you can’t take for granted. I certainly cherish all the time I can spend with him, and hope it all goes favorably for us all.
As for me, I went from envying the very young, because I thought they were shoo-ins for making it to biological immortality, to pitying them more these days, because they haven’t had at least the quarter century of life I’ve had in the event AGI turns out malign.
Hey, at least I’m glad we’re not in the Worst Possible Timeline, given that awareness of AI x-risk has gone mainstream. That has to count for something.
P.S. Having this set of values and beliefs is very hard on one’s epistemics. I think it’s a writ-large version of what Eliezer has described as “thinking about AI timelines is bad for one’s epistemics”. Here are some examples:
(1) Although I’ve never been at all tempted by e/acc techno-optimism (on this topic specifically) / “alignment isn’t a problem at all” / alignment-by-default, boy, it sure would be nice to hear about a strategy for alignment that didn’t sound almost definitely doomed for one reason or another, even though Eliezer can (accurately, IMO) shoot down a couple of new alignment strategies before getting out of bed in the morning. So far I’ve never found myself actually giving in to that temptation, but it’s impossible not to notice that if I just weren’t as good at finding problems, or as willing to acknowledge problems found by others, then some alignment strategies I’ve seen might have looked non-doomed, at least at first...
(2) I don’t expect any kind of deliberate slowdown of making AGI to be all that effective even on its own terms, with the single exception of an indiscriminate “shut it all down”, which I think is unlikely to get within the Overton window, at least in a robust way that would stop development even in countries that don’t agree (forcing someone to sabotage / invade / bomb them). Although such actions might buy us a few years, it seems overdetermined to me that they still leave us doomed, and in fact they appear to cut away some of the actually-helpful options that might otherwise be available (the current crop of companies attempting to develop AGI are definitely not the least concerned with existential risk of all the actors who’d develop AGI if they could, for one thing). Compute thresholds of any kind, in particular, I expect to lead to a much greater focus on doing more with the same compute rather than doing more by using more compute; I expect there’s a lot of low-hanging fruit there, since that isn’t where people have been focusing, and the thresholds would need to decrease very far very fast to actually prevent AGI, while decreasing them below the power of a 2023 gaming rig is untenable. I’m not aware of any place in this argument where I’m allowing “if deliberate slowdowns were effective on their own terms, I’d still consider the result very bad” to bias my judgment. But is it? I can’t really prove it isn’t...
(3) The “pivotal act” framing seems unhelpful to me. It seems strongly impossible to me for humans to make an AI that’s able to pass strawberry alignment yet has so little understanding of agency that it couldn’t, if it wanted to, seize control of the world. (That kind of AI is probably logically possible, but I don’t think humans have any real possibility of building one.) An AI that can’t even pass strawberry alignment clearly can’t be safely handed “melt all the GPUs” or any other task that requires strongly superhuman capabilities (and if “melt all the GPUs” were a good idea and didn’t require strongly superhuman capabilities, then people should just directly do that). So it seems to me that the only good result that could come from aiming for a pivotal act would be that the ASI you’re using to execute it is actually aligned with humans and “goes rogue” to implement our glorious transhumanist future; and it seems to me that if that’s what you want, it would be better to aim for it directly rather than trying to fit it through this weirdly-shaped “pivotal act” hole.
But… if this is wrong, and a narrow AGI could safely do a pivotal act, I’d very likely consider the resulting world very bad anyway, because we’d be in a world where unaligned ASI has been reliably prevented from coming into existence, and if the way that was done wasn’t by already having aligned ASI, then by far the obvious way for that to happen is to reliably prevent any ASI from coming into existence. But IMO we need aligned ASI to solve death. Does any of that affect how compelling I find the case for narrow pivotal-act AI on its own terms? Who knows...