I agree the argument needs fleshing out—only intended as a rough sketch.
There are three possibilities:
Longevity research success → AI capabilities researchers slow down b/c more risk-averse + achieved their immortality aims that motivated their AI research
Longevity research success → no effect on AI capabilities researcher activity
Longevity research success → Extends research career of AI capabilities researchers, accelerating AI discovery
You also appeal to open-ended uncertainty more generally: even if we came up with strong, confident predictions about these specific mechanisms, we still wouldn’t have moved the needle on predicting the effect of longevity research success on AI timelines.
Here are a few quick responses.
Longevity research success would also extend the careers of AI safety researchers. A counterargument is that AI safety researchers are mostly young, so in the very short term this may benefit AI capabilities research more than AI safety research; over time, that may flip. However, with short AI timelines, longevity research is not an effective solution anyway, because it’s extremely unlikely that we’ll have convincing proof of having reached longevity escape velocity within the next 10-20 years. If we all became immortal now and AI capabilities were invented soon, this aspect might be net bad for safety. If we became immortal in 20 years and AI capabilities would otherwise be invented in 40 years, both the safety and capabilities researchers get the benefit of career extension.
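As a gut check on the timing point, here is a toy tally of extra researcher-years (every number below is an invented placeholder, not an estimate of real demographics or timelines). It assumes a capabilities pool that skews older and a safety pool that skews younger, and that longevity escape velocity (LEV) only extends careers that are still running when it arrives.

```python
# Toy tally of extra researcher-years before AGI under two scenarios.
# Every number here is an invented placeholder, not an estimate.

def extra_years_before_agi(career_ends_in, lev_in, agi_in):
    """Extra productive years gained before AGI, assuming LEV only
    extends careers still running when it arrives."""
    if lev_in > career_ends_in:               # career already over when LEV arrives
        return 0.0
    return max(0.0, agi_in - career_ends_in)  # career now runs until AGI, not retirement

# Assumed career-end times in years from now (capabilities skews older, safety younger).
capabilities_pool = [10, 15, 20, 25, 30]
safety_pool = [30, 35, 40, 45, 50]

scenarios = {"LEV now, AGI in 15y": (0, 15), "LEV in 20y, AGI in 40y": (20, 40)}
for label, (lev_in, agi_in) in scenarios.items():
    cap = sum(extra_years_before_agi(c, lev_in, agi_in) for c in capabilities_pool)
    saf = sum(extra_years_before_agi(c, lev_in, agi_in) for c in safety_pool)
    print(f"{label}: capabilities +{cap:.0f}, safety +{saf:.0f} researcher-years")
```

On these made-up inputs, the short-timeline scenario hands all the benefit to the capabilities pool, while the longer-timeline scenario spreads it across both, which is the shape of the claim above.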
Longevity research success may also make politicians and powerful people in the private sector (early beneficiaries of longevity research success) more risk-averse, leading them to regulate AI capabilities with more scrutiny. If they shut off the giant GPUs, it will be hard for capabilities research to succeed. It’s even easier to imagine politicians and powerful businessmen allowing AI capabilities research to accelerate as a desperate longevity gamble than it is to imagine the AI capabilities researchers themselves pursuing it for that reason.
It is difficult for researchers to switch from CS to biology and vice versa. I think de Grey is probably a rare exception, and I think the problem of longevity research success causing a flood of research into AI capabilities is unlikely. Indeed, I expect concrete wins in longevity research would pull people in the other direction as the field became superheated.
We should emphasize that under longtermist EV calculus, we only need to become mildly confident that longevity research success has a positive sign to think it’s overwhelmingly important.
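To spell out the EV point, here is the arithmetic being gestured at, as a minimal sketch with placeholder numbers (the probabilities and the stakes V are assumptions for illustration, not estimates):

```python
# Minimal sketch of the longtermist EV point, with placeholder numbers.
# Even mild confidence in a positive sign dominates once the stakes are large.

V = 1e15            # stakes if the sign turns out positive (arbitrary units)
p_positive = 0.55   # "mildly confident" the net effect is positive
p_negative = 1 - p_positive

expected_value = p_positive * V - p_negative * V  # assume symmetric stakes either way
print(f"EV = {expected_value:.2e}")               # ~1e+14: still enormous, because V is
```

The point is only that a 55/45 lean, multiplied by astronomical stakes, still yields an astronomical expected value; nothing hangs on the particular placeholder numbers.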
If we’re extremely uncertain and we really truly think the issue is course-of-the-universe-determiningly important, then that just means we really ought to think it through, not stop at “I’m just very uncertain.” What are some additional concrete scenarios where longevity research makes things better or worse?
I think it would be more accurate to say that I’m simply acknowledging the sheer complexity of the world and the massive ramifications that such a large change would have. Hypothesizing about a few possible downstream effects of something like life extension on something as causally distant from it as AI risk is all well and good, but I think you would need to put a lot of time and effort into it to be at all confident about things like the directionality of net effects overall.
I would go as far as to say that the implementation details of how we get life extension could themselves change the sign of the impact on AI risk: there are enough different possible scenarios for how it could play out, each amplifying different components of its impact on AI risk, that the overall net effect could come out differently.
What are some additional concrete scenarios where longevity research makes things better or worse?
So first, you didn’t respond to the example I gave regarding preventing human capital waste (people with experience, education, knowledge, and expertise no longer dying from aging-related disease), and the additional slack from the extra productive capacity in the economy more broadly that is then able to go into AI capabilities research.
Here’s another one. Let’s say medicine and healthcare become a much smaller field after the advent of widely available regenerative therapies that prevent the diseases of old age. In this world, people only need to see a medical professional when they face injury or an increasingly rare infection by a communicable disease. Demand for medical professionals collapses, and the best and brightest (medical programs often have the highest and most competitive entry requirements) who would have gone into medicine are routed elsewhere, including into AI, accelerating capabilities and shortening overall timelines.
An assumption that much might hinge on: I expect differential technological development of capabilities versus safety to favour accelerating capabilities pretty heavily in circumstances where additional resources are made available for both. This isn’t necessarily going to be the case, of course; in theory the resources could be routed exclusively towards safety. But I just don’t expect most worlds to go that way, or even for a large enough share of the resources to be allocated towards safety that the additional resources yield positive expected value very often. Even something as basic as this is subject to a lot of uncertainty.
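One way to make that assumption concrete is a toy resource-split model (the conversion rates and safety shares below are made up for illustration; this is a sketch of the stated assumption, not a model anyone has endorsed): risk only falls if the safety share of the extra resources clears a threshold set by the relative conversion rates.

```python
# Toy sketch of the differential-development assumption.
# k_cap, k_saf and the safety shares are made-up illustrative parameters.

def risk_change(extra_resources, f_safety, k_cap=1.0, k_saf=0.5):
    """Net change in risk from extra resources: capabilities progress raises it,
    safety progress lowers it. Positive output means things got worse."""
    cap_progress = k_cap * extra_resources * (1 - f_safety)
    saf_progress = k_saf * extra_resources * f_safety
    return cap_progress - saf_progress

# With these assumed rates, risk falls only when the safety share exceeds
# k_cap / (k_cap + k_saf) = 2/3; the claim above is that most worlds fall well short.
for f in (0.1, 0.3, 0.67, 0.8):
    print(f"safety share {f:.2f}: risk change {risk_change(100, f):+.1f}")
```

Whether the real threshold is anywhere near 2/3 is exactly the uncertainty flagged at the end of the paragraph above.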
Personally I’d be shocked if longevity medicine resulted in a downsizing of the healthcare industry.
Longevity medicine will likely displace some treatments for acute illness with maintenance treatments that prevent the onset of acute illness in the first place. There will be more monitoring, more complex surgeries, all kinds of things to do.
And the medical profession doesn’t overlap that well with AI research. It’s a service industry with a helping of biochem. People who do medicine typically hate math. AI is a super hot industry. If people aren’t going into it, it’s because they don’t have great fit.
I don’t know enough about differential development arguments to respond to that bit right now.
Overall, I agree that the issue is complex, but I think it’s tractably complex, and we shouldn’t overestimate the number of major uncertainties. If it were generally too hard to predict the macro consequences of strategy X, it would not be possible to strategize at all. We clearly have a lot of confidence around here about the likelihood of AI doom. I think we need a good clean argument for why we can make confident predictions in certain areas while making “massive complexity” arguments in others.
I thought I did respond to your human capital waste example. Can you clarify the mechanism you’re proposing? Maybe it wasn’t clear to me.
With regard to the massive complexity argument, I think this points to a broader issue. Sometimes, we feel confident about the macroeconomic impact of X on Y. For example, people in the know seem pretty confident that the US insourcing the chip industry is bad for AI capability and thus good for AI safety. What is it that causes us to be confidently uncertain due to a “massive complexity” argument in the case of longevity, but mildly confident in the sign of the intervention in the case of chip insourcing?
I don’t know your view on chip insourcing, but I think it’s relevant to the argument whether you’d also make a “massive complexity” argument for that issue or not.
Edit: I misclicked submit too early. Will finish replying in another comment.