Personally I’d be shocked if longevity medicine resulted in a downsizing of the healthcare industry.
Longevity medicine will likely displace some treatments for acute illness with maintenance treatments that prevent acute illness from developing in the first place. There will be more monitoring, more complex surgeries, all kinds of things to do.
And the medical profession doesn’t overlap that well with AI research. Medicine is a service industry with a helping of biochem, and people who go into it typically hate math. AI is already a super hot field; if medical people aren’t going into it, it’s because they don’t have a great fit.
I don’t know enough about differential development arguments to respond to that bit right now.
Overall, I agree that the issue is complex, but I think it’s tractably complex, and we shouldn’t overestimate the number of major uncertainties. If it were generally too hard to predict the macro consequences of strategy X, then it would not be possible to strategize at all. We clearly have a lot of confidence around here about the likelihood of AI doom. I think we need a good clean argument for why we can make confident predictions in some areas while making “massive complexity” arguments in others.