However, you should probably hedge your bets to a certain extent just in case you manage to live to retirement age.
Do you need to, though? People have been citing Feynman’s anecdote about the RAND researchers deciding to stop bothering with retirement savings in the 1940s/50s because they thought the odds of nuclear war were so high. But no one has mentioned any of those RAND researchers dying on the streets or living off the proverbial dog chow in retirement. And why would they have?
First, anyone who was a RAND researcher was a smart cookie doing white-collar work, in high demand into retirement and beyond (and maybe in higher demand than at any earlier point in their career); it’s not like they were construction workers whose backs give out in their 30s, leaving them unable to earn any money after that. Quite the opposite.
Second, no one said stopping was irrevocable. You can’t go back in time, of course, but you can always just start saving again. This is relevant because when nuclear war didn’t happen within a decade or two, presumably they noticed at some point, ‘hey, I’m still alive’. There is very high option value/Value of Information. If, after a decade or two, nuclear war has not happened and you survive the Cuban Missile Crisis… you can start saving then.
The analogue for AI risk: most of the short-timeline views hold that we are going to learn a lot over the next 5-10 years. Within 5 years, a decent number of AGI predictions will be expiring, and it will be much clearer how far DL scaling will go: DL will either be much scarier than it is now, or it will have slammed to a halt. And within 10 years, most excuses for any kind of pause will have expired and matters will be clearer. You are not committed to dissaving forever; you are not even committed for more than a few years, during which you will learn a lot.
Third, consider also the implication of ‘no nuclear war’ for those RAND researchers: that’s good for the American economy. Very good. If you were a RAND researcher who stopped saving in the 1940s-1950s and decided to start again in the ’60s, and you were ‘behind’, well, that meant that you started investing in time for what Warren Buffett likes to call one of the greatest economic long-term bull markets in the history of humanity.
Going back to AI: if you are wrong about AGI being a danger, and AGI arrives on schedule but is safe and beneficial, the general belief is that whatever else it does, it ought to lead to massive economic growth. So, if you are wrong, and you start saving again, you are investing at the start of what may be the greatest long-term bull market that could ever happen in the history of humanity. Seems like a pretty good starting point for your retirement savings to catch up, no?
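(As a toy illustration of that catch-up effect, here is a minimal sketch with invented contribution and return figures, assumptions for this example rather than forecasts: a saver who pauses for a decade and then restarts into a stronger bull market can land roughly where the steady saver does.)

```python
# Toy catch-up comparison. All rates and contributions are invented
# for illustration; nothing here is a forecast.

def future_value(contribution: float, years: int, rate: float) -> float:
    """Future value of a fixed annual contribution at a constant annual return."""
    total = 0.0
    for _ in range(years):
        total = (total + contribution) * (1 + rate)
    return total

steady = future_value(10_000, 40, 0.05)  # never stopped saving; ordinary returns
paused = future_value(10_000, 30, 0.08)  # 10-year pause, then a stronger bull market

print(f"steady saver: ${steady:,.0f}")   # ~$1.27M
print(f"paused saver: ${paused:,.0f}")   # ~$1.22M
```

Under these made-up rates the two savers end within a few percent of each other; the point is not the exact numbers but that a higher post-restart growth rate can absorb a long pause.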
(It has been pointed out before that if you are optimistic about AGI safety & economic growth and you are saving money, you are moving your resources from when you are very poor to when you will be very rich, and this seems like a questionable move. You should instead either be consuming now, or hedging against bad outcomes rather than doubling down on the good outcome.)
Whereas if you are right, the size of your retirement savings accounts will only bitterly recall to you your complacency and the time you wasted. The point of having savings, after all, is to spend them. (Memento mori.)
You have to be really confidently optimistic or pessimistic about AI to justify a major change in consumption rates; if you assign a significant probability to “present rate no singularity”/AI winter futures then the benefits of consumption smoothing dominate and you should save almost as much (or as little) as you would if you didn’t know about AI.
hold_my_fish’s setup, in which there is no increase in growth rates and the only outcomes are destruction or normality, is not the same as my discussion. If you include the third option of a high-growth-rate future (which looks increasingly plausible), you would also want to consume a lot now to consumption-smooth, because once hypergrowth starts, you need very little capital/income to achieve the same standard of living under luxury-robot-space-communism as before. (Indeed, you might want to load up on debt, on the expectation that if you survive, you’ll pay it off out of growth.)
The math in the comment I linked works the same whether the chance of money ceasing to matter in five years’ time is for happy or unhappy reasons.
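(A minimal sketch of that symmetry, assuming two-period log utility and invented probabilities that come from nowhere in this thread: the ‘doom’ and ‘utopia’ branches enter the expected utility identically, because savings drop out of both, so only the weight on the business-as-usual branch moves the optimal savings rate, and it moves it modestly unless that weight is small.)

```python
# Sketch of the consumption-smoothing argument under made-up numbers.
# 'doom' and 'utopia' are both worlds where savings stop mattering
# (for opposite reasons), so they contribute the same constant term.
import math

p = {"doom": 0.2, "utopia": 0.2, "normal": 0.6}  # assumed, not estimated
U_IRRELEVANT = 0.0  # payoff of savings in worlds where money stops mattering

def expected_utility(savings_rate: float, income: float = 1.0,
                     growth: float = 1.5) -> float:
    """Two-period log utility: consume now; consume savings later if 'normal'."""
    now = math.log(income * (1 - savings_rate))
    later = math.log(income * savings_rate * growth)
    # The doom and utopia terms are identical constants: happy or unhappy,
    # the reason money stops mattering never enters the optimization.
    return now + p["normal"] * later + (p["doom"] + p["utopia"]) * U_IRRELEVANT

rates = [s / 100 for s in range(1, 100)]
best = max(rates, key=expected_utility)
print(f"optimal savings rate under these assumptions: {best:.0%}")
```

Under these assumed weights the optimum falls from 50% (the same setup with p['normal'] = 1) to about 37%: a trim rather than full dissaving, which is the consumption-smoothing claim above. Driving p['normal'] toward zero is what it takes to justify stopping entirely (or borrowing against the hypergrowth branch).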
Gwern, I have one very specific scenario in mind.
You know how in Próspera, Honduras, you can get gene therapy for myostatin inhibition? (Supposed to help with aging; it definitely makes people buff.)
I am imagining a future where things go well: in, say, 2035, a biotech company starts working on aging with a licensed ASI. By 2045-2065 they finally have a viable solution (after a lot of bootstrapping and developing living models), but the FDA obstructs it, and you can get early access somewhere like Honduras. You just need a few million in cash. (Ironically, third-world residents get the first treatments for testing, then billionaires, then hundred-millionaires, then...)
Sure, 10 years later it gets approved in the USA after much protest and violence, but do you want “died 2 years before the cure” on your tombstone?
Does this sound like a plausible scenario?
That sounds like a knife’s-edge sort of scenario. The treatment arrives neither much earlier nor much later, but just a year or two before you die (inclusive of all interim medical/longevity improvements, which presumably are nontrivial if some new treatment is curing aging outright), and it costs neither vastly more than your net worth nor vastly less, but just enough that, even in a Christiano-esque slow takeoff where global GDP is doubling every decade and the treatment will soon be shipped to so many people that it drops massively in price each year, you still just can’t afford it—but you could have, if only you had been socking away an avocado-toast a month in your 401k way back in 2020?
Yep. And given how short human lifespans are, and how long medical research has historically taken, the ‘blade of the knife’ is 10 years across. With the glacial speed of current medical research it’s more like 20-30.
It’s not fundamentally different from the backyard bunker boom of the past. That’s a knife blade: you’re far enough from the blast not to be killed immediately, but not so far that your house doesn’t burn to the ground from the blast wave and follow-up firestorm. And then your crude bunker doesn’t accumulate enough radioactive fallout for the dose to be fatal, and you don’t run out of supplies before it’s cool enough to survive the wasteland above long enough to escape on foot.
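(To put a crude number on how conjunctive that knife’s edge is, here is a Monte Carlo sketch in which every date, distribution, and dollar figure is invented for illustration: the marginal savings only decide the outcome in worlds where the cure arrives before you die and its price happens to land in the band between your unsaved and saved wealth.)

```python
# Crude Monte Carlo of the knife's edge. Every number and distribution
# below is an invented assumption, not an estimate from the thread.
import random

random.seed(0)

def knife_edge_fraction(trials: int = 100_000) -> float:
    hits = 0
    for _ in range(trials):
        death = random.uniform(2045, 2085)       # death year absent the cure
        cure = random.uniform(2035, 2095)        # year the treatment first exists
        price = random.lognormvariate(14, 1.0)   # launch price, median ~$1.2M
        wealth = random.lognormvariate(13, 1.0)  # wealth without the extra saving
        extra = 500_000                          # decades of marginal 401k saving
        # The saving flips the outcome only if the cure arrives in time AND
        # the price lands between the unsaved and saved wealth levels.
        if cure < death and wealth < price <= wealth + extra:
            hits += 1
    return hits / trials

print(f"worlds where the marginal saving decides it: {knife_edge_fraction():.1%}")
```

Under these invented inputs the conjunction holds in only a minority of simulated worlds; how small a minority depends entirely on the assumed widths of the timing and price windows, which is exactly the crux of the disagreement above.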
Do you think it’s realistic to assume that we won’t have an ASI by the time you reach old age, or that it won’t render all this aging stuff irrelevant? In my own model, that’s a 5% scenario. Most likely in my model is that we get an unaligned AGI that kills us all, or a nuclear war that prevents any progress in my lifetime, or AI regulated into oblivion after a large-scale disaster, such as a model that can hack into just about anything connected to the Internet and bring down the entire digital infrastructure.