I gather from the recent census article that most of the readers of this site are significantly younger than I am, so I’ll relay some first-hand experiences you probably didn’t live through.
I was born in 1964. The Cuban Missile Crisis was only a few years in the past, and Kennedy had just been shot, possibly by Russians, or The Mob, or whomever. Continuing through at least the end of the Cold War in 1989, there was significant public opinion that we were all going to die in a nuclear holocaust (or Nuclear Winter), so really, what was the point in making long-term plans?
Spoiler: things worked out better than expected, although not without significant bumps along the way. Spending all your money on hookers and blow because you might as well enjoy yourself now would not have been a solid investment strategy.
Now, much like the AGI/ASI threat, the nuclear threat could have actually played out. There were other close calls where we (or they) thought the attack had started already (Vasily Arkhipov comes to mind), and of course, Death From AI could well happen. However, you should probably hedge your bets to a certain extent just in case you manage to live to retirement age. Remember, we still don’t have flying cars.
Do you need to, though? People have been citing Feynman’s anecdote about the RAND researchers deciding to stop bothering with retirement savings in the 1940s/50s because they thought the odds of nuclear war were so high. But no one has mentioned any of those RAND researchers dying on the streets or living off the proverbial dog chow in retirement. And why would they have?
First, anyone who was a RAND researcher was a smart cookie doing white-collar work, in high demand into retirement and beyond (and maybe more in demand than at any earlier point in their career); it’s not like they were construction workers whose backs give out in their 30s, leaving them unable to earn any money after that. Quite the opposite.
Second, no one said stopping was irrevocable. You can’t go back in time, of course, but you can always just start saving again. This is relevant because when nuclear war didn’t happen within a decade or two, presumably they noticed at some point, ‘hey, I’m still alive’. There is very high option value/Value of Information. If, after a decade or two, nuclear war has not happened and you survive the Cuban Missile Crisis… you can start saving then.
The analogue here for AI risk: most of the short-term views hold that we are going to learn a lot over the next 5-10 years. Within 5 years, a decent number of AGI predictions will have expired, and it will be much clearer how far DL scaling will go: DL will either be much scarier than it is now, or it will have slammed to a halt. And within 10 years, most excuses for any kind of pause will have expired and matters will be clearer. You are not committed to dissaving forever; you are not even committed for more than a few years, during which you will learn a lot.
Third, consider also the implication of ‘no nuclear war’ for those RAND researchers: that’s good for the American economy. Very good. If you were a RAND researcher who stopped saving in the 1940s-1950s and decided to start again in the ’60s, and you were ‘behind’, well, that meant that you started investing in time for what Warren Buffett likes to call one of the greatest economic long-term bull markets in the history of humanity.
Going back to AI: if you are wrong about AGI being a danger, and AGI arrives on schedule but turns out to be safe and beneficial, the general belief is that whatever else it does, it ought to lead to massive economic growth. So, if you are wrong, and you start saving again, you are investing at the start of what may be the greatest long-term bull market that could ever happen in the history of humanity. Seems like a pretty good starting point for your retirement savings to catch up, no?
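To see how fast a late start could catch up under higher growth, here is a minimal sketch; the 7% “ordinary” return, the 15% “post-AGI boom” return, and the $10k/year contribution are all invented assumptions, not forecasts:

```python
# Hypothetical illustration: a saver who starts 10 years late, but into a
# higher-growth market, vs. one who saved from year 0 at ordinary returns.
# The 7% and 15% rates and the $10k/yr contribution are invented numbers.

def balance(annual_savings, rate, years):
    """Future value of a fixed annual contribution at a constant return."""
    total = 0.0
    for _ in range(years):
        total = (total + annual_savings) * (1 + rate)
    return total

early = balance(10_000, 0.07, 30)  # saves all 30 years at an ordinary 7%
late = balance(10_000, 0.15, 20)   # skips 10 years, then rides a 15% boom

print(f"early starter: ${early:,.0f}")
print(f"late starter:  ${late:,.0f}")
```

Under these invented rates, the late starter actually ends up ahead despite the 10 lost years; the point is only that a delayed start is recoverable if the bull-market assumption holds.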
(It has been pointed out before that if you are optimistic about AGI safety & economic growth and you are saving money, you are moving your resources from when you are very poor to when you will be very rich, and this seems like a questionable move. You should instead either be consuming now, or hedging against bad outcomes rather than doubling down on the good outcome.)
Whereas if you are wrong, the size of your retirement savings accounts will serve only as a bitter reminder of your complacency and the time you wasted. The point of having savings, after all, is to spend them. (Memento mori.)
You have to be really confidently optimistic or pessimistic about AI to justify a major change in consumption rates; if you assign a significant probability to “present rate no singularity”/AI winter futures then the benefits of consumption smoothing dominate and you should save almost as much (or as little) as you would if you didn’t know about AI.
hold_my_fish’s setup, in which there is no increase in growth rates and the only outcomes are destruction or normality, is not the same as my discussion. If you include the third option of a high-growth-rate future (which is increasingly a plausible outcome), you would also want to consume a lot now to consumption-smooth, because once hypergrowth starts, you need very little capital/income to achieve the same standard of living under luxury-robot-space-communism as before. (Indeed, you might want to load up on debt on the expectation that if you survive, you’ll pay it off out of growth.)
The math in the comment I linked works the same whether the chance of money ceasing to matter in five years’ time is for happy or unhappy reasons.
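To make the consumption-smoothing logic concrete, here is a minimal two-period sketch under log utility. Every number (the scenario probabilities, wealth, growth, and baseline incomes) is an invented assumption for illustration, not anyone’s forecast from this thread:

```python
import math

W, G = 100.0, 2.0        # current wealth; growth factor on savings by period 2
# Scenario table: (probability, baseline period-2 income). All made-up numbers.
AI_WORLD = [
    (0.2, None),         # doom: there is no period 2 at all
    (0.6, 10.0),         # "present rate no singularity": modest other income
    (0.2, 10_000.0),     # hypergrowth: you are rich regardless of savings
]
NO_AI_WORLD = [(1.0, 10.0)]  # how you'd plan if you'd never heard of AI

def expected_utility(savings_rate, scenarios):
    """Two-period log utility: consume (1-s)*W now; later, baseline income
    plus grown savings, weighted by the probability of reaching period 2."""
    eu = math.log(W * (1 - savings_rate))
    for prob, baseline in scenarios:
        if baseline is not None:  # doom contributes nothing to period 2
            eu += prob * math.log(baseline + savings_rate * W * G)
    return eu

def best_rate(scenarios):
    rates = [r / 1000 for r in range(1, 1000)]
    return max(rates, key=lambda s: expected_utility(s, scenarios))

print(f"savings rate ignoring AI:       {best_rate(NO_AI_WORLD):.0%}")
print(f"savings rate with AI scenarios: {best_rate(AI_WORLD):.0%}")
```

Under these made-up numbers the optimal savings rate falls once doom and hypergrowth enter the picture, but it stays substantial as long as the “normal” future keeps most of the probability mass—which is the sense in which consumption smoothing dominates.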
Gwern, I have one very specific scenario in mind.
You know how in Próspera, Honduras, you can get gene therapy with myostatin inhibitors? (Supposed to help with aging; it definitely makes people buff.)
I am imagining a future where things go well: in, say, 2035 a biotech company starts working on aging with a licensed ASI. By 2045-2065 they finally have a viable solution (by a lot of bootstrapping and developing living models), but the FDA obstructs it, so you can get early access somewhere like Honduras. You just need a few million in cash (ironically, third-world residents get the first treatments for testing, then billionaires, then hundred-millionaires, then...)
Sure, 10 years later it gets approved in the USA after much protest and violence, but do you want “died 2 years before the cure” on your tombstone?
Does this sound like a plausible scenario?
That sounds like a knife’s-edge sort of scenario. The treatment arrives neither much earlier nor later but just a year or two before you die (inclusive of all interim medical/longevity improvements, which presumably are nontrivial if some new treatment is curing aging outright), and it costs not vastly too much nor vastly below your net worth but just enough that, even in a Christiano-esque slow takeoff where global GDP is doubling every decade and the treatment will soon be shipped out to so many people that it drops massively in price each year, you still just can’t afford it—but you could if only you had been socking away an avocado-toast a month in your 401k way back in 2020?
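A toy calculation of just how thin that edge is; every figure below (starting net worth, returns, launch price, price decay) is a made-up assumption purely for illustration:

```python
# Toy model of the "knife's edge": does skipping avocado toast since 2020
# change the year you can afford the longevity treatment? All numbers
# (starting wealth, 7% returns, $5M launch price, 30%/yr decay) are invented.

def first_affordable_year(extra_monthly=0.0):
    wealth = 300_000.0           # hypothetical net worth in 2020
    for year in range(2020, 2080):
        if year >= 2045:         # treatment launches in 2045 at $5M...
            price = 5_000_000 * 0.7 ** (year - 2045)  # ...dropping ~30%/yr
            if wealth >= price:
                return year
        wealth = wealth * 1.07 + extra_monthly * 12   # ~7%/yr returns
    return None

print("without extra saving:", first_affordable_year(0))
print("with $100/mo extra:  ", first_affordable_year(100))
```

Under these invented numbers, the extra $100/month does not change the year the treatment becomes affordable at all: compounding wealth and fast price decay swamp the marginal savings.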
Yep. And given how short human lifespans are, and how long medical research has historically taken, the ‘blade of the knife’ is 10 years across. With the glacial speed of current medical research it’s more like 20-30.
It’s not fundamentally different from the backyard bunker boom of the past. That’s a knife blade—you’re far enough from the blast not to be killed immediately, but not so far that your house doesn’t burn to the ground from the blast wave and follow-up firestorm. And then your crude bunker doesn’t accumulate enough radioactive fallout for the dose to be fatal, and you don’t run out of supplies before it’s cool enough to survive the wasteland above long enough to escape on foot.
Do you think it’s realistic to assume that we won’t have an ASI by the time you reach old age, or that it won’t render all this aging stuff irrelevant? In my own model, that’s a 5% scenario. Most likely in my model is that we get an unaligned AGI that kills us all, or a nuclear war that prevents any progress in my lifetime, or AI regulated into oblivion after a large-scale disaster, such as a model that can hack into just about anything connected to the Internet and bring down the entire digital infrastructure.
I have a big peeve about that. When I try to model a flying car, I see the tradeoffs of
(high fuel consumption, higher cost to build, higher skill to pilot, noise, falling debris) vs. (less time to reach work).
As long as the value per hour of a worker’s time is less than the cost per hour of the VTOL plus externalities, there isn’t ROI for most workers; see the sketch below.
A smaller market means higher unit costs, and thus we just have helicopters for billionaires while everyone else drives.
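A minimal break-even sketch of that claim; the hourly cost, externality figure, and time saved are hypothetical placeholders (loosely helicopter-like), not real operating data:

```python
# Toy break-even for a flying-car commute. Every figure is a placeholder
# assumption for illustration, loosely in the range of helicopter operation.

COST_PER_FLIGHT_HOUR = 600.0        # fuel + maintenance + insurance + capital, $/hr
EXTERNALITIES_PER_HOUR = 100.0      # noise, risk to people below, etc., $/hr
HOURS_SAVED_PER_FLIGHT_HOUR = 2.0   # each hour aloft replaces ~3 hours driving

def worthwhile(wage_per_hour):
    """Flying pays off only if the time saved is worth more than the
    hourly cost of the VTOL plus its externalities."""
    value_of_time_saved = wage_per_hour * HOURS_SAVED_PER_FLIGHT_HOUR
    return value_of_time_saved > COST_PER_FLIGHT_HOUR + EXTERNALITIES_PER_HOUR

for wage in (30, 100, 350, 1000):
    print(f"${wage}/hr worker: {'worth it' if worthwhile(wage) else 'not worth it'}")
```

With those placeholder numbers, only workers whose time is worth several hundred dollars an hour clear the bar—roughly the “helicopters for billionaires” regime.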
Did this idea come up in the 1970s, or after the oil shocks were over in the ’80s? Because flying cars jump out at me as a doomed idea that doesn’t happen because it doesn’t make money.
Even now: electric VTOLs fix the fuel cost, using commodity parts makes them cheaper, and automation makes them easier to fly, but you still have the negative externalities.
AI, by contrast, makes immediate money: GPT-4 seems to be 100+ percent annual ROI... ($60 million to train, $2 billion annual revenue after a year, assuming a 10 percent profit margin).
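Spelling out that back-of-the-envelope arithmetic, taking the figures above at face value (they are rough claims from the comment, not audited numbers):

```python
# Back-of-the-envelope ROI using the figures claimed above (not audited).
training_cost = 60e6      # claimed training cost, $
annual_revenue = 2e9      # claimed annual revenue, $
profit_margin = 0.10      # assumed margin

annual_profit = annual_revenue * profit_margin  # $200M/yr
roi = annual_profit / training_cost             # ~3.3x per year
print(f"annual ROI on training cost: {roi:.0%}")
```

So even a 10% margin on the claimed revenue would repay the claimed training cost several times over each year.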
I may have used too much shorthand here. I agree that flying cars are impractical for the reasons you suggest. I also agree that anybody who can justify it uses a helicopter, which is akin to a flying car.
According to Wikipedia, this is not a concept that first took off (hah!) in the 1970s—there have been working prototypes since at least the mid-1930s. The point of mentioning the idea is that it represents a cautionary tale about how hard it is to make predictions, especially about the future. When cars became widely used (certainly post-WWII), futurists started predicting what transportation tech would look like, and flying cars were one of the big topics. The fact that they’re impractical didn’t occur to many of the people making predictions.
I have a strong suspicion that there are flaws in current reasoning about the future, especially as it relates to the threat of AGI. Recall that there was a round of AI hype back in the 1980s that fizzled out when it became clear nothing much worked beyond the toy systems. I think there are good reasons to believe we’re in a very dangerous time, but I think there are also reasons to believe that we’ll figure it out before we all kill ourselves. Frankly, I’m more concerned about global warming, which requires no new technology, and no policy changes, to kill us or at least put a real dent in global human happiness.
My point is simply that deciding that we’re 95% likely to die in the next five years is probably wrong, and if you base your entire set of life choices on that prediction, you are going to be surprised when it turns out differently.
Also, I’m not strongly invested in convincing others of this fact, partly because I don’t think I have any special lock on predicting the future. I’m just suggesting you look back farther than 1980 for examples of how people expected things to turn out vs. how they actually did and factor that into your calculations.
The fact that they’re impractical didn’t occur to many of the people making predictions.
Right, I am just trying to ask whether you personally thought they were far-fetched when you learned of them, or whether there were serious predictions that this was going to happen. Flying cars don’t pencil out.
AGI does pencil out financially.
AGI killing everyone with 95 percent probability in 5 years doesn’t, because it requires several physically unlikely assumptions.
The two assumptions are:
A. Being able to optimize an algorithm to use many orders of magnitude less compute than right now.
B. The “utility gain” of superintelligence being so high that it can do things credentialed humans don’t think are possible at all, like developing nanotechnology in a garage rather than needing a bunch of facilities that resemble IC fabs.
If you imagined you might be able to find a way to make flying cars that cost like regular cars, reach mpg similar to that of regular cars, and the entire FAA drops dead...
Then yeah, flying cars sound plausible, but you made physically unlikely assumptions.