I almost wrote a post today with roughly the following text. It seems highly related so I guess I’ll write a short version.
The ability to sacrifice today for tomorrow is one of the hard skills all humans must learn. To be able to make long-term plans is not natural. Most people around me seem to be able to think about today and the next month, and occasionally the next year (job stability, housing stability, relationship stability, etc), but very rarely do I see anyone acting on plans with timescales of decades (or centuries). Robin Hanson has written about how humans naturally think in a very low-detail and unrealistic mode in far (as opposed to near) thinking, and I know that humans have a difficult time with scope sensitivity.
It seems to me that it is a very common refrain in the community to say “but timelines are short” in response to someone’s long-term thinking or proposal, suggesting that somewhere in the 5-25 year range no further work will matter because an AGI Foom will have occurred. My epistemic state is that even if this is true (which it very well may be), most people who are thinking this way are not in fact making 10 year plans. They are continuing to make at most 2 year plans, while avoiding learning how to make longer-term plans.
There is a two-step form of judo required to first learn to make 50 year plans and then secondarily restrict yourself to shorter-term plans. It is not one move, and I often see “but timelines are short” used to prevent someone from learning the first move.
I had not considered until the OP that this was actively adversarially selected for, certainly in industry, but it does seem compelling as a method for making today very exciting and stopping people from taking caution for the long-term. Will think on it more.
There is a two-step form of judo required to first learn to make 50 year plans and then secondarily restrict yourself to shorter-term plans. It is not one move, and I often see “but timelines are short” used to prevent someone from learning the first move.
Is there a reason you need to do 50 year plans before you can do 10 year plans? I’d expect the opposite to be true.
(I happen to currently have neither a 50 nor 10 year plan, apart from general savings, but this is mostly because it’s… I dunno kinda hard and I haven’t gotten around to it or something, rather than anything to do with timelines.)
Is there a reason you need to do 50 year plans before you can do 10 year plans?
No.
It’s often worth practising on harder problems to make the smaller problems second nature, and I think this is a similar situation. Nowadays I do more often notice plans that would take 5+ years to complete (that are real plans with hopefully large effect sizes), and I’m trying to push it higher.
Thinking carefully about how things that have lasted decades or centuries were built (science, the American constitution, etc.) is, I think, very helpful for making shorter plans that still require coordinating thousands of people over 10+ years.
Relatedly, I don’t think anyone in this community working on AI risk should be putting 100% of their probability mass on timelines under 15 years, and so they should think about plans that fail gracefully, or that are still useful, if the world is still muddling along at relatively similar altitudes in 70 years.
Ah, that all makes sense.
Is there a reason you need to do 50 year plans before you can do 10 year plans? I’d expect the opposite to be true.
I think you do need to learn how to make plans that can actually work, at all, before you learn how to make plans with very limited resources.
And I think that people fall into the habit of making “plans” that they don’t inner-sim actually leading to success, because they condition themselves into thinking that things are desperate and that the best action will only be the best action “in expected value”, e.g. that the “right” action should look like a moonshot.
This seems concerning to me. It seems like you should be, first and foremost, figuring out how you can get any plan that works at all, and then secondarily, trying to figure out how to make it work in the time allotted. Actual, multi-step strategy shouldn’t mostly feel like “thinking up some moon-shots”.
Strongly agreed with what you have said. See also the psychology of doomsday cults.
Thinking more, my current sense is that this is not an AI-specific thing, but a broader societal problem where people fail to think long-term. Peter Thiel very helpfully writes about this as a distinction between “definite” and “indefinite” attitudes toward the future: in the former the future is understandable and lawful, and in the latter it will happen no matter what you do (fatalism). My sense is that when I have told myself to focus on short timelines, where that has been unhealthy it has been a general excuse for not having to look at hard problems.