I have been reading the “economic collapse” literature since I stumbled on Casey’s “Crisis Investing” in the early 1980s. The authors make really good arguments, and yet the collapses they predict never happen. In the late 1990s, after reading “Crisis Investing for the Rest of the ’90s”, I sat down and tried to figure out why they were all so consistently wrong.
The conclusion I reached was that humans are fundamentally more flexible and adaptable than the collapse-predictors’ arguments allow for, and that society manages to work around the regulations and other problems the government and big businesses keep creating. Since the regulations and rules keep growing, creating more problems and rigidity along the way, there will eventually be a collapse; but anyone who gives any kind of timing for it is grabbing the short end of the stick.
Anyone here have more suggestions as to why they have been so consistently wrong?
(originally posted on esr’s blog 2010-05-09, revised and expanded since)
Not sure if you’re referring to the same literature, but I note a great divergence between peak oil advocates and singularitarians. This is a little weird if you think of Aumann’s agreement theorem.
Both groups are heavily populated with engineer types, both are highly interested in cognitive biases, group dynamics, and the habits of individuals and societies, and neither is mainstream.
Both groups extrapolate curves from very real phenomena. For the Kurzweilian singularitarians it is computing power; for the peak oil advocates it is the Hubbert curve for resource extraction, along with solid net-energy arguments about why civilization should decline.
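For readers who haven’t seen the two curves side by side, the shapes being extrapolated are quite different: Moore’s-law-style growth is an exponential, while the Hubbert curve is the bell-shaped derivative of a logistic. A minimal sketch (Python, with made-up parameters purely for illustration, not either group’s actual model):

```python
# Illustrative only: the two curve shapes being extrapolated.
# All parameters below are made up; real models are far more involved.
import math

def moores_law(year, base_year=1970, doubling_years=2.0):
    """Exponential growth: capability doubles every `doubling_years`."""
    return 2 ** ((year - base_year) / doubling_years)

def hubbert_rate(year, peak_year=2010, total_resource=1.0, width=15.0):
    """Hubbert curve: extraction rate is the derivative of a logistic,
    rising to a peak and then declining symmetrically."""
    x = (year - peak_year) / width
    return total_resource / (width * (math.exp(x / 2) + math.exp(-x / 2)) ** 2)

for year in (1990, 2000, 2010, 2020, 2030):
    print(year,
          f"compute: x{moores_law(year):,.0f}",
          f"extraction rate: {hubbert_rate(year):.4f}")
```

The point of the sketch is only that extrapolating the first curve forecasts ever-cheaper computation, while extrapolating the second forecasts ever-scarcer energy, which is how the two groups reach such different conclusions from the same habit of curve-fitting.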
The extreme among the peak oil advocates are the collapsitarians, who believe that people should drastically change their lifestyles if they want to survive. They are not waiting for others to join them; many are preparing to move to small towns, villages, etc. The Oil Drum, linked here, started as a moderate peak oil site discussing all possibilities; nowadays, apparently, it’s all doom all the time.
The extreme among the singularitarians are asked for no such sacrifice, just for enough money and support to make sure that Friendly AI is achieved first.
Both groups believe that business as usual cannot go on much longer, but they expect dramatically different consequences. The singularitarians assert that economic conditions and technology will keep improving until an indifferent superintelligence is created and wipes out humanity. The collapsitarians believe that economic conditions will worsen, that civilization is not robustly built, and that it will collapse badly, with humanity probably going extinct or only the last hunter-gatherers surviving.
It should be possible to believe both: unless you’re expecting peak oil to lead to social collapse fairly soon, Moore’s law could make a singularity possible even while energy becomes more expensive.
Which could suggest a distressing pinch point: not wanting to delay AI too long in case we run out of energy for it to use; not wanting to make an AI too soon in case it’s Unfriendly.
Could you give some examples of the predicted collapses that didn’t happen?
Y2K. I thought I had a solid lower bound for the size of that one: small businesses basically did nothing in preparation, and they still had a fair amount of dependence on date-dependent programs, so I was expecting the impact on them to set a sizable lower bound on the size of the overall impact. I’ve never been so glad to be wrong. I would still like to see a good retrospective explaining how that sector of the economy wound up unaffected...
The smaller the business, the less likely it is to have its own software beyond a simple database or spreadsheet managed in, say, a Microsoft product. The smaller the business, the less likely that anything automated relies on correct date calculations.
These at least would have been strong mitigating factors.
[Edit: also, even industry-specific programs would likely have been fixed by the vendor. For example, most of the real-estate software produced by the company I worked for in the ’80s and ’90s had been Y2K-ready since before 1985.]
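For concreteness, the failure mode being discussed is the classic two-digit-year arithmetic that many older programs used; a minimal sketch of the bug (Python, hypothetical example, not any particular vendor’s code):

```python
# Hypothetical example of the classic Y2K two-digit-year bug.
from datetime import date

def years_since_invoice_buggy(invoice_yy, today):
    """Pre-Y2K style: only the last two digits of the year are stored,
    so a 1999 invoice looks 99 years *newer* than the year 2000."""
    return (today.year % 100) - invoice_yy

def years_since_invoice_fixed(invoice_year, today):
    """Four-digit years remove the ambiguity."""
    return today.year - invoice_year

jan_2000 = date(2000, 1, 15)
print(years_since_invoice_buggy(99, jan_2000))    # -99 (wrong)
print(years_since_invoice_fixed(1999, jan_2000))  # 1 (right)
```

Which fits the explanation above: software that only stores dates in an off-the-shelf database or spreadsheet, and never does arithmetic like this on them, had little to go wrong.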
First, the “economic collapse” I referred to in the original post was actually at least six different predictions made at different times.
As another example, though not quite a “collapse” scenario, consider predictions of the likelihood of nuclear war; there were three distinct periods when it was considered more or less likely by different groups. In the late 1940s, some intelligent and informed but peripheral observers, such as Robert Heinlein, considered it a significant risk. Next was the late 1950s through the Cuban Missile Crisis in the early 1960s, when nearly everybody considered it a major risk. Then there was another scare in the late 1970s and early 1980s, driven primarily by leftists (including the media) who favored disarmament and promoted the fear to push the US to reduce its stockpile, and by conservatives (derided by the media as “survivalists” and nuts) who were afraid they would succeed.