How effectively can we plan for future decades? (initial findings)
Cross-posted from MIRI’s blog.
MIRI aims to do research now that increases humanity’s odds of successfully managing important AI-related events that are at least a few decades away. Thus, we’d like to know: To what degree can we take actions now that will predictably have positive effects on AI-related events decades from now? And, which factors predict success and failure in planning for decades-distant events that share important features with future AI events?
Or, more generally: How effectively can humans plan for future decades? Which factors predict success and failure in planning for future decades?
To investigate these questions, we asked Jonah Sinick to examine historical attempts to plan for future decades and summarize his findings. We pre-committed to publishing our entire email exchange on the topic (with minor editing), just as Jonah had done previously with GiveWell on the subject of insecticide-treated nets. The post below is a summary of findings from our full email exchange (.docx) so far.
We decided to publish our initial findings after investigating only a few historical cases. This allows us to gain feedback on the value of the project, as well as suggestions for improvement, before continuing. It also means that we aren’t yet able to draw any confident conclusions about our core questions.
The most significant results from this project so far are:
Jonah’s initial impressions about The Limits to Growth (1972), a famous forecasting study on population and resource depletion, were that its long-term predictions were mostly wrong, and also that its authors (at the time of writing it) didn’t have credentials that would predict forecasting success. Upon reading the book, its critics, and its defenders, Jonah concluded that many critics and defenders had seriously misrepresented the book, and that the book itself exhibits high epistemic standards and does not make significant predictions that turned out to be wrong.
Svante Arrhenius (1859-1927) did a surprisingly good job of climate modeling given the limited information available to him, but he was nevertheless wrong about two important policy-relevant factors. First, he failed to predict how quickly carbon emissions would increase. Second, he predicted that global warming would have positive rather than negative humanitarian impacts. If more people had taken Arrhenius’ predictions seriously and burned fossil fuels faster for humanitarian reasons, then today’s scientific consensus on the effects of climate change suggests that the humanitarian effects would have been negative.
In retrospect, Norbert Wiener’s concerns about the medium-term dangers of increased automation appear naive, and it seems likely that even at the time, better epistemic practices would have yielded substantially better predictions.
Upon initial investigation, several historical cases seemed unlikely to shed substantial light on our core questions: Norman Rasmussen’s analysis of the safety of nuclear power plants, Leo Szilard’s choice to keep secret a patent related to nuclear chain reactions, Cold War planning efforts to win decades later, and several cases of “ethically concerned scientists.”
Upon initial investigation, two historical cases seemed like they might shed light on our core questions, but only after many hours of additional research on each of them: China’s one-child policy, and the Ford Foundation’s impact on India’s 1991 financial crisis.
We listed many other historical cases that may be worth investigating.
The project has also produced a chapter-by-chapter list of some key lessons from Nate Silver’s The Signal and the Noise, available here.
Further details are given below. For sources and more, please see our full email exchange (.docx).
The Limits to Growth
In his initial look at The Limits to Growth (1972), Jonah noted that the authors were fairly young at the time of writing (the oldest was 31), and they lacked credentials in long-term forecasting. Moreover, it appeared that Limits to Growth predicted a sort of doomsday scenario, à la Ehrlich’s The Population Bomb (1968), that had failed to occur. In particular, it appeared that Limits to Growth had failed to appreciate Julian Simon’s point that other resources would substitute for depleted resources. Upon reading the book, Jonah found that:
The book avoids strong, unconditional claims. Its core claim is that if exponential growth of resource usage continues, then there will likely be a societal collapse by 2100.
The book was careful to qualify its claims, and met high epistemic standards. Jonah wrote: “The book doesn’t look naive even in retrospect, which is impressive given that it was written 40 years ago.”
The authors discuss substitutability at length in chapter 4.
The book discusses mitigation at a theoretical level, but doesn’t give explicit policy recommendations, perhaps because the issues involved were too complex.
Svante Arrhenius
Derived more than a century ago, Svante Arrhenius’ equation for how the Earth’s temperature varies as a function of the concentration of carbon dioxide is the same equation used today. But while Arrhenius’ climate modeling was impressive given the information available to him at the time, he failed to predict (by a large margin) how quickly fossil fuels would be burned. He also predicted that global warming would have positive humanitarian effects, but based on our current understanding, the expected humanitarian effects seem negative.
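For reference, Arrhenius’ relation makes the temperature response logarithmic in CO2 concentration. A sketch in modern notation (the symbols α, C, and C₀ are today’s labels, not Arrhenius’ own):

```latex
% Arrhenius' greenhouse law, in modern notation:
% the change in surface temperature is logarithmic in CO2 concentration.
\Delta T = \alpha \, \ln\!\left(\frac{C}{C_0}\right)
% C      : current atmospheric CO2 concentration
% C_0    : a baseline concentration (e.g., pre-industrial)
% \alpha : an empirically fitted climate-sensitivity parameter
```

One consequence of the logarithmic form is that each doubling of CO2 concentration produces roughly the same increment of warming, which is why climate sensitivity is still conventionally quoted per doubling.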
Arrhenius’s predictions were mostly ignored at the time, but had people taken them seriously and burned fossil fuels more quickly, the humanitarian effects would probably have been negative.
Norbert Wiener
As Jonah explains, Norbert Wiener (1894-1964) “believed that unless countermeasures were taken, automation would render low skilled workers unemployable. He believed that this would precipitate an economic crisis far worse than that of the Great Depression.” Nearly 50 years after his death, this doesn’t seem to have happened much, though it may eventually happen.
Jonah’s impression is that Wiener had strong views on the subject, doesn’t seem to have updated much in response to incoming evidence, and seems to have relied too heavily on what Berlin (1953) and Tetlock (2005) described as “hedgehog” thinking: “the fox knows many things, but the hedgehog knows one big thing.”
Some historical cases that seem unlikely to shed light on our questions
Rasmussen (1975) is a probabilistic risk assessment of nuclear power plants, written before any nuclear power plant disasters had occurred. However, Jonah concluded that this historical case wasn’t very relevant to our specific questions about taking actions useful for decades-distant AI outcomes, in part because the issue is highly domain-specific, and because the report makes a large number of small predictions rather than a few salient predictions.
In 1936, Leó Szilárd assigned his chain reaction patent in a way that ensured it would be kept secret from the Nazis. However, Jonah concluded:
I think that this isn’t a good example of a nontrivial future prediction. The destructive potential seems pretty obvious – anything that produces a huge amount of concentrated energy can be used in a destructive way. As for the Nazis, Szilard was himself Jewish and fled from the Nazis, and it seems pretty obvious that one wouldn’t want a dangerous regime to acquire knowledge that has destructive potential. It would be more impressive if the early developers of quantum mechanics had kept their research secret on account of dimly being aware of the possibility of destructive potential, or if Szilard had filed his patent secretly in a hypothetical world in which the Nazi regime was years away.
Jonah briefly investigated Cold War efforts aimed at winning the war decades later, but concluded that it was “too difficult to tie these efforts to war outcomes.”
Jonah also investigated Kaj Sotala’s A brief history of ethically concerned scientists. Most of the historical cases cited there didn’t seem relevant to this project. Many cases involved “scientists concealing their discoveries out of concern that they would be used for military purposes,” but this seems to be an increasingly irrelevant sort of historical case, since science and technology markets are now relatively efficient, and concealing a discovery rarely delays progress for very long (e.g. see Kelly 2011). Other cases involved efforts to reduce the use of dangerous weapons for which the threat was imminent during the time of the advocacy. There may be lessons among these cases, but they appear to be of relatively weak relevance to our current project.
Some historical cases that might shed light on our questions with much additional research
Jonah performed an initial investigation of the impacts of China’s one-child policy, and concluded that it would take many, many hours of research to determine both the sign and the magnitude of the policy’s impacts.
Jonah also investigated a case involving the Ford Foundation. In a conversation with GiveWell, Lant Pritchett said:
[One] example of transformative philanthropy is related to India’s recovery from its economic crisis of 1991. Other countries had previously had similar crises and failed to implement good policies that would have allowed them to recover from their crises. By way of contrast, India implemented good policies and recovered in a short time frame. Most of the key actors who ensured that India implemented the policies that it did were influenced by a think tank established by the Ford Foundation ten years before the crisis. The think tank exposed Indians to relevant ideas from the developed world about liberalization. The difference between (a) India’s upward economic trajectory and (b) what its upward economic trajectory would have been if it had been unsuccessful in recovering from the 1991 crisis is in the trillions of dollars. As such, the Ford Foundation’s investment in the think tank had a huge impact. For the ten years preceding the crisis, it looked like the think tank was having no impact, but it turned out to have a huge impact.
Unfortunately, Jonah was unable to find any sources or contacts that would allow him to check whether this story is true.
Other historical cases that might be worth investigating
Historical cases we identified but did not yet investigate include:
Eric Drexler’s early predictions about the feasibility and likely effects of nanotechnology.
Efforts to detect asteroids before they threaten Earth
The Green Revolution
The modern history of cryptography
Early efforts to mitigate global warming
Possible deliberate long-term efforts to produce scientific breakthroughs (the transistor? the human genome?)
Rachel Carson’s Silent Spring (1962)
Paul Ehrlich’s The Population Bomb (1968)
The Worldwatch Institute’s State of the World reports (since 1984)
The WCED’s Our Common Future (1987)
I’m a little puzzled that you don’t include such obvious examples of attempted long-term schemes as Soviet (and others’) five-year plans, or dirigisme more generally. Another class of human plans intended to have long-term effects is peace treaties and settlements; probably the diplomats at Versailles did not literally think they were making a peace for all time, but surely they expected to do better than having another shattering conflict in their own lifetimes. Conversely, at Vienna the Great Powers probably intended their settlement to last thirty or fifty years, but would likely have been surprised to learn that the next Great War would be almost 100 years later. And I doubt the negotiators at Westphalia thought they were creating a concept of statehood that we would still be using 350 years later; they may well have thought they were patching up yet another temporary truce to allow shattered armies and stripped economies to recover a bit.
With cryptography, the government attempted to delay mainstream access to the technology—so they could benefit from using it. It would be interesting to know if they are doing the same to mainstream machine intelligence efforts—for example, via intellectual property laws and secrecy orders.
Can you recommend any good histories of the government’s attempts to delay mainstream access to new cryptographic techniques?
James Bamford’s books in this area are very readable:
The Puzzle Palace: Inside the National Security Agency, America’s Most Secret Intelligence Organization
Body of Secrets: Anatomy of the Ultra-Secret National Security Agency
The classic history of the field is this one, but you’ll get some coverage of the topic in practically any popular book on cryptography.
Trivial correction: Wiener, not Weiner.
Fixed!
Also: Ehrlich, not Erlich.
Fixed!
One factor that will be difficult to evaluate is how predictions have interacted with later events. Warnings can (at times) be heeded and risks avoided. Those most difficult cases might be precisely the ones of greatest interest given your aims of shifting humanity’s odds.
A related question is how much impact these predictions had (aside from their accuracy). Things like Limits to Growth or The Population Bomb were extremely influential in spite of their predictive failures (once again, leaving aside the hypothesis that they served as self-refuting prophecies).
Once you have a better sense of these cases, it will also be interesting to evaluate how responses developed. Were the authors or predictors influential in the resulting actions? You mention at least one case in the email thread where the author was shut out of later efforts due to the prediction (Drexler). I’d be curious to see how the triggers interacted with the resulting movements or responses (if any).
In order to avoid selection bias, it would be good to define some domain(s) and then study all long-term predictions in those domains. Something along the lines of “the first 100 studies listed in a search of such-and-such database using these keywords”. Note, I haven’t read the full email exchange, so if this is addressed there, I apologize for wasting your time.
My impression is that Jonah may have gotten the wrong impression of Wiener’s views. I also didn’t see where Jonah talked about Wiener not having updated much in response to incoming evidence. (What evidence?) Did you see that in his post, or did he write about it elsewhere?
I responded here.
I wrote this in our full email exchange and didn’t provide justification. I no longer remember what I had in mind, and I may not have had good reasons for saying that.
My best guess is that I was thinking something along the lines of “he didn’t investigate sufficiently thoroughly to solicit and understand other people’s opinions on the subject,” but this is coming primarily from a general strong prior that people don’t solicit other perspectives and try to understand them, rather than anything specific to Wiener, and I recognize that there’s room for disagreement as to what prior is appropriate.
It seems really wrong for you to state any conclusions based solely on your prior, since the whole point of this exercise is to gather evidence about how hard it is to plan for the future. Don’t you think that given the purpose of the project, people would naturally interpret all of your writings from the project as being about the evidence that you found, rather than about your personal priors?
Morally wrong? ;)
I didn’t come across evidence that Wiener did update his beliefs.
I don’t necessarily stand by my remark about him not updating his beliefs. Note that the email exchange with Luke was very long. Taking enough care so as to make sure that every statement that I made was epistemically justified would have been prohibitively time consuming.
Do you think he should have updated his beliefs, if so how? Given that he started writing about this stuff in 1947, and died in 1964, I’m not sure what kind of update he could have possibly (ideally) performed, that might justify the conclusion that he “doesn’t seem to have updated much in response to incoming evidence”.
Perhaps one update may be that unemployment isn’t as urgent a problem as he thought, assuming he did originally think it really urgent. But note that in the second writing I linked to, 13 years after his first, he no longer talked about unemployment. If he both thought the issue urgent and failed to update, don’t you think he would have repeated his warnings in an article dedicated to “the social consequences of [cybernetic techniques]”?
This seems like a poor excuse, given that the statement in question was part of the main conclusions of the Wiener project, not a tangential remark, which is why Luke chose to repeat it for public consumption. In any case, do you currently think it sufficiently justified to be included in Luke’s post?
As a meta-remark, I think that you’re being unnecessarily combative / aggressive.
To my mind, the key takeaway from the Wiener case study is that the juxtaposition of
(i) automation hasn’t dramatically increased unemployment, and
(ii) Wiener expressed concern that automation would dramatically increase unemployment
shouldn’t be taken as evidence that it’s not possible to make predictions about AI. My original justification for this takeaway was “Wiener was wrong, but his methodology was bad.” Your view seems to be “Wiener wasn’t wrong,” which, while different from what I said, is also a justification for the takeaway. So I don’t think that it matters much either way.
Thanks for the feedback.
Ok, this makes your position more understandable. I guess I was thinking that Wiener’s case also has relevance for other issues that we care about, for example what kind of epistemic standards we can expect mainstream AGI researchers (or mainstream elites in general) to adopt when thinking about the future.
A third party perspective: I hadn’t noticed this while watching the thread in RSS, so I went back and checked. I now think that if I were Wei Dai, the change I’d want to make in future threads would be to avoid language like “really wrong” and “a poor excuse” in favour of less loaded terms like “a big mistake” or “not a good reason”.