Review of “Why AI is Harder Than We Think”
“Why AI is Harder Than We Think” is a recent (April 26, 2021) arXiv preprint by Melanie Mitchell. While the author is a tenured professor with technical publications, they have also published multiple layman-level essays and articles that tend to be skeptical and pessimistic about AI progress. “Why AI is Harder Than We Think” falls somewhere between those two extremes: it is written in the style of an academic paper, but the arguments are pitched at a lay audience.
Reddit’s /r/machinelearning was pretty harsh on the paper—click here for the full discussion. While I agree it has flaws, I still found it an interesting and valuable read because I enjoyed the process of figuring out where my opinions diverged from the author’s.
In this post, I will briefly summarize the paper and state my opinions. The most interesting part (= where I disagree the most) is Fallacy 4, so skip to that if you don’t want to read the whole blog post.
Introduction: AI Springs and Winters
Self-driving cars were predicted to be available for purchase by 2020, but they aren’t. The author also gives a more in-depth historical account of the various AI “springs” (periods of growth and optimism driven by advances in AI research) and “winters” (periods of stagnation and pessimism when those advances turn out not to be powerful ‘enough’) that have occurred since the 1950s.
Why does this keep happening? The author’s thesis:
In this chapter I explore the reasons for the repeating cycle of overconfidence followed by disappointment in expectations about AI. I argue that over-optimism among the public, the media, and even experts can arise from several fallacies in how we talk about AI and in our intuitions about the nature of intelligence.
Specifically, they discuss four fallacies, which I’ll address individually in the coming sections.
Sidenote: I really enjoyed pages 2–3 (the section on AI Springs and Winters). Whether or not you buy the thesis of this paper, that section is a well-written historical account, and you should check it out if you’re interested.
Fallacy 1: Narrow intelligence is on a continuum with general intelligence
The claim here is that when we make progress in AI by beating top humans at chess, we have solved a problem that is much narrower than we think. Similar claims apply to seemingly general systems such as IBM’s Watson and OpenAI’s GPT-3.
My opinion
Issue 1: Narrow tasks are slightly generalizable. While it is true that chess is much narrower than general intelligence, the author neglects to mention that once we solved chess, we were able to “easily” apply those techniques to other tree-search-based domains. For a certain class of tasks, chess is like an “NP-complete problem”: once we solved it, we were able (after a “polynomial-time transformation”) to solve all the other problems in that class. This is how DeepMind went from AlphaGo to MuZero so quickly.
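To make the transfer point concrete, here is a minimal sketch (my own illustration, not DeepMind’s code) of why tree-search methods generalize. The search routine never mentions chess; everything game-specific hides behind a handful of callbacks, so “solving” a new game mostly means supplying new callbacks rather than a new algorithm.

```python
from dataclasses import dataclass
from typing import Any, Callable, Iterable


@dataclass
class Game:
    """Game-specific callbacks; the search below is game-agnostic."""
    legal_moves: Callable[[Any], Iterable[Any]]
    apply: Callable[[Any, Any], Any]      # returns the successor state
    is_terminal: Callable[[Any], bool]
    evaluate: Callable[[Any], float]      # value for the player to move


def negamax(game: Game, state: Any, depth: int) -> float:
    """Fixed-depth negamax. AlphaZero/MuZero use MCTS plus learned evaluators
    instead, but the 'plug in any game' structure is the same."""
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state)
    return max(-negamax(game, game.apply(state, move), depth - 1)
               for move in game.legal_moves(state))


# Toy instantiation: one-pile Nim, take 1-3 stones per turn, last stone wins.
nim = Game(
    legal_moves=lambda n: [k for k in (1, 2, 3) if k <= n],
    apply=lambda n, k: n - k,
    is_terminal=lambda n: n == 0,
    evaluate=lambda n: -1.0 if n == 0 else 0.0,  # n == 0: previous player won
)
print(negamax(nim, 10, depth=10))  # 1.0 -> player to move wins with perfect play
```

Swapping nim for chess or Go callbacks changes nothing about negamax, which is the sense in which cracking one tree-search domain buys you a whole class of them.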
Issue 2: GPT-3 is kinda general. I completely disagree with the notion that GPT-3 is only slightly less narrow than chess when compared against human intelligence. GPT-n trained on a sufficiently large amount of text written by me would be indistinguishable from the real me. If GPT-n is 90% “general intelligence” and chess is 0.001% (coming from some dumb heuristic like “chess-like things are 1 out of 100,000 tasks a general intelligence should be able to do”), then I think GPT-3 is 1% general intelligence. And 1% is closer to 100% than it is to 0.001%, in terms of orders of magnitude.
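To spell out that last orders-of-magnitude claim (using my made-up percentages from above):

```latex
% distance in orders of magnitude = |log10(a/b)|
\left|\log_{10}\tfrac{1\%}{100\%}\right| = 2
\qquad \text{vs.} \qquad
\left|\log_{10}\tfrac{1\%}{0.001\%}\right| = 3
```

On a log scale, 1% sits two orders of magnitude away from 100% but three away from 0.001%.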
Where we agree: The “first-step fallacy” is real. Here, the “first-step fallacy” refers to the phenomenon where an advance (a “first step”) in AI is perceived as less narrow than it really is. I agree that the ML research frontier tends to “overfit” to tasks, causing research progress to appear as if it required more insight than it did. This seems related to the planning fallacy: researchers assume research progress will continue at the same rate, when really they should expect diminishing returns on effort. A third way of thinking about this is related to The illustrated guide to a Ph.D. I’ve lazily altered some images from there to make my point.
Fallacy 2: Easy things are easy and hard things are hard
The claim here is basically Moravec’s paradox: tasks that humans think are hard are actually relatively easy for AI, and tasks that humans consider easy are much more difficult. For example, AlphaGo beating top humans at Go was seen as a triumph because Go is a very challenging game, but “challenging” here means challenging from a human’s perspective. From an AI’s perspective, charades is a much more “challenging” game.
My opinion
I agree with the author that Moravec’s paradox is real. I think this is a good point to bring up when talking about the history of AI, and (for example) why early attempts at computer vision failed. However, I think the modern ML research community has internalized this phenomenon, so I don’t think the point is especially relevant to a modern conversation about the future of artificial general intelligence.
Fallacy 3: The lure of wishful mnemonics
The claim here is that the terminology we use shapes our understanding of the objects we are discussing. For example, calling a neural network a “neural network” implies that it is more similar to a human brain than it actually is. The same applies to how NLP benchmarks are named (a model that does well on the “Stanford Question Answering Dataset” is not going to be able to answer all questions well) and to how people talk about models’ behavior (saying “AlphaGo’s goal” implies more coherence than the system may have). That is, the terminology/shorthands/mnemonics here are all wishful.
Using these shorthands is damaging because it gives the public the impression that AI systems are more capable than they actually are.
My opinion
I agree with the author that terminology is often not well-explained, and this leads to misrepresentation of AI research in the media. It’s hard to fix this problem because STEM coverage is rarely good. I think the solution here is to get better reporters (for example, Quanta Magazine has good math reporting) rather than to change the language AI researchers use.
There is an additional problem, which the author doesn’t focus on as much, of researchers using these “wishful mnemonics” within the research community. Sometimes this is fine—I find it easy to substitute a phrase like “AlphaGo’s goal” with a more precise phrase like “AlphaGo’s loss function is minimized by the following policy”.
But in the context of AI Safety, anthropomorphizing can get dicey. For example, the problem of understanding deceptive mesa-optimizers only becomes tricky and nuanced when we stop anthropomorphizing the system. If this point is not communicated well from the AI Safety community to the ML research community at large, the AI Safety community runs the risk of leaving the ML research community unconvinced that deceptive alignment is an important problem.
I wish the author had more clearly distinguished between these two settings: the first being communication with the media and the resulting message to the public, and the second being the language used internally within the AI research community. Their point about “wishful mnemonics” would be stronger if they explained when such mnemonics are beneficial or problematic in each setting.
Fallacy 4: Intelligence is all in the brain
This is the most complex claim of the four. The author’s reasoning here can be separated into two ideas. In both, the theme is that human intelligent reasoning does not only occur in the brain, but also <somewhere else>.
Idea 1: human intelligent reasoning does not only occur in the brain—it is also inextricably intertwined with the rest of our body. This theory is called embodied cognition. Experimental evidence in favor of embodied cognition exists, for example:
Research in neuroscience suggests, for example, that the neural structures controlling cognition are richly linked to those controlling sensory and motor systems, and that abstract thinking exploits body-based neural ‘maps’.
Idea 2: human intelligent reasoning cannot be reduced to a sequence of purely rational decisions—it is also inextricably intertwined with our emotions and cultural biases. Therefore, we have no reason to believe that we can create an artificial general intelligence that is superintelligent yet lacks emotions and cultural knowledge.
My opinion
Fallacy 4 is where this paper is weakest. I don’t find either of these ideas particularly convincing, and I find Idea 2 especially problematic.
My opinion on Idea 1: Embodied cognition sounds reasonable to me. I found the example mentioned in the paper too abstract to be compelling, so here is a more concrete example of my own: when humans do geometry, they may use spatial reasoning so intensely that their eyes, arms, hands, etc. are engaged. This means that the amount of computation being spent on geometry exceeds the computational power of the brain alone.
Where my opinion differs from the author’s is on what to do about this. The author seems to think that more research into embodied cognition—and more specifically into the precise mechanisms underlying human intelligence—is necessary for making progress on artificial general intelligence. However, I think all embodied cognition says is that we might need a bit more than 10^15 FLOP/s to match the processing power of a human. An extra factor of 2, 10, or 100 won’t make a difference in the long run. The Bitter Lesson provides evidence in favor of my position here: compute often eventually outperforms hard-coded human insights.
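As a back-of-the-envelope check on why a constant factor shouldn’t matter much (my own numbers; the two-year doubling time for compute per dollar is an assumption, not a figure from the paper):

```python
import math

DOUBLING_TIME_YEARS = 2.0  # assumed doubling time for compute per dollar

for factor in (2, 10, 100):
    doublings = math.log2(factor)
    years = doublings * DOUBLING_TIME_YEARS
    print(f"{factor:>4}x extra compute ~ {doublings:.1f} doublings "
          f"~ {years:.0f} years of hardware progress")
#    2x extra compute ~ 1.0 doublings ~ 2 years
#   10x extra compute ~ 3.3 doublings ~ 7 years
#  100x extra compute ~ 6.6 doublings ~ 13 years
```

Under that assumption, even the pessimistic factor of 100 amounts to roughly a decade of hardware progress, which is why I read embodied cognition as a constant-factor correction rather than a blocker.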
My opinion on Idea 2: At best, this seems incorrect. At worst, this seems completely incoherent.
Ironically, the author’s problem here seems to be that they are falling for their own Fallacy 3 (“The lure of wishful mnemonics”)—more specifically, they seem to be over-anthropomorphizing. Yes, it is true that human intelligent reasoning is intertwined with our irrational heuristics and biases. But this doesn’t mean that an artificial general intelligence has to operate in the same way.
For example, the author is skeptical of Bostrom’s orthogonality thesis because they think (for example) a paperclip maximizer with enough intelligence to be a threat cannot exist “without any basic humanlike common sense, yet while seamlessly preserving the speed, precision, and programmability of a computer.”
While I agree that a superintelligent paperclip maximizer would need a reasonably accurate world model, I disagree with the notion that it would have to learn and internalize humanlike common sense or human values.
For example, one could imagine a paperclip maximizer trained exclusively on synthetic physics-simulation data. If its influence on human society is only indirect (for example, maybe all it can do is control where clouds are distributed in the sky, or something similarly silly), then the strategies it employs to increase paperclip production will seem convoluted and unintelligible from a human perspective. (“Why does putting clouds in these exact 1000 places triple paperclip production? Who knows!”) Maybe concepts like “what is a paperclip factory” will crystallize in its internals, but I still think such a model’s inner workings would be very far from “humanlike common sense”.
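To make the thought experiment concrete, here is a deliberately silly sketch. Everything in it is hypothetical (the simulator, the reward, and the optimization loop are stand-ins I made up), but it shows the structural point: the training signal is just a number coming out of a synthetic simulator, so no human concept ever enters the objective.

```python
import math
import random


def simulated_paperclip_output(cloud_positions):
    """Stand-in black-box physics simulator: cloud positions in, paperclips out."""
    return sum(math.sin(17.0 * x) * math.cos(3.0 * x * x) for x in cloud_positions)


def train_cloud_policy(n_clouds=1000, steps=5_000, seed=0):
    """Dumb hill climbing; a real agent would use RL, but either way it only
    optimizes a number, not 'humanlike common sense'."""
    rng = random.Random(seed)
    positions = [rng.uniform(0.0, 1.0) for _ in range(n_clouds)]
    best = simulated_paperclip_output(positions)
    for _ in range(steps):
        i = rng.randrange(n_clouds)
        old = positions[i]
        positions[i] = rng.uniform(0.0, 1.0)       # propose moving one cloud
        score = simulated_paperclip_output(positions)
        if score >= best:
            best = score                           # keep the improvement
        else:
            positions[i] = old                     # revert
    return positions, best


placements, clips = train_cloud_policy()
# The learned placements "work", but good luck explaining why in human terms.
print(f"simulated paperclip output: {clips:.1f}")
```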
Moreover, there is no reason to expect it to be difficult for an artificial general intelligence to learn whatever human biases and cultural knowledge are supposedly necessary for human-level intelligence. In fact, this seems to be the default path for AGI.
Interpreting the author’s point as charitably as possible, the issue seems to be an imprecise notion of intelligence. MuZero is able to “superintelligently” beat some Atari games without the human bias of “getting dopamine when I reach a checkpoint” or the cultural knowledge that “death is bad and I should avoid it”.
So the author’s notion of intelligence must be broader—closer to a general-purpose, human-level intelligence. But then, does GPT-3 count? Its output is human-like writing that feels shaped by irrational heuristics and biases, not the product of a purely rational superintelligence.
Even if GPT-3 doesn’t count, much of the field of AI Ethics is devoted specifically to the problem of ridding AI systems of human biases. Whenever we train ML systems on human data, the default outcome is that they learn our human biases! Since we keep running into this problem, the idea that AGI progress will be blocked by our limited understanding of the mechanics of human cognition seems ludicrous.
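A toy illustration of the biases-by-default point (the corpus is made up; real demonstrations use word embeddings or language models trained on web text): a system that merely mirrors co-occurrence statistics in human-written text inherits whatever associations that text happens to contain.

```python
from collections import Counter
from itertools import combinations

corpus = [
    "the doctor finished his shift",
    "the doctor said he would call",
    "the nurse finished her shift",
    "the nurse said she would call",
]

# Count how often word pairs co-occur within the same sentence.
cooc = Counter()
for sentence in corpus:
    words = sentence.split()
    for a, b in combinations(words, 2):
        cooc[(a, b)] += 1
        cooc[(b, a)] += 1

# The "learned" association simply reflects the text it was trained on.
print(cooc[("doctor", "he")], cooc[("doctor", "she")])  # 1 0
print(cooc[("nurse", "she")], cooc[("nurse", "he")])    # 1 0
```

No one decided to encode a stereotype here; the association falls straight out of the statistics of the training text, which is exactly why de-biasing is its own research problem.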
Conclusion
This paper is at its best when it goes over the history of AI, and to some extent when it discusses Fallacy 3 (“The lure of wishful mnemonics”). I do think that crisp communication, both within the AI research community and with the general public, would make conversations about AI policy more productive.
This also applies to the AI Safety community in particular: the fact that the author, a professor of computer science, understood the orthogonality thesis as poorly as they did speaks to how much more credible AI Safety could become if it had more accessible literature.
The paper is certainly at its worst in Fallacy 4, where it argues that AI is hard by appealing to a “special sauce” in human cognition. I would not be surprised if human reasoning is more complex than a brain-sized neural network, given that the brain is highly optimized and the body performs additional computation. But at worst, I think all this implies is that we’ll need a bit of extra compute to achieve human-level intelligence.
For the well-written and well-sourced AI history content alone, I do recommend reading this paper. Just be sure to critically evaluate the author’s claims about what progress in AI will look like going forward, because I don’t buy many of them.