This is my critique of David Brooks's opinion piece 'Many People Fear A.I. They Shouldn't' in The New York Times.
TL;DR: Brooks believes that AI will never replace human intelligence but does not describe any testable capabilities that he predicts AI will never possess.
David Brooks argues that artificial intelligence will never replace human intelligence. I believe it will. The fundamental distinction is that human intelligence emerged through evolution, while AI is being designed by humans. For AI to never match human intelligence, there would need to be a point where progress in AI becomes impossible. This would require the existence of a capability that evolution managed to develop but that science could never replicate. Given enough computing power, why would we not be able to replicate this capability by simulating a human brain? Alternatively, we could simulate evolution inside a sufficiently complex environment. Does Brooks believe that certain functionalities can only be realized through biology? While this seems unlikely, if it were the case, we could create biological AI. Why does Brooks believe that AI has limits that carbon-based brains produced by evolution do not have? It is possible that he is referring to a narrower definition of AI, such as silicon-based intelligence built on the currently popular machine learning paradigm, but the article doesn't specify which AIs Brooks is talking about.
In fact, one of my main concerns with the article is that Brooks’ arguments rely on several ambiguous terms without explaining what he means by them. For example:
"The A.I. 'mind' lacks consciousness, understanding, biology, self-awareness, emotions, moral sentiments, agency, a unique worldview based on a lifetime of distinct and never to be repeated experiences."
Most of these terms are associated with the subjective, non-material phenomenon of consciousness (i.e., 'what it is like to be something'). However, AIs that possess all the testable capabilities of humans but lack consciousness would still be able to perform human jobs. After all, you are paid for your output, not your experiences. Therefore, I believe we should avoid focusing on the nebulous concept of consciousness and instead concentrate on testable capabilities. If Brooks believes that certain capabilities require conscious experience, I would be interested to know what those capabilities are. Demonstrating such capabilities should, in that case, be enough to convince Brooks that an entity is conscious.
Take, for example, the term 'self-awareness'. If we focus on the testable capability this term implies, I would argue that current AI systems already exhibit it. If you ask ChatGPT-4o 'What are you?', it provides an accurate answer. This is analogous to how we assess whether elephants are self-aware: we mark their bodies in a place they cannot see and test whether they can identify the mark with the help of a mirror.
I suggest that Brooks supplement these ambiguous terms with concrete tests that he believes AI will never be able to pass. Additionally, it would be helpful if he could clarify why he believes science will never be able to replicate these capabilities, despite evolution having achieved them.
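To make the proposal concrete, here is a minimal sketch (in Python, with an entirely hypothetical stub standing in for a real AI system) of what a 'concrete, testable capability' could look like: each test pairs a prompt with a pass criterion, so a claimed human-only capability becomes an empirical question rather than a definitional one.

```python
def stub_model(prompt: str) -> str:
    """Hypothetical stand-in for the AI system under evaluation.

    A real evaluation would query an actual system; the canned answer
    below is only for illustration.
    """
    canned = {
        "What are you?": "I am an AI language model, a computer program that generates text.",
    }
    return canned.get(prompt, "")


def passes(response: str, required_keywords: list[str]) -> bool:
    """A test passes if the response contains every required keyword."""
    lowered = response.lower()
    return all(kw.lower() in lowered for kw in required_keywords)


# A crude self-report analogue of the elephant mirror test:
# does the system accurately describe what it is?
prompt, keywords = "What are you?", ["AI", "program"]
print(passes(stub_model(prompt), keywords))  # True for this stub
```

The point of the sketch is not the specific keywords, which are arbitrary, but the shape of the exercise: once a capability is phrased as a prompt plus a pass criterion, the claim that AI will never possess it becomes falsifiable.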
On a broader level, this reminds me of how people once believed that the universe revolved around Earth simply because Earth was the celestial body that mattered most to them. Just because it feels, from our human perspective, that we are special does not mean that we are. The universe is vast, and Earth occupies no significant place within it beyond being our home. Similarly, the space of potential minds and intelligences is vast. It would be very surprising if our carbon-based brains, shaped by evolution, occupied an insurmountable peak in this space.
In his opening paragraph, Brooks claims to acknowledge the dangers of AI, yet the only potential harm he mentions is misuse. I would argue that the most critical risks associated with AI are existential risks, and there is arguably a consensus among experts in the field that these risks are serious. Consider the views of the four most-cited AI researchers on this topic: Hinton, Bengio, and Sutskever have all expressed significant concerns about existential risks posed by AI, while LeCun does not believe in such risks. The leaders of the top three AI labs, Altman, Amodei, and Hassabis, have also voiced concerns about existential risks. I understand that the article is intended for a liberal arts audience, but I still find it unreasonable that John Keats is quoted before any AI experts.
In summary, the article is vague and lacks the specificity needed for a thorough critique. Mostly, I interpret it as Brooks finding it difficult to imagine that something as different from the human mind as an AI could ever be conscious. As a result, he concludes that there are capabilities AI will never possess. The headline of the article is unearned, as the article does not even address key concerns voiced by experts in the field, such as the existential risks posed by a failure to align Artificial General Intelligence.