Against Conflating Expertise: Distinguishing AI Development from AI Implication Analysis
When discussing AI, we often associate technological proficiency with an understanding of AI’s broader implications. This notion is as misguided as believing that a master printing press mechanic from Gutenberg’s era would have been best equipped to predict the impacts of the printing press.
The AI development field, represented by experts like Geoffrey Hinton (proponent of safety measures and potential regulation) or Yann LeCun (an advocate of AI acceleration with fewer restrictions), primarily concerns itself with technical prowess and the creation of AI algorithms. In contrast, understanding the expansive implications of AI necessitates a comprehensive, multidisciplinary approach encompassing areas like warfare, game theory, sociology, philosophy, ethics, economics, and law, among others.
In this context, the debate between Yann LeCun and historian-author Yuval Noah Harari offers a glimpse into these diverging standpoints. There is an understandable tendency to lend more credence to LeCun’s perspective because of his technical accolades, but that deference may be a misstep when the question at hand is AI’s broader implications.
Harari, on the other hand, brings a more holistic perspective shaped by his wide-ranging study of human history, sociology, and philosophy. His expertise in scrutinizing historical shifts and societal change may leave him better prepared to explore AI’s implications, because understanding those implications and building AI systems require different skill sets.
Those who can fully grasp AI’s overarching impacts might not be technical experts but rather polymaths.
This post is a response to an observed bias in discussions surrounding AI: the undue weight given to technical experts’ perspectives on AI’s broad implications (as in the last AI risk letter), while the views of non-technical experts are downplayed or ignored.