Steven Pinker on ChatGPT and AGI (Feb 2023)
Link post
While I disagreed with a lot of Robin Hanson's latest take on AI risk, I'm glad he came out with an updated position. With everything that's happened in the past 6-12 months, I think it's a good time for public intellectuals and other prominent people who have previously commented on AGI and AI risk to check in again and share their latest views.
That got me curious about whether Steven Pinker had made any recent statements. I found this article in the Harvard Gazette from last month (Feb 2023), which I couldn't find posted on LessWrong before:
Article link
Will ChatGPT supplant us as writers, thinkers?
Q&A with Steven Pinker, by Alvin Powell, The Harvard Gazette, Feb 14, 2023
Summary
Here's a summary of the article that ChatGPT generated for me just now (bold mine):
Steven Pinker, a psychology professor at Harvard, has commented on OpenAI's ChatGPT, an artificial intelligence (AI) chatbot that can answer questions and write texts. He is impressed with the AI's abilities, but also highlights its flaws, such as a lack of common sense and factual errors. Pinker believes that ChatGPT has revealed how statistical patterns in large data sets can be used to generate intelligent-sounding text, even if it does not have understanding of the world. He also believes that the development of artificial general intelligence is incoherent and not achievable, and that current AI devices will always exceed humans in some challenges and not others. Pinker is not concerned about ChatGPT being used in the classroom, as its output is easy to unmask because it mashes up quotations and references that do not exist.
Note that while Pinker calls AGI an incoherent idea, he doesn't speak specifically to existential risk from AI misalignment. So his position isn't totally explicit, but I think we can infer that he considers the risk very low, since he doesn't think AGI is possible in the first place.