Before June 2022 was the month of the possible start of the Second American Civil War, it was the month of a lively debate between Scott Alexander and Gary Marcus about the scaling of large language models, such as GPT-3. Will GPT-n be able to do all the intellectual work that humans do, in the limit of large n? If so, should we be impressed? Terrified? Should we dismiss these language models as mere “stochastic parrots”?
I was privileged to be part of various email exchanges about those same questions with Steven Pinker, Ernest Davis, Gary Marcus, Douglas Hofstadter, and Scott Alexander. It’s fair to say that, overall, Pinker, Davis, Marcus, and Hofstadter were more impressed by GPT-3’s blunders, while we Scotts were more impressed by its abilities. (On the other hand, Hofstadter, more so than Pinker, Davis, or Marcus, said that he’s terrified about how powerful GPT-like systems will become in the future.)
Anyway, at some point Pinker produced an essay setting out his thoughts, and asked whether “either of the Scotts” wanted to share it on our blogs. Knowing an intellectual scoop when I see one, I answered that I’d be honored to host Steve’s essay—along with my response, along with Steve’s response to that. To my delight, Steve immediately agreed. Enjoy! –SA
Scott Aaronson and Steven Pinker Debate AI Scaling