“Based on your understanding of AI technical developments as of March 29, 2023, evaluate the most important known object-level predictions of Eliezer Yudkowsky on the subject, and which ones seemed true versus false. Afterwards, evaluate those predictions as a group, on a scale from ‘mostly true’ to ‘mostly false.’”
I ran this prompt, but substituted “Gary Marcus” for “Eliezer Yudkowsky”. Claude says:
Overall evaluation: On a scale from ‘mostly true’ to ‘mostly false,’ I would rate Gary Marcus’s predictions as a group as “Mostly True.”
Many of Marcus’s predictions about the limitations of current AI systems and the challenges ahead have proven accurate. His concerns about reasoning, abstract thinking, and the need for more sophisticated knowledge representation align with ongoing challenges in the field. His emphasis on AI safety and alignment has also been prescient.
However, it’s worth noting that some of his predictions might be seen as overly pessimistic by some in the AI community. The rapid progress in LLMs and their applications has surprised many, including some skeptics. Nonetheless, many of the fundamental challenges he pointed out remain relevant.
It’s also important to remember that the field of AI is rapidly evolving, and assessments of such predictions can change quickly as new breakthroughs occur. As of my last update in April 2024, many of Marcus’s key points still held true, but the field continues to advance at a rapid pace.
I think Claude likes saying nice things about people, so it’s worth trying to control for that.