I’m wondering whether GPT-5 or Gemini would snap people like LeCun out of their complacency. I suspect LeCun has a pretty detailed model of intelligence which implies that things like mesa-optimization aren’t a problem, and that further scaling successes are implausible. Something like Gemini having a good enough world model to do plenty of physical reasoning in a simulation might violate enough of his assumptions that he actually updates.