It is goalpost moving. Basically, it says “current models are not really intelligent”. I don’t think there is much disagreement here. And it’s hard to make any predictions based on that.
Also, “Producing human-like text” is not well defined here; even ELIZA may match this definition. Even the current SOTA may not match it because the adversarial Turing Test has not yet been passed.
It’s not goalpost moving, it’s the hype that’s moving. People reduce intelligence to arbitrary skills or problems that are currently being solved and then they are let down when they find out that the skill was actually not a good proxy.
I agree that LMs are conceptually more similar to ELIZA than to AGI.
The observation that things that people used to consider intelligent are now considered easy is critical.
The space of stuff remaining that we call intelligent, but AIs cannot yet do, is shrinking. Every time AI eats something, we realize it wasn’t even that complicated.
The reasonable lesson appears to be: we should stop assuming by default that things are hard, and start expecting that even stupid approaches might be able to do more than we’d think.
It’s a statement more about the problem being solved, not the problem solver.
When you stack this on a familiarity with the techniques in use and how they can be transformatively improved with little effort, that’s when you start sweating.