To a non-trivial extent, it vindicates the LLM skeptics of recent fame, like Gary Marcus and Yann LeCun, and generally suggests that LLMs will be far more constrained in their capabilities than we used to believe.
This is both good and bad:
The biggest good thing about this, combined with the Twitter discussion of LLMs, is that it makes timelines quite a bit longer. In particular, Daniel Kokotajlo’s model becomes very difficult to sustain without truly ludicrous progress and a switch to other types of AI.
The biggest potentially bad thing is that algorithmic progress, and to a lesser extent a change of paradigms, becomes more important. This complicates AI governance: any adversarial pressure on LLMs becomes yet another force driving AI progress, and while I don’t subscribe to the standard views on what will happen as a result, it does make governance harder.