I agree.
AI safety advocates seem myopically focused on current-day systems. There is a lot of magical talk about LLMs. They do exactly what they're trained to do: next-token prediction. Good prediction requires you to implicitly learn natural abstractions. Once you absorb that lesson, the emergent abilities of GPT aren't mega surprising.
Agentic AI will come. It won't be just a scaled-up LLM. It might grow as some sort of gremlin inside the LLM, but much more likely, imho, is that people deliberately build agentic AIs because agentic AIs are more powerful. The focus on spontaneous gremlin emergence seems like a distraction, motivated partly by political reasons rather than by a dispassionate analysis of what's possible.
I think Just Don’t Build Agents could be a win-win here. All the fun of AGI without the washing up, if it’s enforceable.
Possible ways to enforce it:
(1) Galaxy-brained AI methods like Davidad’s night watchman. Downside: scary, hard.
(2) Ordinary human methods, like requiring all large training runs to be approved by the No Agents committee.
Downside: we’d have to ban not just training agents, but training any system that could plausibly be used to build an agent, which might well include oracle-ish AI like LLMs. Possibly something like Bengio’s scientist AI might be allowed.
The No Agentic Foundation Models Club? 😁
I mean, I should mention that I also don't think agentic models will try to deceive us if trained the way LLMs currently are, unfortunately.