That’s not surprising to me! I pretty much agree with all of this, yup. I’d only add that:
This is why I’m fairly unexcited about the current object-level regulation, and especially the “responsible scaling policies”. Scale isn’t what matters; novel architectural advances are. Scale is safe and should be encouraged; new theoretical research is dangerous and should be banned or discouraged.
The current major AI labs are fairly ideological about getting to AGI specifically. If they actually pivoted to just scaling LLMs, that’d be great, but I don’t think they’d do it by default.
I agree that LLMs aren’t dangerous. But that’s entirely separate from whether they’re a path to a real AGI that is dangerous. I think adding self-directed learning and agency to LLMs by using them in cognitive architectures is relatively straightforward: Capabilities and alignment of LLM cognitive architectures.
On this model, improvements in LLMs do contribute to dangerous AGI. They need the architectural additions as well, but better LLMs make those easier.
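To make the “architectural additions” concrete, here’s a minimal sketch of what wrapping an LLM in a cognitive architecture might look like. This is an illustration, not anyone’s actual system: `call_llm` is a hypothetical stand-in for whatever completion endpoint you use, and the tool names and prompt format are made up. The point is that the goal-directed loop and the memory live in the scaffolding, not in the model weights.

```python
# Sketch of an agentic loop around a base LLM. The LLM only predicts text;
# agency comes from the outer scaffold that feeds it a goal, lets it pick
# actions, and writes results back into a memory store.

from typing import Callable

def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around whatever completion endpoint you use."""
    raise NotImplementedError

def agent_loop(goal: str,
               tools: dict[str, Callable[[str], str]],
               max_steps: int = 10) -> list[str]:
    memory: list[str] = []  # episodic memory: grows across steps
    for _ in range(max_steps):
        prompt = (
            f"Goal: {goal}\n"
            "Memory so far:\n" + "\n".join(memory) + "\n"
            f"Available tools: {', '.join(tools)}\n"
            "Reply as 'TOOL: <name> | ARG: <input>' or 'DONE: <answer>'."
        )
        reply = call_llm(prompt)
        if reply.startswith("DONE:"):
            memory.append(reply)
            break
        # Parse the chosen action and execute it outside the model.
        name, arg = reply.removeprefix("TOOL:").split("| ARG:", 1)
        observation = tools[name.strip()](arg.strip())
        # Crude self-directed learning: the observation is written back
        # into memory, so later steps condition on earlier results.
        memory.append(f"{reply} -> {observation}")
    return memory
```

Note that none of the agency here lives in the weights; it lives in a few dozen lines of glue code. That’s the sense in which better base LLMs make these additions easier: the stronger the model, the less the scaffold has to do.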