But do you really think we’re going to stop with tool AI, and not turn them into agents?
But if it is the case that agentic AI is an existential risk, then actors could choose not to develop it, which is a coordination problem, not an alignment problem.
We already have aligned AGI; we can coordinate to not build misaligned AGI.
How can we solve that coordination problem? I have yet to hear a workable idea.
We agree that far, then! I just don’t think that’s a workable strategy (you also didn’t state that big assumption in your post: that AGI is still dangerous as hell, and we just have a route to really useful AI that isn’t).
The problem is that we don’t know whether agents based on LLMs are alignable. We don’t have enough people working on the conjunction of LLMs/deep nets and real AGI. So everyone building it is going to optimistically assume it’s alignable. The Yudkowsky et al. arguments for alignment being very difficult are highly incomplete; they aren’t convincing, and given how incomplete they are, they shouldn’t be. But they make good points.
If we refuse to think about aligning AGI LLM architectures because it sounds risky, it seems pretty certain that people will try it without our help. Even convincing them not to would require grappling in depth with why alignment would or wouldn’t work for that type of AGI.
This is my next project!
We don’t have “aligned AGI”. We have neither “AGI” nor an “aligned” system. We have sophisticated human-output simulators that don’t have the generality to produce effective agentic behavior when looped but which also don’t follow human intentions with the reliability that you’d want from a super-powerful system (which, fortunately, they aren’t).
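(For readers who haven’t seen the jargon: “looped” here just means wrapping the model in a plan–act–observe cycle so that its own outputs become its next inputs. A minimal sketch of that scaffold, with hypothetical `llm` and `run_tool` stand-ins rather than any real framework’s API:)

```python
# A minimal sketch of what "looping" an LLM into an agent means: the model's own
# outputs are fed back in as the next step's input. `llm` and `run_tool` are
# hypothetical stand-ins, not any real library's interface.

def llm(prompt: str) -> str:
    # Placeholder for a call to some language model; this stub gives up immediately.
    return "done"

def run_tool(action: str) -> str:
    # Placeholder for executing the proposed action and reporting an observation.
    return f"(observation for {action!r})"

def agent_loop(goal: str, max_steps: int = 10) -> list[str]:
    # Plan -> act -> observe -> repeat, re-prompting the model with the full history each step.
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        action = llm("\n".join(history) + "\nNext action:")
        if action.strip().lower() == "done":
            break
        history.append(f"Action: {action}")
        history.append(f"Observation: {run_tool(action)}")
    return history
```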