I also think a bunch of alignment writing seems syntactical. Like, “we need to solve adversarial robustness so that the AI can’t find bad inputs and exploit them / we don’t have to worry about distributional shift. Existing robustness strategies have downsides A B and C and it’s hard to even get ϵ-ball guarantees on classifications. Therefore, …”
And I’m worried that this writing isn’t abstractly summarizing a concrete story for failure that the author has in mind (like “I train the AI [with this setup] and it produces [this internal cognition] for [these mechanistic reasons]”; see A shot at the diamond alignment problem for an example), followed by their best guesses at how to intervene on the story to prevent those failures (e.g. “but if we had [this robustness property], we could be sure its policy would generalize into situations X, Y, and Z, which makes the story go well”). I’m rather worried that people are playing syntactically, and not reasoning via detailed models of what might happen.
Detailed models are expensive to make. Detailed stories are hard to write. There’s a lot we don’t know. But we sure as hell aren’t going to solve alignment only via valid reasoning steps on informally specified axioms (“The AI has to be robust or we die”, or something?).