If I didn’t miss anything and I’m understanding the scenario correctly, then for this part:
At some point, we reach the level of interpretability where we are convinced that the evolved AI system is already aligned with us before even being finetuned on specific tasks,
I’d expect that interpretability tools, if they work, would tell you “yup, this AI is planning to kill you as soon as it possibly can”, without giving you a way to fix that (at least not a way that’s robust to capability gains). I.e., this story still seems to rely on an unexplained step that goes “… and a miracle occurs where we fundamentally figure out how to align AI just in the nick of time”.
Totally agreed that the doc does not address that argument. Quoting from my original comment:
the disagreement is much more in the “mechanisms underlying intelligence”, which that doc barely talks about, and the stuff it does say feels pretty outdated; I’d say different things now.