I wrote this doc a couple of years ago (while I was at CHAI). It’s got many rough edges (I think I wrote it in one sitting and never went back to polish it), but I still endorse the general gist, if we’re talking about what systems are being deployed to do and what happens amongst organizations. It doesn’t totally answer your question (it’s more focused on what happens before we get systems that could kill everyone), but it seems pretty related.
(I haven’t brought it up before because it seems to me like the disagreement is much more in the “mechanisms underlying intelligence”, which that doc barely talks about, and the stuff it does say feels pretty outdated; I’d say different things now.)
If I didn’t miss anything and I’m understanding the scenario correctly, then for this part:
At some point, we reach the level of interpretability where we are convinced that the evolved AI system is already aligned with us before even being finetuned on specific tasks,
I’d expect that interpretability tools, if they work, would tell you “yup, this AI is planning to kill you as soon as it possibly can”, without giving you a way to fix that (at least not one that’s robust to capability gains). I.e. this story still seems to rely on an unexplained step that goes “… and a miracle occurs where we fundamentally figure out how to align AI just in the nick of time”.
Totally agreed that the doc does not address that argument. Quoting from my original comment:
the disagreement is much more in the “mechanisms underlying intelligence”, which that doc barely talks about, and the stuff it does say feels pretty outdated; I’d say different things now.