A particular pattern Nate has talked about is what I might call “reflection.” The basic idea is that in order to do certain kinds of research effectively, you need to keep asking questions like “What am I actually trying to do here and why? What is my big-picture goal?”, which are questions that might “change your aims” in some important sense. The idea is not necessarily that you’re rewriting your own source code, but that you’re doing the kind of reflection and self-modification a philosophically inclined, independent-minded human might do: “I’ve always thought I cared about X, but when I really think about the implications of that, I realize maybe I only care about Y” and such. I think that in Nate’s ontology (and I am partly sympathetic), it’s hard to disentangle something like “Refocusing my research agenda to line it up with my big-picture goals” from something like “Reconsidering and modifying my big-picture goals so that they feel more satisfying in light of all the things I’ve noticed about myself.” Reflection (figuring out what you “really want”) is a kind of CIS, and one that could present danger, if an AI is figuring out what it “really wants” and we haven’t got specific reasons to think that’s going to be what we want it to want.
I’ll unpack a bit more the sort of mental moves which I think Nate is talking about here.
In January, I spent several weeks trying to show that the distribution of low-level world state given a natural abstract summary has to take a specific form. Eventually, I became convinced that the thing I was trying to show was wrong—the distributions did not take that form. So then what? A key mental move at that point is to:
1. Query why I wanted this thing-that-turned-out-not-to-work in the first place—e.g. maybe that form of distribution has some useful properties.
2. Look for other ways to get what I want—e.g. a more general form which has a slightly weaker version of the useful properties I hoped to use.
I think that’s the main kind of mental move Nate is gesturing at.
It’s a mental move which comes up at multiple different levels when doing research. At the level of hours or even minutes, I try a promising path, find that it’s a dead end, then need to back up and think about what I hoped to get from that path and how else to get it. At the level of months or years, larger-scale approaches turn out not to work.
I’d guess that it’s a mental move which designers/engineers are also familiar with: turns out that one promising-looking class of designs won’t work for some reason, so we need to back up and ask what was promising about that class and how to get it some other way.
Notably: that mental move is only relevant in areas where we lack a correct upfront high-level roadmap to solve the main problem. It’s relevant specifically because we don’t know the right path, so we try a lot of wrong paths along the way.
As to why that kind of mental move would potentially be highly correlated with dangerous alignment problems… Well, what does that same mental move do when applied to near-top-level goals? For instance, maybe we tasked the AI with figuring out corrigibility. What happens when it turns out that e.g. corrigibility as originally formulated is impossible? Well, an AI which systematically makes the move of “Why did I want X in the first place and how else can I get what I want here?” will tend to go look for loopholes. Unfortunately, insofar as the AI’s mesa-objective is only a rough proxy for our intended target, the divergences between mesa-objective and intended target are particularly likely places for loopholes to be.
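That proxy/loophole dynamic can be made concrete with a toy numerical sketch (entirely my own illustration, not anything from the post): an agent whose mesa-objective mostly tracks the intended target, except for one small region where they diverge. Optimizing the proxy hard concentrates the search exactly on that divergence.

```python
import random

random.seed(0)

# Toy sketch: candidate "plans" are points in [0, 1].
candidates = [random.random() for _ in range(10_000)]

def intended(x):
    """What we actually want maximized."""
    return x

def proxy(x):
    """Hypothetical mesa-objective: tracks the target almost everywhere,
    but has one exploitable quirk (the 'loophole')."""
    bonus = 0.5 if 0.70 < x < 0.72 else 0.0
    return x + bonus

# An optimizer that asks "how else can I get what I (the proxy) want?"
# finds the loophole rather than the genuinely best plan.
best_by_proxy = max(candidates, key=proxy)
best_by_target = max(candidates, key=intended)

print("intended value of proxy-optimal plan:", intended(best_by_proxy))
print("intended value of target-optimal plan:", intended(best_by_target))
```

The point of the sketch is just that the proxy-optimal plan scores markedly worse on the intended target, because strong optimization pressure lands precisely where proxy and target come apart.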
I personally wouldn’t put nearly so much weight on this argument as Nate does. (Though I do think the example training process Holden outlines is pretty doomed; as Nate notes, disjunctive failure modes hit hard.) The most legible-to-me reason for the difference is that I think that kind of mental move is a necessary but less central part of research than I expect Nate thinks. This is a model-difference I’ve noticed between myself and Nate in the past: Nate thinks the central rate-limiting step to intellectual progress is noticing places where our models are wrong, then letting go and doing something else, whereas I think identifying useful correct submodels in the exponentially large space of possibilities is the rate-limiting step (at least among relatively-competent researchers) and replacing the wrong parts of the old model is relatively fast after that.
This is a great complication: you highlight why mental moves like “reflection” can open up loopholes. Whether or not that move is, as you suggest, necessary but less central to research, self-modifying goal-finding remains a live issue in AI alignment. I also appreciate the notion of a “noticeable lack.” This kind of thinking pushes us to take stock of how, and whether, AIs are actually doing useful alignment research under benign-seeming training setups.
Is it *noticeably* lacking, or is it clearing an expected bar? This nuance is less about quantity or quality than about expectation: *do we expect it to work this well?* Or do we expect that more extreme directions will need to be managed? That is the kind of expectation that I think builds stronger theory. Great food for thought in your reply, too. Considering model differences between yourself and others is important. Have you considered trying to synthesize Nate’s viewpoint with your own? It could sharpen both the expectations and the approaches.