I agree that arguments for AI risk rely pretty crucially on human inability to notice and control what the AI wants. But for conceptual clarity I think we shouldn’t hardcode that inability into our definition of ‘wants’! Instead I’d say that So8res is right that ability to solve long-horizon tasks is correlated with wanting things / agency, and then say that there’s a separate question of how transparent and controllable the wants will be around the time of AGI and beyond. This then leads into a conversation about visible/faithful/authentic CoT, which is what I’ve been thinking about for the past six months and which is something MIRI started thinking about years ago. (See also my response to Ryan elsewhere in this thread.)
If you use that definition, I don’t understand in what sense LMs don’t “want” things—if you prompt them to “take actions to achieve X”, then they will do so; if obstacles appear, they will suggest ways around them; and if you connect them to actuators, they will frequently achieve X even in the face of obstacles, and so on. By your definition, isn’t that “want”- or “desire”-like behavior? So what does it mean when Nate says “AI doesn’t seem to have all that much ‘want’- or ‘desire’-like behavior”?
I’m genuinely unclear what the OP is asserting at that point, and it seems like it’s clearly not responsive to actual people in the real world saying “LLMs turned out to be not very want-y, when are the people who expected ‘agents’ going to update?” People who say that kind of thing mostly aren’t saying that LMs can’t be prompted to achieve outcomes. They are saying that LMs don’t want things in the sense that is relevant to the usual arguments about deceptive alignment or reward hacking (e.g. they don’t seem to have preferences about the training objective, or preferences that are coherent over time).
I would say that current LLMs, when prompted and RLHF’d appropriately, and especially when also strapped into an AutoGPT-type scaffold/harness, DO want things. I would say that wanting things is a spectrum and that the aforementioned tweaks (appropriate prompting, AutoGPT-style scaffolding, etc.) move the system along that spectrum. I would say that future systems will be even further along that spectrum. IDK what Nate meant, but on my charitable interpretation he simply meant that they are not very far along the spectrum compared to e.g. humans or prophesied future AGIs.
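(For concreteness, here’s roughly the kind of scaffold/harness I have in mind. This is just a minimal sketch under my own assumptions, not any particular library’s API; `llm_complete`, the tool names, and the reply format are hypothetical stand-ins.)

```python
from typing import Callable, Dict

def run_agent(
    goal: str,
    llm_complete: Callable[[str], str],      # hypothetical: maps a prompt to a model completion
    tools: Dict[str, Callable[[str], str]],  # e.g. {"search": ..., "shell": ...} (illustrative names)
    max_steps: int = 10,
) -> str:
    """Repeatedly re-prompt the model toward `goal`, feeding tool results back in."""
    history = []
    for _ in range(max_steps):
        prompt = (
            f"Goal: {goal}\n"
            f"Steps so far: {history}\n"
            "Reply with 'DONE: <result>' or 'ACTION: <tool_name>: <tool_input>'."
        )
        reply = llm_complete(prompt)
        if reply.startswith("DONE:"):
            return reply[len("DONE:"):].strip()
        if reply.startswith("ACTION:"):
            # Parse "ACTION: <tool_name>: <tool_input>" and run the named tool.
            tool_name, _, tool_input = reply[len("ACTION:"):].partition(":")
            tool = tools.get(tool_name.strip())
            observation = tool(tool_input.strip()) if tool else "unknown tool"
        else:
            observation = "could not parse reply"
        # The observation goes back into the next prompt, which is what lets the
        # system notice obstacles and route around them.
        history.append((reply, observation))
    return "gave up after max_steps"
```

The point is just that the outer loop, not the base model alone, is what supplies the persistent goal-directedness; the further you push this kind of setup, the further along the “wanting” spectrum the overall system moves.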
It’s a response to “LLMs turned out to not be very want-y, when are the people who expected ‘agents’ going to update?” because it’s basically replying “I didn’t expect LLMs to be agenty/wanty; I do expect agenty/wanty AIs to come along before the end, and indeed we are already seeing progress in that direction.”
To the people saying “LLMs don’t want things in the sense that is relevant to the usual arguments...”, I recommend rephrasing to be less confusing: your claim is that LLMs don’t seem to have preferences about the training objective, or preferences that are coherent over time, unless hooked up to a prompt/scaffold that explicitly tries to get them to have such preferences. I agree with this claim, but don’t think it’s contrary to my present or past models.
Two separate thoughts, based on my understanding of what the OP is gesturing at (which may not be what Nate is trying to say, but oh well):
Using that definition, LMs do “want” things, but: the extent to which it’s useful to talk about an abstraction like “wants” or “desires” depends heavily on how well the system can be modelled that way. For a system that manages to competently and robustly orient itself around complex obstacles, there’s a very strong notion of “want”. For a system that’s slightly weaker than that—well, it’s still capable of orienting itself around some obstacles, so there’s still a notion of “want”, but it’s correspondingly weaker, and plausibly less useful. And insofar as you’re trying to assert that some particular danger arises from strong wants, systems with weaker wants wouldn’t update you very much.
Unfortunately, this does mean you have to draw somewhat arbitrary lines about where “strong” wants begin; making that more precise is probably useful, but the vagueness doesn’t seem to be an inherent argument against the framing. (To be clear, though, I don’t buy this line of reasoning to the extent I think Nate does.)
On the ability to solve long-horizon tasks: I think of it as a proxy measure for how strong a system’s wants are (the proxy breaks down because diverse contexts and complex obstacles aren’t perfectly correlated with long-horizon tasks, but it might still be a pretty good one in realistic scenarios). If a system’s cognition can robustly handle long-horizon tasks, one could argue that this measures, by proxy, how strongly held and coherent its objectives are, and correspondingly how capable it is of (say) applying optimization power to break the control measures you place on it or to break the proxies you were training on.
More concretely: I expect that one answer to this might anchor to “wants at least as strong as a human’s”, in which case AI systems 10x-ing the pace of R&D autonomously would definitely suffice; the chain of logic being “has the capabilities to robustly handle real-world obstacles in pursuit of task X” ⇒ “can handle obstacles like control or limiting measures we’ve placed, in pursuit of task X”.
I also don’t buy this line of argument as much as I think Nate does, not because I disagree with my understanding of the central chain of logic, but because I don’t think it applies to language models in the way he describes (though it still plausibly manifests in different ways). I agree that LMs don’t want things in the sense relevant to e.g. deceptive alignment, but primarily because I think it’s a type error—LMs are far more substrate than they are agents. A sufficiently powerful LM can still predict / simulate agents that have strong wants: agents that might not have preferences about the training objective, but that still have preferences they are capable enough to try to achieve. The fact that you could also ask that system “Which action would maximize the expected amount of Y?” and get a different predicted / simulated agent doesn’t answer the question of whether the agent you do get, when you set it to a task like that on a long horizon, would itself be dangerous, independent of whether you consider the system at large to be dangerous in a similar way toward a similar target.