It sounds like you are saying “In the current paradigm of prompted/scaffolded instruction-tuned LLMs, we get the faithful CoT property by default. Therefore our systems will indeed be agentic / goal-directed / wanting-things, but we’ll be able to choose what they want (at least imperfectly, via the prompt) and we’ll be able to see what they are thinking (at least imperfectly, via monitoring the CoT), therefore they won’t be able to successfully plot against us.”
Yes of course. My research for the last few months has been focused on what happens after that, when the systems get smart enough and/or get trained so that the chain of thought is unfaithful when it needs to be faithful, e.g. the system uses euphemisms when it’s thinking about whether it’s misaligned and what to do about that.
Anyhow I think this is mostly just a misunderstanding of Nate and my position. It doesn’t contradict anything we’ve said. Nate and I both agree that if we can create & maintain some sort of faithful/visible thoughts property through human-level AGI and beyond, then we are in pretty good shape & I daresay things are looking pretty optimistic. (We just need to use said AGI to solve the rest of the problem for us, whilst we monitor it to make sure it doesn’t plot against us or otherwise screw us over.)
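As a concrete (purely hypothetical) sketch of what "monitoring the CoT" could mean mechanically, under the assumption that the chain of thought is faithful: every thought the agent emits is checked before the corresponding action is allowed to run. The pattern list, function names, and gating logic below are invented for illustration and are not anyone's actual setup.

```python
# Hypothetical sketch of CoT monitoring: the agent's visible chain of thought is
# scanned before any proposed action is executed. A real monitor might be another
# model or a human reviewer; this crude keyword version is just for illustration,
# and it only helps if the chain of thought is actually faithful.

SUSPICIOUS_PATTERNS = [
    "hide this from the user",
    "pretend to comply",
    "avoid detection",
]

def cot_looks_safe(chain_of_thought: str) -> bool:
    """Crude keyword check standing in for a more serious monitor."""
    text = chain_of_thought.lower()
    return not any(pattern in text for pattern in SUSPICIOUS_PATTERNS)

def gated_execute(chain_of_thought: str, proposed_action: str, execute) -> None:
    """Run the action only if the visible reasoning passed the monitor."""
    if cot_looks_safe(chain_of_thought):
        execute(proposed_action)
    else:
        raise RuntimeError("Chain of thought flagged for human review")
```

The failure mode mentioned above (euphemisms) is exactly what breaks a monitor like this: if the system stops writing its real reasoning into the text being scanned, the gate is checking the wrong thing.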
It sounds like you are saying “In the current paradigm of prompted/scaffolded instruction-tuned LLMs, we get the faithful CoT property by default. Therefore our systems will indeed be agentic / goal-directed / wanting-things, but we’ll be able to choose what they want (at least imperfectly, via the prompt) and we’ll be able to see what they are thinking (at least imperfectly, via monitoring the CoT), therefore they won’t be able to successfully plot against us.”
Basically, but more centrally that in literal current LLM agents, the scary part of the system that we don't understand (the LLM) doesn't generalize in any scary way due to wanting, while we can still get the overall system to achieve specific long-term outcomes in practice. And that it's at least plausible that this property will be preserved in the future.
I edited my earlier comment to hopefully make this more clear.
Anyhow I think this is mostly just a misunderstanding of Nate and my position. It doesn’t contradict anything we’ve said. Nate and I both agree that if we can create & maintain some sort of faithful/visible thoughts property through human-level AGI and beyond, then we are in pretty good shape & I daresay things are looking pretty optimistic. (We just need to use said AGI to solve the rest of the problem for us, whilst we monitor it to make sure it doesn’t plot against us or otherwise screw us over.)
Even if we didn’t have the visible thoughts property in the actual deployed system, the fact that all of the retargeting behavior is based on explicit human engineering is still relevant and contradicts the core claim Nate makes in this post IMO.
Anyhow I think this is mostly just a misunderstanding of Nate and my position. It doesn’t contradict anything we’ve said.
I think it contradicts things Nate says in this post directly. I don’t know if it contradicts things you’ve said.
To clarify, I’m commenting on the following chain:
First Nate said:
This observable “it keeps reorienting towards some target no matter what obstacle reality throws in its way” behavior is what I mean when I describe an AI as having wants/desires “in the behaviorist sense”.
as well as
Well, I claim that these are more-or-less the same fact. It’s no surprise that the AI falls down on various long-horizon tasks and that it doesn’t seem all that well-modeled as having “wants/desires”; these are two sides of the same coin.
Then, Paul responded with
I think this is a semantic motte and bailey that’s failing to think about mechanics of the situation. LM agents already have the behavior “reorient towards a target in response to obstacles,” but that’s not the sense of “wanting” about which people disagree or that is relevant to AI risk (which I tried to clarify in my comment). No one disagrees that an LM asked “how can I achieve X in this situation?” will be able to propose methods to achieve X, and those methods will be responsive to obstacles. But this isn’t what you need for AI risk arguments!
Then you said
What do you think is the sense of “wanting” needed for AI risk arguments? Why is the sense described above not enough?
And I was responding to this.
So, I was just trying to give at least one plausible example of a system which could pursue long-term goals and yet doesn't have the sense of "wanting" needed for AI risk arguments. In particular, LLM agents where the retargeting is purely based on human engineering (analogous to a myopic employee retargeted by a manager who cares about longer-term outcomes).
This directly contradicts "Well, I claim that these are more-or-less the same fact. It's no surprise that the AI falls down on various long-horizon tasks and that it doesn't seem all that well-modeled as having 'wants/desires'; these are two sides of the same coin."
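As a concrete (again purely hypothetical) sketch of the kind of LM agent described above, where the pursuit of a long-term outcome lives in a human-written outer loop rather than in the LLM "wanting" it: the `call_llm` stub, the prompt format, and the helper signatures are all assumptions made up for illustration, not code from anyone in this thread.

```python
# Hypothetical sketch of an LM agent whose persistence toward a long-term goal
# comes from a human-engineered outer loop. The LLM is only ever asked for the
# next step toward a goal handed to it in the prompt.

def call_llm(prompt: str) -> str:
    """Stand-in for a chat-completion call to some LLM; not a real API."""
    raise NotImplementedError

def run_agent(goal: str, execute_action, goal_achieved, max_steps: int = 50) -> bool:
    """Re-prompt the model toward `goal` until a human-specified check passes."""
    history: list[str] = []
    for _ in range(max_steps):
        if goal_achieved():           # human-engineered success criterion
            return True
        prompt = (
            f"Goal: {goal}\n"
            f"Steps taken so far: {history}\n"
            "Think step by step, then state the single next action to take."
        )
        response = call_llm(prompt)   # chain of thought plus proposed action, in plain text
        history.append(response)      # the "thoughts" are inspectable right here
        execute_action(response)      # human-written tooling carries out the action
    return False                      # the loop gives up; the LLM never chose whether to persist
```

The point of the sketch is just that the "reorienting toward the target after obstacles" lives in the loop and the prompt, both of which a human wrote and can change.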
My version of what’s happening in this conversation is that you and Paul are like “Well, what if it wants things but in a way which is transparent/interpretable and hence controllable by humans, e.g. if it wants what it is prompted to want?” My response is “Indeed that would be super safe, but it would still count as wanting things. Nate’s post is titled “ability to solve long-horizon tasks correlates with wanting” not “ability to solve long-horizon tasks correlates with hidden uncontrollable wanting.”
One thing at a time. First we establish that the ability to solve long-horizon tasks correlates with wanting; then we argue about whether the future systems that can solve diverse long-horizon tasks better than humans will have transparent, controllable wants. As you yourself pointed out, insofar as we are doing lots of RL, it's dubious that the wants will remain as transparent and controllable as they are now. Meanwhile, I will agree that a large part of my hope for a technical solution comes from something like the Faithful CoT agenda, in which we build powerful agentic systems whose wants (and more generally, thoughts) are transparent and controllable.
If this is what’s going on, then I basically can’t imagine any context in which I would want someone to read the OP rather a post than showing examples of LM agents achieving goals and saying “it’s already the case that LM agents want things, more and more deployments of LMs will be agents, and those agents will become more competent such that it would be increasingly scary if they wanted something at cross-purposes to humans.” Is there something I’m missing?
I think your interpretation of Nate is probably wrong, but I’m not sure and happy to drop it.
FWIW, your proposed pitch “it’s already the case that...” is almost exactly the elevator pitch I currently go around giving. So maybe we agree? I’m not here to defend Nate’s choice to write this post rather than some other post.