GPT4 does not engage in the sorts of naive misinterpretations which were discussed in the early days of AI safety. If you ask it for a plan to manufacture paperclips, it doesn’t think the best plan would involve converting all the matter in the solar system into paperclips.
I’m somewhat surprised by this paragraph. I thought the MIRI position was that they never in fact predicted AIs behaving like this, and that the behavior of GPT4 was not an update for them at all. See this comment by Eliezer. I mostly bought the claim that MIRI never worried about AIs going rogue based on naive misinterpretations, so I’m surprised to see Abram saying the opposite now.
Abram, did you disagree with others at MIRI about this, such that the behavior of GPT4 was an update for you but not for them? Or do you think they are misremembering or misconstruing their earlier thoughts on this matter? Or is there a subtle distinction here that I’m missing?
“Misinterpretation” is somewhat ambiguous. It can mean either failing to correctly interpret the intent of an instruction (and therefore also failing to act on that intent), or correctly understanding the intent while still acting on a different interpretation. The latter is presumably what the outcome pump was assumed to do. LLMs can apparently both understand and act on instructions pretty well; that they would act on them as intended was not at all clear in the past.
I more-or-less agree with Eliezer’s comment (to the extent that I have the data necessary to evaluate his words, which is more than most people have, but still, I didn’t know him in 1996). I have a small beef with his bolded “MIRI is always in every instance” claim, because a universal like that is quite a strong claim, and I would be very unsurprised to find a single counterexample somewhere (particularly if we include every MIRI employee and everything they’ve ever said while employed at MIRI).
What I am trying to say is something looser and more gestalt. I do think what I am saying contains some disagreement with some spirit-of-MIRI, and possibly with some specific people at MIRI, such that I could say I’ve updated on the modern progress of AI in a different way than they have.
For example, in my update, the modern progress of LLMs points towards the Paul side of some Eliezer-Paul debates. (I would have to think harder about how to spell out exactly which Eliezer-Paul debates.)
One thing I can say is that I myself often argued using “naive misinterpretation”-like cases such as the paperclip example. However, I was also very aware of the Eliezer-meme “the AI will understand what the humans mean, it just won’t care”. Still, I would have predicted more difficulty than we actually saw in building a system which correctly interprets, and correctly cares about, human requests to the extent that GPT4 does.
This does not mean that AI safety is easy, or that it is solved; only that it is easier than I anticipated at this particular level of capability.
Getting more specific to what I wrote in the post:
My claim is that modern LLMs are “doing roughly what they seem like they are doing” and that they “internalize human intuitive concepts”. This does include some kind of claim that these systems are more-or-less ethical (they appear to be trying to be helpful and friendly, therefore they “roughly are”).
The reason I don’t think this contradicts Eliezer’s bolded claim (“Getting a shape into the AI’s preferences is different from getting it into the AI’s predictive model”) is that I read Eliezer as talking about strongly superhuman AI with this claim. It is not too difficult to get something into the values of some basic reinforcement learning agent, to the extent that such an agent has values worth speaking of. It gets increasingly difficult as the agent gets cleverer. At the level of intelligence of, say, GPT4, there is not a clear difference between getting the LLM to really care about something vs. merely getting those values into its predictive model. It may be deceptive or honest, or it may not even be meaningful to classify it as either. This is less true of o1, since we can see it actively scheming to deceive.
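To make the “basic reinforcement learning agent” point concrete, here is a minimal toy sketch (my own illustration, not anything from the original post): a tabular Q-learning agent whose “values” are exactly whatever reward function the designer writes down, so the learned policy transparently mirrors the designer’s intent. At that level of capability, the training signal and the agent’s “values” are essentially the same object, which is the sense in which the problem only gets hard as the agent gets cleverer.

```python
import random

N_STATES = 5           # a tiny corridor: states 0..4, with state 4 as the goal
ACTIONS = [-1, +1]     # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2

def reward(state):
    # The designer's intent, wired directly into the reward signal.
    return 1.0 if state == N_STATES - 1 else 0.0

# Tabular Q-values: at this capability level, these just *are* the agent's "values".
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for _ in range(2000):
    s = random.randrange(N_STATES - 1)                      # start anywhere short of the goal
    for _ in range(50):                                     # cap episode length
        if random.random() < EPS:
            a = random.choice(ACTIONS)                      # explore
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])   # exploit
        s2 = min(max(s + a, 0), N_STATES - 1)
        # Standard Q-learning update: the learned values converge toward the specified reward.
        Q[(s, a)] += ALPHA * (reward(s2) + GAMMA * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2
        if s == N_STATES - 1:
            break

# The resulting policy simply mirrors the reward function: move right, toward the goal.
print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)})
```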