I’m not quite seeing how this negates my point, help me out?
Eliezer sometimes spoke of AIs as if they had a “reward channel”
But they don’t; instead they are something a bit like “adaptation executors, not fitness maximizers”
This is potentially an interesting misprediction!
Eliezer also said that if you give the AI the goal of maximizing smiley faces, it will make tiny molecular ones
TurnTrout points out that if you ask an LLM if that would be a good thing to do, it says no
My point is that this is exactly what Eliezer would have predicted for an LLM whose reward channel was “maximize reader scores”
Our LLMs tend to produce high reader scores for a reason that’s not exactly “they’re trying to maximize their reward channel” (see the sketch below for where the reward actually enters)
I don’t at all see how this difference makes a difference! Eliezer would always have predicted that an AI aimed at maximizing reader scores would have produced a response to TurnTrout’s question that maximized reader scores, so it’s silly to present them doing so as a gotcha!
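To make the “reward channel” point concrete, here is a toy REINFORCE-style sketch (purely illustrative, not anyone’s actual training pipeline; the responses and rater scores are made up) of where a rater score mechanically enters: only as a training-time signal that nudges the weights. At deployment the policy just samples from whatever dispositions training left behind, and no reward is computed anywhere.

```python
# Toy illustration: the reward shapes the weights during training, but the
# deployed policy never computes or "sees" a reward at inference time.
import math
import random

random.seed(0)

RESPONSES = ["helpful answer", "flattering answer", "evasive answer"]

# Hypothetical stand-in for "reader scores": a reward the *trainer* computes.
def rater_score(response: str) -> float:
    return {"helpful answer": 1.0, "flattering answer": 0.6, "evasive answer": 0.0}[response]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def sample(probs):
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

logits = [0.0, 0.0, 0.0]  # the "weights" of our toy policy
lr = 0.5

# --- Training: REINFORCE-style updates. The reward appears ONLY here. ---
for _ in range(500):
    probs = softmax(logits)
    i = sample(probs)
    reward = rater_score(RESPONSES[i])
    baseline = sum(p * rater_score(r) for p, r in zip(probs, RESPONSES))
    advantage = reward - baseline
    # Gradient of log pi(i) with respect to the logits is one_hot(i) - probs.
    for j in range(len(logits)):
        grad = (1.0 if j == i else 0.0) - probs[j]
        logits[j] += lr * advantage * grad

# --- Deployment: no reward function in sight; the policy just acts. ---
deployed_probs = softmax(logits)
print({r: round(p, 3) for r, p in zip(RESPONSES, deployed_probs)})
```

Whether the policy that comes out of that loop is best described as “trying to maximize its reward channel” or as “executing learned dispositions” is exactly the disagreement here; the sketch only shows the mechanics.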
This article does not predict that LLM behavior. Here’s another quote from it:
The novice thinks that Friendly AI is a problem of coercing an AI to make it do what you want, rather than the AI following its own desires. But the real problem of Friendly AI is one of communication—transmitting category boundaries, like “good”, that can’t be fully delineated in any training data you can give the AI during its childhood.
Here, the category boundary you are describing is “outputs that human raters give high scores to”. That is a complex category bound up with human values. It falls squarely under both “formal fallacies” described by the article: the fallacy of “underestimating the complexity of a concept we develop for the sake of its value” and the fallacy of “anthropomorphic optimism”.
My reading is that, if this article is correct, then an AI trained to “produce outputs that human raters give high scores to” will, especially when placed in novel, out-of-distribution situations, produce text that fits the category the AI actually learned rather than the category we wanted it to learn. Less like Claude, more like Sydney and Bing.
You apparently have the opposite reading to me. I don’t see it, at all.
I think TurnTrout’s point is that in order for the AI to succeed at the “magical category” pointed at by the words “outputs that human raters give high scores to”, it has to also have learned the strictly easier “unnatural category” pointed at by the words “making people smile”. And the results show that it has learned that.
Not being able to figure out what sort of thing humans would rate highly isn’t an alignment failure; it’s a capabilities failure, and Eliezer_2008 would never have assumed a capabilities failure in the way you’re saying he would. He is right to say that attempting to directly encode the category boundaries won’t work. It isn’t covered in this blog post, but his main proposal for alignment was always that, as far as possible, you want the AI to use its own capabilities to figure out what it means to optimize for human values rather than trying to directly encode those values, precisely so that capabilities can help with alignment. The trouble is that even pointing at this category is difficult: more difficult than pointing at “gets high ratings”.
I think we probably don’t disagree much; I regret any miscommunication.
If the intent of the great-grandparent was just to make the narrow point that an AI that wanted the user to reward it could choose to say things that would lead to it being rewarded, which is compatible with (indeed, predicts) answering the molecular smiley-face question correctly, then I agree.
Treating the screenshot as evidence in the way that TurnTrout is doing requires more assumptions about the properties of LLMs in particular. I read your claims regarding “the problem the AI is optimizing for [...] given that the LLM isn’t powerful enough to subvert the reward channel” as taking as given different assumptions about the properties of LLMs in particular (viz., that they’re reward-optimizers) without taking into account that the person you were responding to is known to disagree.
I’ll also say that, to the extent they are optimizing in a utility-maximizing sense, it’s about predicting the whole world correctly, not about a reward function in the traditional sense (though they probably do have learned utility functions/values as part of that), so Paul Crowley is still wrong here.
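For concreteness, the objective being gestured at here is just next-token cross-entropy over text about the world, something like the following toy calculation (the miniature “model” and its probabilities are made up, purely illustrative). Note that no external reward term appears anywhere in the loss.

```python
# Toy sketch of the pretraining objective: the model is scored on how well it
# predicts the next token of text, not on any external reward signal.
import math

# Hypothetical miniature "model": fixed predicted probabilities for the next
# token at each position of a toy sequence.
sequence = ["the", "cat", "sat"]
predicted_next = [
    {"cat": 0.7, "dog": 0.2, "sat": 0.1},  # prediction after "the"
    {"sat": 0.6, "ran": 0.3, "cat": 0.1},  # prediction after "the cat"
]

# Cross-entropy loss: negative log-probability assigned to what actually came next.
loss = 0.0
for dist, actual in zip(predicted_next, sequence[1:]):
    loss += -math.log(dist[actual])
loss /= len(predicted_next)
print(f"average next-token loss: {loss:.3f}")
```

That is the sense in which the base objective is about prediction rather than a reward function in the traditional sense.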
This isn’t a productive response to TurnTrout in particular, who has written extensively about his reasons for being skeptical that contemporary AI training setups produce reward optimizers (which doesn’t mean he’s necessarily right, but the parent comment isn’t moving the debate forward).