This article does not predict that LLM behavior. Here’s another quote from it:
The novice thinks that Friendly AI is a problem of coercing an AI to make it do what you want, rather than the AI following its own desires. But the real problem of Friendly AI is one of communication—transmitting category boundaries, like “good”, that can’t be fully delineated in any training data you can give the AI during its childhood.
Here, the category boundary you are describing is “outputs that human raters give high scores to”. That is a complex category, bound up with human values. It falls squarely into both fallacies described in the article: “underestimating the complexity of a concept we develop for the sake of its value” and “anthropomorphic optimism”.
My reading is that, if this article is correct, then an AI trained to “produce outputs that human raters give high scores to” will instead produce out-of-distribution text that fits the category the AI learned, and not the category we wanted the AI to learn, especially when placed in novel situations. Less like Claude, more like Sydney and Bing.
You apparently have the opposite reading to me. I don’t see it, at all.
I think TurnTrout’s point is that in order for the AI to succeed at the “magical category” pointed at by the words “outputs that human raters give high scores to”, it has to also have learned the strictly easier “unnatural category” pointed at by the words “making people smile”. And the results show that it has learned that.
Not being able to figure out what sort of thing humans would rate highly isn’t an alignment failure; it’s a capabilities failure, and Eliezer_2008 would never have assumed a capabilities failure in the way you’re saying he would. He is right to say that attempting to directly encode the category boundaries won’t work. It isn’t covered in this blog post, but his main proposal for alignment was always that, as far as possible, you want the AI to do the work of using its capabilities to figure out what it means to optimize for human values, rather than trying to directly encode those values, precisely so that capabilities can help with alignment. The trouble is that even pointing at this category is difficult: more difficult than pointing at “gets high ratings”.