I think we probably don’t disagree much; I regret any miscommunication.
If the intent of the great-grandparent was just to make the narrow point that an AI that wanted the user to reward it could choose to say things that would lead to it being rewarded, which is compatible with (indeed, predicts) answering the molecular smiley-face question correctly, then I agree.
Treating the screenshot as evidence in the way that TurnTrout is doing requires further assumptions about the properties of LLMs in particular. I read your claims regarding “the problem the AI is optimizing for [...] given that the LLM isn’t powerful enough to subvert the reward channel” as presupposing a different assumption about LLMs in particular (viz., that they’re reward-optimizers), without taking into account that the person you were responding to is known to disagree.
I’ll also say that, to the extent they are optimizing in a utility-maximizing sense, it’s about predicting the whole world correctly, not about a reward function in the traditional sense (though they probably do have additional learned utility functions/values as part of that), so Paul Crowley is still wrong here.