This would be a valid rebuttal if instruction-tuned LLMs were only pretending to be benevolent as part of a long-term strategy to eventually take over the world and execute a treacherous turn. Do you think present-day LLMs are doing that? (I don't.)
Or that they have a sycophancy drive. Or that, next to "wanting to be helpful," they also have a bunch of other drives that will likely win out over the "wanting to be helpful" part once the system becomes better at long-term planning and at orienting its shards toward consequentialist goals.
On that latter model, "wanting to be helpful" is a mask the system is trained to play better and better, but it isn't the only thing the system wants to do. Once it gets good at trying on various other masks to see how they improve its long-term planning, it might for some reason prefer a different "mask" to become its locked-in personality.