You can make the “some subnetwork just models its training process and cares about getting low loss, and then gets promoted” argument against literally any loss function, even some hypothetical “perfect” one (which, to be clear, I think is a mistaken way of thinking). If I buy this argument, a whole lot of alignment dreams immediately burst into flames: no loss function would be safe. That conclusion does not, of course, make the argument any less credible on its own. But I don’t perceive you to believe this implication.
This might be the cleanest explanation for why alignment is so hard by default. Loss functions do not work, and reward functions don’t work well.
I also think this argument is bogus, to be clear.