I think the answer is an obvious yes, all other things held equal. What happens in reality is of course more complex than this, but I'd still say yes in most cases, primarily because I think aligned behavior is very simple: so simple that it either only barely loses out to the deceptive model or outright has the advantage, depending on the programming language and the low-level details. On that view we only need to transfer at most 300–1,000 bits, which is likely very easy (rough arithmetic below).
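To make the "very easy" part concrete, here is a back-of-the-envelope sketch. The 300–1,000 bit figure is the one above; the per-example information rate b is an assumed round number I'm introducing purely for illustration, not a measured quantity. If the aligned objective costs at most Δ extra bits to specify, and each well-chosen training signal supplies on the order of b ≈ 1 bit of selection pressure toward it, then the number of informative examples needed is roughly

```latex
% Back-of-the-envelope only. \Delta (extra bits needed to specify the aligned
% objective) is the 300-1000 figure from the text; b (bits of selection pressure
% supplied per informative training example) is an assumed round number.
\[
  N \;\approx\; \frac{\Delta}{b}
    \;\approx\; \frac{300\text{--}1000 \ \text{bits}}{1 \ \text{bit per example}}
    \;=\; 300\text{--}1000 \ \text{examples}.
\]
```

which is negligible next to the size of any realistic training set. Even if b were only a small fraction of a bit per example, the required count would still be tiny by the standards of modern training runs.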
Much more generally, my fundamental claim is that the complexity of pointing to human values is very close to the complexity of pointing to the set of all long-term objectives (it can come out either somewhat easier or somewhat harder), and I don't buy the assumption that pointing to human values is far harder than pointing to the set of long-term goals.