BTW, speaking about a value function rather than a reward model is useful here, because convergent instrumental goals are a big part of the potential for reusing another agent's (deduced) value function as part of your own. Their terminal goals may then leak into yours due to simplicity bias, or due to uncertainty about how to separate them from the instrumental ones.
The main problem with that mechanism is that you liking chocolate will probably leak as "it's good for me too to eat chocolate", not "it's good for me too when beren eats chocolate", which is more likely to cause conflict than coordination if there is only so much chocolate to go around.
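To make that leak concrete, here is a minimal toy sketch (the features and weights are made up purely for illustration): both agents value the same instrumental things, so the simplest way for me to represent my own values is to reuse the other agent's deduced value function as a single block and add my own terminal term on top, which drags their terminal goal in, and drags it in first-person form.

```python
import numpy as np

# Toy linear value functions over a few features. The first two features are
# convergent instrumental goals shared by both agents; the last two are
# agent-specific terminal goals (all numbers are illustrative).
features = ["money", "energy", "i_hear_music", "i_eat_chocolate"]

v_other = np.array([1.0, 1.0, 0.0, 1.0])      # other's terminal goal: eating chocolate
music_only = np.array([0.0, 0.0, 1.0, 0.0])   # my own terminal goal: hearing music

# A "simplicity-biased" representation of my values: reuse the other's value
# function wholesale instead of re-deriving the shared instrumental weights,
# then bolt my own terminal term on top.
v_self_learned = v_other + music_only

for f, w in zip(features, v_self_learned):
    print(f"{f}: {w}")
# "i_eat_chocolate" now has weight 1.0: the other's terminal goal has leaked in,
# and it leaked as *me* eating chocolate, not as caring that *they* get chocolate.
```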
And specifically for humans, I think there probably was evolutionary pressure actively in favor of leaking terminal goals: since the terminal goals of each of us are a noisy approximation of evolution's "goal" of increasing the number of offspring, that kind of leaking is a potential source of denoising. I think I have explicitly heard this argument in the context of ideals of beauty (though many other things are going on there, pushing in the same direction).
I agree that this will probably wash out with strong optimization against it, and that such confusions become less likely the more different the world models of yourself and the other agent you are trying to simulate are. This is exactly what we see with empathy in humans! This is definitely not proposed as a full 'solution' to alignment. My thinking is that this effect may be useful for us in providing a natural hook to 'caring' about others, which we can then design training objectives and regimens around to extend and optimise this value shard to a much greater extent than it occurs naturally.
We agree 😀
What do you think about some brainstorming in the chat about how to use that hook?