AI Safety Researcher @AE Studio
Currently researching self-other overlap, a neglected prior for cooperation and honesty inspired by the cognitive neuroscience of altruism, in state-of-the-art ML models.
Previous SRF @ SERI '21, MLSS & Student Researcher @ CAIS '22, and LTFF grantee.
Thanks, Ben!
Great catch on the discrepancy between the podcast and this post—the description on the podcast is a variant we experimented with, but what is published here is self-contained and accurately describes what was done in the original paper.
On generalization and unlearning/forgetting: averaging activations across two instances of a network might give the impression of inducing “forgetting,” but the goal is not unlearning in the traditional sense. The method aims to align the parts of the network’s latent space that encode self and other representations without losing useful distinctions. That said, I agree with your point about the challenge of generalization with this setup: this proof of concept is a targeted technique that benefits from knowing the specific behaviors you want to discourage (or mitigate), rather than yet being a broad, catch-all solution.
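In case it helps make the idea concrete, here is a minimal, hypothetical sketch of what a self-other-overlap-style auxiliary loss could look like for a HuggingFace-style model. The function names, the choice of MSE for the overlap term, and the optional task term are illustrative assumptions on my part, not the implementation from the paper:

```python
import torch.nn.functional as F

def soo_loss(model, self_batch, other_batch, layer_idx, task_loss_fn=None, alpha=1.0):
    """Illustrative self-other overlap style auxiliary loss (sketch only).

    self_batch / other_batch: paired, equally padded inputs that differ only in
    whether they reference the agent itself or another agent.
    layer_idx: which hidden layer's activations to align.
    task_loss_fn: optional behavioral/task loss so capabilities and useful
    distinctions are not simply erased while increasing overlap.
    """
    # Forward passes with hidden states exposed (assumes a HF-style model
    # that accepts output_hidden_states=True).
    self_out = model(**self_batch, output_hidden_states=True)
    other_out = model(**other_batch, output_hidden_states=True)

    self_acts = self_out.hidden_states[layer_idx]
    other_acts = other_out.hidden_states[layer_idx]

    # Overlap term: pull the "self" and "other" activations toward each other.
    overlap = F.mse_loss(self_acts, other_acts)

    # Optional task term preserving the behaviors we still want.
    task = task_loss_fn(self_out) if task_loss_fn is not None else 0.0

    return task + alpha * overlap
```

The balance between the overlap term and the task term (here just a scalar `alpha`) is where the “without losing useful distinctions” part lives in this sketch.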
And good points on the terminology around “self,” “deception,” and “other”: these simple models clearly don’t have intentionality or a sense of self like humans do, so these terms are being used more as functional stand-ins than as precise descriptions. Will aim to keep this clearer as we continue communicating about this work.
Would also love to hear more about your work on empathy and the amygdala as it relates to alignment. Thanks again for the great feedback!