This idea tries to discover translations between the representations of two neural networks, without necessarily discovering a translation into our own, human-interpretable representations.
I think this has been under investigation for a few years in the context of model fusion in federated learning, model stitching, and translation between latent representations in general.
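A minimal sketch of what such a translation can look like, under the simplifying assumption that an affine map between the two latent spaces suffices. The two "networks" here are simulated as linear feature maps of shared inputs (names like `W_a`, `Z_a` are mine, not from any of the papers); real activations would generally need a nonlinear translator or the permutation-matching approach of Git Re-Basin:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: two "networks" that see the same inputs but
# produce different representations. Each is simulated as a random
# linear feature map; in practice Z_a, Z_b would be hidden activations.
X = rng.normal(size=(1000, 16))     # shared inputs
W_a = rng.normal(size=(16, 32))     # network A's feature map
W_b = rng.normal(size=(16, 32))     # network B's feature map
Z_a = X @ W_a                       # network A's representation
Z_b = X @ W_b                       # network B's representation

# "Stitching": fit an affine translation T from A's latent space to
# B's by least squares on paired activations.
Z_a_aug = np.hstack([Z_a, np.ones((len(Z_a), 1))])   # bias column
T, *_ = np.linalg.lstsq(Z_a_aug, Z_b, rcond=None)

# Evaluate on held-out inputs: how well does the learned translation
# predict network B's representation from network A's?
X_test = rng.normal(size=(200, 16))
Z_a_t = X_test @ W_a
Z_b_t = X_test @ W_b
pred = np.hstack([Z_a_t, np.ones((len(Z_a_t), 1))]) @ T
r2 = 1 - np.sum((pred - Z_b_t) ** 2) / np.sum((Z_b_t - Z_b_t.mean(0)) ** 2)
print(f"held-out R^2 of the learned translation: {r2:.3f}")
```

Because both representations are linear functions of the same inputs here, the affine fit recovers the translation almost exactly; the interesting (and open) question is how far this degrades for real, nonlinear networks.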
Relative representations enable zero-shot latent space communication—an analytical approach to matching representations (though this is new work, it may not be that good; I haven't checked)
Git Re-Basin: Merging Models modulo Permutation Symmetries—recent model stitching work with some nice results
Latent Translation: Crossing Modalities by Bridging Generative Models—a somewhat arbitrary application of unsupervised translation to autoencoder latent codes (probably not the most representative example)
Thanks for these links; the top one in particular looks like pretty interesting work