Maybe one way to phrase it is that the X’s represent the “type signature” of the latent, and the type signature is the thing we can most easily hope is shared between the agents, since it’s “out there in the world”: it captures the latent’s outward interactions with observable things. We’d hope to share the latent simply by sharing the type signature, because the only other thing that determines the latent, the agent’s own distribution, is more of an “internal” object that might be too complicated to work with. But the proof in the OP shows that the type signature is not enough to pin the latent down, even for agents whose models are highly compatible with each other as measured by KL over the type signature.
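To make “highly compatible as measured by KL” concrete, here’s a minimal numeric sketch, my own toy construction rather than the OP’s exact one: P is two independent fair coins, and Q mixes in a tiny weight eps of a perfectly-correlated distribution R. The names P, Q, R, and eps are illustrative assumptions.

```python
import numpy as np

eps = 1e-3

# P: X1, X2 are independent fair coins; any natural latent for P is trivial.
P = np.full((2, 2), 0.25)

# R: X1, X2 perfectly correlated; the shared bit is a natural latent for R.
R = np.array([[0.5, 0.0],
              [0.0, 0.5]])

# Q: P contaminated by a tiny admixture of R (toy version of the OP's flavor
# of counterexample, not its exact construction).
Q = (1 - eps) * P + eps * R

def kl(p, q):
    """D_KL(p || q) in nats, over the flattened outcome space."""
    p, q = p.ravel(), q.ravel()
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Both directions are tiny: KL(P || Q) is bounded by -log(1 - eps) ~ eps,
# and in this symmetric example both come out around eps**2 / 2.
print(f"KL(P || Q) = {kl(P, Q):.2e} nats")
print(f"KL(Q || P) = {kl(Q, P):.2e} nats")
```

The two models are nearly indistinguishable over the observables, yet the mixture component that Q introduces is exactly the kind of structure a latent under Q can track and a latent under P cannot, so agreement on the type signature to within eps in KL doesn’t force agreement about the latent.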
Sure, but what I question is whether the OP shows that the type signature wouldn’t be enough in realistic scenarios where we have two agents trained on somewhat different datasets. It’s not clear that their datasets would differ in the same way P and Q differ here.