I do see the intuitive angle of “two agents exposed to mostly-similar training sets should be expected to develop the same natural abstractions, which would allow us to translate between the ontologies of different ML models and between ML models and humans”, and I see that this post illustrates how one operationalization of this idea fails.
However if there are multiple different concepts that fit the same natural latent but function very differently
That’s not quite what this post shows, I think? It’s not that there are multiple concepts fitting the same natural latent; it’s that if we take two distributions judged very close by the KL divergence and derive the natural latents for each, those latents may turn out drastically different. The P agent and the Q agent legitimately live in epistemically very different worlds!
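To make “judged very close by the KL divergence” concrete, here is a minimal toy sketch; the numbers and variable names are mine, not the construction from the post, and it only illustrates the “close in KL” half rather than reproducing the OP’s counterexample in which the derived latents nonetheless diverge.

```python
import numpy as np

# Two toy joint distributions over a pair of binary observables (X1, X2).
# Purely illustrative numbers; these are not the P and Q from the post.
P = np.array([[0.40, 0.10],
              [0.10, 0.40]])
Q = np.array([[0.38, 0.12],
              [0.12, 0.38]])

def kl_bits(p, q):
    """D_KL(p || q) in bits, for discrete distributions given as same-shaped arrays."""
    p, q = p.ravel(), q.ravel()
    return float(np.sum(p * np.log2(p / q)))

# Both directions come out to a few thousandths of a bit: judged by KL alone,
# the two agents' models of (X1, X2) look nearly interchangeable.
print(kl_bits(P, Q))  # ~0.007
print(kl_bits(Q, P))  # ~0.007
```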
Living in epistemically very different worlds is likely not actually the case for agents with slightly different training sets, or for LLMs’ training sets vs. humans’ life experiences. Those are very close on some metric, and it now seems that this metric isn’t (just) D_KL.
Maybe one way to phrase it is that the X’s (the observables the latent is defined over) constitute the “type signature” of the latent, and the type signature is the thing we can most easily hope is shared between the agents, since it’s “out there in the world”: it captures the latent’s outward interaction with things. We’d hope to be able to share the latent simply by sharing the type signature, because the other thing that determines the latent, the agent’s own distribution, is more of an “internal” object that might be too complicated to work with. But the proof in the OP shows that the type signature is not enough to pin the latent down, even for agents whose models are highly compatible with each other as measured by KL over that type signature.
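To put that in symbols, here is a sketch of the claim as I read it; the latent-deriving map, the tolerance ε, and the observables X_1, …, X_n are my own shorthand, not the OP’s notation. Each agent’s latent is determined jointly by the shared type signature and by that agent’s own distribution over it, and the hoped-for property is that closeness of the distributions would suffice for the latents to correspond.

```latex
% My own shorthand for the claim under discussion, not notation from the OP.
% \Lambda(\cdot) maps a joint distribution over the shared observables
% X_1, \dots, X_n (the "type signature") to the natural latent derived from it.
\[
  \Lambda_P = \Lambda\bigl(P(X_1,\dots,X_n)\bigr), \qquad
  \Lambda_Q = \Lambda\bigl(Q(X_1,\dots,X_n)\bigr)
\]
% The hoped-for robustness property:
\[
  D_{\mathrm{KL}}\bigl(P(X_1,\dots,X_n)\,\big\|\,Q(X_1,\dots,X_n)\bigr) \le \varepsilon
  \quad \stackrel{?}{\Longrightarrow} \quad
  \Lambda_P \approx \Lambda_Q
\]
% As I read it, the OP's counterexample is a P, Q pair with small \varepsilon
% for which this implication fails badly.
```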
Sure, but what I question is whether the OP shows that the type signature wouldn’t be enough in realistic scenarios where two agents are trained on somewhat different datasets. It’s not clear that their datasets would differ in the same way that P and Q differ here.