Another angle to consider: in this specific scenario, would realistic agents actually derive natural latents for P and Q as a whole, as opposed to deriving two mutually incompatible latents for the Q0 and P0 components, then working with a probability distribution over those latents?
Intuitively, that’s how humans operate if they have two incompatible hypotheses about some system. We don’t derive some sort of “weighted-average” ontology for the system; we derive two separate ontologies and then try to distinguish between them.
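To make the contrast concrete, here's a toy sketch (not from the OP; the coin, the hypotheses, and the numbers are all made up for illustration). It shows how a "weighted-average" model and a "two hypotheses plus a credence over them" model agree on one-step betting odds but come apart on anything that depends on which hypothesis is actually true:

```python
# Two incompatible hypotheses about a coin: strongly heads-biased vs. strongly
# tails-biased, with 50/50 credence over which hypothesis is right.
p_heads = {"biased_heads": 0.9, "biased_tails": 0.1}
credence = {"biased_heads": 0.5, "biased_tails": 0.5}

# "Weighted-average ontology": collapse both hypotheses into one fair coin.
p_avg = sum(credence[h] * p_heads[h] for h in p_heads)  # 0.5

# Betting odds on a single flip: both representations agree (0.5 either way).
p_next_heads = sum(credence[h] * p_heads[h] for h in p_heads)

# But ask whether two flips land the same way, and they diverge:
# keeping both hypotheses, the flips are correlated through the unknown bias...
p_same_two_hypotheses = sum(
    credence[h] * (p_heads[h] ** 2 + (1 - p_heads[h]) ** 2) for h in p_heads
)  # 0.82
# ...while the collapsed fair-coin model treats them as independent.
p_same_collapsed = p_avg ** 2 + (1 - p_avg) ** 2  # 0.5

# "Try to distinguish between them": after observing one heads, Bayes-update
# the credence over hypotheses instead of keeping the averaged model fixed.
unnorm = {h: credence[h] * p_heads[h] for h in credence}
posterior = {h: unnorm[h] / sum(unnorm.values()) for h in unnorm}  # ~{0.9, 0.1}

print(p_next_heads, p_same_two_hypotheses, p_same_collapsed, posterior)
```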
This post comes to mind:

"If you only care about betting odds, then feel free to average together mutually incompatible distributions reflecting mutually exclusive world-models. If you care about planning then you actually have to decide which model is right or else plan carefully for either outcome."
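The same point in decision terms, as a toy value-of-information sketch (the bridge scenario, payoffs, and test cost are invented for illustration, not taken from the linked post): acting on the averaged distribution alone throws away the option of first figuring out which model is right.

```python
# Two mutually exclusive models of a bridge: "safe" (crossing pays +10) or
# "unsafe" (crossing pays -100), with 50/50 credence; not crossing pays 0.
credence = {"safe": 0.5, "unsafe": 0.5}
payoff_cross = {"safe": 10.0, "unsafe": -100.0}

# Betting odds / one-shot expected value under the averaged distribution:
ev_cross = sum(credence[m] * payoff_cross[m] for m in credence)  # -45.0
ev_commit = max(ev_cross, 0.0)  # best you can do acting on the average: 0.0

# Planning: first run a (hypothetical) test that reveals which model is true,
# at cost 1, then cross only if the bridge turns out to be safe.
test_cost = 1.0
ev_test_then_act = sum(
    credence[m] * max(payoff_cross[m], 0.0) for m in credence
) - test_cost  # 0.5 * 10 + 0.5 * 0 - 1 = 4.0

print(ev_commit, ev_test_then_act)
```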
Like, “just blindly derive the natural latent” is clearly not the whole story about how world-models work. Maybe realistic agents have some way of spotting setups like the one in the OP, and then they do something more than just deriving the latent.