I think the fact that natural latents are much lower dimensional than all of physics makes them suitable for specifying the pointer to CEV as an equivalence class over physical processes (many quantum field configurations can correspond to the same human, and we want to ignore differences within that equivalence class).
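To make that concrete, here is a rough sketch in my own notation, treating the latent as approximately a deterministic function of the microstate (a simplification, not anything from the natural latents work itself):

```latex
% Sketch: the pointer as an equivalence class of physical microstates.
% x, x' are microstates (e.g. quantum field configurations) and \Lambda is
% the (assumed approximately deterministic) natural latent extracted from them.
x \sim x' \quad\iff\quad \Lambda(x) = \Lambda(x')
% The pointer to CEV would then refer to the class [x] = \{\, x' : x' \sim x \,\},
% i.e. we deliberately ignore which exact field configuration realizes the
% same human.
```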
IMO the main bottleneck is to account for the reflective aspects in CEV, because one constraint on a natural latent is that it should be redundantly represented in the environment.
It is redundantly represented in the environment, because humans are part of the environment.
If you tell an AI to imagine what happens if humans sit around in a time loop until they figure out what they want, this will single out a specific thought experiment to the AI, provided humans and physics are concepts the AI itself thinks in.
(The time-loop part and the condition for terminating the loop can be formally specified in code, so those don't have to be natural concepts for the AI.)
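A minimal sketch of what "specified in code" could look like, with purely hypothetical names (`simulate_step` and `settled` stand in for whatever the actual simulation step and termination predicate would be):

```python
# Hypothetical sketch of the time-loop specification. Only the loop structure
# and its exit condition are written down explicitly here; pointing at "the
# humans" inside `state` is exactly what the natural latent is supposed to carry.

def run_deliberation(initial_state, simulate_step, settled, max_steps=10**9):
    """Iterate the simulated environment until the termination predicate holds.

    initial_state: snapshot of the environment containing the humans
    simulate_step: advances the simulated environment by one step
    settled:       returns True once the simulated humans have converged
                   on what they want (the formally specified exit condition)
    """
    state = initial_state
    for _ in range(max_steps):
        if settled(state):
            return state  # the humans' extrapolated conclusion lives in here
        state = simulate_step(state)
    return state  # give up after a bounded number of steps
```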
If the AI didn’t have a model of human internals that let it predict the outcome of this scenario, it would be bad at predicting humans.
Natural latents are about whether the AI's cognition routes through the same concepts that humans use.
We can imagine the AI maintaining predictive accuracy about humans without using the same human concepts. For example, it can use low-level physics to simulate the environment, which would be predictively accurate, but that cognition doesn't make use of the concept "strawberry" (in principle, we can still "single out" the concept of "strawberry" within it, but that information comes mostly from us, not from the physics simulation).
Natural latents are only defined up to isomorphism (i.e., two latent variables are equivalent iff they give the same conditional probabilities over the observables), but for the reflective aspects of human cognition, it's unclear whether that equivalence class pins down all the information we care about for CEV (there may be differences within the equivalence class that we care about) in a way that generalizes far out of distribution.
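For concreteness, the equivalence being appealed to is roughly the following (my notation; a sketch rather than the formal definition from the natural latents work):

```latex
% Two latents \Lambda, \Lambda' over observables X_1, \dots, X_n count as
% "the same natural latent" iff they give the same conditionals on the
% observables, up to a relabeling of latent values:
\Lambda \simeq \Lambda'
  \quad\iff\quad
  \exists\, f \text{ bijective s.t. }
  P(X_1, \dots, X_n \mid \Lambda = \lambda)
  = P(X_1, \dots, X_n \mid \Lambda' = f(\lambda))
  \quad \text{for all } \lambda.
% The worry above: two latents can agree in this sense on-distribution while
% still differing in ways that matter once we extrapolate reflective aspects
% of human values far out of distribution.
```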
My claim is that the natural latents the AI needs to share for this setup are not about the details of what a ‘CEV’ is. They are about what researchers mean when they talk about initializing, e.g., a physics simulation with the state of the Earth at a specific moment in time.
Noted, that does seem a lot more tractable than using natural latents by themselves to pin down the details of CEV.