Taken literally, steering towards world states is impossible for a realistic agent, because an embedded agent cannot even contain a representation of a detailed world-state.
I’m not imagining AI steering toward a full specification of a physical universe; I’m imagining it steering toward a set of possible worlds. Sets of possible worlds can often be fully understood by reasoners, because you don’t need to model every world in the set in perfect detail in order to understand the set; you just need to understand at least one high-level criterion (or set of criteria) that determines which worlds go in the set vs. not in the set.
E.g., consider the preference ordering “the universe is optimal if there’s an odd number of promethium atoms within 100 light years of the Milky Way Galaxy’s center of gravity, pessimal otherwise”. Understanding this preference just requires understanding terms like “odd” and “promethium” and “light year”; it doesn’t require modeling full universes or galaxies in perfect detail.
Similarly, “maximize the amount of diamond that exists in my future light cone” just requires you to understand what “diamond” is and what “the more X you have, the better” means. It doesn’t require you to fully represent every universe in your head in advance.
(Note that selecting the maximizing action is computationally intractable; but you can have a maximizing goal even if you aren't perfectly succeeding at it.)
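As a loose illustration of the point (a sketch in Python, where names like `promethium_count_near_core` and `diamond_kg_in_light_cone` are hypothetical stand-ins for whatever coarse world-summary an agent actually works with), each preference above can be written as a short function of a single high-level feature; nothing in either definition requires a detailed model of any particular world:

```python
from dataclasses import dataclass

# A coarse world summary: the agent only needs the handful of high-level
# features its preference criterion actually mentions, not a full physical
# specification of the universe. Field names here are hypothetical.
@dataclass
class WorldSummary:
    promethium_count_near_core: int   # atoms within 100 ly of the galactic center of gravity
    diamond_kg_in_light_cone: float   # diamond mass in the agent's future light cone

def odd_promethium_utility(w: WorldSummary) -> float:
    """Optimal iff the promethium count is odd, pessimal otherwise."""
    return 1.0 if w.promethium_count_near_core % 2 == 1 else 0.0

def diamond_utility(w: WorldSummary) -> float:
    """'The more diamond, the better': utility is monotone in diamond mass."""
    return w.diamond_kg_in_light_cone

# Either function ranks every possible world (via its summary) even though
# no world is ever represented in detail. Actually finding the action that
# maximizes expected utility is a separate, generally intractable problem.
```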
Yes, you can do things that approximate steering towards world states... and you still can't literally steer towards detailed world states, as I said.