Is that something you can see from the outside? If I argmax over actions in expected-paper-clips or over updateless-prior-expected-paper-clips, how can you translate my black-box behavior over possible worlds into the dependence of my behavior on the dependence of the worlds on my behavior?
See the section “Utility functions” of this post: it shows how a dependence between two fixed facts can be restored in the ideal case where we can learn everything there is to learn about it. Similarly, you could treat the question of which dependence holds between two facts as itself a fact, with various specific functions as its possible values, and ask what you can infer about the other fact if you assume that the dependence is given by a particular function.
More generally, a dependence follows possible inferences: things that could be inferred about one fact if you learn new things about the other fact. It needs to follow all such inferences, to the best of the agent’s ability; otherwise it won’t be right and you’ll get incorrect decisions (counterfactual models).
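As a toy sketch of the inference described above (nothing here is from the post: the Newcomb-flavored payoffs and every name are invented for illustration), the idea is to treat “which dependence holds” as a variable ranging over candidate functions, and keep only the candidates under which a black-box argmax agent would have produced the behavior you observed:

```python
ACTIONS = ["one_box", "two_box"]

# Candidate dependences: each maps an action to the payoff (in paper clips)
# of the world that action leads to. Both scenarios are invented for the sketch.
CANDIDATE_DEPENDENCES = {
    "world_tracks_your_choice": lambda a: 1_000_000 if a == "one_box" else 1_000,
    "world_already_fixed":      lambda a: 1_001_000 if a == "two_box" else 1_000_000,
}

def argmax_agent(dependence):
    """A black-box maximizer: picks the action whose resulting world,
    under the dependence it acts on, has the highest payoff."""
    return max(ACTIONS, key=dependence)

def consistent_dependences(observed_action):
    """From the outside: keep only the candidate dependences under which
    an argmax agent would have produced the observed behavior."""
    return [name for name, dep in CANDIDATE_DEPENDENCES.items()
            if argmax_agent(dep) == observed_action]

print(consistent_dependences("one_box"))   # ['world_tracks_your_choice']
print(consistent_dependences("two_box"))   # ['world_already_fixed']
```

With only one observed action, several candidate dependences can remain consistent; the point of the “learn everything there is to learn” idealization is that further observations keep narrowing the set.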