Well, I was kinda thinking of ν as being, say, a distribution of human behaviors in a certain context (as filtered through a particular user interface), though I guess that framing would only make sense in limited contexts, not in general contexts where it matters whether the agent is physically a human or something else. And in this sort of situation, the action “modify yourself to no longer be a quantilizer” would not be in the human distribution, because the actions needed to do that are not available to humans (humans are, presumably, not quantilizers, and the self-modification actions available to them are not the same). Though “create a successor agent” could still be in the human distribution.
Of course, one doesn’t have practical access to “the true probability distribution of human behaviors in context M”, so I guess I was imagining a trained approximation to this distribution.
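For concreteness, a minimal sketch of the kind of quantilizer I have in mind, with `base_policy` standing in for a trained approximation of ν and `utility` for the utility function (both names are just illustrative):

```python
import random

def quantilize(base_policy, utility, q=0.1, n_samples=1000):
    """Sample candidate actions from the learned stand-in for ν, then pick
    uniformly at random from the top q fraction as ranked by utility."""
    candidates = [base_policy() for _ in range(n_samples)]
    candidates.sort(key=utility, reverse=True)
    top = candidates[: max(1, int(q * n_samples))]
    return random.choice(top)
```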
Hm, well, suppose the distribution over human-like behaviors includes both making a successor agent which is a quantilizer and making one which isn’t, each with equal probability. I don’t see why a generic quantilizer in this case would pick the quantilizer successor over the plain optimizer, as the utility...
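If I spell out that worry with a toy calculation (the action names and utilities here are made up), then for any q ≤ 1/2 the top-q slice of ν contains only the plain-optimizer action, so the quantilizer picks it outright:

```python
# Toy version of the scenario: ν puts probability 1/2 on each
# successor-building action, and (by assumption) building the plain
# optimizer scores higher on the utility function.
nu = {"build_optimizer": 0.5, "build_quantilizer": 0.5}
u = {"build_optimizer": 2.0, "build_quantilizer": 1.0}

def top_q_support(dist, utility, q):
    """Actions a q-quantilizer can output: the highest-utility slice of
    dist whose cumulative probability reaches q."""
    ranked = sorted(dist.items(), key=lambda kv: utility[kv[0]], reverse=True)
    support, mass = [], 0.0
    for action, p in ranked:
        support.append(action)
        mass += p
        if mass >= q:
            break
    return support

print(top_q_support(nu, u, q=0.5))  # ['build_optimizer']
```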
Hm... I get the idea that the “quantilizers correspond to optimizing an infra-function of form [...]” thing is maybe dealing with a distribution over a single act?
Or... if we have a utility function over histories up to the end of the episode, then, given a model of how the environment will behave and of how one is likely to act at all future steps, each potential action at the current step gets an expected utility conditional on taking it, and this works as a utility function over actions for the current step. If one then acts as a quantilizer over that at each step, does that give the same behavior as an agent optimizing an infra-function defined using the condition with the L1 norm described in the post, in terms of the utility function over histories for an entire episode and reference distributions for the whole episode?
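In case it helps pin down what I mean by the per-step version, a sketch (the `model.expected_utility` interface is hypothetical; this just restates the construction, it doesn’t answer the equivalence question):

```python
import random

def step_quantilize(state, model, human_policy, q=0.1, n_samples=500):
    """One step of the per-step scheme: reduce the episode-level utility
    to a per-action score using the model, then quantilize over current
    actions. model.expected_utility(state, a) is assumed to return
    E[u(full history) | take a now, then evolve per the model]."""
    candidates = [human_policy(state) for _ in range(n_samples)]
    candidates.sort(key=lambda a: model.expected_utility(state, a),
                    reverse=True)
    top = candidates[: max(1, int(q * n_samples))]
    return random.choice(top)
```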
argh, seems difficult...