… except, going through the proof one finds that the latter property heavily relies on the “uniqueness” of the policy. My policy can get the maximum goal-directedness measure if it is the only policy of its competence level while being very deterministic. It isn’t clear that this always holds for the optimal/anti-optimal policies or always relaxes smoothly to epsilon-optimal/anti-optimal policies.
Thanks for the feedback!

Yeah, uniqueness definitely doesn’t always hold for the optimal/anti-optimal policy. I think the way MEG works here makes sense: if you’re following the unique optimal policy for some utility function, that’s a lot of evidence for goal-directedness. If you’re following one of many optimal policies, that’s a bit less evidence—there’s a greater chance that it’s an accident. In the most extreme case (for the constant utility function) every policy is optimal—and we definitely don’t want to ascribe maximum goal-directedness to optimal policies there.
With regard to relaxing smoothly to epsilon-optimal/anti-optimal policies, from memory I think we do have the property that MEG is increasing in the utility of the policy for policies with utility greater than that of the uniform policy, and decreasing for policies with utility less than that of the uniform policy. I think you can prove this via the property that the set of maxent policies is (very nearly) just the set of Boltzmann policies with varying temperature. But I would have to sit down and think about it properly. I should probably add that to the paper if it’s the case.
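To illustrate the claim in a toy one-shot setting (this is just a crude proxy, not the definition from the paper: I’m approximating MEG as the best predictive-accuracy gain of a Boltzmann policy for U over the uniform policy, with the maxent optimisation replaced by a grid over inverse temperatures, and the utility numbers are made up):

```python
import numpy as np

# Toy proxy for MEG, NOT the exact definition from the paper.
# Assumptions: one-shot decision, finite action set, and MEG approximated as the
# best predictive-accuracy gain of a Boltzmann policy for U over the uniform
# policy, with the maxent optimisation replaced by a grid over inverse
# temperatures (both signs, so "anti-optimal" behaviour also counts).

def boltzmann(U, beta):
    """Boltzmann policy: pi(a) proportional to exp(beta * U(a))."""
    logits = beta * U
    logits = logits - logits.max()   # numerical stability
    p = np.exp(logits)
    return p / p.sum()

def meg_proxy(pi, U, betas=np.linspace(-20.0, 20.0, 2001)):
    """max over beta of  E_{a~pi}[log pi_beta(a)] - E_{a~pi}[log uniform(a)]."""
    n = len(U)
    return max(pi @ np.log(boltzmann(U, b)) + np.log(n) for b in betas)

U = np.array([0.0, 1.0, 3.0])        # made-up utility over three actions

# Sweep over Boltzmann policies pi_t for this U: t < 0 is anti-optimal,
# t = 0 is the uniform policy, t > 0 is increasingly close to optimal.
print(f"{'t':>4} {'E[U]':>8} {'proxy':>8}")
for t in [-5, -2, -1, 0, 1, 2, 5]:
    pi = boltzmann(U, t)
    print(f"{t:>4} {pi @ U:>8.3f} {meg_proxy(pi, U):>8.3f}")
# The proxy is 0 at the uniform policy and grows as E[U] moves away from the
# uniform policy's utility in either direction.

# Constant utility function: every policy is optimal, all Boltzmann policies
# collapse to uniform, and the proxy is 0 for any policy whatsoever.
print("constant-U proxy:", meg_proxy(np.array([0.7, 0.2, 0.1]), np.zeros(3)))
```

On this proxy the minimum sits at the uniform policy, the value grows as the policy’s utility moves away from the uniform policy’s utility in either direction, and the constant utility function assigns zero to every policy, which matches the intuition above.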
minimum for uniformly random policy (this would’ve been a good property, but unless I’m mistaken I think the proof for the lower bound is incorrect, because negative cross entropy is not bounded below.)
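(To spell out the point: for a fixed policy \(p\) and any predictive distribution \(q\) over a finite action set,
\[
\mathbb{E}_{a \sim p}[\log q(a)] \;\le\; \mathbb{E}_{a \sim p}[\log p(a)] \;=\; -H(p),
\]
but there is no matching lower bound, since \(\mathbb{E}_{a \sim p}[\log q(a)] \to -\infty\) whenever \(q(a) \to 0\) for some action with \(p(a) > 0\).)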
Thanks for this. The proof is indeed nonsense, but I think the proposition is still true. I’ve corrected it to this.