That sounds to me like PP, or at least PP as it exists, is compatible with implementing different decision theories, rather than implying a specific decision theory by itself.
I generally agree with this. Specifically, I tend to imagine that PP is trying to make our behavior match a model in which we behave like an agent (at least sometimes). Hence, for instance, the human tendency to do things which “look like” or “feel like” optimizing for X without actually optimizing for X.
In that case, PP would be consistent with many decision theories, depending on the decision theory used by the model it’s trying to match.