Hm. I haven’t thought much about that. Maybe there is something interesting to be said about which aspects of an agent’s internal state it can have preferences over while there is still an interesting rationality theorem. If you let agents have preferences over all of their decisions, then there is no rationality theorem: any behavior is trivially rational for an agent that simply prefers to make exactly those decisions.
I don’t believe the VNM theorem describes humans, but on the other hand I don’t think humans should endorse violations of the Independence Axiom.
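(For reference, a standard statement of the Independence Axiom, not anything specific to this thread: for lotteries $A$, $B$, $C$ and any probability $p \in (0, 1]$,

$$A \succeq B \iff pA + (1-p)C \succeq pB + (1-p)C.$$

Mixing both options with the same third lottery should not reverse the preference.)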
Seems like a good topic to address as directly as possible, I agree.