That’s helpful. But it also seems unduly restrictive. I realize that you’re not saying that we literally have to treat our own minds as immaterial entities (are you?), but it still seems a pretty high price to pay. Can I treat the epistemic states of my loved ones as part of the outcome? Presumably so, so why can’t I give myself the same consideration? I’m trying to make you feel the cost, here, as I see it.
Hm. I haven’t thought much about that. Maybe there is something interesting to be said about which aspects of their own internal states agents can have preferences over while an interesting rationality theorem still holds. If you let agents have preferences over all decisions, then there is no rationality theorem.
I don’t believe the VNM theorem describes humans, but on the other hand I don’t think humans should endorse violations of the Independence Axiom.
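For reference, the Independence Axiom invoked here is the standard von Neumann–Morgenstern condition on preferences over lotteries; the notation below is the usual textbook formulation, not anything introduced in this conversation:

\[
  A \succ B \;\Longrightarrow\; pA + (1-p)C \;\succ\; pB + (1-p)C
  \qquad \text{for every lottery } C \text{ and every } p \in (0,1].
\]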
Seems like a good topic to address as directly as possible, I agree.