At this point it’s important to remember that in the VNM framework, the agent’s epistemic state and decision-making procedure cannot be part of the outcome. In this sense VNM-rational agents are Cartesian dualists. Counterfactual world-histories are also not part of the outcome.
So I think whether or not a decision was risky depends on the agent’s epistemic state, as well as on the decision and the agent’s preferences. This is why preferring to come by your money honestly is different from preferring to come by your money in a non-risky way.
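For concreteness, a sketch of the standard VNM setup (one common formulation; the exact one the speakers have in mind may differ): preferences range over lotteries on a fixed outcome set, and the utility function is defined on that outcome set, so anything not written into an outcome, such as the agent’s epistemic state, is invisible to it.

$$ L \succeq M \iff \mathbb{E}_{x \sim L}[u(x)] \ge \mathbb{E}_{x \sim M}[u(x)], \quad \text{for lotteries } L, M \in \Delta(X) \text{ and some } u : X \to \mathbb{R}. $$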
That’s helpful. But it also seems unduly restrictive. I realize that you’re not saying that we literally have to treat our own minds as immaterial entities (are you?), but it still seems a pretty high price to pay. Can I treat the epistemic states of my loved ones as part of the outcome? Presumably so, so why can’t I give myself the same consideration? I’m trying to make you feel the cost here, as I see it.
Hm. I haven’t thought much about that. Maybe there is something interesting to be said about which aspects of an agent’s internal state they can have preferences over while still getting an interesting rationality theorem. If you let agents have preferences over all decisions, then there is no rationality theorem: any pattern of behavior can be rationalized as the agent simply preferring the decisions it made.
I don’t believe the VNM theorem describes humans, but on the other hand I don’t think humans should endorse violations of the Independence Axiom.
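For reference, the Independence Axiom in one standard formulation:

$$ L \succ M \;\Longrightarrow\; pL + (1-p)N \succ pM + (1-p)N \quad \text{for every lottery } N \text{ and every } p \in (0,1]. $$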
Seems like a good topic to address as directly as possible, I agree.