I think the theorem implicitly assumes logical omniscience, and using heuristics instead of doing explicit expected utility calculations should make sense in at least some types of situations for us. The question is whether it makes sense in this one.
I think this is actually an interesting question. Is there an argument showing that we can do better than prase’s heuristic of rejecting all Pascal-like wagers, given human limitations?
If I had to describe my actual choices, I don’t know. None necessarily; any of the axioms, possibly. My inner decision algorithm is probably inconsistent in several ways; I don’t believe, for example, that my choices always satisfy transitivity.
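(To make the transitivity point concrete: a preference cycle is exactly what no utility function can represent. A minimal brute-force check, with made-up options A, B, C:)

```python
from itertools import permutations

# A cyclic preference: A preferred to B, B to C, C to A (hypothetical).
items = ["A", "B", "C"]
prefers = [("A", "B"), ("B", "C"), ("C", "A")]

def representable_by_utility(items, prefers):
    # A utility function exists iff some strict ranking of the items
    # agrees with every stated pairwise preference.
    for order in permutations(items):
        rank = {x: i for i, x in enumerate(order)}  # lower index = better
        if all(rank[a] < rank[b] for a, b in prefers):
            return True
    return False

print(representable_by_utility(items, prefers))  # False: a preference
# cycle has no utility representation, which is why VNM needs transitivity.
```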
What I wanted to say is that although I know my decisions are somewhat irrational and thus sub-optimal, in some situations, like Pascal wagers, I don’t find consciously constructing a utility function and calculating the right decision to be an attractive solution. It would help me to be marginally more rational (in the VNM sense), but I am convinced that the resulting choices would be fairly arbitrary and probably would not reflect my actual preferences. In other words, I can’t reach some of my preferences by introspection, and I think that an actual attempt to reconstruct a utility function would sometimes do worse than a simple, though inconsistent, heuristic.
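(The contrast between the two approaches can be sketched with entirely made-up numbers, not anyone’s actual utilities: an explicit expected-utility calculation accepts a Pascal-like wager that the blanket-rejection heuristic refuses, because the tiny probability times the astronomical payoff dominates the sum.)

```python
# A Pascal-like wager: tiny probability of an astronomically large payoff.
# All probabilities and utilities are hypothetical, chosen for illustration.

def expected_utility(outcomes):
    """Sum of probability * utility over (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# Take the wager: almost surely lose a small stake, with a sliver of a
# chance at an enormous payoff.
take_wager = [(1e-9, 1e15),    # payoff case: huge utility
              (1 - 1e-9, -1)]  # usual case: lose the small stake

# Reject the wager: keep the status quo.
reject = [(1.0, 0.0)]

# Explicit calculation says take it (1e-9 * 1e15 = 1e6 swamps the -1),
# while the "reject all Pascal-like wagers" heuristic says walk away.
print(expected_utility(take_wager) > expected_utility(reject))  # True
```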
Which of the axioms of the Von Neumann–Morgenstern utility theorem do you reject?