Hrm, could you try to steelman instead of strawmanning my position?
Steelman yourself. I took your quote and replaced it with an isomorphic version; it’s not my problem if it looks even more transparently irrelevant or wrong.
If it means that a VNM-based theory can't handle real-life choices directly, but must first convert them into a different set of choices, which can be much bigger than the real-life choices, well, that's something significant that you can't just hand-wave away with ill-placed irony, and the details of the conversion process have to be part of the decision theory.
Yes, it is significant, but it’s along the lines of “most (optimization) problems are in complexity classes higher than P” or “AIXI is uncomputable”. It doesn’t mean that the axioms or proofs are false; it just means that, yet again, as always, we need to make trade-offs and approximations.
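The blowup being discussed can be made concrete with a toy sketch. This is purely illustrative and not from either commenter: it assumes, hypothetically, that each real-life choice is described by n independent binary attributes, so converting choices into lotteries over fully specified VNM outcomes means enumerating every attribute combination, and the outcome space grows exponentially:

```python
from itertools import product

def outcome_space_size(n_attributes: int) -> int:
    """Count fully specified outcomes when each of n_attributes
    is binary: the space grows as 2 ** n_attributes."""
    return len(list(product([0, 1], repeat=n_attributes)))

# Just 10 binary attributes already yield 1024 distinct outcomes
# that a lottery would have to assign probabilities over.
print(outcome_space_size(10))  # 1024
```

This is the sense in which the conversion is "significant but not fatal": the axioms still hold over the enlarged outcome space, but any practical agent has to approximate rather than enumerate.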