I’m not sure about the first case:
if you don’t have a VNM utility function, you risk being mugged by wandering Bayesians
I don’t see why this is true. While “VNM utility function ⇒ safe from wandering Bayesians” holds, it’s not clear to me that “no VNM utility function ⇒ vulnerable to wandering Bayesians” follows; the second is the inverse of the first, not its contrapositive. I think the vulnerability to wandering Bayesians comes from failing to satisfy Transitivity rather than from failing to satisfy Completeness, though I have not done the math on that.
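To gesture at why my suspicion falls on Transitivity, here is a rough sketch (mine, not from the original argument) of the standard money-pump against cyclic preferences; the item names, fee, and trade sequence are made up for illustration:

```python
# Rough sketch: an agent with cyclic strict preferences (A < B < C < A)
# can be walked around the cycle for a fee and ends up holding the item
# it started with, strictly poorer. Items, fee, and offers are illustrative.

# (offered, held) -> True means the agent strictly prefers `offered` to `held`.
prefers = {("B", "A"): True, ("C", "B"): True, ("A", "C"): True}

def money_pump(start_item, offers, fee=1):
    """Offer each trade in turn, charging `fee` whenever the agent accepts."""
    held, paid = start_item, 0
    for offered in offers:
        if prefers.get((offered, held), False):  # accepts any strict "upgrade"
            held, paid = offered, paid + fee
    return held, paid

print(money_pump("A", ["B", "C", "A"]))  # ('A', 3): same item, 3 units poorer
```

An agent that merely declines to rank two options doesn’t obviously accept any of these trades, which is why Completeness doesn’t look like the load-bearing axiom to me.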
But I like the general point about approximation. Utility functions in game theory (decision theory?) problems normally range over only a small space of outcomes. I think completeness is an entirely safe assumption when talking about humans deciding which route to take to their destination, or what bets to make in a specified game. My question comes from the use of VNM utility in AI papers like this one: http://intelligence.org/files/FormalizingConvergentGoals.pdf, where agents have a utility function over possible states of the universe (with the restriction that the state space is finite).
Is the assumption that an AGI reasoning about universe-states has a utility function an example of reasonable use, in your view?
That can’t be right—if the probability of being in the Vulcan Mountain is 1⁄4 and the probability of being in the Vulcan Desert (per the guard) is 0, then the probability of being on Earth would have to be 3⁄4.
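Spelling out the arithmetic (assuming Earth, the Vulcan Mountain, and the Vulcan Desert are the only mutually exclusive possibilities):

$$P(\text{Earth}) = 1 - P(\text{Vulcan Mountain}) - P(\text{Vulcan Desert}) = 1 - \tfrac{1}{4} - 0 = \tfrac{3}{4}.$$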