Utility theory is significantly more problematic than probability theory.
In both cases, from certain axioms, certain conclusions follow. The difference is in the applicability of those axioms to the real world. Utility theory is supposedly about agents making decisions, but as I remarked earlier in the thread, these are “agents” that make just one decision and stop, with no other agents in the picture.
I have read that Morgenstern was surprised at how much significance was attached to the VNM theorem on its publication, when he and von Neumann had considered it a rather obvious and minor thing, relegated to the appendix of their book. I have come to agree with that assessment.
[Jeffrey’s] theory doesn’t involve time, like probability theory. It also applies to just one agent, again like probability theory.
Probability theory is not about agents. It is about probability. It applies to many things, including processes in time.
That people fail to solve the Sleeping Beauty paradox does not mean that probability theory fails. I have never paid the problem much attention, but Ape in the coat’s analysis seems convincing to me.
I mean that in a subjective interpretation, a probability function represents the beliefs of one person at one point in time. Equally, a (Jeffrey) utility function can represent the desires of one person at one particular point in time. As such, it is a theory of what an agent believes and wants.
Decisions can come into play insofar as individual actions can be described by propositions (“I do A”, “I do B”) and each of those propositions is equivalent to a disjunction of the form “I do A and X happens or I do A and not-X happens”, which is subject to the axioms. But decision-making is not something baked into the theory, much as probability theory isn’t necessarily about urns and gambles.
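As a sketch of how that decomposition interacts with the averaging (desirability) axiom in Jeffrey’s framework (with A standing in for a generic action proposition and X for a generic proposition about the world, both placeholders here): for incompatible propositions $p$ and $q$ with $P(p \vee q) > 0$, the axiom says

$$\mathrm{des}(p \vee q) = \frac{P(p)\,\mathrm{des}(p) + P(q)\,\mathrm{des}(q)}{P(p) + P(q)}.$$

Applying it to $A \equiv (A \wedge X) \vee (A \wedge \neg X)$ gives

$$\mathrm{des}(A) = P(X \mid A)\,\mathrm{des}(A \wedge X) + P(\neg X \mid A)\,\mathrm{des}(A \wedge \neg X),$$

i.e. the desirability of “I do A” is just the probability-weighted average of the desirabilities of its possible outcomes. Nothing in the axiom itself singles out A as a decision; it is a proposition like any other.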