They [Savage’s axioms] require that the agent’s actions be consistent in commonsensical ways.
This seems to be a common “overselling” of Savage’s ideas (and other axiomatic approaches to decision theory / probability). In order to decide that the axioms apply, you really need to understand them in detail rather than just accept that they are commonsensical.
It appears, for example, that they don’t apply when indexical uncertainty is involved, and that seems to be why people got nowhere trying to solve problems like the Absentminded Driver and Sleeping Beauty while keeping the basic subjective probability framework intact. Ironically, the original paper that spawned this whole literature actually noted that Savage’s axioms don’t apply:
Another resolution would entail the rejection of expected utility maximization given consistent beliefs when the information set includes histories whose probabilities depend on the decision maker’s actions at that information set. Savage’s theory views a state as a description of a scenario which is independent of the act. In contrast, “being at the second intersection” is a state which is not independent from the action taken at the first, and, consequently, at the second intersection.
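To make the quoted point concrete, here is a minimal sketch of the Absentminded Driver in Python. The payoffs (0 for exiting at the first intersection, 4 for exiting at the second, 1 for continuing past both) are the usual ones from the literature; the code framing is my own illustration, not from the thread:

```python
# Absentminded Driver: the driver cannot distinguish the two
# intersections, so a strategy is a single probability p of
# continuing (rather than exiting) at whichever intersection he is at.

def expected_payoff(p):
    # Exit at the first intersection, prob (1 - p): payoff 0.
    # Continue, then exit at the second, prob p * (1 - p): payoff 4.
    # Continue past both, prob p * p: payoff 1.
    return (1 - p) * 0 + p * (1 - p) * 4 + p * p * 1

# The probability of "being at the second intersection" is p itself:
# it depends on the act chosen, which is exactly the act-dependence
# of states that the quote says Savage's theory rules out.
best_p = max((i / 1000 for i in range(1001)), key=expected_payoff)
print(best_p, expected_payoff(best_p))  # p = 2/3, expected payoff 4/3
```

Maximizing 4p(1 − p) + p² gives p = 2/3 with expected payoff 4/3, and that computation only makes sense once the location probabilities are treated as functions of the act.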
Note that I’m not saying that logical uncertainty shouldn’t be handled using probabilities, just that the amount of work shown in this post seems way too low to determine that it should. Also, rather than trying to determine how to handle logical uncertainty using a foundational approach, we can just try various methods and see what works out in the end, and I’m not arguing against that either.
Okay, I’ve changed the Savage’s theorem entry to specifically call out that actions are defined as the things that lead to outcomes, and that they can lead to different outcomes depending on external possibilities in event-space. If that stops being true (e.g. if the outcome depends on something not in our external event-space, like which strategy you use), Savage’s theorem no longer applies, at least not to those objects; it might still apply to, e.g., strategies that lead to different outcomes depending only on external possibilities in event-space.
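As a rough illustration of that distinction (the umbrella example and all names here are mine, not from the thread), a Savage-style act can be modeled as a function from an external event-space to outcomes, with the event probabilities fixed independently of which act is chosen:

```python
from typing import Callable, Dict

State = str    # an external possibility, independent of the act
Outcome = str
Act = Dict[State, Outcome]  # an act: each state leads to an outcome

def expected_utility(act: Act, prob: Dict[State, float],
                     utility: Callable[[Outcome], float]) -> float:
    # Savage's theorem licenses this computation only because `prob`
    # is defined over states that do not depend on `act`.
    return sum(prob[s] * utility(act[s]) for s in prob)

take_umbrella: Act = {"rain": "dry", "shine": "encumbered"}
leave_umbrella: Act = {"rain": "wet", "shine": "unencumbered"}
u = {"dry": 1.0, "encumbered": 0.5, "wet": 0.0, "unencumbered": 1.0}
prob = {"rain": 0.3, "shine": 0.7}

print(expected_utility(take_umbrella, prob, u.get))   # 0.65
print(expected_utility(leave_umbrella, prob, u.get))  # 0.7

# In the Absentminded Driver, prob would have to be a function of the
# act ("at the second intersection" has probability p), so no fixed
# prob table exists and this framework no longer fits.
```

The same signature does still fit strategies whose outcomes vary only with external possibilities, which is the sense in which the theorem might still apply to those objects.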
This seems to be a common “overselling” of Savage’s ideas (and other axiomatic approaches to decision theory / probability). In order to decide that the axioms apply, you really need to understand them in detail rather than just accept that they are commonsensical.
Ok, I’ll work on making that more precise. Also, “consistent in commonsensical ways” is not the same as “commonsensical.” We’ll see why that’s important in two posts.
Note that I’m not saying that logical uncertainty shouldn’t be handled using probabilities, just that the amount of work shown in this post seems way too low to determine that it should.
I’d agree, especially since we are still two posts away from seeing the actual problem of logical uncertainty.
I seem to have promised you an unrealistic payoff, probably because I didn’t think I could keep people’s interest by just talking about the foundations of probability for a while before any promise of payoff. Ditto for summarizing and then linking to the full sources for people who want more, rather than quoting all the definitions, desiderata, and proofs of key results.