I am proposing no changes. My claim is that even though we use English words like “event-space” or “actions” when describing Savage’s theorem, the things that actually have the relevant properties in the AMD problem are the strategies.
Cribbing from the paper I linked, the key property of “actions” is that they are functions from the set of “states of the world” (itself a somewhat mutable choice) to the set of consequences (the things I have a utility function over). If the state is “I’m at the first intersection” and I take the action (no scare quotes: an actual action) of “go straight,” that does return a consequence.
How do you represent the strategy “always turn right” as a function from states to consequences? What does it return if the state is “I’m at the second intersection”, which is impossible if the agent uses that strategy?
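To make the type mismatch concrete, here is a minimal Python sketch. The state and consequence names are made up for illustration; only the shape of the functions matters:

```python
# A Savage-style "action" is a *total* function from states to
# consequences: it must return something for every state.
STATES = ["at_first_intersection", "at_second_intersection"]

def go_straight(state):
    # Total: defined on every state, so it qualifies as an action.
    return {
        "at_first_intersection": "reach_second_intersection",
        "at_second_intersection": "drive_past_both_exits",
    }[state]

def always_turn_right(state):
    # The *strategy* "always turn right" forced into the same format.
    # It exits at the first intersection, so the second intersection
    # is unreachable under it, and there is no principled consequence
    # to return there: the function is only partial.
    return {
        "at_first_intersection": "exit_at_first",
    }[state]  # KeyError for "at_second_intersection"
```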
Well, if we’re changing which objects are the “actions” in the proof, we’re probably also changing which objects are the “states.” You only need a strategy once; you don’t need a new strategy for each intersection.
If we have a strategy like “go straight with probability p,” a sufficient “state” is just the starting position and a description of the game.
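For concreteness, here is that computation with the payoffs from Piccione and Rubinstein’s original statement of the problem (exit at the first intersection: 0; exit at the second: 4; go past both: 1). The thread itself doesn’t fix numbers, so treat these as illustrative:

```python
# The strategy p (probability of going straight at an intersection)
# plays the role of the "action"; the single "state" is the game
# description, which the strategy maps to an expected consequence.
def expected_utility(p):
    exit_first = (1 - p) * 0       # turn right at the first intersection
    exit_second = p * (1 - p) * 4  # straight once, then turn right
    past_both = p * p * 1          # straight at both intersections
    return exit_first + exit_second + past_both

# Grid search recovers the known optimum p = 2/3, with EU = 4/3.
best_p = max((i / 1000 for i in range(1001)), key=expected_utility)
print(best_p, expected_utility(best_p))  # ~0.667  ~1.333
```

Note that the optimization only ever consults the game description; it never asks for a credence about which intersection the driver is currently at.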
Hmm, I’m not sure on what grounds we can actually rule out using the individual intersections as states, even though that leads to the wrong answer. Maybe they violate axiom 3, which requires the existence of “constant actions.”
Sorry for deleting my comment. I’m still trying to figure out where this approach leads. So now you’re saying that “I’m at the first intersection” isn’t actually a “state” and shouldn’t get a probability?
P(outcome | do(action)) has no proper place in our agent’s decision-making. Savage’s theorem requires us to assign probabilities to the things that determine the outcome; if our action by itself does not determine the outcome, P(outcome | do(action)) is not among the probabilities Savage’s theorem gives us.
And I do think that simultaneously, we can use Cox’s theorem to show that the absent-minded driver has some probability P(state | information). It’s just not integrated with decision-making in the usual way—we want to obey Savage’s theorem for that.
So we’ll have a probability due to Cox’s theorem. But for decision-making, we won’t ever actually need that probability, because it’s not a probability of one of the objects Savage’s theorem cares about.
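For the AMD specifically, the standard candidate for that Cox-style credence is the driver’s probability of being at each intersection, given “I’m at an intersection” and the strategy p. This formula is the usual one from the literature on the problem, not something derived in this thread:

```python
def p_state_given_information(p):
    # The first intersection is always reached; the second is reached
    # with probability p. Normalizing the visit frequencies gives the
    # driver's credence about which intersection this is.
    first, second = 1.0, p
    total = first + second
    return {"first": first / total, "second": second / total}

print(p_state_given_information(2 / 3))  # {'first': 0.6, 'second': 0.4}
```

This quantity is perfectly well-defined, and the expected-utility calculation above never asked for it.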
Yes, the key property of actions is that they are functions from the set of states to the set of consequences. Strategies do not have that property, because they can be randomized. If you convert randomized strategies to deterministic ones by “externalizing” random processes into black boxes in the world, Savage’s theorem will only give you some probability distribution over the black boxes, not necessarily the probability distribution that you intended. If you “hardcode” the probabilities of the black boxes into the inputs of Savage’s theorem, you might as well hardcode other things like utilities, and I don’t see the point.
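A sketch of the “externalizing” move being criticized, with illustrative names: the randomized strategy becomes a deterministic function of a coin that now lives in the world.

```python
import random

def coin():
    # The "black box": the random process moved out of the agent and
    # into the world. Savage's theorem will assign *some* subjective
    # probability to its outcomes, but nothing in the theorem forces
    # that probability to equal the 2/3 the randomized strategy
    # intended to use.
    return random.random() < 2 / 3

def deterministic_strategy(intersection, coin_outcome):
    # Deterministic once the coin outcome is folded into the state.
    return "go_straight" if coin_outcome else "turn_right"
```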