> Sorry for deleting my comment. I’m still trying to figure out where this approach leads. So now you’re saying that “I’m at the first intersection” isn’t actually a “state” and shouldn’t get a probability?
P(outcome | do(action)) has no proper place in our agent’s decision-making. Savage’s theorem requires us to assign probabilities to the things that determine the outcome; if our action does not by itself determine the outcome, then P(outcome | do(action)) isn’t a probability that Savage’s theorem gives us.
At the same time, I do think we can use Cox’s theorem to show that the absent-minded driver has some probability P(state | information). It’s just not integrated with decision-making in the usual way; for decision-making, we want to obey Savage’s theorem.
So we’ll have a probability due to Cox’s theorem. But for decision-making, we won’t ever actually need that probability, because it’s not a probability of one of the objects Savage’s theorem cares about.
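To make that concrete, here is a minimal sketch of the planning-stage calculation, assuming the standard Piccione-Rubinstein payoffs for the absent-minded driver (exit at the first intersection: 0, exit at the second: 4, never exit: 1). The only quantity the choice depends on is the continue-probability p; P(I’m at the first intersection) never enters.

```latex
% Planning-stage expected utility for the absent-minded driver.
% Assumed payoffs (standard Piccione-Rubinstein version):
%   exit at first intersection = 0, exit at second = 4, never exit = 1.
% The driver commits to continuing with probability p at any intersection.
\[
  EU(p) = (1-p)\cdot 0 + p(1-p)\cdot 4 + p^{2}\cdot 1 = 4p - 3p^{2}
\]
% Maximizing over p; no P(first intersection) appears anywhere:
\[
  \frac{dEU}{dp} = 4 - 6p = 0
  \quad\Rightarrow\quad
  p^{*} = \tfrac{2}{3}, \qquad EU(p^{*}) = \tfrac{4}{3}
\]
```

The Cox-style quantity P(first intersection | I’m at an intersection) can still be written down, but nothing in the maximization above ever asks for it.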
> Sorry for deleting my comment. I’m still trying to figure out where this approach leads. So now you’re saying that “I’m at the first intersection” isn’t actually a “state” and shouldn’t get a probability?
Right. To quote myself:
> So we’ll have a probability due to Cox’s theorem. But for decision-making, we won’t ever actually need that probability, because it’s not a probability of one of the objects Savage’s theorem cares about.