The note about possibilities being constrained by what you’ll actually choose pushed me to (I think) finally dissolve the mystery about the “action is chosen by self-prediction” view, which tries to avoid utility formalism. At face value it’s self-referential: what makes a prediction good, then? If it’s sharp concentration of probability, then it’s easy to “predict” some simple action with certainty and then perform it.

Probabilities of possible futures are constrained both by knowledge about the world and by knowledge about your goals. Decision-making is the process of taking the contribution of the goals into account: it updates the initial assessment of probabilities (“the car will run me over”) to one that includes the action you’ll take according to your goals (“I’ll step to the side, and the car will move past me”). But these probabilities are not predictions. You don’t choose the action probabilistically, by sampling from the distribution you’ve made; you choose the best one. The value being estimated has probabilistic properties, because the axioms of probability theory hold for it, but some of the normal intuitions about probability as belief or probability as frequency don’t apply to it.

The quality of the decision-making lies in how much probability was assigned to the actual action, just as the quality of a belief lies in how much probability was assigned to the actual state of the world. The difference is that with normal beliefs you don’t determine the actual fact they are measuring, while with decision-making you both determine the fact and form a “belief” about it. And in this paradoxical situation you must still remain rational about the “belief” and not confuse it with the territory: the “belief” is probabilistic and ranges over possibilities, while the fact is single, and (!) it is determined by the “belief”.
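A minimal sketch of the distinction in Python, as I read it (my own illustration, not from the original note; the action names, `p_good_outcome`, and the 0.05/0.95 figures are all invented): the goal-informed assessment is used to pick the best action, not as a lottery to sample an action from, and the resulting “self-prediction” concentrates on the chosen action only as a byproduct.

```python
import random

# Toy illustration (all names and numbers here are invented for this sketch).
# Two candidate actions in the "car is coming" example from the text.
actions = ["stand_still", "step_aside"]

# Goal-informed assessment: probability of a good outcome under each action,
# given knowledge about the world (made-up numbers).
p_good_outcome = {"stand_still": 0.05, "step_aside": 0.95}

# What decision-making does, on this reading: pick the best action.
chosen = max(actions, key=lambda a: p_good_outcome[a])

# What it does NOT do: treat the assessment as a lottery over actions
# and sample from it.
not_how_it_works = random.choices(
    actions, weights=[p_good_outcome[a] for a in actions]
)[0]

# The resulting "self-prediction" concentrates on the chosen action, but its
# quality comes from the goal-and-world-model process that produced it,
# not from the mere fact that it is sharply concentrated.
self_prediction = {a: (1.0 if a == chosen else 0.0) for a in actions}

print("chosen:", chosen)                   # step_aside
print("self-prediction:", self_prediction)
```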