It has been very interesting to read all your contributions to lukeprog’s post; this ‘paradox’ is clearly compelling, because there seems to be a shared gut reaction that something is wrong with the above formulation. I stumbled across this page with the exact same dilemma as lukeprog while reading Peterson’s An Introduction to Decision Theory (2nd edition). As you have all pointed out, there is something inherently fishy about his formulation of this particular example of ‘Rival Formalisations’.
I think that if you follow the logic of the initial axioms he uses in the book, the example he provides does not follow from them. For context, he formulates this ‘paradox’ by invoking two ‘axioms’: the Principle of Insufficient Reason (IR) and Merger of States (MS). These principles are as follows (Peterson 2009, page 35):
The Principle of Insufficient Reason (IR): If π is a formal decision problem in which the probabilities of the states are unknown, then it may be transformed into a formal decision problem π′ in which equal probabilities are assigned to all states.
Merger of states (MS): If two or more states yield identical outcomes under all acts, then these repetitious states should be collapsed into one, and if the probabilities of the two states are known, then they should be added.
From these two principles, he first applies the IR rule and then the MS rule to generate the ‘paradox’ above (Peterson 2009, page 35).
As per the above post by lukeprog, Peterson, in his (1/3, 1/3, 1/3) assignment to (P, LA, NY), insists that the probabilities of LA and NY can be added together to make 2/3, and that this is a correct application of IR and MS.
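To make this concrete, here is a minimal Python sketch of Peterson’s application as I read it (the state names follow lukeprog’s post; the dictionary representation is just my own illustration):

```python
from fractions import Fraction

# Peterson's order of application:
# Step 1 (IR): the probabilities of the three states are unknown,
# so assign equal probability to each.
after_ir = {"P": Fraction(1, 3), "LA": Fraction(1, 3), "NY": Fraction(1, 3)}

# Step 2 (MS, as Peterson applies it): LA and NY yield identical
# outcomes under all acts, so merge them -- and add their probabilities.
after_ms = {"P": after_ir["P"], "LA or NY": after_ir["LA"] + after_ir["NY"]}

print(after_ms)  # {'P': Fraction(1, 3), 'LA or NY': Fraction(2, 3)}
```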
The principle of IR actually contradicts this application, because he adds the 1/3 probabilities of NY and LA as if those probabilities were known a priori. Contrary to Peterson, they are not known a priori; that is why IR was invoked in the first place. MS requires that the probabilities of the states be known before they can be added. From IR, we know that these probabilities have instead been arbitrarily assigned in equal proportions, precisely because the probabilities of the states in question (P, NY, LA) are unknown a priori. It does not follow that the probability of NY can be added to that of LA. To do so is to treat the probabilities as known and unknown at the same time, which is a contradiction.
A good question one could ask is what the difference is between ‘collapsing’ states and adding probabilities, and whether it affects the above analysis. Much as with the ‘Sure-Thing Principle’, states with identical outcomes are collapsible into one because the probability component is irrelevant: whatever the likelihood of each state, the outcomes are the same. I think that is why, in this example, collapsing NY/LA into (NY or LA) is permissible, but adding probabilities that have no a priori known origin is not. This suggests that LA and NY should first be collapsed into one state because of their identical outcomes, and only then, because the probabilities of the states P and (LA or NY) are unknown (and unknowable) a priori, should 1/2 and 1/2 be assigned to these states.
I believe this is where Peterson’s application of these two principles falls short and contradicts itself. With this in mind, the correct application would be to first use MS (the first clause of MS, collapsing only) on LA and NY, treat (LA or NY) as one state, and then assign probabilities of 1/2 and 1/2 to P and (LA or NY). To do otherwise would be to contradict IR and MS.
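And here is a sketch of the ordering I am arguing for, under the same assumptions as the sketch above:

```python
from fractions import Fraction

# The order argued for above:
# Step 1 (MS, collapse clause only): LA and NY have identical outcomes
# under all acts, so collapse them into one state. No probabilities are
# known at this point, so there is nothing to add.
states = ["P", "LA or NY"]

# Step 2 (IR): the probabilities of the remaining states are still
# unknown, so split probability equally between them.
after_ir = {s: Fraction(1, len(states)) for s in states}

print(after_ir)  # {'P': Fraction(1, 2), 'LA or NY': Fraction(1, 2)}
```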
This would be my way of explaining the seemingly ‘paradoxical’ outcome of Peterson’s example; a careful reading suggests that his application is in no way compatible with the initial axioms.
Please refer to Peterson (2009), pages 33-35 of An Introduction to Decision Theory (Second Edition), for further reading.
Kind Regards,
Derek