Yes that’s right (regardless of whether it’s resolute or whether it’s using ‘strong’ maximality).
A sort of decision tree where the agent isn’t representable as having complete preferences is the one you provide here. We can even put the dynamic aspect aside to make the point. Suppose that the agent is in fact inclined to pick A+ over A, but doesn’t favour or disfavour B relative to either one. Here’s my representation: maximal choice with A+ ≻ A, and B incomparable to both (B ⋈ A and B ⋈ A+). As a result, I will correctly predict its behaviour: it’ll choose something other than A.
Can I also do this with another representation, using a complete preference relation? Let’s try out indifference between A+ and B. I’d indeed make the same prediction in this particular case. But suppose the agent were next to face a choice between A+, B, and B+ (where the latter is a sweetening of B). By transitivity, B+ ≻ B ∼ A+ gives us B+ ≻ A+, and so this representation would predict that B+ would be chosen for sure. But this is wrong, since in fact the agent is not inclined to favour B-type prospects over A-type prospects. In contrast, the incomplete representation doesn’t make this error.
Summing up: the incomplete representation works for both {A+,A,B} and {A+,B,B+}, while the only complete relation that also works for the former (indifference between A+ and B) fails for the latter.
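To make the prediction contrast concrete, here’s a quick sketch in Python (mine, purely illustrative; I encode each representation directly as a set of strict-preference pairs and apply maximal choice):

```python
# Maximal choice: an option is choosable iff nothing available
# is strictly preferred to it.
def maximal(options, strict):
    return {x for x in options if not any((y, x) in strict for y in options)}

# Incomplete representation: A+ > A and B+ > B (sweetenings), nothing across types.
incomplete = {("A+", "A"), ("B+", "B")}

# Complete representation: A+ ~ B, so transitivity forces
# B > A, B+ > A+, and B+ > A as well.
complete = {("A+", "A"), ("B+", "B"), ("B", "A"), ("B+", "A+"), ("B+", "A")}

for name, strict in [("incomplete", incomplete), ("complete", complete)]:
    print(name, maximal({"A+", "A", "B"}, strict), maximal({"A+", "B", "B+"}, strict))

# incomplete: {A+, B} from the first menu, {A+, B+} from the second --
#   never rules out the A-type pick.
# complete:   {A+, B} from the first menu, but only {B+} from the second.
```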
Are you aware that this is incompatible with Thornley’s ideas about incomplete preferences? Thornley’s decision rule might choose A. [Edit: I retract this; I misinterpreted the comment I linked to. They seem to be compatible.]
“But suppose the agent were next to face a choice”
If the choices are happening one after the other, are the preferences over tuples of outcomes? Or are the two choices in different counterfactuals? Or is it choosing an outcome, then being offered another outcome set that it could use to replace it with?
VNM is only well justified when the preferences are over final outcomes, not intermediate states. So if your example contains preferences over intermediate states, then it confuses the matter, because we can attribute the behaviour to those preferences rather than to incompleteness.
My use of ‘next’ need not be read temporally, though it could be. You might simply want to define a transitive preference relation for the agent over {A,A+,B,B+} in order to predict what it would choose in an arbitrary static decision problem. Only the incomplete one I described works no matter what the decision problem ends up being.
As a general point, you can always look at a decision ex post and back out different ways to rationalise it. The nontrivial task here is prediction, using features of the agent.
If we want an example of sequential choice using decision trees (rather than repeated ‘de novo’ choice through e.g. unawareness), it’ll be a bit more cumbersome but here goes.
Intuitively, suppose the agent first picks from {A,B+} and then, in addition, from {A+,B}. It ends up with two elements from {A,A+,B,B+}. Stated within the framework:
The set of possible prospects is X = {A,A+,B,B+} × {A,A+,B,B+}, where elements are pairs.
There’s a tree where, at node 1, the agent picks among paths labeled A and B+.
If A is picked, then at the next node, the agent picks from terminal prospects {(A,A+),(A,B)}. And analogously if path B+ is picked, i.e., it then picks from {(B+,A+),(B+,B)}.
The agent has appropriately separable preferences: (x,y) ≿ (x′,y′) iff x ≿′ x″ and y ≿′ y″ for some permutation (x″,y″) of (x′,y′), where ≿′ is a relation over components.
Then (A+,x) ≻ (A,x) while (A,x) ⋈ (B,x) for any prospect component x, and so on for the other comparisons. This is how separability makes it easy to say “A+ is preferred to A” even though preferences are defined over pairs in this case. I.e., we construct ≿ over pairs out of some ≿′ over components.
In this tree, the available prospects from the outset are (A,A+), (A,B), (B+,A+), (B+,B).
Using the same ≿′ as before, the (dynamically) maximal ones are (A,A+), (B+,A+), (B+,B). (The plan (A,B) is ruled out because (B+,A+) ≻ (A,B): componentwise, B+ ≻′ B and A+ ≻′ A.)
But what if, instead of positing incomparability between A and B+, we instead said the agent was indifferent? By transitivity, we’d infer A ≻′ B and thus A+ ≻′ B. But then (B+,B) wouldn’t be maximal, since now (B+,A+) ≻ (B+,B). We’d incorrectly rule out the possibility that the agent goes for (B+,B).
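If it helps, here’s a sketch of this in Python too (mine, illustrative; I encode ≿′ directly, lift it to pairs via the permutation clause above, and compute maximality statically over the four available plans, which matches the dynamic verdicts in this tree):

```python
from itertools import permutations

def lift(weak_c):
    """Lift a component relation >=' to pairs: (x,y) >= (x',y') iff
    x >=' x'' and y >=' y'' for some permutation (x'',y'') of (x',y')."""
    return lambda p, q: any(weak_c(p[0], a) and weak_c(p[1], b)
                            for a, b in permutations(q))

def maximal(plans, weak):
    strict = lambda p, q: weak(p, q) and not weak(q, p)
    return [p for p in plans if not any(strict(q, p) for q in plans)]

plans = [("A", "A+"), ("A", "B"), ("B+", "A+"), ("B+", "B")]

# Incomparability version: A+ >' A and B+ >' B only.
w1 = lift(lambda x, y: x == y or (x, y) in {("A+", "A"), ("B+", "B")})
print(maximal(plans, w1))  # [('A','A+'), ('B+','A+'), ('B+','B')]

# Indifference version: add A ~' B+, so transitivity also gives
# A >' B, A+ >' B, and A+ >' B+.
strict2 = {("A+", "A"), ("B+", "B"), ("A", "B"), ("A+", "B"), ("A+", "B+")}
indiff2 = {("A", "B+"), ("B+", "A")}
w2 = lift(lambda x, y: x == y or (x, y) in strict2 or (x, y) in indiff2)
print(maximal(plans, w2))  # [('A','A+'), ('B+','A+')] -- (B+,B) wrongly excluded
```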