Are you aware that this is incompatible with Thornley’s ideas about incomplete preferences? Thornley’s decision rule might choose A. [Edit: I retract this, it’s wrong].
But suppose the agent were next to face a choice
If the choices are happening one after the other, are the preferences over tuples of outcomes? Or are the two choices in different counterfactuals? Or is it choosing an outcome, then being offered another outcome that it could replace the first with?
VNM is only well justified when the preferences are over final outcomes, not intermediate states. So if your example contains preferences over intermediate states, then it confuses the matter because we can attribute the behavior to those preferences rather than incompleteness.
My use of ‘next’ need not be read temporally, though it could be. You might simply want to define a transitive preference relation for the agent over {A,A+,B,B+} in order to predict what it would choose in an arbitrary static decision problem. Only the incomplete one I described works no matter what the decision problem ends up being.
As a general point, you can always look at a decision ex post and back out different ways to rationalise it. The nontrivial task here is prediction, using features of the agent.
If we want an example of sequential choice using decision trees (rather than repeated ‘de novo’ choice through e.g. unawareness), it’ll be a bit more cumbersome but here goes.
Intuitively, suppose the agent first picks from {A,B+} and then, in addition, from {A+,B}. It ends up with two elements from {A,A+,B,B+}. Stated within the framework:
The set of possible prospects is X = {A,A+,B,B+}×{A,A+,B,B+}, where elements are pairs.
There’s a tree where, at node 1, the agent picks among paths labeled A and B+.
If A is picked, then at the next node, the agent picks from terminal prospects {(A,A+),(A,B)}. And analogously if path B+ is picked.
The agent has appropriately separable preferences: (x,y) ≿ (x′,y′) iff x ≿′ x″ and y ≿′ y″ for some permutation (x″,y″) of (x′,y′), where ≿′ is a relation over components.
Then (A+,x) ≻ (A,x) while (A,x) and (B,x) are incomparable, for any prospect component x, and so on for other comparisons. This is how separability makes it easy to say “A+ is preferred to A” even though preferences are defined over pairs in this case. I.e., we can construct ≿ over pairs out of some ≿′ over components.
In this tree, the available prospects from the outset are (A,A+), (A,B), (B+,A+), (B+,B).
Using the same ≿′ as before, the (dynamically) maximal ones are (A,A+), (B+,A+), (B+,B).
But what if, instead of positing incomparability between A and B+, we said the agent was indifferent? By transitivity (since B+ ≻′ B), we’d infer A ≻′ B and thus A+ ≻′ B. But then (B+,B) wouldn’t be maximal. We’d incorrectly rule out the possibility that the agent goes for (B+,B).
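To make the contrast concrete, here is a small sketch (my own illustration, not from the thread; function names like `geq_incomplete` are made up). It lifts a component relation ≿′ to pairs via the permutation rule, then checks whether (B+,B) is undominated among the four available prospects under (a) the incomplete relation with only A+ ≻′ A and B+ ≻′ B, and (b) the completion where A and B+ are indifferent:

```python
# Sketch: lift a component relation >=' to pairs via the permutation rule,
# then compare incomparability vs indifference between A and B+.
from itertools import permutations

def weakly_prefers(geq, p, q):
    # (x,y) >= (x',y') iff x >=' x'' and y >=' y'' for some
    # permutation (x'',y'') of (x',y').
    x, y = p
    return any(geq(x, a) and geq(y, b) for a, b in permutations(q))

def strictly_prefers(geq, p, q):
    return weakly_prefers(geq, p, q) and not weakly_prefers(geq, q, p)

def maximal(geq, prospects):
    # Prospects not strictly dominated by any other available prospect.
    return [p for p in prospects
            if not any(strictly_prefers(geq, q, p) for q in prospects)]

# (a) Incomplete component relation: only A+ >' A and B+ >' B hold.
def geq_incomplete(x, y):
    return x == y or (x, y) in {("A+", "A"), ("B+", "B")}

# (b) Completion with A ~ B+; transitivity then forces A >' B and A+ >' B.
rank = {"A+": 3, "A": 2, "B+": 2, "B": 1}  # equal rank encodes indifference
def geq_indifferent(x, y):
    return rank[x] >= rank[y]

prospects = [("A", "A+"), ("A", "B"), ("B+", "A+"), ("B+", "B")]
print(("B+", "B") in maximal(geq_incomplete, prospects))   # True
print(("B+", "B") in maximal(geq_indifferent, prospects))  # False
```

Under the incomplete relation, no available prospect strictly dominates (B+,B), so it stays maximal; once A and B+ are treated as indifferent, (B+,A+) strictly dominates it and it drops out.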
The description of how sequential choice can be defined is helpful, I was previously confused by how this was supposed to work. This matches what I meant by preferences over tuples of outcomes. Thanks!
We’d incorrectly rule out the possibility that the agent goes for (B+,B).
There are two things we might want from the idea of incomplete preferences:
(1) To predict the actions of agents.
(2) Because complete agents behave dangerously sometimes, and we want to design better agents with different behaviour.
I think modelling an agent as having incomplete preferences is great for (1). Very useful. We make better predictions if we don’t rule out the possibility that the agent goes for B after choosing B+. I think we agree here.
For (2), the relevant quote is:
As a general point, you can always look at a decision ex post and back out different ways to rationalise it. The nontrivial task here is prediction, using features of the agent.
If we can always rationalise a decision ex post as being generated by a complete agent, then let’s just build that complete agent. Incompleteness isn’t helping us, because the behaviour could have been generated by complete preferences.
I retract this part of the comment. I misinterpreted the comment that I linked to. Seems like they are compatible.