Yep, my point is that there’s no physical notion of being “offered” a menu of lotteries which doesn’t leak information. IIA will not be satisfied by any physical process that corresponds to presenting the decision-maker with a menu of options. Happy to discuss any specific counter-example.
Of course, you can construct a mathematical model of the physical process, and this model might be an informative object to study, but it would be begging the question if the mathematical model baked in IIA somewhere.
I like Pretentious Penguin’s idea that IIA might not be satisfied in general, but if you first get the agent to read A, B, and C, and then offer {A,B} and {A,B,C} as menus, (a specific instance of) IIA could be satisfied in that context.
You can gain info by being presented with more options, but once you have gained that info, you could simply be invariant to being presented with it again.
So you would get IIA*: “whether you prefer option A or B is independent of whether I offer you an irrelevant option C, provided that you have already processed {A,B,C} beforehand”.
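To try making that precise (this notation is my own, so treat it as a sketch): write $C(M \mid s)$ for the choice an agent makes from menu $M$ in epistemic state $s$, and $s \cdot M$ for the state after the agent processes $M$. Physically, the two offers are evaluated in different states, so the honest comparison is $C(\{A,B\} \mid s \cdot \{A,B\})$ versus $C(\{A,B,C\} \mid s \cdot \{A,B,C\})$, and those two updated states can differ (that’s the leak). IIA* then only claims: if $s$ already incorporates all of $A$, $B$, $C$, so that $s \cdot \{A,B\} = s \cdot \{A,B,C\} = s$, then

$$A \in C(\{A,B,C\} \mid s) \implies A \in C(\{A,B\} \mid s).$$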
You can’t have processed all possible information by any finite time, so IIA* is weaker than the original IIA.
I also haven’t checked whether IIA* runs into additional problems of its own.
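Here’s a toy sketch of the distinction. The Agent class, the valuations, and the update rule are all invented for illustration (nothing here is meant as a serious model of deliberation); the point is just that a stateful chooser can violate IIA on raw physical offers while satisfying IIA* once it has been primed:

```python
# Toy model: a stateful agent whose valuations depend on which options
# it has processed. All numbers and the update rule are made up.

class Agent:
    def __init__(self):
        self.seen = set()                    # options already processed
        self.value = {"A": 1.0, "B": 2.0}    # prior valuations

    def process(self, menu):
        """Reading a menu can change valuations (the information leak).
        Here, merely learning that C exists makes A look better."""
        if "C" in menu and "C" not in self.seen:
            self.value["A"] += 2.0           # arbitrary illustrative update
            self.value["C"] = 0.5
        self.seen |= set(menu)

    def choose(self, menu):
        """A physical offer is: process the menu, then pick the best option."""
        self.process(menu)
        return max(menu, key=lambda x: self.value.get(x, 0.0))

# Fresh agent: offering the "irrelevant" option C flips the A-vs-B choice,
# so IIA fails for the raw physical offer.
fresh = Agent()
assert fresh.choose({"A", "B"}) == "B"
flipped = Agent()
assert flipped.choose({"A", "B", "C"}) == "A"

# IIA*: an agent that has already processed {A, B, C} is invariant to
# being shown the same information again, so the two offers agree.
primed = Agent()
primed.process({"A", "B", "C"})
assert primed.choose({"A", "B", "C"}) == "A"
assert primed.choose({"A", "B"}) == "A"
```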
What about the physical process of offering somebody a menu of lotteries consisting only of options that they have seen before? Or a two-step physical process where one first tells somebody about some set of options, and then presents a menu of lotteries taken only from that set? I can’t think of any example where a rational-seeming preference function doesn’t obey IIA in one of these information-leakage-free physical processes.
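In terms of the toy Agent sketched above (reusing that class, so this inherits its made-up update rule), the two-step process would be something like:

```python
def two_step_offer(agent, options, menu):
    """Leak-free protocol: disclose the full option set first, then offer
    a menu of lotteries drawn only from that set."""
    assert menu <= options      # step 2 may not introduce new options
    agent.process(options)      # step 1: all the information up front
    return agent.choose(menu)   # step 2: the offer itself leaks nothing
```

Once step 1 has run, every subsequent menu drawn from `options` is chosen in the same fixed state, so the A-vs-B ranking can’t depend on whether C is on the menu, which matches the intuition that these protocols are exactly where IIA should hold.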