(sidetrack comment, this is not the main argument thread)
Think about your own preferences.
Let A be some career as an accountant, A+ be that career as an accountant with an extra $1 salary, and B be some career as a musician. Let p be small. Then you might reasonably lack a preference between 0.5p(A+)+(1-0.5p)(B) and A. That’s not instrumentally irrational.
I find this example unconvincing, because any agent with finite precision in its preference representation will have preferences that are a tiny bit incomplete in exactly this way. As such, a version of myself that could represent the value-to-me of different options more precisely would be uniformly better than me, by my own preferences. But the cost here is small: the money I’m leaving on the table is usually negligible relative to the price of representing and computing more fine-grained preferences.
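To make the finite-precision point concrete, here’s a rough sketch (the step size, the utility numbers, and the function names are all mine, purely for illustration): an agent that rounds values to a fixed grid can’t represent the extra dollar, so it ends up with no preference either way, and what it forgoes per comparison is bounded by the grid’s resolution.

```python
# Toy model of finite-precision preferences: utilities are stored on a grid,
# so options whose true values differ by less than a grid step can become
# incomparable in the agent's represented preferences.

def represented_utility(true_utility: float, step: float) -> float:
    """Round a true utility to the nearest representable grid point."""
    return round(true_utility / step) * step

def prefers(u_x: float, u_y: float, step: float) -> bool:
    """The agent prefers x to y only if the difference survives rounding."""
    return represented_utility(u_x, step) > represented_utility(u_y, step)

step = 0.01             # resolution of the agent's value representation
u_career_a = 1.000      # career A
u_career_a_plus = 1.004 # career A+ (A plus the extra dollar)

# The extra dollar vanishes in rounding: no represented preference either way.
print(prefers(u_career_a_plus, u_career_a, step))  # False
print(prefers(u_career_a, u_career_a_plus, step))  # False
```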
I think it’s really important to recognize the places where toy models can only approximately reflect reality, and this is one of them. But it doesn’t reduce the force of the dominance argument: the fact that humans (or any bounded agent) can’t have exactly complete preferences doesn’t mean it’s impossible for them to be better off by their own lights.
I appreciate you writing out this more concrete example, but that’s not where the disagreement lies. I understand partially ordered preferences, though I haven’t read the paper. I think it’s great to study or build agents with partially ordered preferences if that helps secure other useful properties. It just seems to me that such agents will inherently leave money on the table. In some situations that’s well worth it, and that’s fine.
The general principle that you appeal to (if X is weakly preferred to, or pref-gapped with, Y in every state of nature, and X is strictly preferred to Y in some state of nature, then the agent must prefer X to Y) implies that rational preferences can be cyclic: B must be preferred to p(B-)+(1-p)(A+), which must be preferred to A, which must be preferred to p(A-)+(1-p)(B+), which must be preferred to B.
No, hopefully the definition in my other comment makes this clear. I believe you’re switching the state of nature for each comparison, in order to construct this cycle.
There could be agents that only have incomplete preferences because they haven’t bothered to figure out the correct completion. But there could also be agents with incomplete preferences for which there is no correct completion. The question is whether these agents are pressured by money-pump arguments to settle on some completion.
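To spell out what ‘settling on a completion’ amounts to, here’s a toy sketch (the outcomes and the particular gaps are invented for illustration): take a strict partial preference order and enumerate the total orders that extend it; a money-pump argument, if it works, pressures the agent to commit to one of these.

```python
# Enumerate the completions (linear extensions) of a strict partial order.
from itertools import permutations

outcomes = ["A-", "A", "A+", "B"]  # B is pref-gapped with all the A-careers
settled = {("A+", "A"), ("A", "A-"), ("A+", "A-")}  # the only settled strict preferences

def extends(total_order, strict_pairs):
    """Check that a candidate total order respects every settled preference."""
    rank = {x: i for i, x in enumerate(total_order)}  # index 0 = most preferred
    return all(rank[x] < rank[y] for x, y in strict_pairs)

completions = [order for order in permutations(outcomes) if extends(order, settled)]
for order in completions:
    print(" > ".join(order))
# Four completions: B can slot anywhere around the A+ > A > A- chain.
```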
I understand partially ordered preferences.
Yes, apologies. I wrote that explanation in the spirit of ‘You probably understand this, but just in case...’. I find it useful to give a fair bit of background context, partly to jog my own memory, partly as a just-in-case, partly in case I want to link comments to people in future.
I believe you’re switching the state of nature for each comparison, in order to construct this cycle.
I don’t think this is true. You can line up states of nature in any way you like.
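To make this concrete, here’s a toy check of the cycle from above (the encoding of prospects as state-indexed tuples and all the function names are mine; it verifies only the bookkeeping of the quoted principle, not which side of the dispute is right): fix a single two-state space, with state 1 occurring at probability p, and all four dominance comparisons go through on that one alignment.

```python
# Outcomes within a career are ordered (A+ > A > A-, B+ > B > B-); across
# careers every pair is pref-gapped. Prospects map states of nature to
# outcomes, here as tuples indexed by state.

CHAINS = {"A": ["A+", "A", "A-"], "B": ["B+", "B", "B-"]}

def strictly_prefers(x: str, y: str) -> bool:
    """Strict preference holds within a career chain; across careers it never does."""
    for chain in CHAINS.values():
        if x in chain and y in chain:
            return chain.index(x) < chain.index(y)
    return False  # different careers: pref-gapped

def dominates(x, y) -> bool:
    """The quoted principle: X is weakly preferred or pref-gapped in every
    state (i.e. never strictly dispreferred) and strictly preferred in some."""
    states = range(len(x))
    never_worse = all(not strictly_prefers(y[s], x[s]) for s in states)
    sometimes_better = any(strictly_prefers(x[s], y[s]) for s in states)
    return never_worse and sometimes_better

# One fixed state space: state 0 has probability p, state 1 has probability 1-p.
b_sure    = ("B", "B")    # constant prospect B
lottery_1 = ("B-", "A+")  # p(B-) + (1-p)(A+)
a_sure    = ("A", "A")    # constant prospect A
lottery_2 = ("A-", "B+")  # p(A-) + (1-p)(B+)

for x, y in [(b_sure, lottery_1), (lottery_1, a_sure), (a_sure, lottery_2), (lottery_2, b_sure)]:
    print(x, ">", y, ":", dominates(x, y))
# All four print True, closing the cycle B > lottery_1 > A > lottery_2 > B.
```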