If we only had the idea of generalized preferences that allow cycles, it would be necessary to invent a new idea that was acyclic, because that’s a much more useful mental tool for talking about systems that choose based on predicting the impact of their choices. Or more loosely, that are “trying to achieve things in the world.”
The force of money-pump arguments comes from taking for granted that you want to describe the facets of a system that are about achieving things in the world.
If you want to describe other facets of a system that aren’t about achieving things in the world, feel free to depart from talking about them in terms of a preference ordering.
I don’t think fighting the hypothetical by introducing memory is actually all that interesting. Sure, if you set up the environment such that the agent can never be presented with a new choice, you can’t do money-pumping, but it doesn’t dispose of the underlying question of whether you should model this agent as having a preference ordering over outcomes for the purposes of decision-making. The memoryless setting is just a toy problem that illustrates some key issues.
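To make the memoryless setting concrete, here is a minimal sketch, with invented fruits and a made-up one-cent fee, of the classic pump against an agent with cyclic strict preferences:

```python
# Toy sketch of a money pump (invented items and fee) against a memoryless agent
# with cyclic strict preferences: apple > banana > cherry > apple.
# The agent accepts any trade up to something it strictly prefers, paying 1 cent each time.

PREFERS = {("apple", "banana"), ("banana", "cherry"), ("cherry", "apple")}  # (preferred, over)

def agent_accepts(offered, held):
    # Memoryless: the decision depends only on the current pair, not on history.
    return (offered, held) in PREFERS

# The pump always offers the item the agent prefers to what it currently holds.
NEXT_OFFER = {"apple": "cherry", "cherry": "banana", "banana": "apple"}

def run_pump(start="apple", fee_cents=1, rounds=9):
    held, fees_paid = start, 0
    for _ in range(rounds):
        offer = NEXT_OFFER[held]
        if agent_accepts(offer, held):
            held = offer
            fees_paid += fee_cents
    return held, fees_paid

print(run_pump())  # ('apple', 9): three full cycles later, same item, 9 cents poorer
```

Every three accepted trades bring the agent back to the item it started with, one fee poorer per round; that round trip is exactly what an acyclic preference ordering rules out.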
Or more loosely, that are “trying to achieve things in the world.”
I think this is too loose, to the point of being potentially misleading to someone who is not entirely familiar with the arguments around EU maximization. I have railed against such language in the past, and I will continue to do so, because I think it communicates the wrong type of intuition by making the conclusion seem more natural and self-evident (and less dependent on the boilerplate) than it actually is.[1]
By “trying to achieve things in the world,” you really mean “selecting the option that maximizes the expected value of the world-state you will find yourself in, given uncertainty, in a one-shot game.” But at a colloquial level, someone who is trying to optimize over universe-histories or trajectories would likely also be described as “trying to achieve things in the world”, and for such an agent the conclusions reached by money-pump arguments are far less interesting and important.
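As a toy contrast (utilities and fees invented purely for illustration), here is a sketch of an agent whose preferences are over trajectories rather than end states; it trades around the same “cycle” and pays the fees, yet by its own lights it comes out ahead:

```python
# Toy sketch (made-up numbers): an agent that scores whole trajectories, valuing
# each distinct fruit it has tasted at 10 points and paying 1 point per trade.
# Viewed state-by-state it looks like the cyclic agent above being pumped
# (apple -> banana -> cherry -> apple), but its trajectory-level utility went up.

def trajectory_utility(history, taste_value=10, fee_per_trade=1):
    distinct_tastes = len(set(history))
    trades = len(history) - 1
    return taste_value * distinct_tastes - fee_per_trade * trades

pumped_looking = ["apple", "banana", "cherry", "apple"]  # traded around the cycle
never_traded = ["apple"]

print(trajectory_utility(pumped_looking))  # 27: three tastes, three small fees
print(trajectory_utility(never_traded))    # 10: one taste, no fees
```

The point of the sketch is only that money-pump arguments bite against preferences over where you end up; an optimizer over histories can walk the “cycle” without being exploited in its own terms.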
it doesn’t dispose of the underlying question of whether you should model this agent as having a preference ordering over outcomes for the purposes of decision-making
The money-pump arguments don’t dispose of this question either; they just assume that the answer is “yes” (to be clear, “preference ordering over outcomes” here refers to outcomes as lottery-outcomes rather than world-state-outcomes, since knowing your ordering over the latter tells you very little that is decision-relevant about the former).
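A minimal worked example of that parenthetical, with utility numbers invented purely for illustration: suppose an agent ranks the world-states $A \succ B \succ C$. That ranking alone does not settle whether it should prefer the lottery $\tfrac12 A + \tfrac12 C$ to a sure $B$:

$$u(A)=1,\; u(B)=0.9,\; u(C)=0:\qquad \mathbb{E}\big[u(\tfrac12 A + \tfrac12 C)\big] = 0.5 < 0.9 = u(B)$$

$$u(A)=1,\; u(B)=0.1,\; u(C)=0:\qquad \mathbb{E}\big[u(\tfrac12 A + \tfrac12 C)\big] = 0.5 > 0.1 = u(B)$$

Both assignments induce the same ordinal ranking over world-states, yet they disagree about the lottery-versus-sure-thing choice, which is the ordering the money-pump arguments actually operate on.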
[1] This accords with the old (paraphrased) adage that “if you don’t tell someone that an idea is hard or unnatural, they will come away with the mistaken impression that it’s easy and natural, and then integrate that misunderstanding into their world-model”, which works fine until they run into a situation where rigor and deep understanding become crucial.
A basic illustration is introductory proof-based math courses: if students are shown a dozen examples of induction working and no examples of it failing to produce a proof, they will initially think everything can be solved by induction, which will hurt their grade on the midterm when a problem requires a method other than induction.