> Or more loosely, that are “trying to achieve things in the world.”
I think this is too loose, to the point of being potentially misleading to someone who is not entirely familiar with the arguments around EU maximization. I have railed against such language in the past, and I will continue to do so, because I think it communicates the wrong type of intuition by making the conclusion seem more natural and self-evident (and less dependent on the boilerplate) than it actually is.[1]
By “trying to achieve things in the world,” you really mean “selecting the option that maximizes the expected value of the world-state you will find yourself in, given uncertainty, in a one-shot game.” But at a colloquial level, someone trying to optimize over universe-histories or trajectories would also get assigned the attribute of “trying to achieve things in the world”, and for that kind of agent the conclusions reached by money-pump arguments are far less interesting and important.
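To make the distinction concrete, here is a hypothetical toy sketch (all names and numbers are my own illustrative assumptions, not anything from the original argument): two agents that would both colloquially be described as “trying to achieve things in the world”, where one ranks options purely by the final world-state and the other ranks whole trajectories, with a penalty per costly step. The same loose phrase covers both, yet they choose differently on the same options.

```python
# Two agents, one colloquial description, two different choices.
# Each option has a final world-state value and a number of costly steps
# taken along the way (the trajectory).
options = {
    "direct": {"final_value": 10, "steps": 1},
    "detour": {"final_value": 12, "steps": 6},
}

def state_optimizer(opts):
    """Ranks options by the final world-state alone."""
    return max(opts, key=lambda k: opts[k]["final_value"])

def trajectory_optimizer(opts, step_cost=0.5):
    """Ranks entire trajectories: final state minus a cost per step."""
    return max(opts, key=lambda k: opts[k]["final_value"] - step_cost * opts[k]["steps"])

print(state_optimizer(options))       # detour: 12 > 10
print(trajectory_optimizer(options))  # direct: 10 - 0.5 = 9.5 beats 12 - 3.0 = 9.0
```

The point is only that “trying to achieve things in the world” underdetermines which of these two agents you are talking about, and the money-pump arguments bite very differently depending on which one it is.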
> it doesn’t dispose of the underlying question of whether you should model this agent as having a preference ordering over outcomes for the purposes of decision-making
The money-pump arguments don’t dispose of this question either; they just assume that the answer is “yes”. (To be clear, “preference ordering over outcomes” here refers to outcomes as lottery-outcomes rather than world-state-outcomes, since knowing your ordering over the latter tells you very little decision-relevant information about the former.)
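For readers less familiar with the money-pump argument being assumed here, a minimal sketch of the standard construction (the items, fee, and function names are my own hypothetical choices): an exploiter faces an agent with cyclic preferences A ≻ B ≻ C ≻ A, and on each trade offers the item the agent strictly prefers to its current holding, for a small fee. The agent accepts every trade yet ends up where it started, minus the fees.

```python
def simulate_money_pump(trades: int, fee: float = 1.0) -> float:
    """Extract `fee` per trade from an agent with cyclic preferences A > B > C > A.

    The exploiter always offers the item the agent strictly prefers to its
    current holding; the agent accepts each trade in isolation, ignoring the
    trajectory (the total fees already paid). Returns wealth extracted.
    """
    # Given A > B > C > A: holding B, the agent prefers A; holding C, it
    # prefers B; holding A, it prefers C.
    preferred_swap = {"B": "A", "C": "B", "A": "C"}
    holding = "A"
    extracted = 0.0
    for _ in range(trades):
        holding = preferred_swap[holding]  # agent happily trades up its cycle
        extracted += fee
    return extracted

print(simulate_money_pump(9))  # after 9 trades the agent is back at A, down 9.0
```

Note what the construction assumes: that the agent evaluates each pairwise trade by its immediate outcome. An agent with preferences over trajectories, tracking the fees paid so far, would simply refuse to keep circling, which is the sense in which the argument presupposes its answer rather than establishing it.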
This conforms with the old (paraphrased) adage: “If you don’t tell someone that an idea is hard or unnatural, they will come away with the mistaken impression that it’s easy and natural, and then integrate that misunderstanding into their world-model.” That works fine until they hit a situation where rigor and deep understanding become crucial.
A simple illustration comes from introductory proof-based math courses: if students are shown a dozen examples of induction working and no examples of it failing to produce a proof, they will initially think everything can be solved by induction, which will hurt their grade on the midterm problem that requires a method other than induction.