Maybe I’m just a less charitable person—it seems very easy to me for someone to say the words “I have unbounded utility” without actually connecting any such referent to their decision-making process.
We can show that there’s a tension between that verbal statement and the basic machinery of decision-making, and we can also illustrate how the practical decision-making process people use every day doesn’t act as if expected utilities diverge.
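To make that tension concrete, here is a minimal sketch (my own illustration, not anything from the post itself) of the standard St. Petersburg-style setup: if utility is unbounded, a gamble paying utility 2^n with probability 2^-n has partial expected utilities that grow without bound, yet nobody’s everyday decision-making treats such a gamble as infinitely valuable.

```python
# Minimal sketch (assumed setup, not from the post): a St. Petersburg-style
# gamble where outcome n has probability 2**-n and utility 2**n, for n >= 1.
# With an unbounded utility function, the expected utility diverges.

def truncated_expected_utility(num_terms: int) -> float:
    """Partial sum of E[U] = sum over n of (2**-n) * (2**n)."""
    return sum((0.5 ** n) * (2.0 ** n) for n in range(1, num_terms + 1))

for n in (10, 100, 1000):
    # Every term contributes exactly 1, so the partial sums are 10, 100, 1000, ...
    print(n, truncated_expected_utility(n))

# The sum never settles on a finite value, so "maximize expected utility"
# gives no guidance here, yet actual people happily rank such gambles.
```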
And I think the proper response to seeing something like this happen to you is definitely not to double down on the verbal statement that sounded good. It’s to stop and think very skeptically about whether this verbal statement fits with what you can actually ask of reality, and what you might want to ask for that you can actually get. (I’ve written too many posts about why it’s the wrong move to want an AI to “just maximize my utility function.” Saying that you want to be modeled as if you have unbounded utility [of this sort that lets you get divergent EV] is the same order of mistake.)
If you think people can make verbal statements that are “not up for grabs,” this probably seems like gross uncharitableness.
I can easily imagine people being mistaken about “would you prefer X or Y?” questions (either in the sense that their decisions would change on reflection, or their utterances aren’t reflective of what should be rightly called their preferences, or whatever).
That said, I also don’t think that it’s obvious that uncertainty should be represented as probabilities, with preferences depending only on the probabilities of outcomes.
That said, all things considered I feel like bounded utility functions are much more appealing than the other options. Mostly I wrote this post to help explain my serious skepticism about unbounded utility functions (and about how nonchalantly the prospect of unbounded utility functions is thrown around).
Just posting to say I’m strongly in agreement that unbounded utility functions aren’t viable—and we tried to deal with some of the issues raised by philosophers, with more or less success, in our paper here: https://philpapers.org/rec/MANWIT-6
This is basically what I tried to argue in my preprint with Anders on infinite value—to quote:
“We have been unfortunately unable to come up with a clear defense of the conceivability of infinities and infinitesimals used for decisionmaking, but will note a weak argument to illustrate the nonviable nature of the most common class of objection. The weak claim is that people can conceive of infinitesimals, as shown by the fact that there is a word for it, or that there is a mathematical formalism that describes it. But, we respond, this does not make a claim for the ability to conceive of a value any better than St. Anselm’s ontological proof of the existence of God. More comically, we can say that this makes the case approximately the same way someone might claim to understand infinity because they can draw an 8 sideways — it says nothing about their conception, much less the ability to make decisions on the basis of the infinite or infinitesimal value or probability.”
This seems plausible to me for people who don’t live and breathe math but still think Expected Utility is a tool they can’t afford not to use. I would be surprised if the typical person, even here, picks up on the subtleties of the infinite sums and their weird implications on a first pass. I don’t think infinite sums (and their many pitfalls) are typically taught at all until Calc II, which is not even a graduation requirement for non-STEM undergrad degrees.
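As one concrete example of the kind of pitfall meant here (my own illustration, not from the comment): with infinitely many outcomes, a conditionally convergent expected-utility sum changes its value when the outcomes are merely re-ordered, which is the Riemann rearrangement phenomenon from Calc II.

```python
import math

# Sketch (assumed example): the alternating harmonic series converges to ln 2,
# but reordering its terms (two positive, then one negative, per block) makes
# the very same terms converge to (3/2) * ln 2 instead.

def alternating_harmonic(num_terms: int) -> float:
    """Partial sum of 1 - 1/2 + 1/3 - 1/4 + ..."""
    return sum((-1) ** (n + 1) / n for n in range(1, num_terms + 1))

def rearranged(num_blocks: int) -> float:
    """Same terms in a different order: 1 + 1/3 - 1/2 + 1/5 + 1/7 - 1/4 + ..."""
    return sum(1 / (4 * k - 3) + 1 / (4 * k - 1) - 1 / (2 * k)
               for k in range(1, num_blocks + 1))

print(alternating_harmonic(100_000), "vs", math.log(2))   # ~0.6931
print(rearranged(100_000), "vs", 1.5 * math.log(2))       # ~1.0397
```

If an agent’s preferences over gambles are computed as such a sum, the “expected utility” depends on an arbitrary bookkeeping choice, which is one reason the infinite cases need more care than everyday problems do.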
People also get a lot of mileage out of realizing that IRL most problems aren’t edge cases, and even fewer are corner cases, so they rightly skip most of the rigor that’s necessary when discussing philosophy and purposely seeking out weird edge cases.
Now, if someone who is actually well versed in the math and the philosophizing is saying this while understanding all the implications, that’s an interesting discussion I want to read.