I can easily imagine people being mistaken about “would you prefer X or Y?” questions (either in the sense that their decisions would change on reflection, or their utterances aren’t reflective of what should be rightly called their preferences, or whatever).
That said, I also don’t think it’s obvious that uncertainty should be represented as probabilities, with preferences depending only on the probabilities of outcomes.
Still, all things considered, bounded utility functions seem much more appealing to me than the other options. Mostly I wrote this post to explain my serious skepticism about unbounded utility functions (and about how nonchalantly the prospect of unbounded utility functions gets thrown around).
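To make the contrast concrete, here is a small sketch (my own illustration, not from the post) using the classic St. Petersburg gamble: payoff 2^k with probability 2^-k for k = 1, 2, …. With an unbounded utility u(x) = x, every term of the expected-utility sum contributes exactly 1, so the sum diverges; with a bounded utility such as u(x) = x/(1+x), the partial sums converge quickly.

```python
def expected_utility(u, n_terms):
    """Partial expected utility of the St. Petersburg gamble:
    payoff 2**k with probability 2**-k, summed over k = 1..n_terms."""
    return sum(2.0 ** -k * u(2.0 ** k) for k in range(1, n_terms + 1))

unbounded = lambda x: x             # u(x) = x, unbounded above
bounded = lambda x: x / (1.0 + x)   # u(x) in [0, 1), an arbitrary bounded choice

# Unbounded utility: each term contributes exactly 1, so the partial
# sums grow linearly with the number of terms (the series diverges).
print(expected_utility(unbounded, 10))   # 10.0
print(expected_utility(unbounded, 100))  # 100.0

# Bounded utility: the terms are 1 / (1 + 2**k), so the partial sums
# plateau well below the utility bound of 1.
print(expected_utility(bounded, 100))
```

The point is not that x/(1+x) is the right bounded utility, only that any bound tames these divergent gambles, which is one reason boundedness looks attractive.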
Just posting to say I’m strongly in agreement that unbounded utility functions aren’t viable—and we tried to deal with some of the issues raised by philosophers, with more or less success, in our paper here: https://philpapers.org/rec/MANWIT-6