To be less aggravating, I’ll pre-explain: nothing personal, of course. I don’t believe any person has a defined utility function. As for unbounded: there’s a largest number your brain can effectively encode. I can buy an unbounded (except by mortality) sequence of equally subjectively strong preferences over a sequence of new states, each one better than the last by the same margin, with enough time elapsed between them that each improved state becomes the new baseline. But I don’t see why you’d want to call that an “unbounded utility function”. I’d appreciate a precise demonstration that it is one. Maybe you could say that the magnitude of each preference is the same as a particular utility function would predict.
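To make that last suggestion precise (this is my own formalization, not necessarily what you had in mind): suppose the states are $s_0, s_1, s_2, \dots$ and each is preferred to its predecessor with the same subjective strength $c > 0$. Then the utility function whose preference magnitudes match would be

```latex
u(s_n) = u(s_0) + n\,c, \qquad c > 0,
```

which is unbounded as $n \to \infty$. But since mortality caps $n$, only a bounded initial segment of $u$ is ever realized, which is why I hesitate to call the agent itself a bearer of an unbounded utility function.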
If I’m charitable, I can believe a claim similar to your original: you don’t know of, or accept, any reason why it shouldn’t be possible that you actually have (an approximation to?) an unbounded utility function. Okay, but that’s not the same as knowing it’s possible.
(Speculation aired to ward off possible tedious game-playing; let me know if I missed the mark.)
If your argument is that I can’t have a defined utility function, and you concede that therefore I can’t be gamed by this, then I don’t think we actually disagree on anticipations, just on linguistics and possibly some philosophy. Certainly nothing I’d be inclined to argue there, yeah :)
Close enough (I didn’t have any “therefore” in mind, just disagreement with what I thought you claimed), though I wouldn’t call the confusion linguistics or philosophy.
It does seem I tried to read you too literally. I’m still not entirely sure what you meant (if you’d offered a reason for your belief, it might have been clearer what that belief was).
Thanks for helping us succeed in not arguing over nothing—probably a bigger coup than whatever it was we were intending to contribute.
Let’s see the definition, then.