I don’t know if my utility function is bounded. My statement was much weaker: that I’m not confident about decision-making in situations involving infinities. You’re right that the problem arises not just for unbounded utilities, but also for arbitrarily fine distinctions between utilities. Neither seems to apply to your original post, though, where everything is finite and I can be pretty damn confident.
Algebraic reasoning is independent of the number system used. If you are reasoning about utility functions in the abstract, and your reasoning does not make use of any properties of numbers, then it doesn’t matter what numbers you use. You’re not using any properties of finite numbers to define anything, so whether these numbers are finite is irrelevant.
The original post doesn’t require arbitrarily fine distinctions, just 2^trillion distinctions. That’s perfectly finite.
Your comment that Bob doesn’t assign a high utility value to anything is equivalent to saying that Bob’s utility function is bounded.
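The equivalence can be spelled out (a sketch; the symbols $U_{\text{Bob}}$ and $B$ are illustrative, not from the thread):

```latex
% "Bob assigns no outcome a high utility" is exactly the boundedness condition:
\exists B \in \mathbb{R} \;:\; \forall x,\; U_{\text{Bob}}(x) \le B
```

That is, refusing to assign arbitrarily high utility to any outcome is the same thing as the utility function having a finite upper bound.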
Right, but Bob was based on your claims in this comment about what’s “reasonable” for you. I didn’t claim to agree with Bob.
Fair enough. I have a question then. Do you personally agree with Bob?
You’re asking if my utility function is bounded, right? I don’t know. All the intuitions seem unreliable. My original confident answer to you (“second strategy of course”) was from the perspective of an agent for whom your thought experiment is possible, and such an agent necessarily disagrees with Bob. I didn’t want to make any stronger claim than that.
I am, and thanks for answering. Keep in mind that there are ways to make your intuition more reliable, if that’s a thing you want.