UPDATED: If asked whether the problem is solvable, a perfectly rational agent would reply that it isn’t.
If asked what action to take, the perfectly rational agent is stuck, and therefore finds out it isn’t perfect. Those are two distinct questions. I suppose it all comes down to how you define rationality, though.
So, besides what I will call the earlier-work issue, CCC and others have already mentioned that your scenario would allow non-converging expected values, as in the St Petersburg paradox. By the usual meaning of utility, which you’ll note is not arbitrary but equivalent to certain attractive axioms (the von Neumann–Morgenstern axioms), your scenario contradicts itself.
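For concreteness: in the St Petersburg game a fair coin is flipped until it first lands heads, and the payoff is $2^k$ units of utility when that happens on flip $k$, so the expected utility is

$$\mathbb{E}[U] = \sum_{k=1}^{\infty} \left(\tfrac{1}{2}\right)^{k} 2^{k} = \sum_{k=1}^{\infty} 1 = \infty,$$

which is exactly the non-convergence at issue.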
I’ve seen two main solutions offered for the general problem. If we just require bounded utility, there might be something left of the OP, but only under assumptions that appear physically impossible and are thus highly suspicious. (Immediately after learning that your argument contradicts itself is a bad time to tell us what you think is logically possible!) I tend towards the other option, which says the people complaining about physics are onto something fundamental concerning the probabilities of ever-vaster utilities. This would disintegrate the OP entirely.
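To spell out why bounding utility dissolves the divergence: if utility is capped at some bound $B \ge 1$, the St Petersburg sum becomes

$$\mathbb{E}[U] = \sum_{k=1}^{\infty}\left(\tfrac{1}{2}\right)^{k}\min\!\left(2^{k}, B\right) \le \log_{2} B + \sum_{k > \log_{2} B} \frac{B}{2^{k}} < \infty,$$

since only finitely many terms contribute a full unit of expected utility and the rest form a convergent geometric tail.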
“Non-converging expected values”—you can’t conclude that the scenario is contradictory just because your tools don’t work.
As already noted, we can consider the problem where you name any number less than 100 (but not 100 itself) and gain that much utility, which avoids the whole non-convergence problem.
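A minimal sketch of why even this convergent version leaves a perfectly rational agent stuck (an illustration, with the utility taken to be simply the number named): every admissible answer is strictly dominated by another, so no optimal action exists even though every expected value is finite.

```python
def improve(x: float) -> float:
    """Given any admissible answer x < 100, return a strictly better one.

    Utility here is just the number named, so moving halfway toward 100
    always gains utility while remaining admissible (below 100).
    """
    assert x < 100
    return (x + 100) / 2

# Whatever you name, there is always a strictly better admissible answer:
# the supremum (100) is never attained, so "the optimal action" is undefined.
x = 99.0
for _ in range(5):
    better = improve(x)
    assert x < better < 100
    x = better
print(x)  # 99.96875: closer to 100, yet still improvable without end
```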
“This would disintegrate the OP entirely”: as already stated in other comments, claims that my situation isn’t realistic would be a good criticism if I were claiming that the results could be directly applied to the real universe.
If asked whether the problem is solvable, a perfectly rational agent would reply that it is.
Why? It’s a problem without a solution. Would a perfectly rational agent say the problem of finding a negative integer greater than 2 is solvable?
Sorry, that was a typo. It was meant to say “isn’t” rather than “is”.