So, when we solve linear programming problems (say, with the simplex method), there are three possible outcomes: the problem is infeasible (no solution satisfies the constraints), the problem has an optimal solution (which the method finds), or the problem is unbounded (the objective can be made arbitrarily large).
That is, if your “perfect theoretical rationality” requires that unbounded problems never arise, then your perfect theoretical rationality won’t work and cannot handle even simple things like LP problems. So I’m not sure why you think this version of perfect theoretical rationality is interesting, and am mildly surprised and disappointed that this was your impression of rationality.
“Cannot include simple things like LP problems”—Well, linear programming problems are simply a more complex version of the number choosing game. In fact, the number choosing game is equivalent to the linear program “maximise x subject to x > 0”. So, if you want to criticise my definition of rationality for not being able to solve basic problems, you should be criticising it for not being able to solve the number choosing game!
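To make that equivalence concrete, here is a minimal sketch in Python (assuming scipy is available; linprog minimises by convention, so we negate the objective, and the weak bound x >= 0 stands in for x > 0 since the strictness doesn’t affect unboundedness):

```python
# A minimal sketch of the number choosing game as a linear program:
# maximise x subject to x >= 0. scipy's linprog minimises, so we
# minimise -x instead.
from scipy.optimize import linprog

result = linprog(
    c=[-1],              # objective: minimise -x, i.e. maximise x
    bounds=[(0, None)],  # x >= 0, with no upper bound
)

# status 3 means "problem is unbounded": the objective can be made
# arbitrarily large, so no optimal solution exists to report.
print(result.status, result.message)
```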
I wouldn’t say that makes this uninteresting, though. It may seem obvious to you, given your experience with linear programming, that perfect rationality as defined by utility maximisation is impossible, but it isn’t necessarily obvious to everyone else. In fact, if you read the comments, you’ll see that many commenters are unwilling to accept this solution and keep insisting that there must be some way out.
You seem to be arguing that there must be some method that can solve these problems. I’ve already proven that no such method can exist, but if you disagree, what is your solution, then?
EDIT: Essentially, you’ve taken something “absurd” (that there is no perfect rationality for the number choosing game), reduced it to something less absurd (that there’s no perfect rationality for linear programming), and then declared that you’ve found a reductio ad absurdum. That’s not how a reductio is supposed to work!
You seem to be arguing that there must be some method that can solve these problems. I’ve already proven that no such method can exist, but if you disagree, what is your solution, then?
I think you’re misunderstanding me. I’m saying that there are problems where the right action is to mark them “unsolvable, because of X” and then move on. (Here, it’s “unsolvable because of an unbounded solution space in the increasing direction,” which is true in both the “pick a big number” and the “open boundary at 100” cases.)
In fact, if you read the comments, you’ll see that many commenters are unwilling to accept this solution and keep insisting that there must be some way out.
Sure, someone who is objecting that this problem is ‘solvable’ is not using ‘solvable’ the way I would. But someone who is objecting that this problem is ‘unfair’ because it’s ‘impossible’ is starting down the correct path.
then declared that you’ve found a reductio ad absurdum.
I think you have this in reverse. I’m saying “the result you think is absurd is normal in the general case, and so is normal in this special case.”
I think you’re misunderstanding me. I’m saying that there are problems where the right action is to mark them “unsolvable, because of X” and then move on. (Here, it’s “unsolvable because of an unbounded solution space in the increasing direction,” which is true in both the “pick a big number” and the “open boundary at 100” cases.)
But if we view this as an actual (albeit unrealistic/highly theoretical) situation rather than a math problem, we are still stuck with the question of which action to take. A perfectly rational agent can realize that the problem has no optimal solution and mark it as unsolvable, but afterwards they still have to pick a number, so which number should they pick?
But if we view this as an actual (albeit unrealistic/highly theoretical) situation
There is no such thing as an actual unrealistic situation.
A perfectly rational agent can realize that the problem has no optimal solution and mark it as unsolvable, but afterwards they still have to pick a number
They do not have to pick a number, because the situation is not real. To say “but suppose it was” is only to repeat the original hypothetical question that the agent has declared unsolved. If we stipulate that the agent is so logically omniscient as to never need to abandon a problem as unsolved, that does not tell us, who are not omniscient, what that hypothetical agent’s hypothetical choice in that hypothetical situation would be.
The whole problem seems to me on a level with “can God make a weight so heavy he can’t lift it?”
UPDATED: If asked whether the problem is solvable, a perfectly rational agent would reply that it isn’t.
If asked what action to take, the perfectly rational agent is stuck, and thereby discovers that it isn’t perfect. Those are two distinct questions. I suppose it all comes down to how you define rationality, though.
So, besides the issue of what I will call earlier work, CCC and others have already mentioned that your scenario would allow non-converging expected values as in the St Petersburg paradox. By the usual meaning of utility, which you’ll note is not arbitrary but equivalent to certain attractive axioms, your scenario contradicts itself.
I’ve seen two main solutions offered for the general problem. If we just require bounded utility, there might be something left of the OP—but only with assumptions that appear physically impossible and thus highly suspicious. (Immediately after learning your argument contradicted itself is a bad time to tell us what you think is logically possible!) I tend towards the other option, which says the people complaining about physics are onto something fundamental concerning the probabilities of ever-vaster utilities. This would disintegrate the OP entirely.
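To spell out the non-convergence (a small illustration of my own, not something from the OP): the k-th outcome pays 2^k utility with probability 1/2^k, so every term of the expectation contributes exactly 1 and the partial sums grow without bound:

```python
# Each term of the St Petersburg expectation contributes
# (1/2**k) * 2**k = 1, so the series has no finite sum.
def partial_expectation(n_terms):
    """Sum the first n_terms of the St Petersburg expected value."""
    return sum((0.5 ** k) * float(2 ** k) for k in range(1, n_terms + 1))

for n in (10, 100, 1000):
    print(n, partial_expectation(n))  # 10.0, 100.0, 1000.0: it diverges
```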
“Non-converging expected values”—you can’t conclude that the scenario is contradictory just because your tools don’t work.
As already noted, we can consider the variant where you name any number less than 100 (but not 100 itself) and gain that much utility, which avoids the whole non-convergence problem.
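Here is a tiny sketch (my own illustration) of why this variant still has no optimal choice: whatever valid x you pick, the midpoint between x and 100 is a strictly better valid choice:

```python
# For every x < 100 there is a strictly larger choice still below 100,
# so no choice is optimal even though utilities are bounded.
def better_choice(x):
    """Return a strictly larger choice that is still below 100."""
    assert x < 100
    return (x + 100) / 2

x = 99.0
for _ in range(5):
    x = better_choice(x)
    print(x)  # 99.5, 99.75, 99.875, ... approaching but never reaching 100
```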
“This would disintegrate the OP entirely”—as already stated in other comments, the claim that my situation isn’t realistic would be a good criticism if I were claiming that the results could be directly applied to the real universe.
If asked whether the problem is solvable, a perfectly rational agent would reply that it is.
Why? It’s a problem without a solution. Would a perfectly rational agent say the problem of finding a negative integer that’s greater than 2 is solvable?
Sorry, that was a typo. It was meant to say “isn’t” rather than “is”.