This seems like another in a long line of problems that come from assuming unbounded utility functions.
Edit: The second game sounds a lot like the St. Petersburg paradox.
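For readers who don't have the reference handy, here is a sketch of the standard formulation (the post's "second game" may differ in details): a fair coin is flipped until it first lands heads, and heads on the n-th flip pays 2^n, so the expected payoff diverges:
\[
\mathbb{E}[\text{payoff}] \;=\; \sum_{n=1}^{\infty} \frac{1}{2^{n}} \cdot 2^{n} \;=\; \sum_{n=1}^{\infty} 1 \;=\; \infty .
\]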
Thanks for bringing this up. That isn’t quite the issue here though. Imagine that you can name any number less than 100 and you gain that much utility, but you can’t name 100 itself. Furthermore, there is a device that compensates you for any time spent speaking by giving you something of equivalent utility. So whether you name 99 or 99.9 or 99.99… there’s always another agent more rational than you.
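To make the structure explicit (a minimal formalisation, assuming the utility gained is exactly the number named, written u(x) = x): for any admissible choice there is a strictly better admissible choice, so the supremum of attainable utility is never reached.
\[
x \;<\; \frac{x + 100}{2} \;<\; 100 \quad \text{for every } x < 100,
\qquad
\sup_{x < 100} u(x) = 100 \ \text{has no maximiser among admissible choices.}
\]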
Once you make that change, you’re getting into coastline paradox territory. I don’t think that’s necessarily a paradox related specifically to decision theory—it’s more of a problem with our math system and the trouble with representing infinitesimals.
It’s not a problem with the math system. It is part of the situation that you aren’t allowed to say 100 minus delta where delta is infinitesimally small. In fact, we can restrict it further and rule that the gamemaker will only accept the number if you list out the digits (and the decimal point if there is one). What’s wrong with perfect rationality not existing? On the other side of the question, on what basis do we believe that perfect rationality does exist?
I actually don’t believe that perfect rationality does exist—but in this case, I think the whole concept of “perfect” is flawed for this problem. You can use the same argument to prove that there’s no perfect cartographer, no perfect shotputter, no perfect (insert anything where you’re trying to get as close as you can to a number without touching it).
As I said, I don’t think it’s proving anything special about rationality—it’s just that this is a problem that we don’t have good language to discuss.
“You can use the same argument to prove that there’s no perfect cartographer, no perfect shotputter, no perfect (insert anything where you’re trying to get as close as you can to a number without touching it).”—Why is that a problem? I don’t think that I am proving too much. Do you have an argument that a perfect shotputter or perfect cartographer does exist?
“As I said, I don’t think it’s proving anything special about rationality”—I claim that if you surveyed the members of Less Wrong, at least 20% would claim that perfect theoretical rationality exists (my guess for actual percentage would be 50%). I maintain that in light of these results, this position isn’t viable.
“We don’t have good language to discuss.”—Could you clarify what the problem with language is?
What is perfect rationality in the context of an unbounded utility function?
Consider the case where utility approaches 100. The utility function is bounded there, so the issue is something else.
It’s still a weird definition of perfection when you’re dealing with infinities or infinitesimals.
Maybe it is weird, but nothing that can fairly be called perfection exists in this scenario, even if perfection isn’t a fair demand.
There exists an irrational number which is 100 minus delta where delta is infinitesimally small. In my celestial language we call it “Bob”. I choose Bob. Also, I name the person who recognizes that the increase in utility between a 9 in the googolplex decimal place and a 9 in the googolplex+1 decimal place is not worth the time it takes to consider its value, and who therefore goes out to spend his utility on blackjack and hookers, as displaying greater rationality than the person who does not.
Seriously, though, isn’t this more of an infinity paradox than an indictment of perfect rationality? There are areas where the ability to calculate mathematically breaks down, e.g. naked singularities, the uncertainty principle, and infinity. Isn’t this more the issue at hand: that we can’t be perfectly rational where we can’t calculate precisely?
I didn’t say in the original problem how the number has to be specified, which was a mistake. There is no reason why the gamemaker can’t choose to only award utility for numbers provided in decimal notation, just as any other competition has rules.
“Also, I name the person who recognizes that the increase in utility between a 9 in the googolplex decimal place and a 9 in the googolplex+1 decimal place is not worth the time it takes to consider its value”—we are assuming either a) an abstract situation where there is zero cost of any kind for naming extra digits, or b) that the gamemaker compensates the individual for the extra time and effort required to say longer numbers.
If there is a problem here, it certainly isn’t that we can’t calculate precisely. For each number, we know exactly how much utility it gives us.
EDIT: Further, 100 minus delta is not normally considered a number. I imagine that some people might count it as a number, but they aren’t the ones defining the game, so “number” means what mathematicians in our society typically mean by a (real) number.
I’m just not convinced that you’re saying anything more than “Numbers are infinite” and finding a logical paradox within. You can’t state the highest number because it doesn’t exist. If you postulate a highest utility which is equal in value to the highest number times utility 1, then you have postulated a utility which doesn’t exist. I cannot choose that which doesn’t exist. That’s not a failure of rationality on my part any more than Achilles’ inability to catch the turtle is a failure of his ability to divide distances.
I see I made Bob unnecessarily complicated. Bob = 99.9 repeating (sorry, I don’t know how to get a vinculum over the .9). This is a number. It exists.
It is a number, but it is also known as 100, which we are explicitly not allowed to pick (0.99 repeating = 1, so 99.99 repeating = 100).
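For anyone who wants that spelled out, here is the usual one-line justification (a sketch via the geometric series):
\[
0.\overline{9} \;=\; \sum_{n=1}^{\infty} \frac{9}{10^{n}} \;=\; \frac{9/10}{1 - 1/10} \;=\; 1,
\qquad\text{so}\qquad
99.\overline{9} \;=\; 99 + 0.\overline{9} \;=\; 100 .
\]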
In any case, I think casebash successfully specified a problem that doesn’t have any optimal solutions (which is definitely interesting), but I don’t think that is a problem for perfect rationality any more than problems that have more than one optimal solution are a problem for perfect rationality.
I was born a non-Archimedean and I’ll die a non-Archimedean.
“0.99 repeating = 1” I only accept that kind of talk from people with the gumption to admit that the quotient of any number divided by zero is infinity. And I’ve got college calculus and 25 years of not doing much mathematical thinking since then to back me up.
I’ll show myself out.
I’m kind of defining perfect rationality as the ability to maximise utility (more or less). If there are multiple optimal solutions, then picking any one maximises utility. If there is no optimal solution, then picking none maximises utility. So this is problematic for perfect rationality as defined as utility maximisation, but if you disagree with the definition, we can just taboo “perfect rationality” and talk about utility maximisation instead. In either case, this is something people often assume exists without even realising that they are making an assumption.
That’s fair; I tried to formulate a better definition but couldn’t immediately come up with anything that sidesteps the issue (without explicitly mentioning this class of problems).
When I taboo perfect rationality and instead just ask what the correct course of action is, I have to agree that I don’t have an answer. Intuitive answers to questions like “What would I do if I actually found myself in this situation?” and “What would the average intelligent person do?” are unsatisfying because they seem to rely on implicit costs to computational power/time.
On the other hand, I also can’t generalize this problem to more practical situations (or find a similar problem without an optimal solution that would be applicable to reality), so there might not be any practical difference between a perfectly rational agent and an agent that takes the optimal solution if there is one and explodes violently if there isn’t one. Maybe the solution is to simply exclude problems like this when talking about rationality, unsatisfying as that may be.
In any case, it is an interesting problem.
“If there is no optimal solution, then picking none maximises utility.”
This statement is not necessarily true when there is no optimal solution, because the solutions are part of an infinite set of solutions. That is, it is not true in exactly the situation described in your problem.
Sorry, that was badly phrased. It should have been: “If there is no optimal solution, then no matter what solution you pick you won’t be able to maximise utility”
Regardless of what number you choose, there will be another agent who chooses a higher number than you and hence who does better at the task of utility optimising than you do. If “perfectly rational” means perfect at optimising utility (which is how it is very commonly used), then such a perfect agent does not exist. I can see the argument for lowering the standards of “perfect” to something achievable, but lowering it to a finite number would result in agents being able to outperform a “perfect” agent, which would be equally confusing.
Perhaps the solution is to taboo the word “rational”. It seems like you agree that there does not exist an agent that scores maximally. People often talk about utility-maximising agents, which assumes it is possible to have an agent that maximises utility, and that isn’t true in some situations. That is the assumption I am trying to challenge, regardless of whether we label it perfect rationality or something else.
Let’s taboo “perfect”, and “utility” as well. As I see it, you are looking for an agent who is capable of choosing The Highest Number. This number does not exist. Therefore it can not be chosen. Therefore this agent can not exist. Because numbers are infinite. Infinity paradox is all I see.
Alternately, letting “utility” back in, in a universe of finite time, matter, and energy, there does exist a maximum finite utility which is the sum total of the time, matter, and energy in the universe. There will be a number which corresponds to this. Your opponent can choose a number higher than this, but he will find the utility he seeks does not exist.
Why can’t my utility function be:
0 if I don’t get ice cream
1 if I get vanilla ice cream
infinity if I get chocolate ice cream
?
I.e. why should we forbid a utility function that returns infinity for certain scenarios, except insofar as it may lead to the types of problems that the OP is worrying about?
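To make that last clause concrete (a toy sketch in Python; the outcomes and probabilities are invented purely for illustration and are not part of the original problem): once a utility function can return infinity, expected utility stops being able to rank gambles that clearly differ, which is essentially the St. Petersburg issue mentioned earlier.

import math

# Hypothetical utility function from the comment above:
# no ice cream = 0, vanilla = 1, chocolate = infinity.
def utility(outcome):
    return {"none": 0.0, "vanilla": 1.0, "chocolate": math.inf}[outcome]

def expected_utility(lottery):
    # lottery maps each outcome to its probability
    return sum(p * utility(o) for o, p in lottery.items())

# A 1% chance of chocolate already has infinite expected utility...
long_shot = {"chocolate": 0.01, "none": 0.99}
# ...and so does a 99% chance, so the two gambles cannot be ranked.
near_certainty = {"chocolate": 0.99, "vanilla": 0.01}

print(expected_utility(long_shot))       # inf
print(expected_utility(near_certainty))  # inf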
I was bringing the example into the presumed finite universe in which we live, where Maximum Utility = The Entire Universe. If we are discussing a finite-quantity problem, then infinite quantity is ipso facto ruled out.
I think Nebu was making the point that while we normally use utility to talk about a kind of abstract gain, computers can be programmed with an arbitrary utility function. We would generally put certain constraints on it so that the computer/robot would behave consistently, but those are the only limitations. So even if there does not exist such a thing as infinite utility, a rational agent may still be required to solve for these scenarios.
I guess I’m asking “Why would a finite universe necessarily dictate a finite utility score?”
In other words, why can’t my utility function be:
0 if you give me the entire universe minus all the ice cream.
1 if you give me the entire universe minus all the chocolate ice cream.
infinity if I get chocolate ice cream, regardless of how much chocolate ice cream I receive, and regardless of whether the rest of the universe is included with it.
“You are looking for an agent who is capable of choosing The Highest Number”—the agent wants to maximise utility, not to pick the highest number for its own sake, so that is misrepresenting my position. If you want to taboo “utility”, let’s use “lives saved” instead. Anyway, you say “Therefore this agent (the perfect life maximising agent) can not exist”, which is exactly what I was concluding. Concluding the exact same thing as I concluded supports my argument; it doesn’t contradict it as you seem to think it does.
“Alternately, letting “utility” back in, in a universe of finite time, matter, and energy, there does exist a maximum finite utility”—my argument is that there does not exist perfect rationality within the imagined infinite universe. I said nothing about the actual, existing universe.
Sorry, I missed that you postulated an infinite universe in your game.
I don’t believe I am misrepresenting your position. “Maximizing utility” is achieved by, and therefore can be defined as, “choosing the highest number”. The wants of the agent need not be considered. “Choosing the highest number” is an example of “doing something impossible”. I think your argument breaks down to “An agent who can do the impossible can not exist.” or “It is impossible to do the impossible”. I agree with this statement, but I don’t think it tells us anything useful. I think, but I haven’t thought it out fully, that it is the concept of infinity that is tripping you up.
What you’ve done is take my argument and transform it into an equivalent obvious statement. That isn’t a counter-argument. In fact, in mathematics, it is a method of proving a theorem.
If you read the other comments, then you’ll see that other people disagree with what I’ve said (and in a different manner than you), so I’m not just stating something obvious that everyone already knows and agrees with.
“What you’ve done is take my argument and transform it into an equivalent obvious statement. That isn’t a counter-argument. In fact, in mathematics, it is a method of proving a theorem. If you read the other comments, then you’ll see that other people disagree with what I’ve said”
You’re welcome? Feel free to make use of my proof in your conversations with those guys. It looks pretty solid to me.
If a Perfect Rational Agent is one who can choose Maximum Finite Utility.
And Utility is numerically quantifiable and exists in infinite quantities.
And the Agent must choose the quantity of Utility by finite number.
Then no such agent can exist.
Therefore a Perfect Rational Agent does not exist in all possible worlds.
I suppose I’m agreeing but unimpressed. Might could be this is the wrong website for me. Any thought experiment involving infinity does run the risk of sounding dangerously close to Theology to my ears. Angels on pinheads and such. I’m not from around here and only dropped in to ask a specific question elsewhere. Cheers.
“Lives saved” is finite within a given light cone.
A very specific property of our universe, but not universes in general.
Just as an aside, no, there isn’t. Infinitesimal non-zero numbers can be defined, but they’re “hyperreals”, not irrationals.