Could you clarify why you think that I am doing infinity wrong? I’m not actually using infinity, just stating that you aren’t allowed to say infinity, but can only choose a finite number.
As stated in the article, I’m considering the theoretical case where either a) there are no costs to identifying and communicating arbitrarily large numbers (as stated, we are considering celestial beings, not real physical beings), or b) we are considering real beings, but where any costs related to the effort of identifying a larger number are offset by a magical device.
I already admitted that the real world is not like this due to aspects such as calculation costs. I find it odd to say that a deliberately theoretical model is wrong because of real-world constraints. If someone puts forward a theoretical situation as modelling the real world, then that might be a valid critique, but when someone is specifically imagining a world that behaves differently from ours, there is no requirement for it to be “real”.
All I am claiming is that within at least one theoretical world (which I’ve provided) perfect rationality does not exist. Whether or not this has any bearing on the real world was not discussed and is left to the reader to speculate on.
You’re doing it wrong by trying to use a limit (good) without specifying the function (making it meaningless).
“there are no costs”—This is the hidden infinity in your example. There can’t be zero cost. When you evaluate the marginal value of a further calculation, you take expected benefit divided by expected cost. Oops, infinity!
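Spelled out (this is my reading of the ratio, not something stated in the original scenario):

$$ \text{marginal value of one more step} \;=\; \frac{\mathbb{E}[\text{benefit}]}{\mathbb{E}[\text{cost}]} \;\to\; \infty \quad \text{as } \mathbb{E}[\text{cost}] \to 0^{+}. $$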
Alternatively—you hypothesize that any agent would actually stop calculating and pick a number. Why not calculate further? If it’s costless, keep going. I’m not sure in your scenario which infinity wins: the infinitely small cost of calculation or the infinite time to calculate. Either way, it’s not about whether perfect rationality exists, it’s about which infinity you choose to break first.
If you keep going forever then you never realise any gains, even if it is costless, so that isn’t the rational solution.
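To put the same point in symbols (my framing, on the assumption that deliberating for n steps lets the agent state a number worth n utility):

$$ u(\text{stop at } n) = n, \qquad u(\text{never stop}) = 0, \qquad u(\text{stop at } n+1) > u(\text{stop at } n) \;\; \text{for every } n. $$

Every finite stopping point is dominated by a later one, and never stopping is dominated by all of them, so there is no best option to single out as the rational choice.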
“This is the hidden infinity in your example. There can’t be zero cost. When you evaluate the marginal value of a further calculation, you take expected benefit divided by expected cost. Oops, infinity!”—So let’s suppose I give an agent a once-off opportunity to gain 100 utility for 0 cost. The agent tries to evaluate whether it should take this opportunity and fails because the cost is zero, ending up with an infinity. I would argue that such an agent is very far from rational if it can’t handle this simple situation.
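A minimal sketch of how an agent could evaluate that opportunity without ever dividing by the cost (the function and names below are purely illustrative, not part of the original scenario):

```python
def should_take(expected_benefit: float, expected_cost: float) -> bool:
    """Decide by net expected utility (benefit minus cost), not by a
    benefit/cost ratio, so a zero cost causes no division by zero."""
    return expected_benefit - expected_cost > 0

# The once-off opportunity from the example: 100 utility for 0 cost.
print(should_take(100.0, 0.0))  # True -- take it; no infinity appears anywhere.
```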
“You’re doing it wrong by trying to use a limit (good) without specifying the function (making it meaningless)”—Sorry, it still isn’t clear what you are getting at here. I’m not trying to use a limit. You are the one who is insisting that I need to use a limit to evaluate this situation. Have you considered that there might actually be other ways of evaluating the situation? The situation is well specified. State any number and receive that much utility. If you want a utility function, u(x)=x is it. If you’re looking for another kind of function, well what kind of function are you looking for then? Simply stating that I haven’t specified a function isn’t very clear unless you answer this question.
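Just to have the problem in symbols (nothing new here, only a restatement of “state any number and receive that much utility”):

$$ \text{choose a finite } x \in \mathbb{R} \text{ to maximise } u(x) = x; \qquad \sup_{x \in \mathbb{R}} u(x) = \infty, \text{ and no admissible } x \text{ attains it.} $$

For any candidate x, x+1 is also admissible and strictly better, which is exactly the point: there is no optimal choice for the agent to make.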
If it takes time, that’s a cost. In your scenario, an agent can keep going forever instantly, whatever that means. That’s the nonsense you need to resolve to have a coherent problem. Add in a time limit and calculation rate, and you’re back to normal rationality. As the time limit or rate approaches infinity, so does the utility.
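As a toy version of what I mean (the specific “digits you can write = rate × time” model below is just an illustrative assumption, not something from your article):

```python
def best_statable_number(time_limit_s: float, digits_per_second: float) -> int:
    """Toy model: within `time_limit_s` seconds at `digits_per_second`, the agent
    can write down at most d digits, and the largest d-digit number is 10**d - 1."""
    digits = int(time_limit_s * digits_per_second)
    return 10 ** digits - 1

# Utility is finite for any finite limit and rate, but grows without bound
# as either one is increased.
for t in (1.0, 2.0, 4.0):
    print(t, best_statable_number(t, 2.0))
```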
“Add in a time limit and calculation rate, and you’re back to normal rationality”—I am intentionally modelling a theoretical construct, not reality. Claims that my situation isn’t realistic aren’t valid, as I have never claimed that this theoretical situation does correspond to reality. I have purposefully left this question open.
Ai-yah. That’s fine, but please then be sure to caveat your conclusion with “in this non-world...” rather than generalizing about nonexistence of something.
The perfectly rational agent considers all possible different world-states, determines the utility of each of them, and states “X”, where X is the utility of the perfect world.
For the number “X+epsilon” to have been a legal response, the agent would have had to be mistaken about their utility function or about what the possible worlds were.
Therefore X is the largest real number.
Note that this is a constructive proof, and any attempt at a counterexample should engage with the specific X discovered by a perfectly rational omniscient abstract agent with a genie. If the general solution is true, it will be trivially true for one number.
That’s not how maths works.