I would like to extract the meaning of your thought experiment, but it’s difficult because the concepts therein are problematic, or at least I don’t think they have quite the effect you imagine.
We will define the number-choosing game as follows. You name any single finite number x. You then gain x utility and the game ends. You can only name a finite number; naming infinity is not allowed.
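To make the structural problem concrete: whatever finite x you name, naming x + 1 would have been strictly better, so the game has no optimal move. A minimal sketch in Python (payoff and is_optimal are my own illustrative stand-ins, not anything from the original post):

```python
# A minimal model of the number-choosing game as quoted above:
# naming the finite number x pays out exactly x utility.

def payoff(x: int) -> int:
    """Utility granted for naming the finite number x."""
    return x

def is_optimal(x: int) -> bool:
    """x is an optimal choice only if no other finite choice beats it;
    but x + 1 always pays more, so no finite x can be optimal."""
    return payoff(x + 1) <= payoff(x)

print(any(is_optimal(x) for x in range(1000)))  # False: every x is dominated
```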
If I were asked (by whom?) to play this game, I would in the first place only be able to attach some probability less than 1 to the idea that the master of the game is actually capable of granting me arbitrarily astronomical utility, and is likely to do so. A tenet of the “rationality” that you are calling into question is that 0 and 1 are not probabilities, so if you postulate absolute certainty in your least convenient possible world, your thought experiment becomes very obscure.
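For reference, the standard argument behind “0 and 1 are not probabilities” is that on the log-odds scale, where each piece of evidence moves you by a finite step, certainty sits infinitely far away. A quick illustration in Python (log_odds is my own toy helper):

```python
import math

def log_odds(p: float) -> float:
    """Log-odds of a probability p. Each independent piece of evidence
    shifts this score by a finite amount."""
    return math.log(p / (1 - p))

for p in (0.5, 0.9, 0.99, 0.999999):
    print(f"p = {p}: log-odds = {log_odds(p):.2f}")

# log_odds(1.0) raises ZeroDivisionError: no finite amount of evidence
# reaches absolute certainty.
```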
E.g. what about a thought experiment in a world where 2+2=5, and also 2+2=4 as well; I might entertain such a thought experiment, but (absent some brilliant insight which would need to be supplied in addition) I would not attach importance to it, in comparison to thought experiments that take place in a world more comprehensible and similar to our own.
Now, when I attach a probability less than 1—even an extremely high one—to the idea that the game works just as described, I would become seriously confused by this game, because the definition of a utility function is:
A utility function assigns numerical values (“utilities”) to outcomes, in such a way that outcomes with higher utilities are always preferred to outcomes with lower utilities.
yet my utility function would, according to my own (meta-...) reflection and with a separate high probability, differ from the utility function that the game master claims I have.
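Spelled out (this formalisation is mine, not the original poster's):

```latex
% The quoted definition of a utility function:
U : \mathcal{O} \to \mathbb{R}, \qquad o_1 \succ o_2 \iff U(o_1) > U(o_2)

% With credence p < 1 that the game works as described, naming x yields
\mathbb{E}[U \mid \text{name } x] \approx p \cdot x
% which is still unbounded in x, so the probability discount alone does not
% dissolve the game; the residual confusion is about which U is actually mine.
```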
To resolve the confusion in question, I would have to resolve (or, in other terms, would be resolving) confusions that have been described clearly on LessWrong and are considered to be the point at which the firm ground of 21st-century human rationality meets speculation. So yes, our concept of rationality has admitted limits; I don’t believe your thought experiment adds a new problem that isn’t already implied in the Sequences.
How exactly this result applies to our universe isn’t clear, but that’s the challenge I’ll set for the comments.
Bearing in mind that my criticism of your thought experiment as described stands, I’ll add that a short story I once read comes to mind. In the story, a modern human finds himself in a room in which the walls are closing in; in the centre of the room is a model with some balls and cup-shaped holders, and in the corner a skeleton of a man in knight’s armour. Before he is trapped and suffers the fate of his predecessor, he successfully rearranges the balls into a model of the solar system, gaining utility because he has demonstrated his intelligence (or the scientific advancement of his species) as the alien game master in question would have wished.
If I were presented with a game of this kind, my first response would be to negotiate with the game master if possible and ask him pertinent questions, based on the type of entity he appears to be. If I found that it was in my interest to name a very large number, depending on context I would choose one of the following responses:
I have various memories of contemplating the vastness of existence. Please read the most piquant such memory, which I am sure is still encoded in my brain, and interpret it as a number. (Surely “99999...” is only one convenient way of expressing a number or magnitude.)
“The number of greatest magnitude that (I, you, my CEV...) (can, would...) (comprehend, deem most fitting...)”
May I use Google? I would like to say “three to the three...” in Knuth’s up-arrow notation (see the sketch after this list), but am worried that I will misspell it and thereby fail according to the nature of your game.
Googolplex.
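For concreteness, here is a minimal Python sketch of Knuth's up-arrow notation (my own toy implementation, not anything from the original discussion); only the smallest cases are computable:

```python
# A toy implementation of Knuth's up-arrow notation (hyperoperations).
# Only the very smallest cases are computable; the values explode immediately.

def up_arrow(a: int, n: int, b: int) -> int:
    """Compute a with n up-arrows applied to b: one arrow is exponentiation,
    and each additional arrow iterates the level below it b - 1 times."""
    if n == 1:
        return a ** b
    result = a
    for _ in range(b - 1):
        result = up_arrow(a, n - 1, result)
    return result

print(up_arrow(3, 1, 3))  # 3↑3  = 27
print(up_arrow(3, 2, 3))  # 3↑↑3 = 3^(3^3) = 7,625,597,484,987
# 3↑↑↑3 is a power tower of 7,625,597,484,987 threes: utterly infeasible.
```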
“Now, when I attach a probability less than 1—even an extremely high one—to the idea that the game works just as described”—You are trying to apply realistic constraints to a hypothetical situation that is not intended to be realistic, and about which no claims have (as yet) been made that the results carry over to the real world. Taking down an argument I haven’t made doesn’t accomplish anything.
The gamesmaster has no desire to engage with any of your questions or your attempts to avoid directly naming a number. He simply tells you to name a number.
“You are trying to apply realistic constraints to a hypothetical situation that is not intended to be realistic”
Your thought experiment, as you want it to be interpreted, is too unrealistic to imply a new and surprising critique of Bayesian rationality in our world. However, the title of your post implies (at least to me) that it does constitute such a critique.
“The gamesmaster has no desire to engage with any of your questions or your attempts to avoid directly naming a number. He simply tells you to name a number.”
If we interpret the thought experiment as happening in a world similar to our own—which I think is more interesting than an incomprehensible world where the 2nd law of thermodynamics does not hold and the Kolmogorov axioms fail by definition—I would be surprised if such a gamesmaster viewed Arabic numerals as the only or best way to communicate an arbitrarily large number. That seems, to me, like a primitive human thought, very limited in comparison to the concepts available to a superintelligence that can read a human’s source code and take measurements of the neurons and subatomic particles in his brain. As a human playing this game I would, unless told otherwise in no uncertain terms, try to think outside the limited-human box, both because I believe this would let me communicate numbers of greater magnitude and because I would expect the gamesmaster’s motive to include something more interesting, and humane and sensible, than testing my ability to recite digits for an arbitrary length of time.
There’s a fascinating tension in the idea that the gamesmaster is an FAI, because he would bestow upon me arbitrary utility, yet he might be so unhelpful as to have me recite a number for billions of years or more. And what if my utility function includes (timeless?) preferences that interfere with the functioning of the gamesmaster or the game itself?
“However, the title of your post”—titles need to be short so they can’t convey all the complexity of the actual situation.
“Which I think is more interesting”—To each their own.