Ok, lets say you are right that there does not exist perfect theoretical rationality in your hypothetical game context with all the assumptions that helps to keep the whole game standing. Nice. So what?
It is useful to be able to dismiss any preconceptions that perfect decisionmakers can exist, or even be reasoned about. I think this is a very elegant way of doing that.
No. It just says that perfect decisionmakers can’t exist in a world that violates basic physics by allowing people to state ever bigger numbers without spending additional time. It doesn’t say that perfect decisionmakers can’t exist in a world governed by the physics of our own.
The fact that you can construct a possible world in which there are no perfect decisionmakers isn’t very interesting.
“World that violates basic physics”—well the laws of physics are different in this scenario, but I keep the laws of logic the same, which is something.
“The fact that you can construct a possible world in which there are no perfect decisionmakers isn’t very interesting.”
Maybe. This is just part 1 =P.
Then we can ask whether there are any other situations where perfect theoretical rationality is not possible, since we now know that it depends on the rules of the game (instead of assuming automatically that it is always possible).
Exploring the boundary between the games where perfect theoretical rationality is possible, and the games where perfect theoretical rationality is impossible, could lead to some interesting theoretical results. Maybe.
Spoilers, haha.
I was actually reading this post while trying to find a solution to the coalition problem, where Eliezer wonders how rational agents can solve a problem with the potential for an infinite loop. That led me to what I’ll call the Waiting Game (you can wait n units of time and gain n utility, for any finite n), which then led me to this post.
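The payoff structure of the Waiting Game can be sketched in a few lines (this is my own illustration, not anything from the original discussion; the function names are hypothetical):

```python
# The Waiting Game: waiting n units of time yields n utility, for any finite n.
def payoff(n: int) -> int:
    """Utility gained by waiting n units of time."""
    return n

def strictly_better_to_wait_longer(n: int) -> bool:
    """For any candidate strategy n, waiting one more unit is strictly better."""
    return payoff(n + 1) > payoff(n)

# No finite n is optimal, because every n is dominated by n + 1.
assert all(strictly_better_to_wait_longer(n) for n in range(1000))
```

The point of the sketch is just that the payoff function has no maximum, so there is no "best" amount of time to wait.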
Suppose instead that the game is “gain n utility”. No need to speak the number, wait n turns, or even to wait for a meat brain to make a decision or comprehend the number.
I posit that a perfectly rational, disembodied agent would select an n such that no higher n exists. If there is a possible outcome that such an agent prefers over all other possible outcomes, then by the definition of utility such an n exists.
Not quite. There is nothing inherent in the definition of utility that requires it to be bounded.
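To make the unboundedness point concrete (my own worked example, not part of the original exchange): take the utility function $U(n) = n$ over $n \in \mathbb{N}$. Then

\[
\forall n \in \mathbb{N} \;\; \exists m \in \mathbb{N} : \; U(m) > U(n), \qquad \text{e.g. } m = n + 1,
\]

so no outcome is preferred over all others, and an "n such that no higher n exists" simply does not exist for this perfectly legitimate utility function.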