What kind of possibility do you have in mind when you say that no such dependency is possible?
This universe: things you might actually run into (including Omega's usual tricks, though ve could certainly come up with something to break my assumption). I know of no gains that are lost once you become a rationalist, and I have no reason to believe that there might be any.
I can’t explain why. I don’t have introspective access to my algorithms, sorry.
I can definitely imagine simplistic, abstract worlds in which there is an unavoidable cost that has to be paid in order to get a larger benefit.
But a cost that can’t be analyzed rationally and paid by a rationalist who knows what they are doing? I don’t buy it.
Game A, Game B, Omega
You may be behaving irrationally in Game B, but that's OK, because Game B isn't the game you are winning.
You can take almost any rationally planned behavior out of context such that it looks irrational. The proof is that locally optimal/greedy algorithms are not always globally optimal.
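To make the greedy-versus-global point concrete, here is a minimal sketch (my own illustration, not something from the thread): making change for 6 using the hypothetical denominations {1, 3, 4}. The greedy choice at each step is locally optimal, yet it ends up using more coins than the globally optimal solution.

```python
# Greedy (locally optimal) change-making vs. exhaustive (globally optimal)
# change-making for a toy set of denominations.

def greedy_change(amount, coins):
    """Repeatedly take the largest coin that still fits (locally optimal)."""
    used = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:
            used.append(coin)
            amount -= coin
    return used

def optimal_change(amount, coins):
    """Build up the fewest-coins solution for every total (globally optimal)."""
    best = {0: []}
    for total in range(1, amount + 1):
        candidates = [best[total - c] + [c] for c in coins if total - c in best]
        if candidates:
            best[total] = min(candidates, key=len)
    return best.get(amount)

print(greedy_change(6, [1, 3, 4]))   # [4, 1, 1] -- three coins
print(optimal_change(6, [1, 3, 4]))  # [3, 3]    -- two coins
```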
If you look at the context where your strategy is winning, it looks rational, so this example does not apply.
I think maybe we’re talking past each other, then. I thought the idea was to imagine cases where the algorithm, or the collection of behaviors it generates, is rational even though it has sub-parts that do not look rational. You are absolutely right when you say that in context, the play in Game B is rational. But that’s the whole point I was making. It is possible to have games where optimal play globally requires sub-optimal play locally.
That is why I put “irrational” in those scare quotes in my first comment. If a behavior really is optimal, then any appearance of irrationality that it has must come from a failure to see the right context.
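For a concrete (and purely hypothetical) version of "globally optimal play requires locally sub-optimal play", consider a toy two-round game, not the Game A / Game B setup from the thread, where passing up the larger Round 1 payoff is the only way to unlock a much larger Round 2 payoff:

```python
# A toy two-round game in which the globally optimal strategy requires a
# locally sub-optimal move. The payoff rule is invented for illustration.

ROUND1 = {"take_big": 10, "take_small": 1}

def round2_payoff(round1_choice):
    # Hypothetical rule: the big Round 2 prize is only available if you
    # passed up the big Round 1 payoff.
    return 100 if round1_choice == "take_small" else 0

def total_payoff(round1_choice):
    return ROUND1[round1_choice] + round2_payoff(round1_choice)

for choice in ROUND1:
    print(choice, "->", total_payoff(choice))
# take_big   -> 10   (locally optimal in Round 1, globally worse)
# take_small -> 101  (locally sub-optimal in Round 1, globally optimal)
```

Looked at in isolation, taking the small Round 1 payoff appears irrational; with the whole game in view, it is exactly the winning move.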