On reflection, I think the claim I most want to make is something along the lines of “if you identify rationality!sane with ‘hook the world-model-generator up to reality’, then people will eventually realize rationality is insufficient for them and invent postrationality.” If you identify rationality!sane with winning, then it seems much less likely that people will eventually realize rationality is insufficient for them.
I tend to be skeptical of claims that a rational agent cannot in principle do better with greater epistemic accuracy, much in the same way that I distrust claims that a rational agent cannot in principle do better with more information or greater computational power. To me, then, “hooking the world-model-generator up to reality” would equate with “winning”, or at least “making winning a hell of a lot easier”. So from my perspective, a rational agent should always seek greater epistemic accuracy.
Of course, if you’re human, this does not always apply, since humans are far from rational agents; but by much the same reasoning that leads to ethical injunctions, it still seems like an excellent heuristic to follow.