The problem of how to be rational is hard enough that one shouldn’t expect good proposals for complete algorithms covering all situations. Instead we must chip away at the problem, and one way to do that is to slowly collect rationality constraints. I saw myself as contributing by possibly adding a new one. I’m not very moved by the complaint “but what is the algorithm to become fully rational from any starting point?”, as that is just too hard a problem to solve all at once.
I don’t think “what is the algorithm to become fully rational from any starting point?” is a very good characterization; no one could say anything of interest about arbitrary starting points. I read Wei Dai as instead asking about the example he provided, where a robot is fully rational in the standard Bayesian sense (by which I mean it violates no laws of probability theory or expected utility theory) but not pre-rational. It is then interesting to ask whether we can motivate such an agent to be pre-rational, or, failing that, give some advice about how to modify a non-pre-rational belief into a pre-rational one (setting the question of motivation aside).
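For concreteness, the condition at issue, as I understand it (my paraphrase, so treat the exact formalization as approximate rather than a quote from the paper): the agent has an ordinary prior $p_i$ and a “pre-prior” $q_i$, and pre-rationality asks that the two agree once each is conditioned on the event $A$ describing which priors were actually assigned to which agents:

$$p_i(\,\cdot \mid A\,) = q_i(\,\cdot \mid A\,).$$

A robot can obey all the Bayesian coherence axioms and still violate this, which is exactly the situation in the example.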
Speaking for myself, I see this as analogous to the question of how one reaches equilibrium in game theory. Nash equilibrium follows from certain rationality assumptions, but those assumptions do not uniquely pin down one equilibrium, which raises the question of how agents could ever get into a state satisfying them in the first place. There are many interesting answers to this question. However, the consensus of the field seems to be that reaching a Nash equilibrium is in fact quite difficult in general, and that reaching a correlated equilibrium is much more realistic. This suggests that the alternative rationality assumptions underlying correlated equilibrium are more realistic, and that Nash equilibrium rests on rationality assumptions which are overly demanding.
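To make “easier to reach” concrete, here is a minimal sketch of the kind of result I have in mind: Hart and Mas-Colell’s regret-matching procedure, under which the empirical joint distribution of play converges to the set of correlated equilibria. The game of Chicken, the payoff numbers, and the constants below are my own illustrative choices, not anything taken from the paper under discussion.

```python
import numpy as np

# Payoff matrices for the game of Chicken. Actions: 0 = Dare, 1 = Swerve.
# PAYOFFS[i][a0, a1] is player i's payoff when player 0 plays a0 and player 1 plays a1.
PAYOFFS = [
    np.array([[0, 7],
              [2, 6]]),   # player 0 (row)
    np.array([[0, 2],
              [7, 6]]),   # player 1 (column)
]

def regret_matching(T=100_000, mu=20.0, seed=0):
    """Hart & Mas-Colell (2000) regret matching for a 2-player, 2-action game.

    Returns the empirical joint distribution of play; if every player follows
    this rule, that distribution converges to the set of correlated equilibria.
    """
    rng = np.random.default_rng(seed)
    n = 2  # actions per player
    # cum_diff[i][j, k]: cumulative gain player i would have had by playing k
    # instead of j, summed over the periods in which i actually played j.
    cum_diff = [np.zeros((n, n)) for _ in range(2)]
    last = [int(rng.integers(n)) for _ in range(2)]  # arbitrary first-period actions
    joint_counts = np.zeros((n, n))

    for t in range(1, T + 1):
        actions = []
        for i in range(2):
            j = last[i]
            # Average conditional regret for switching away from last action j;
            # mu must exceed the largest possible regret so probabilities stay valid.
            regret = np.maximum(cum_diff[i][j], 0.0) / t
            probs = regret / mu
            probs[j] = 0.0
            probs[j] = 1.0 - probs.sum()   # stay with j with the leftover probability
            actions.append(int(rng.choice(n, p=probs)))
        a0, a1 = actions
        joint_counts[a0, a1] += 1

        # Update cumulative payoff differences for the action each player actually took.
        for i in range(2):
            mine = a0 if i == 0 else a1
            realized = PAYOFFS[i][a0, a1]
            for k in range(n):
                counterfactual = PAYOFFS[i][k, a1] if i == 0 else PAYOFFS[i][a0, k]
                cum_diff[i][mine, k] += counterfactual - realized
        last = [a0, a1]

    return joint_counts / joint_counts.sum()

# Empirical joint frequencies over (Dare, Swerve) x (Dare, Swerve):
print(regret_matching())
```

The procedure is “uncoupled”: each player tracks only its own payoffs and the observed history. By contrast, impossibility results for uncoupled dynamics indicate that no comparably simple rule can guarantee convergence to Nash equilibrium in general games, which is the sense in which Nash equilibrium rests on more demanding assumptions.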
That being said, your paper is analogous to Nash’s initial proposal of the Nash equilibrium concept. It would have been unreasonable to ask Nash to articulate an entire theory of equilibrium selection when he first proposed the concept. So, while I do think the question of how one becomes pre-rational is relevant, and an inability to construct such an account would ultimately be evidence against pre-rationality as a rationality constraint, it is not something to demand up front of a newly proposed rationality constraint.