Winning bets is not literally the same thing as believing true things, nor as having accurate beliefs or being rational.
They are not the same, but that’s ok: you asked about constraints on rationality, not definitions of it. This may not be an exhaustive list, but if someone has an idea about rationality that translates neither into winning some hypothetical bets nor into having even slightly more accurate beliefs about anything, then I can confidently say that I’m not interested.
(Of course this is not to say that an idea with no such applications has literally zero value.)
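To make that criterion concrete, here is a minimal sketch of how the two cash-value tests could be measured for two agents. Everything in it (the agents, their probabilities, the outcomes, the betting rule) is hypothetical and purely for illustration:

```python
# Two hypothetical agents state probabilities for the same yes/no events.
# We score accuracy with the Brier score (lower = more accurate beliefs)
# and compute A's net payoff from betting against B's stated prices.
# All numbers are made up for illustration.

def brier_score(probs, outcomes):
    """Mean squared error between stated probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

def bet_payoff(my_probs, their_probs, outcomes, stake=1.0):
    """My net payoff if, wherever we disagree, I buy (or sell) a $1
    contract on the event at the other agent's stated probability."""
    total = 0.0
    for p_me, p_them, won in zip(my_probs, their_probs, outcomes):
        if p_me > p_them:    # I think the event more likely: buy at their price
            total += stake * (won - p_them)
        elif p_me < p_them:  # I think it less likely: sell at their price
            total += stake * (p_them - won)
    return total

probs_a = [0.8, 0.3, 0.9, 0.2, 0.6]   # agent A: reasonably calibrated
probs_b = [0.5, 0.5, 0.5, 0.5, 0.5]   # agent B: maximally uninformative
outcomes = [1, 0, 1, 0, 1]            # what actually happened

print("Brier score A:", brier_score(probs_a, outcomes))  # 0.068
print("Brier score B:", brier_score(probs_b, outcomes))  # 0.25
print("A's payoff betting against B:", bet_payoff(probs_a, probs_b, outcomes))  # 2.5
```

The point is only that a proposed constraint on rationality should move at least one of these two numbers, for some agent, in some situation; an idea that provably moves neither is the kind I said I’m not interested in.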
Supposing that some version of pre-rationality does work out, and that I, hypothetically, understood pre-rationality extremely well (better than RH’s paper explains it)… I would expect more insights into at least one of the following: <...>
I completely agree that if RH were right, and if you understood him well, then you would receive multiple benefits, most of which could translate into winning hypothetical bets and into having more accurate beliefs about many things. But that’s just the usual effect of learning, not a consequence of satisfying the pre-rationality condition itself.
I still don’t understand in what precise way an agent that satisfies the pre-rationality condition is (claimed to be) superior to an agent that doesn’t. To be fair, this could be a hard question, and even if we don’t immediately see the benefit, that doesn’t mean there is no benefit. But I remain suspicious. In my view this is the single most important question, and it’s strange to me that I don’t see it explicitly addressed.
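For reference, here is the condition itself as I read it from RH’s paper, in my own notation (a sketch of my reading, not his exact statement). Write $\tilde p_i$ for agent $i$’s pre-prior over an extended state space that also describes which priors nature assigns, and let $p_j = f_j$ be the event that agent $j$ is assigned prior $f_j$. Then agent $i$ is pre-rational when

$$\tilde p_i\left(A \mid p_1 = f_1, \ldots, p_n = f_n\right) = f_i(A) \quad \text{for every event } A \text{ and every assignment } (f_1, \ldots, f_n).$$

That is, conditioning the pre-prior on the full story of how the priors were assigned must give back the assigned prior. My question is what measurable advantage, in bets won or accuracy gained, this equality is supposed to confer over an agent whose pre-prior lacks it.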