Instrumental and epistemic rationality were always kind of handwavey, IMO. For example, if you want to achieve your goals, it often helps to have money. So if I deposit $10,000 in your bank account, does that make you more instrumentally rational?
You could define instrumental rationality as “mental skills that help people better achieve their goals”. Then I could argue that learning graphic design makes you more instrumentally rational, because it’s a mental skill and if you learn it, you’ll be able to make money from anywhere using your computer, which is often useful for achieving your goals.
You could define epistemic rationality as “mental skills that help you know what’s true”. Then I could argue that learning about chess makes you more epistemically rational, because you can better know the truth of statements about who’s going to win chess games that are in progress.
I like the idea of thinking of rationality in terms of mental skills that are very general in the sense that they can be used by many different people in many different situations, kind of like how Paul Graham defines “philosophy”. “Mental skills that are useful to many people in many situations” seems like it should have received more study as a topic by now… I guess maybe people have developed memetic antibodies towards anything that sounds too good to be true in that way? (In this case, the relevant antibodies would have been developed thanks to the self-help industry?)
I agree there’s been some inconsistency in usage over the years. In fact, I think What Do We Mean By Rationality? and Rationality are simply wrong, which is surprising since they’re two of the most popular and widely-relied-on pages on LessWrong.
Rationality doesn’t ensure that you’ll win, or have true beliefs; and having true beliefs doesn’t ensure that you’re rational; and winning doesn’t ensure that you’re rational. Yes, winning and having true beliefs is the point of rationality; and rational agents should win (and avoid falsehood) on average, in the long haul. But I don’t think it’s pedantic, if you’re going to write whole articles explaining these terms, to do a bit more to firewall the optimal from the rational and recognize that rationality must be systematic and agent-internal.
Instrumental rationality isn’t the same thing as winning. It’s not even the same thing as ‘instantiating cognitive algorithms that make you win’. Rather, it’s ‘instantiating cognitive algorithms that tend to make one win’. So being unlucky doesn’t mean you were irrational.
Luke’s way of putting this is to say that ‘the rational decision isn’t always the right decision’. Though that depends on whether by ‘right’ you mean ‘defensible’ or ‘useful’. So I’d rather just say that rationalists can get unlucky.
I’m happy to say that being good at graphic design is instrumentally rational, for people who are likely to use that skill and have the storage space to fit more abilities. The main reason we wouldn’t speak of it that way is that it’s not one of the abilities that’s instrumentally rational for every human, and it’s awkward to have to index instrumentality to specific goals or groups.
Becoming good at graphic design is another story. That can require an investment large enough to make it instrumentally irrational, again depending on the agent and its environment.
I don’t see any reason not to bite that bullet. This is why epistemic rationality can become trivial when it’s divorced from instrumental rationality.
Rationalists should not be predictably unlucky.
Yes, if it’s both predictable and changeable. Though I’m not sure why we’d call something that meets both those conditions ‘luck’.
Are you familiar with Richard Wiseman, who has found that “luck” (as the phrase is used by people in everyday life to refer to people and events) appears to be both predictable and changeable?
That’s an interesting result! It doesn’t surprise me that people frequently confuse which complex outcomes they can and can’t control, though. Do you think I’m wrong about the intension of “luck”? Or do you think most people are just wrong about its extension?
I think the definition of ‘luck’ as ‘complex outcomes I have only minor control over’ is useful, as well as the definition of ‘luck’ as ‘the resolution of uncertain outcomes.’ For both of them, I think there’s meat to the sentence “rationalists should not be predictably unlucky”: in the first, it means rationalists should exert a level of effort justified by the system they’re dealing with, and not be dissuaded by statistically insignificant feedback; in the second, it means rationalists should be calibrated (and so P_10 or worse events happen to them 10% of the time, i.e. rationalists are not surprised that they lose money at the casino).
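To make the second reading concrete: calibration is something you can actually check against a forecast log. Here is a minimal sketch in Python; the forecast data and bucketing are invented purely for illustration.

```python
# Calibration check: among forecasts made at roughly the same stated
# probability, did the event happen about that often?  Each entry is a
# (stated probability, did it happen?) pair; the data below is made up.
from collections import defaultdict

forecasts = [(0.1, False), (0.1, False), (0.1, True), (0.1, False),
             (0.7, True), (0.7, True), (0.7, False), (0.9, True)]

buckets = defaultdict(list)
for p, happened in forecasts:
    buckets[round(p, 1)].append(happened)

for p in sorted(buckets):
    outcomes = buckets[p]
    rate = sum(outcomes) / len(outcomes)
    print(f"said {p:.0%}: happened {rate:.0%} of {len(outcomes)} forecasts")
```

A well-calibrated forecaster’s “said 10%” bucket converges on a roughly 10% hit rate, which is the sense in which P_10-or-worse events shouldn’t surprise them more than a tenth of the time.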
Ahh, thanks! This helps me better understand what Eliezer was getting at. I was having trouble thinking my way into other concepts of ‘luck’ that might avoid triviality.
“Predictable” and “changeable” have limits, but people generally don’t know where those limits are. What looks like bad luck to one person might look like the probable consequences of taking stupid chances to another.
Or what looks like a good strategy for making an improvement to one person might look like knocking one’s head against a wall to another.
The point you and Eliezer (and possibly Vaniver) seem to be making is that “perfectly rational agents are allowed to get unlucky” isn’t a useful meme, either because we tend to misjudge which things are out of our control or because it’s just not useful to pay any attention to those things.
Is that a fair summary? And, if so, can you think of a better way to express the point I was making earlier about conceptually distinguishing rational conduct from conduct that happens to be optimal?
ETA: Would “rationality doesn’t require omnipotence” suit you better?
Theoretically speaking (rare though it would be in practice), there are circumstances where that might happen: a rationalist simply refuses, on moral grounds, to use methods that would grant him an epistemic advantage.
It seems to me that some of LW’s attempts to avoid “a priori” reasoning have tripped up right at their initial premises, by assuming as premises propositions of the form “The probability of possible-fact X is y%.” (LW’s annual survey repeatedly insists that readers make this mistake, too.)
I may have a guess about whether X is true; I may even be willing to give or accept odds on one or both sides of the question; but that is not the same thing as being able to assign a probability. For that you need conditions (such as where X is the outcome of a die roll or coin toss) where there’s a basis for assigning the number. Otherwise the right answer to most questions of “How likely is X?” (where we don’t know for certain whether X is true) will be some vague expression (“It could be true, but I doubt it”) or simply “I don’t know.”
Refusing to assign numerical probabilities because you don’t have a rigorous way to derive them is like refusing to choose whether or not to buy things because you don’t have a rigorous way to decide how much they’re worth to you.
Explicitly assigning a probability isn’t always (perhaps isn’t usually) worth the trouble it takes, and rushing to assign numerical probabilities can certainly lead you astray—but that doesn’t mean it can’t be done or that it shouldn’t be done (carefully!) in cases where making a good decision matters most.
When you haven’t taken the trouble to decide a numerical probability, then indeed vague expressions are all you’ve got. But unless you have a big repertoire of carefully graded vague expressions (which would, in fact, not be so very different from assigning probabilities), you’ll find that sometimes there are two propositions for both of which you’d say “it could be true, but I doubt it”, yet you definitely find one more credible than the other. If you can make that distinction mentally, why shouldn’t you make it verbally?
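One way to see how close graded betting dispositions already are to numerical probabilities: the odds you’d accept on each side of a question pin P(X) into an interval. A minimal sketch follows; the helper function and the odds values are mine, chosen only for illustration.

```python
def implied_probability_interval(odds_for, odds_against):
    """Turn betting dispositions into an implied probability interval.

    odds_for:     the shortest payout odds (winnings per 1 staked) at which
                  you'd still bet THAT X is true; taking that bet is only
                  non-negative in expectation if P(X) >= 1 / (odds_for + 1).
    odds_against: the shortest payout odds at which you'd still bet that X
                  is false, implying P(X) <= odds_against / (odds_against + 1).
    """
    lower = 1.0 / (odds_for + 1.0)
    upper = odds_against / (odds_against + 1.0)
    return lower, upper

# Hypothetical dispositions: you'd take 4:1 on X, and even money against X.
low, high = implied_probability_interval(odds_for=4, odds_against=1)
print(f"implied P(X) lies somewhere in [{low:.2f}, {high:.2f}]")  # [0.20, 0.50]
```

Refusing to name a single number still leaves you committed, via your betting behaviour, to a range.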
If it were a case like you describe (two competing products in a store), I would have to guess, and thus would have to try to think of some “upstream” questions and guess those, too. Not impossible, but unlikely to unearth worthwhile information. For questions as remote as P(aliens), I don’t see a reason to bother.
Have you seen David Friedman’s discussion of rational voter ignorance in The Machinery of Freedom?
I thought the difference was what set of beliefs the method is attracted to: for epistemic rationality, it’s whatever is “really true”, no ifs or buts; for instrumental, it’s whatever in actuality leads to the best outcome. Places where they differ include believing the right thing for the wrong reasons or being overconfident in something true, game-theoretic situations like blackmail and signaling, and situations where mental states are leaky, like the placebo effect or expectation-controlled dementors.
Given this interpretation, I decided on the policy of a mixed strategy where most people are mainly instrumentally rational, some are purely epistemic, and the former obey the latter unquestioningly in crisis situations.
That last paragraph is really interesting. I don’t know your reasoning behind it, but I’d perhaps suggest it may be because instrumentally rational people work mostly from cached conclusions from society, which were developed somewhat behind the scenes by trial and error, memes being passed around, etc. Epistemically rational people, by contrast, can adapt more quickly, because they can think right away rather than waiting for the memetic environment to catch up, which simply won’t happen in crisis situations (the cached-conclusions system for memetic environments doesn’t work that fast).
Maybe you have no idea what I’m talking about, though; I can’t tell whether this bridges the inferential distance. Either way, what’s your reasoning behind that statement? What does it mean for most people to be mainly instrumentally rational while some are purely epistemic, and why do the former obey the latter in crisis situations?
I assumed that was obvious or I’d have elaborated. Basically, in the situations where the two differ, the epistemic approach makes the better decision while the instrumental one carries some other benefit. Decisions can be delegated, including the decisions of many to just a few, so only a few people need to take the instrumental hit of strict epistemic conduct while everyone still gets most of the benefits of decisions grounded in good epistemic rationality. In return for their sacrifice, the epistemics get status.
This is not a “how things are” or “how everyone should do it” claim, just one strategy a coordinated group of rationalists could use.
In my other message I said wealth doesn’t automatically make you more rational, because rationality is “systematic and agent-internal”. I don’t want to dismiss the problem you raised, though, because it gets us into deep waters pretty fast. So here’s a different response.
If I reliably use my money in a way that helps me achieve my ends, regardless of how much money I have, then giving me more money can make me more instrumentally rational, in the sense that I consistently win more often. Certainly it’s beyond dispute that being in such a situation has instrumental value, bracketing ‘rationality’. The reason we don’t normally think of this as an increase in instrumental rationality is that when we’re evaluating your likelihood of winning, the contributing factors we call ‘instrumental rationality’ are the set of win-relevant cognitive algorithms. Having money isn’t a cognitive algorithm, so it doesn’t qualify.
Why isn’t having money a cognitive algorithm? ‘Because it’s not in your skull’ isn’t a very satisfying answer. It’s not necessary: A species that exchanges wealth by exchanging memorized passcodes might make no use of objects outside of vocal utterances and memes. And it’s probably not sufficient: If I start making better economic decisions by relying more heavily on a calculator, it’s plausible that part of my increased instrumental rationality is distributed outside my skull, since part of it depends on the proper functioning of the calculator. Future inventions may do a lot more to blur the lines between cognitive enhancements inside and outside my brain.
So the more relevant point may be that receiving a payment is an isolated event, not a repeatable process. If you found a way to receive a steady paycheck, and reliably used that paycheck to get what you wanted more often, then I’d have a much harder time saying that you (= the you-money system) haven’t improved the instrumental rationality of your cognitive algorithms. It would be like trying to argue that your gene-activated biochemistry is agent-internal, but the aspects of your biochemistry that depend on your daily nootropic cocktail are agent-external. I despair of drawing clear lines on the issue.
Because it isn’t an algorithm—a step-by-step procedure for calculations. (Source: Wikipedia.)
Money isn’t a cognitive algorithm because it doesn’t actually help you decide what to do. You don’t generally use your money to make decisions. Having more money does put you in a better position where the available options are more favourable, but that’s not really the same thing.
Of course, if you spend that money on nootropics (or a calculator, I suppose), you might be said to have used money to improve your instrumental rationality!
It can, if I use the money to pay someone more instrumentally-rational than me to come and make my decisions for me for a time.
I don’t think they are hand-wavy. I maintain that they are extremely well-defined terms, at least when you are speaking of idealized agents. Here are some counter-points:
Take the $10,000 deposit: no, it doesn’t. Instrumental rationality is about choosing the optimal action, not having nice things happen to you. Take away the element of choice, and there is no instrumental rationality; I’d have to cause you to drop the money in my account for you to attribute it to instrumental rationality.
And the chess example: no, because “learning about chess” is an action. Choosing where to look for evidence is an action. You’d be instrumentally (ir)rational to (not) seek out information about chess, depending on goals and circumstance.
Epistemic rationality is what you do with evidence after acquiring it, not the process of acquiring evidence. It describes your effectiveness at learning the rules of chess given that you have the relevant info. It doesn’t describe your choice to go out and acquire chess learning info. If you were strapped to a chair and made to watch chess (or casually observed it) and failed to make rational guesses concerning the underlying rules, then you failed at epistemic rationality.
Same for learning about Bayes’ rule.
Learning about Bayes’ rule improves one’s epistemic rationality; I’m arguing that learning about chess does the same.
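For what it’s worth, the two kinds of learning show up in different places when you actually make such a prediction. A minimal sketch, with all numbers invented: Bayes’ rule is the domain-general machinery, and chess knowledge only enters through the likelihoods.

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior P(H | E) from a prior and the likelihoods P(E | H), P(E | not-H)."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

# Hypothetical mid-game prediction.  The likelihoods are where knowing chess
# matters; the update rule itself works the same for any domain.
prior_white_wins = 0.55
p_move_if_white_winning = 0.05   # a player with a won position rarely plays this move
p_move_if_white_losing = 0.30    # a desperate position produces it far more often

posterior = bayes_update(prior_white_wins,
                         p_move_if_white_winning,
                         p_move_if_white_losing)
print(f"P(White wins | observed move) = {posterior:.2f}")  # about 0.17
```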
I guess this is the point where humans and theoretical rational agents diverge. Rational agents don’t learn rationality—it’s just assumed that they come pre-wired with all the correct mathematics and philosophy required to make optimal choices for all possible games.
But on the human side, I still don’t think that’s really a valid comparison. Being able to use Bayes’ rule improves rationality in the general case. It falls under the heading of “philosophy, epistemology, mathematics”.
Chess just gives you knowledge about a specific system. It falls under the heading of “science, inference, evidence”.
There’s a qualitative difference between the realm of philosophy and mathematics and the realm of reality and observation.
If we go by a definition based on actions, rather than skills, I think this problem goes away:
Let’s define an action as instrumentally rational if it brings you closer to your goal. Let’s define an action as epistemically rational if it brings your mental model of reality closer to reality itself.
Those are the definitions which I generally use and find useful, and I think they successfully sidestep your problems.
The question then remains how one defines rational skills. However, answering that question is less of an issue once you know which actions are instrumentally/epistemically rational. If you want to learn a skill, you can ask whether the action of learning it falls under the categories mentioned above.
Suppose my goal is to get rich. Suppose, on a whim, I walk into a casino and put a large amount of money on number 12 in a single game of roulette. Suppose number 12 comes up. Was that rational?
The same objection applies to your definition of epistemically rational actions.
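To spell out the objection with numbers: judged before the wheel spins, the bet loses on average, so a purely outcome-based definition ends up blessing it as “rational” only on the rare occasions it happens to hit. A minimal sketch, assuming an American wheel (38 pockets, 35:1 straight-up payout):

```python
# Expected value of a 1-unit straight-up roulette bet (American wheel:
# 38 pockets, payout 35:1).  The bet is negative in expectation, yet the
# outcome-based definition calls it "instrumentally rational" on the
# roughly 1-in-38 occasions the number comes up.
p_hit = 1 / 38
payout = 35
expected_value = p_hit * payout - (1 - p_hit) * 1
print(f"expected value per unit staked: {expected_value:+.4f}")  # about -0.0526
```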