I agree there’s been some inconsistency in usage over the years. In fact, I think What Do We Mean By Rationality? and Rationality are simply wrong, which is surprising since they’re two of the most popular and widely-relied-on pages on LessWrong.
Rationality doesn’t ensure that you’ll win, or have true beliefs; and having true beliefs doesn’t ensure that you’re rational; and winning doesn’t ensure that you’re rational. Yes, winning and having true beliefs is the point of rationality; and rational agents should win (and avoid falsehood) on average, in the long haul. But I don’t think it’s pedantic, if you’re going to write whole articles explaining these terms, to do a bit more to firewall the optimal from the rational and recognize that rationality must be systematic and agent-internal.
Instrumental and epistemic rationality were always kind of handwavey, IMO. For example, if you want to achieve your goals, it often helps to have money. So if I deposit $10,000 in your bank account, does that make you more instrumentally rational?
Instrumental rationality isn’t the same thing as winning. It’s not even the same thing as ‘instantiating cognitive algorithms that make you win’. Rather, it’s ‘instantiating cognitive algorithms that tend to make one win’. So being unlucky doesn’t mean you were irrational.
Luke’s way of putting this is to say that ‘the rational decision isn’t always the right decision’. Though that depends on whether by ‘right’ you mean ‘defensible’ or ‘useful’. So I’d rather just say that rationalists can get unlucky.
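To make the ‘tend to’ claim concrete, here’s a toy simulation (my own illustration; the 60/40 coin and the bet-every-round setup are invented assumptions, not anything from the original posts). The agent running the better algorithm loses a good share of short runs, but almost never loses a long one:

```python
import random

def simulate(n_bets, p_heads=0.6, seed=None):
    """One run: the 'rational' agent bets heads (the 60% side) every
    time; the 'irrational' agent bets tails. Returns each agent's wins."""
    rng = random.Random(seed)
    rational = irrational = 0
    for _ in range(n_bets):
        heads = rng.random() < p_heads
        rational += heads        # bets with the bias
        irrational += not heads  # bets against it
    return rational, irrational

# Short runs: the better algorithm still loses outright ~17% of the time.
short_runs = [simulate(10) for _ in range(10_000)]
print(sum(r < i for r, i in short_runs) / len(short_runs))

# Long runs: the better algorithm essentially never loses.
long_runs = [simulate(1_000) for _ in range(1_000)]
print(sum(r < i for r, i in long_runs) / len(long_runs))
```

Same algorithm, same rationality, very different single-run outcomes: that gap is all that ‘getting unlucky’ amounts to here.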
You could define instrumental rationality as “mental skills that help people better achieve their goals”. Then I could argue that learning graphic design makes you more instrumentally rational, because it’s a mental skill and if you learn it, you’ll be able to make money from anywhere using your computer, which is often useful for achieving your goals.
I’m happy to say that being good at graphic design is instrumentally rational, for people who are likely to use that skill and have the storage space to fit more abilities. The main reason we wouldn’t speak of it that way is that it’s not one of the abilities that’s instrumentally rational for every human, and it’s awkward to have to index instrumentality to specific goals or groups.
Becoming good at graphic design is another story. That can require an investment large enough to make it instrumentally irrational, again depending on the agent and its environment.
You could define epistemic rationality as “mental skills that help you know what’s true”. Then I could argue that learning about chess makes you more epistemically rational, because you can better know the truth of statements about who’s going to win chess games that are in progress.
I don’t see any reason not to bite that bullet. This is why epistemic rationality can become trivial when it’s divorced from instrumental rationality.
Rationalists should not be predictably unlucky.
Yes, if it’s both predictable and changeable. Though I’m not sure why we’d call something that meets both those conditions ‘luck’.
Are you familiar with Richard Wiseman, who has found that “luck” (as the word is used by people in everyday life to refer to people and events) appears to be both predictable and changeable?
That’s an interesting result! It doesn’t surprise me that people frequently confuse which complex outcomes they can and can’t control, though. Do you think I’m wrong about the intension of “luck”? Or do you think most people are just wrong about its extension?
I think the definition of ‘luck’ as ‘complex outcomes I have only minor control over’ is useful, as well as the definition of ‘luck’ as ‘the resolution of uncertain outcomes.’ For both of them, I think there’s meat to the sentence “rationalists should not be predictably unlucky”: in the first, it means rationalists should exert a level of effort justified by the system they’re dealing with, and not be dissuaded by statistically insignificant feedback; in the second, it means rationalists should be calibrated (and so P_10 or worse events happen to them 10% of the time, i.e. rationalists are not surprised that they lose money at the casino).
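To make the calibration reading concrete, here is a minimal sketch (my own toy model, with a normal distribution standing in for the forecaster’s uncertainty): a calibrated forecaster’s P_10 really does get breached about 10% of the time.

```python
import random
from statistics import NormalDist

# Toy model (an assumption for illustration): outcomes are draws from a
# known normal distribution, and the forecaster reports its true 10th
# percentile as their P_10 estimate.
mu, sigma = 0.0, 1.0
p10 = NormalDist(mu, sigma).inv_cdf(0.10)  # the forecaster's P_10

rng = random.Random(0)
trials = 100_000
breaches = sum(rng.gauss(mu, sigma) <= p10 for _ in range(trials))
print(f"P_10-or-worse frequency: {breaches / trials:.3f}")  # ~0.100
```

If that frequency came out well above 0.10, the forecaster would be predictably unlucky in exactly the sense above.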
Ahh, thanks! This helps me better understand what Eliezer was getting at. I was having trouble thinking my way into other concepts of ‘luck’ that might avoid triviality.
“Predictable” and “changeable” have limits, but people generally don’t know where those limits are. What looks like bad luck to one person might look like the probable consequences of taking stupid chances to another.
Or what looks like a good strategy for making an improvement to one person might look like knocking one’s head against a wall to another.
The point you and Eliezer (and possibly Vaniver) seem to be making is that “perfectly rational agents are allowed to get unlucky” isn’t a useful meme, either because we tend to misjudge which things are out of our control or because it’s just not useful to pay any attention to those things.
Is that a fair summary? And, if so, can you think of a better way to express the point I was making earlier about conceptually distinguishing rational conduct from conduct that happens to be optimal?
ETA: Would “rationality doesn’t require omnipotence” suit you better?
Theoretically speaking (rare though it would be in practice), there are circumstances where that might happen: a rationalist might simply refuse, on moral grounds, to use methods that would grant him an epistemic advantage.
It seems to me that some of LW’s attempts to avoid “a priori” reasoning have tripped up right at the start, by taking as premises propositions of the form “The probability of possible-fact X is y%.” (LW’s annual survey repeatedly insists that readers make this mistake, too.)
I may have a guess about whether X is true; I may even be willing to give or accept odds on one or both sides of the question; but that is not the same thing as being able to assign a probability. For that you need conditions (such as where X is the outcome of a die roll or coin toss) where there’s a basis for assigning the number. Otherwise the right answer to most questions of “How likely is X?” (where we don’t know for certain whether X is true) will be some vague expression (“It could be true, but I doubt it”) or simply “I don’t know.”
Refusing to assign numerical probabilities because you don’t have a rigorous way to derive them is like refusing to choose whether or not to buy things because you don’t have a rigorous way to decide how much they’re worth to you.
Explicitly assigning a probability isn’t always (perhaps isn’t usually) worth the trouble it takes, and rushing to assign numerical probabilities can certainly lead you astray—but that doesn’t mean it can’t be done or that it shouldn’t be done (carefully!) in cases where making a good decision matters most.
When you haven’t taken the trouble to decide a numerical probability, then indeed vague expressions are all you’ve got, but unless you have a big repertoire of carefully graded vague expressions (which would, in fact, not be so very different from assigning probabilities) you’ll find that sometimes there are two propositions for both of which you’d say “it could be true, but I doubt it”—but you definitely find one more credible than the other. If you can make that distinction mentally, why shouldn’t you make it verbally?
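Here’s a toy sketch of that ‘graded repertoire’ point (the phrases and cutoffs are invented for illustration, not any standard scale): a coarse verbal scale is already a probability assignment in disguise, and its coarseness lumps together credences you can in fact tell apart.

```python
# A toy "repertoire of carefully graded vague expressions". The cutoffs
# below are illustrative assumptions, not a standard scale.
VERBAL_SCALE = [
    (0.02, "almost certainly false"),
    (0.15, "I really doubt it"),
    (0.40, "it could be true, but I doubt it"),
    (0.60, "maybe; I can't say either way"),
    (0.85, "probably"),
    (0.98, "very probably"),
    (1.00, "almost certainly"),
]

def verbalize(p: float) -> str:
    """Map a credence in [0, 1] to the coarsest grade that covers it."""
    for cutoff, phrase in VERBAL_SCALE:
        if p <= cutoff:
            return phrase
    return VERBAL_SCALE[-1][1]

# Two propositions can share a phrase while one is clearly more credible:
print(verbalize(0.20), "|", verbalize(0.35))  # same phrase, different credences
```

That collision at the end is the case described above: both get “it could be true, but I doubt it,” yet you definitely find one more credible than the other.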
If it were a case like you describe (two competing products in a store), I would have to guess, and thus would have to try to think of some “upstream” questions and guess those, too. Not impossible, but unlikely to unearth worthwhile information. For questions as remote as P(aliens), I don’t see a reason to bother.
Have you seen David Friedman’s discussion of rational voter ignorance in The Machinery of Freedom?