Indeed—that was my first thought, but I was waiting till I figured out a good way of stating it. RH’s definition of ‘rational’ seems to go against the usual definition presented above, while EY’s seems to embrace it.
My definition differs from the one in Wikipedia because I require that your goals not call for any particular ritual of cognition. When you care more about winning than about any particular way of thinking—and “winning” is not defined in such a way as to require in advance any particular method of thinking—then you are pursuing rationality.
This, in turn, ends up implying epistemic rationality: if the definition of “winning” doesn’t require believing false things, then you can generally expect to do better (on average) by believing true things than false things—certainly in real life, despite various elaborate philosophical thought experiments designed from omniscient truth-believing third-person standpoints.
Conversely you can start with the definition of rational belief as accuracy-seeking, and get to pragmatics via “That which can be destroyed by the truth should be” and the notion of rational policies as those which you would retain even given an epistemically rational prediction of their consequences.
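A minimal sketch, with invented numbers (not from the discussion above), of the claim that absent adversarial setups you can expect to do better on average by acting on accurate beliefs: an agent whose belief matches the true probability correctly declines a losing bet, while an overconfident agent keeps taking it and pays for it.

```python
import random

# Toy illustration only; the probabilities and payoffs are made up.
TRUE_P = 0.3                 # actual chance the event happens
STAKE, PAYOUT = 1.0, 3.0     # bet 1, receive 3 if the event occurs (true EV = -0.1)

def average_winnings(believed_p: float, rounds: int = 100_000) -> float:
    """Take the bet whenever the agent's own belief says it is positive-EV."""
    total = 0.0
    for _ in range(rounds):
        if believed_p * PAYOUT > STAKE:                       # subjective EV check
            total += (PAYOUT - STAKE) if random.random() < TRUE_P else -STAKE
    return total / rounds

print(average_winnings(believed_p=0.3))   # accurate belief: declines the bet, ~0.0
print(average_winnings(believed_p=0.5))   # overconfident: takes it, ~-0.1 per round
```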
For most people, most of the things they want do in fact prefer some ways of thinking, so your definition requires us to consider a counterfactual pretty far from ordinary experience. In contrast, defining in terms of accuracy-seeking is simple and accessible. If this site is going to use the word “rational” a lot, we’d better have a simple clear definition or we’ll be arguing this definitional stuff endlessly.
I usually define “rationality” as accuracy-seeking whenever decisional considerations do not enter. These days I sometimes also use the phrase “epistemic rationality”.
It would indeed be more complicated if we began conducting the meta-argument that (a) an ideal Bayesian not faced with various vengeful gods inspecting its algorithm should not decide to rewrite its memories to something calibrated away from what it originally believed to be accurate, or that (b) human beings ought to seek accuracy in a life well-lived according to goals that include both explicit truth-seeking and other goals not about truth.
But unless I’m specifically focused on this argument, I usually go so far as to talk as if it resolves in favor of epistemic accuracy, that is, that pragmatic rationality is unified with epistemic rationality rather than implying two different disciplines. If truth is a bad idea, it’s not clear what the reader is doing on Less Wrong, and indeed, the “pragmatic” reader who somehow knows that it’s a good idea to be ignorant, will at once flee as far as possible...
You started off using the word “rationality” on this blog/forum, and though I had misgivings, I tried to continue with your language. But most of the discussion of this post seems to be distracted by my having tried to clarify that in the introductory sentence. I predict we won’t be able to get past this, and so from now on I will revert to my usual policy of avoiding overloaded words like “rationality.”
If truth is a bad idea, it’s not clear what the reader is doing on Less Wrong [...]
Believing the truth is usually a good idea—for real organisms.
However, I don’t think rationality should be defined in terms of truth seeking. For one thing, that is not particularly conventional usage. For another, it seems like a rather arbitrary goal. What if a Buddhist claims that rational behaviour typically involves meditating until you reach nirvana? On what grounds would that claim be dismissed? That seems to me to be an equally biologically realistic goal.
I think that convention has it right here—the details of the goal are irrelevant to rationality and should be factored right out of the equation. You can rationally pursue any goal—without any exceptions.
I’m confused by the phrase “most of the things they want do in fact prefer some ways of thinking”.
I thought that EY was saying that he requires goals like “some hot chocolate” or “an interesting book”, rather than goals like: “the answer to this division problem computed by the Newton-Raphson algorithm”
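To make concrete what a goal that specifies the ritual of cognition (rather than just the outcome) looks like, here is a rough sketch of division by the Newton-Raphson algorithm. Names and defaults are my own illustration, not anything proposed in the thread.

```python
def newton_raphson_divide(a: float, b: float, x0: float = 0.1, iterations: int = 8) -> float:
    """Compute a / b via Newton-Raphson on f(x) = 1/x - b, i.e. without dividing.

    The update x <- x * (2 - b * x) converges (quadratically) to 1/b provided
    the initial guess x0 lies in (0, 2/b). Illustrative sketch only.
    """
    x = x0
    for _ in range(iterations):
        x = x * (2.0 - b * x)     # Newton step toward the reciprocal of b
    return a * x

print(newton_raphson_divide(22.0, 7.0))   # ~3.142857, the same answer plain 22/7 gives
```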
Eliezer said: This, in turn, ends up implying epistemic rationality: if the definition of “winning” doesn’t require believing false things, then you can generally expect to do better (on average) by believing true things than false things—certainly in real life, despite various elaborate philosophical thought experiments designed from omniscient truth-believing third-person standpoints.
--
I think this is overstated. Why should we only care what works “generally,” rather than what works well in specific subdomains? If rationality means whatever helps you win, then overconfidence will often be rational. (Examples: placebo effect, dating, job interviews, etc.) I think you need to either decide that your definition of rationality does not always require a preference for true beliefs, or else revise the definition.
It also might be worthwhile, for the sake of clarity, to just avoid the word “rationality” altogether in future conversations. It seems to be at risk of becoming an essentially contested concept, particularly because everyone wants to be able to claim that their own preferred cognitive procedures are “rational.” Why not just talk about whether a particular cognitive ritual is “goal-optimizing” when we want to talk about Eliezer-rationality, while saving the term “truth-optimizing” (or some variant) for epistemic-rationality?
Maybe “truth-seeking” versus “winning”, if there’s a direct appeal to one and not the other. But I am generally willing to rescue the word “rationality”.
Sorry—I meant, but did not make clear, that the word “rationality” should be avoided only when the conversation involves the clash between “winning” and “truth seeking.” Otherwise, things tend to bog down in arguments about the map, when we should be talking about the territory.
I agree – in contexts where ‘truth seeking’ and ‘winning’ are different, we should qualify references to ‘rationality’.
Regarding “rationalists should win”—that still leaves us with the problem of distinguishing between someone who won because he was rational and someone who was irrational but won because of sheer dumb luck.
For example, buying lottery tickets is (almost always) a negative EV proposition—but some people do win the lottery. Was it irrational for lottery winners to have bought those specific tickets, which did indeed win?
Given a sufficiently large sample, the most spectacular successes are going to be those who pursued opportunities with the highest possible payoff regardless of the potential downside or even the expected value… for every spectacular success, there are probably several times as many spectacular failures.
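A small Monte Carlo sketch of both points, with ticket price, jackpot, and odds invented for the example: the tickets are negative-EV on average, yet in a large enough population the single most spectacular outcome usually belongs to someone who bought them anyway.

```python
import random

TICKET_PRICE = 1.0
JACKPOT = 1_000.0
WIN_PROB = 1 / 2_000          # EV per ticket = 1000 / 2000 - 1 = -0.5

def lifetime_wealth(buys_tickets: bool, weeks: int = 500) -> float:
    """Net result of buying (or not buying) one ticket a week."""
    wealth = 0.0
    if buys_tickets:
        for _ in range(weeks):
            wealth -= TICKET_PRICE
            if random.random() < WIN_PROB:
                wealth += JACKPOT
    return wealth

players = [lifetime_wealth(True) for _ in range(10_000)]
abstainers = [lifetime_wealth(False) for _ in range(10_000)]

print(sum(players) / len(players))     # around -250: the policy loses on average
print(max(players), max(abstainers))   # yet the biggest single winner bought tickets
```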
Re: Regarding “rationalists should win”—that still leaves us with the problem of distinguishing between someone who won because he was rational and someone who was irrational but won because of sheer dumb luck.
Just don’t go there in the first place. Attempting to increase your utility is enough.
A common example of where rationality and truth-seeking come into conflict is the case where organisms display their beliefs—and have difficulty misrepresenting them. In such cases, it may benefit them to believe falsehoods for reasons associated with signalling their beliefs to others:
“Definitely on all fronts it has become imperative not to bristle with hostility every time you encounter a stranger. Instead observe him, find out what he might be. Behave to him with politeness, pretending that you like him more than you do—at least while you find out how he might be of use to you. Wash before you go to talk to him so as to conceal your tribal odour and take great care not to let on that you notice his own, foul as it may be. Talk about human brotherhood. In the end don’t even just pretend that you like him (he begins to see through that); instead, really like him. It pays.”
Discriminating Nepotism—as reprinted in: Narrow Roads of Gene Land, Volume 2: Evolution of Sex, p. 359.