Only say ‘rational’ when you can’t eliminate the word
Almost all instances of the word “true” can be eliminated from the sentences in which they appear by applying Tarski’s formula. For example, if you say, “I believe the sky is blue, and that’s true!” then this can be rephrased as the statement, “I believe the sky is blue, and the sky is blue.” For every “The sentence ‘X’ is true” you can just say X and convey the same information about what you believe—just talk about the territory the map allegedly corresponds to, instead of talking about the map.
When can’t you eliminate the word “true”? When you’re generalizing over map-territory correspondences, e.g., “True theories are more likely to make correct experimental predictions.” There’s no way to take the word ‘true’ out of that sentence because it’s talking about a feature of map-territory correspondences in general.
Similarly, you can eliminate the word ‘rational’ from almost any sentence in which it appears. “It’s rational to believe the sky is blue”, “It’s true that the sky is blue”, and “The sky is blue” all convey exactly the same information about what color you think the sky is—no more, no less.
When can’t you eliminate the word “rational” from a sentence?
When you’re generalizing over cognitive algorithms for producing map-territory correspondences (epistemic rationality) or steering the future where you want it to go (instrumental rationality). So while you can eliminate the word ‘rational’ from “It’s rational to believe the sky is blue”, you can’t eliminate the concept ‘rational’ from the sentence “It’s epistemically rational to increase belief in hypotheses that make successful experimental predictions.” You can Taboo the word, of course, but then the sentence just becomes, “To increase map-territory correspondences, follow the cognitive algorithm of increasing belief in hypotheses that make successful experimental predictions.” You can eliminate the word, but you can’t eliminate the concept without changing the meaning of the sentence, because the primary subject of discussion is, in fact, general cognitive algorithms with the property of producing map-territory correspondences.
The word ‘rational’ should never be used on any occasion except when it is necessary, i.e., when we are discussing cognitive algorithms as algorithms.
If you want to talk about how to buy a great car by applying rationality, but you’re primarily talking about the car rather than considering the question of which cognitive algorithms are best, then title your post Optimal Car-Buying, not Rational Car-Buying.
Thank you for observing all safety precautions.
This is good advice for most words.
In the limit, only speak when you cannot remain silent.
...
But “small talk” seems to be a friendship-enabling technology.
Only make up hasty generalizations when it’s entertaining to do so.
Also: if it gets you internet points.
Internet points are constantly ruining my subreddits.
I might be missing the point of this paragraph, but it seems to me that “it’s rational to believe the sky is blue” and “the sky is blue” do not convey the same information. I can conceive of situations in which it is rational to believe the sky is blue, and yet the sky is not blue. For example, the sky is green, but superintelligent alien pranksters install undetected nanotech devices into my optic and auditory nerves/brain, altering my perceptions and memories so that I see the green sky as blue, and hear (read) the word “blue” where other people have actually said (written) the word “green” when describing the sky.
Under these circumstances, all my evidence would indicate the sky is blue—and so it would be rational to believe that the sky is blue. And yet the sky is not blue. But the first statement doesn’t feel like I am generalising over cognitive algorithms in the sense I took from the big paragraph.
Am I missing or misinterpreting something?
When discussing these things in the third person, as you are now, cognitive algorithms as algorithms are being invoked. But we all know that “p” and “Alice thinks that p” are hardly reducible to each other; it’s first-person items like “I believe that p” that are deflationary. So while it is clearly the case that you can imagine situations where the sky is not blue but it would be epistemically rational to believe that it is, that does not demonstrate situations where one could justifiably claim only one of “the sky is blue” and “it is rational to believe that the sky is blue” (indeed, the justifiability of the former just is the content of the latter).
“I believe that ‘P’.” is only deflationary because it treats belief as if it were binary, but it isn’t. “I have 0.8 belief in ‘P’.” is certainly not the same as “It is true that ‘P’.” Yes? One is a claim about the world, and one is a claim about my model of the world.
I am pretty sure that p and “it is rational to believe that p” can come apart even from a first-person perspective. At least, they can come apart if belief is cashed out in terms of inclination to action in a single case.
Let me illustrate. Suppose there are five live hypotheses to account for some evidence, and suppose that I assign credences as follows:
C(h1) = 0.1; C(h2) = 0.35; C(h3) = 0.25; C(h4) = 0.15; C(h5) = 0.1; and C(other) = 0.05.
Further suppose that I am in a situation where I need to take some action, and each of the five hypotheses recommends a different action in the circumstances.
Assuming that by “belief” one means something like “what one proposes to act on in forced situations,” then it is rational to believe h2. It is rational to act as if h2 were true. But one need not think that h2 is true. It is more likely to be true than any of the other options, but given the credences above, one ought to think that h2 is false. That is, it is much more likely on the evidence that h2 is false than that it is true.
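The arithmetic behind this point can be checked directly. Here is a minimal sketch (the hypothesis names `h1`–`h5` are just the placeholders from the example above): the hypothesis with the highest credence is the rational one to act on, yet it is still more likely false than true.

```python
# Credences from the example above; "other" is the residual hypothesis space.
credences = {
    "h1": 0.10, "h2": 0.35, "h3": 0.25,
    "h4": 0.15, "h5": 0.10, "other": 0.05,
}

# Sanity check: the credences form a probability distribution.
assert abs(sum(credences.values()) - 1.0) < 1e-9

# If "belief" means "what one acts on in a forced choice",
# the hypothesis with the highest credence wins.
best = max(credences, key=credences.get)
print(best)  # h2

# ...yet the probability that h2 is false exceeds the probability
# that it is true, so one ought not think h2 is true.
p_best = credences[best]
print(p_best < 1 - p_best)  # True: 0.35 < 0.65
```

So “rational to act on h2” and “rational to believe h2 is true” come apart whenever the leading hypothesis has credence below 0.5.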
“It’s rational to believe that #32 will win” and “It’s rational to bet on #32” are not the same thing. In fact, they’re using different senses of “rational”, as we usually carve things up.
Thus in your example, “it’s rational to believe h2” and “h2” are still equivalent, but “act as though h2” is not.
Could you elaborate on the mistake you think I’m making? I’m not seeing it, yet.
I think the intended meaning is as follows:
As you pointed out, the first sentence is not logically equivalent to the second and third (the second and third are logically equivalent according to Tarski’s semantic theory of truth).
Alternately, if the sky IS blue, and someone objects to jumping to that conclusion, you can point out that the obvious conclusion is in fact rational in addition to claiming that it’s correct.
This. You have created an example that shows that it is utterly impossible for a creature with our limited primate capabilities to actually know The Truth. What we can do is pay close attention to what we think and why we think that to be so.
In your case, there’s an overwhelming amount of good reason to believe the sky is blue for anyone who doesn’t know about these Loki-like aliens (I really wanted to call them ‘Lokiens’). You might be wrong, and it could possibly lead to bad consequences in the future. But the alternative, believing something to be true without good reason, is crazy.
I agree.
Rationality is kind of like Voldemort.
But seriously, when we keep using the word rational to describe what we ought to do or think, when we really should just say “what action accomplishes this specific goal I have?” or “what’s really there, and what observations do I expect?”, it lets us be lazy and use “rationality” as an identity, rather than a way to win.
As an economics professor, can I get an exemption from this rule? Rationality is a default assumption in my classes, and I often need to remind students of this by throwing the word “rational” into sentences.
I’d say you need it not just because it’s a default assumption, but because it’s also an inaccurate assumption. Sort of like how you specify that an inclined plane is frictionless and in a vacuum, but you don’t mention that it’s under the force of gravity, or that it will force away any object that would otherwise pass through it.
It sounds like good advice, but the forcefulness of the recommendation seems out of proportion with the importance of the topic.
I guess it is an attempt to top off all the recent “a bit more about the r-word” discussions and move on already.
It seems like a blunt way to do it, you could always just say “hopefully that will clear things up for the next month or so” or whatever.
Perhaps you’re confusing ‘rational’ with ‘rationale’ and ‘reason’.
I know we don’t like promoting “meta” posts, but can this one be promoted?
What about saying “it is rational to feel X” when you are not good enough at self modification to actually feel X?
What about the actions of someone else? “It is rational to not eat the cake” vs. “don’t eat the cake”? The latter may be interpreted as requesting a favour, since you want to eat the cake yourself.
“Rational” is a rather versatile word; a general rule to never use it except in specific situation X is not going to turn out well.
I think that both those are actually great examples of where using a word like “rational” obscures rather than clarifies.
We have less direct control over our feelings than our explicit thoughts. It may or may not be irrational to endorse a feeling, but I for one think of feelings as neither rational nor irrational, but mere facts, and I think that’s the common usage. You’d likely be better off thinking, “Y is true, but feeling X is inconsistent with my goals if Y is true. Therefore I want to feel less X.”
For example, let’s say that I am afraid of the dark, but it does not in fact achieve my goals to avoid the dark, or to be afraid when it’s dark. Then what I want to do is notice that the feeling is unhelpful, and take actions to reduce it.
What does it add to say that the feeling itself is irrational?
Here, “rational” is substituting for a claim about whether eating the cake has some specific effect, or optimizes or fails to optimize some goal or utility function. It is more precise to make the claim explicit than to use a vague term like “rational.” For example, you might say that it is unfair for them to eat the cake, or that they probably wouldn’t be better off with the extra calories, etc.
I have no commitment to ‘rational’ in the sense OP wants to eliminate. But what shorthand might one use for “applying the sorts of principles that are the general consensus among the LW community, as best I understand them”?
Anything from “The best X” to “optimal X” to simply stating your opinion. The normal assumption around here is that you’re trying to be rational to the best of your ability.
So, when we’re distinguishing “optimal car-buying” from “rational car-buying,” is the point that using the word “rational” is somehow wrong and distorts or confuses the intended message? Or is really just that we want to save the word for when we need it most, so as to safeguard against death-spiraling around “rationality”? I’m not trying to suggest that the latter wouldn’t be a good enough reason, but I’m trying to figure out if Eliezer’s point is about being precise with this concept on a substantive level, or more about community norms, rhetorical efficacy, and sanity prophylactics. The last sentence of the OP suggests the latter is at least in play, but I’m trying to figure out whether this issue suggests some problem with what we mean by the word in the first place.
My take on it is—“rationality” isn’t the point. Don’t try to do things “rationally” (as though it’s a separate thing), try to do them right.
It’s actually something we see with the nuts that occasionally show up here—they’re obsessed with the notion of rationality as a concrete process or something, insisting (e.g.) that we don’t need to look at the experimental evidence for a theory if it is “obviously false when subjected to rational thought”, or that it’s bad to be “too rational”.
For me it’s this: From a pragmatics perspective “the rational way to buy a car is...” repeats information—when a person shares a method of doing something everyone assumes the speaker thinks that method is rational. Repeating it is redundant and redundant speech acts have a tendency to come off as arrogant and squicky. It’s what you do when you talk down to someone.
It’s also just sloppy to use words with connotations that don’t apply when a better word exists. “Rational” connotes some general discussion of cognitive algorithms.
So I suspect it’s a combination of a) sloppiness is bad and b) sloppiness looks and sounds bad.
But then what about “optimal car-buying”? Surely if someone is taking the time to describe how to buy a car, they probably think it’s the optimal method, or at least as close as they can get. So “optimal” would seem to be redundant too, and yet we would seem to prefer one over the other, even though they basically mean the same thing in this context.
Now, there may be some arrogance built into “rational” that’s not present in “optimal,” but I don’t see the issue as one of redundancy. Rather, it seems like “rational” can sometimes come off as an assertion of superiority over another—i.e., something like a man telling a female colleague that she needs to be more rational.
Something that is not optimal is merely ‘suboptimal’ whereas something that is not rational is irrational.
Things that are not rational can also be arational. Most obviously, terminal values.
“Optimal” more precisely indicates that we want to optimise a decision over a particular utility function, or at least a particular set of desires.
I think the objection to rational stems largely from this. Rationalism has a negative connotation in society thanks to, among other things, Hollywood and Ayn Rand.
See also: Straw Vulcan
Anyone got any bets on the next word LessWrong is going to abuse?
Optimal
Does it count as abuse if we use the term… Optimally?
shades
Let’s try “Bayesian”.
Been there, done that. I think “Bayesian” was preferred over “rational” for quite some time, and I used to complain about it vehemently.
While I agree with the rationality part, I have a nitpick with the truth part.
This is a realist position. An instrumentalist approach is that realism (= map/territory distinction) is a model in itself. Hence the definition of the word true: “theories that are more likely to make correct experimental predictions are provisionally defined as true in the map-territory model”. Thus in instrumentalism “true” is replaced with “useful”: “theories that are more likely to make correct experimental predictions are more useful”, without any ontological claims attached.
I would add the additional constraint that if you can eliminate all instances of the word “rational” from your post then LessWrong isn’t the blog for it, and you should post it elsewhere.
This is seriously false.
Surely you can see the point Oscar is trying to make, though? If a post’s claim to topicality was that it presented “a rational way to do something”, and if (by the advice of the original post) you instead make it present “a way to do something”, then that raises the question of what it’s doing here instead of on the rest of the internet.
I think your claim as stated is literally false, but I completely agree with your sentiment.
I think my claim as stated is literally true, since if I could add that constraint as a community norm, I would.
This is among the highest-voted posts ever and doesn’t use the word “rational”.
I come to Less Wrong to learn about how to think and how to act effectively. I care about general algorithms that are useful for many problems, like “Hold off on proposing solutions” or “Habits are ingrained faster when you pay conscious attention to your thoughts when you perform the action”. These posts have very high value to me because they improve my effectiveness across a wide range of areas.
Another such technique is “Dissolving the question”. The post you linked to (Yvain’s “Diseased thinking: dissolving questions about disease”) is valuable as an exemplary performance of this technique. It adds to Eliezer’s description of question-dissolving by giving a demonstration of its use on a real question. Its main value comes from this; anything I learnt about disease whilst reading it is just a bonus.
To quote badger in the recent thread “Rational Toothpaste: A Case Study”
But we don’t need more than one or two such examples! Yvain’s post about question-dissolving was the only such post I will ever need to read.
Posts about toothpaste or house-buying or room-decoration or fashion (EDIT: shaving, computer hardware) only tell me about that particular thing. As good as many of them are they’ll never be as useful as a post that teaches me a general method of thought applicable on many problems. And if I want to know about some particular topic I’ll just look it up on Google, or go to a library.
It’s not possible for LessWrong to give a rational treatment of every subject. There are just too many of them. Even if we did I wouldn’t be able to carry all that info around in my head. That’s why I need to learn general algorithms for producing rational decisions.
(Also: Even though badger makes it clear in the quote I gave that the post is supposed to about the algorithms used, only one of the comments on it (kilobug’s) is talking about this. Most of the rest are actually talking about toothpaste.)
EDIT: Should I repost this to discussion level?
Yes.