Rationality Quotes Thread December 2015
Another month, another rationality quotes thread. The rules are:
Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.
Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
Do not quote yourself.
Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you’d like to revive an old quote from one of those sources, please do so here.
No more than 5 quotes per person per monthly thread, please.
--Jon Haidt, The Righteous Mind
I remember reading the idea expressed in this quote in an old LW post that predates Haidt’s book (published in 2012), and the idea is probably older still.
In any case, I think this is a very good quote, because it highlights a bias that seems to be more prevalent than perhaps any other cognitive bias discussed here, and it motivates attempts to find better ways to reason and argue. If LessWrong had an introduction intended to motivate why we need better thinking tools, this idea could be presented very early, maybe even in the second or third paragraph.
I think psychologist Tom Gilovich is the original source of the “Can I?” vs. “Must I?” description of motivated reasoning. He wrote about it in his 1991 book How We Know What Isn’t So.
Probably people have seen this before, but I really like it:
-Zig Ziglar
I don’t see the point. The whole point of “motivation doesn’t last” is: you will only be able to sustain effort if there is something in your day-to-day that motivates you to continue, not some distant ideal.
Alastair Roberts
Having recently watched a few of these discussions/debates in the Commons (via YouTube), I’ve noticed how the Speaker is able to temper the mood and add a little levity.
There is one popular political YouTube account called ‘Incorrigible Delinquent’, and he begins each of his uploads with the Speaker quite humorously saying, “You are an incorrigible delinquent!”
This should be developed into a Discussion post (if it hasn’t been already).
-- Tyler Cowen
A.N. Whitehead
There’s this guy called William of Occam who must really be spinning in his grave right now.
I interpreted the Whitehead quote to mean that you should seek the simplest explanation that explains whatever it is you are trying to explain. This is consistent with Occam’s Razor. I assumed that “distrust it” meant subject the explanation to additional tests to confirm or falsify the explanation. So, I didn’t see this quote as contradicting William of Occam; instead it built on Occam’s Razor to describe the essence of the scientific method.
This interpretation is supported if you look at the context of the quote:
Here also is Einstein:
Or in the pithier paraphrase usually quoted:
(someone, could be Einstein)
Depends on what the subject matter is. Sometimes, it really doesn’t need to be complicated.
True, though this “depends” applies to pretty much everything.
I’d be even more suspicious of someone telling me that it’s not that simple.
Matt Levine
Cardinal Richelieu
-Admiral Nimitz from Edwin Layton, And I Was There, 1985, p. 357.
Andrew Gelman
Kreia, KOTOR 2
We should have a thread for anti-rationality quotes some time. KOTOR 2 would be a gold mine. :)
HK-47, assassin droid.
--Ozymandias (most of the post is unrelated)
Looking for mental information in individual neuronal firing patterns is looking at the wrong level of scale and at the wrong kind of physical manifestation. As in other statistical dynamical regularities, there are a vast number of microstates (i.e., network activity patterns) that can constitute the same global attractor, and a vast number of trajectories of microstate-to-microstate changes that will tend to converge to a common attractor. But it is the final quasi-regular network-level dynamic, like a melody played by a million-instrument orchestra, that is the medium of mental information. - Terrence W. Deacon, Incomplete Nature: How Mind Emerged from Matter, pp. 516–517.
Saurabh Jha
That seems like selection bias.
You do a lot of studies and experiments, and filter out most proposed medicine because it causes harm quickly, or doesn’t cause benefits quickly enough or at all. Then you market whatever survived testing. Obviously, if it’s still harmful, the harms will show up only slowly, while the benefits will show up quickly—otherwise you would have filtered it out before it reached the consumer.
This is like saying engineering disproportionately channels optimism, because almost all the appliances you buy in the store work now and only fail later. If they had failed immediately, they would have been flagged in QC and never got to the shop.
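A toy simulation makes the filtering effect concrete (all numbers, including the 2-year trial window and the harm-delay distribution, are made up purely for illustration):

    import random

    # Hypothetical model: every candidate drug carries a harm that surfaces
    # after some random delay. Trials only observe patients for a limited
    # window, so only fast-surfacing harms get a candidate rejected.
    random.seed(0)
    TRIAL_YEARS = 2
    years_to_harm = [random.expovariate(1 / 5) for _ in range(10_000)]

    # The filter: anything whose harm shows up during the trial is dropped.
    approved = [t for t in years_to_harm if t > TRIAL_YEARS]

    print(len(approved), "of", len(years_to_harm), "candidates approved")
    print("mean years to harm, all candidates:", sum(years_to_harm) / len(years_to_harm))
    print("mean years to harm, approved only:", sum(approved) / len(approved))

The approved drugs show their harms years later on average: the quick failures never reached the consumer, so slow-to-appear harm is a property selected for by the testing process itself.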
If an appliance you buy fails, then you know that it fails. If a drug reduces your IQ by 5 points, you won’t know. Drugs also don’t get tested for whether or not they reduce your IQ by 5 points.
Yes, it’s still a bias.
The difference is, if they fail, you can always buy a new appliance. You can’t buy a new body.
For some underwhelming value of “always”, and anyway appliances aren’t all that engineering makes.
Off the top of my head, cases where “harms take longer to show up & disprove than benefits” outside medicine include leaded gasoline, chlorofluorocarbons, asbestos, cheap O-rings in space shuttles, the 1940 Tacoma Narrows Bridge, the use of two-digit year numbers...
Look at Feynman’s analysis. I’d say this is a good example of disproportionate channeling of optimism.
Yes. My point was that disproportionate channeling of optimism isn’t something specific to medicine (let alone to evidence-based medicine).
EDIT: Hmm, I guess I originally took “disproportionally” to mean “compared to how much other things channel optimism” whereas it’d make more sense to interpret it as “compared to how much medicine channels pessimism”.
Are there any other systems for judging medicine that more accurately reflect reality? I know very little about medicine in general, but it would be interesting to hear about any alternate methods that get good results.
It’s hard to say how effective various alternative styles of medicine happen to be.
There’s research suggesting that Mormons can distinguish other Mormons from non-Mormons by looking at whether the other person’s skin looks healthy. Mormons also seem to live 6 to 10 years longer than other Americans.
On the other hand, the nature of claims like this is that it’s hard to have reliable knowledge about them.
Elon Musk
“It is a mistake to hire huge numbers of people to get a complicated job done. Numbers will never compensate for talent in getting the right answer (two people who don’t know something are no better than one), will tend to slow down progress, and will make the task incredibly expensive.”
Elon Musk
Merry Christmas beloved LessWrong family. I think I finally get the format of these threads. How did I not read them properly earlier!
“My biggest mistake is probably weighing too much on someone’s talent and not someone’s personality. I think it matters whether someone has a good heart.”
I recently watched a company go from a billion in revenues to zero when a founder stole $90 million from the company.
Integrity, humility, and doing your best is by far the most important consideration when evaluating whether to work for someone.
Elon Musk
--Jon Haidt, The Righteous Mind
-- The killer shortly before killing his victim in No Country for Old Men
--Artabanus, uncle of Xerxes; book 7 of Herodotus’s Histories (I could swear I’d seen this on a LW quote thread before, but searching turns up nothing.)
(To make it clear: I have never seen the movie in question, so this is not a comment on the specifics of what happened) Just because it turned out poorly doesn’t make it a bad rule. It could have had a 99% chance to work out great, but the killer is only seeing the 1% where it didn’t. If you’re killing people, then you can’t really judge their rules, since it’s basically a given that you’re only going to talk to them when the rules fail. Everything is going to look like a bad rule if you only count the instances where it didn’t work. Without knowing how many similar encounters the victim avoided with their rule, I don’t see how you can make a strong case that it’s a bad (or good) rule.
That kinda depends on the point of view.
If you take the frequentist approach and think about limits as n goes to infinity, sure, a single data point will tell you very little about the goodness of the rule.
But if it’s you, personally you, who is looking at the business end of a gun, the rule indeed turned out to be very, very bad. I think the quote resonates quite well with this.
Besides, consider this. Let’s imagine a rule which works fine 99% of the time, but in 1% of the cases it leaves you dead. And let’s say you get to apply this rule once a week. Is it a good rule? Nope, it’s a very bad rule. Specifically, your chances of being alive at the end of the year are only 0.99^52 = about 60%, not great. Being alive after ten years? About half a percent.
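For what it’s worth, the arithmetic can be checked in a few lines of Python:

    p = 0.99            # chance of surviving one application of the rule
    print(p ** 52)      # one year of weekly use: ~0.59
    print(p ** 520)     # ten years: ~0.0054, i.e. about half a percent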
I agree. But this is not how I saw the quote. For me it is just a cogent way of asking “is your application of rationality leading to success?”
Shorn of context, it could be. But what is the context? I gather from the Wikipedia plot summary that Chigurh (the killer) is a hit-man hired by drug dealers to recover some stolen drug money, but instead kills his employers and everyone else that stands in the way of getting the money himself. To judge by the other quotes in IMDB, when he’s about to kill someone he engages them in word-play that should not take in anyone in possession of their rational faculties for a second, in order to frame what he is about to do as the fault of his victims.
Imagine someone with a gun going out onto the street and shooting at everyone, while screaming, “If the rule you followed brought you to this, of what use was the rule?” Is it still a rationality quote?
I saw the movie and the context of the quote was that the killer was about to kill a guy that was chasing him. So we could say that the victim underestimated the killer. He was not randomly selected.
Thomas Watson
These days we often have people who do think but don’t do the other well enough.
I think that this part of the quote is an overstatement.
I actually think it’s naive bullshit.
Alastair Roberts
Jim
This appears to be empirically incorrect, at least in some fields. A few examples:
Creationists are much less willing to adjust their beliefs on the basis of evidence and argument than scientifically-minded evolutionists, but evolution rather than special creation is the consensus position these days.
It looks to me (though I confess I haven’t looked super-hard) as if the most stubborn-minded economists are the adherents of at-least-slightly-fringey theories like “Austrian” economics rather than the somewhere-between-Chicago-and-Keynes mainstream.
Consensus views in hard sciences like physics are typically formed by evidence and rational argument.
Depends on what you mean by “consensus”. For example, in some organizations it means “we will not make a decision until literally everyone agrees with it”. In which case, stubborn people make all the decisions (until the others get sufficiently pissed off and fire them).
Probably true. But I don’t think that’s the sort of thing Jim is talking about in the post redlizard was quoting from; do you?
Oh. I hadn’t followed the link before commenting.
Now I have… and I don’t really see the connection between the article and consensus. The most prominent example is how managers misunderstood the technical issues with the Challenger: but that’s about putting technically unsavvy managers into positions of power over engineers, not about consensus.
(I wonder if this is an example of a pattern: “Make a statement. Write an article mostly about something else, using arguments that a reader will probably agree with. At the end, a careless reader is convinced about the statement.”)
Technically unsavvy managers who insisted that the engineers tell them what they wanted to hear, i.e., who insisted on being included in the consensus and then refused to shift their position.
I think that level of logical rigour is par for the course for this particular author.
We have a special name for this; it’s called science, and it’s rather rare. It might still be a pretty good generalization of all human behavior to say that consensus tends to be dominated by those who won’t change their opinion.
Actually, I don’t think it’s a good generalization for reasons other than science. Most conflicts or debates devolve to politics, where people support someone instead of some opinion or position. And in politics, the top person or party is often replaced by a different one.
Even a lot of what gets called “science” isn’t.
-Loner Wolf
-Robert Wiblin
How do you plan to do this without counterfactual knowledge?
take your pick
It requires a good handle on experimental design, but biostatisticians do this day in, day out. Hopefully risk analysts in defense institutions do this too.
The original quote said to rate each intervention by how much it helped or hurt the situation, i.e. its individual-level causal effect. None of those study designs will help you with that: They may be appropriate if you want to estimate the average effect across multiple similar situations, but that is not what you need here.
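In standard potential-outcomes notation (my gloss on the distinction, not anything from the original quote):

    % Y_i(1): outcome of situation i under intervention; Y_i(0): without it.
    % The individual-level causal effect the original quote asks for:
    \tau_i = Y_i(1) - Y_i(0)
    % Only one of Y_i(1) and Y_i(0) is ever observed for any given i,
    % so \tau_i is not identifiable from data on situation i alone.
    % What those study designs can target is the average effect over
    % a population of similar situations:
    \mathrm{ATE} = \mathbb{E}[\, Y(1) - Y(0) \,]

This is the fundamental problem of causal inference: study designs get you from observed data to the average effect, not to the effect in the one case you care about.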
This is a serious question. How do you plan to rate the effectiveness of things like the decision to intervene in Libya, or the decision not to intervene in Syria, under profound uncertainty about what would have happened if the alternative decision had been made?
Yes, I concede that cross-level inferences between aggregate causes (averages over multiple similar situations) and individual-level causes have less predictive power than inferences within a single level. However, I reckon they’re the best available means to make such an inference.
Analysts have tools to model and simulate scenarios. Analysis of competing hypotheses is a staple of intelligence methodology. It’s also used by earth scientists, but I haven’t seen it used elsewhere. Based on this approach, analysts can:
make predictions about outcomes in Libya both with and without intervention
when they choose to intervene or not to intervene, record the actual outcomes
over the long term, by comparing predicted and actual outcomes, decide whether to re-adjust their predictions post-hoc for the counterfactual branch (a minimal sketch of this step follows below)
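A minimal sketch of that comparison step in Python (the numbers and the squared-error scoring rule are my own illustration, not from the comment):

    # Forecasts for the branch that was actually taken, on some agreed
    # outcome scale, paired with what actually happened.
    forecasts = [
        (0.7, 0.40),   # (predicted, actual)
        (0.3, 0.35),
        (0.8, 0.60),
    ]

    mse = sum((pred - actual) ** 2 for pred, actual in forecasts) / len(forecasts)
    print("mean squared error of past predictions:", round(mse, 3))
    # If this error is large, the same prediction machinery produced the
    # counterfactual-branch estimates, so those should be distrusted and
    # adjusted by a comparable amount.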
I’m not trying to downplay the level of uncertainty. Just that the methodological considerations remain constant.
Just for completion, Anders_H is one of those guys.
How self-referentially absurd. More precisely, epidemiologists do this day in, day out using biostatistical models, then applying causal inference (the counterfactual-knowledge part included). I said biostatisticians because epidemiology isn’t in the common vernacular. Ironically, counterfactual knowledge is, to those familiar with the distinction, distinctly removed from the biostatistical domain.
Just for the sake of intellectual curiosity, I wonder what kind of paradox was just invoked prior to this clarification.
It wouldn’t be the Epimenides paradox, since that refers to an individual making a self-referentially absurd claim:
Anyone?
Yes, Anders_H is a Doctor of Science in Epidemiology. He’s someone worth listening to when he tells you about what can and can’t be done with experiment design.
Oooh, an appeal to authority. If that is the case he is no doubt highly accomplished. However, that need not translate to blind deference.
This is a text conversation, so rhetorical questions aren’t immediately apparent. Moreover, we’re in a community that explicitly celebrates reason over other modes of rhetoric. So I interpreted his question about counterfactual knowledge as sincere rather than disingenuous.
Yes, but if you disagree you can’t simply point to
biostatisticians do this day in day out
and a bunch of Wikipedia articles; you have to actually argue the merits of why you think those techniques can be used in this case.

That is a tendentious way of comparing the two: a cold, abstract “level of improvement” against the more concrete “dollars” and very concrete “dead people”. It suggests the writer is predisposed to find that intervention is a bad idea.
But what is improvement, but resources then available to apply to better things, and live people living better lives?
And why the reference class “Western”?
Presumably, Wiblin is talking about Western bombing of ISIS in Syria. If one finds that Turkish interventions have been effective and American interventions haven’t, say, then that’s an argument that Americans shouldn’t intervene now (but Turks should).
Choose your reference class, get the result you want. Is Turkey “Western” or not? It wants to join the EU (but hasn’t been admitted yet). Russia is bombing Syria. Why exclude it from the class of foreign interventions? For that matter, I don’t know what military actions, if any, Turkey has taken in Syria, but that would also be a foreign intervention.
Not to mention the smallness of N in the proposed study and the elastic assessment.
I googled some of the phrases in the OP but only got hits to the OP. Is this even a quote?
Rating each decision on a scale of 1 to 10 and then taking a weighted average is a recipe for biasing the result against intervention, since you’ve created a hard upper limit for how much you count an intervention as helping, so you’ll count a successful intervention as 10 and be unable to count a successful intervention that does even more good as more than 10. (This has a similar problem at the low end of the scale, but that doesn’t affect the final result since you can’t go below zero intervention.)
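A quick illustration of that ceiling effect in Python (hypothetical numbers):

    # "True" benefit of four interventions on an unbounded scale,
    # where one success is far larger than the rest.
    true_benefit = [2, 6, 9, 40]

    # Forced onto a 1-10 scale, the outlier success is clipped to 10.
    rated = [min(max(b, 1), 10) for b in true_benefit]

    print(sum(true_benefit) / len(true_benefit))  # 14.25
    print(sum(rated) / len(rated))                # 6.75: the big win mostly vanishes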
This also produces bad results in cases where the intervention failed because it was insufficient. You’d end up concluding that intervention is bad when it may just be that insufficient intervention is bad. This method has clause 2 to cover similarity of case, but not similarity of intervention, and at any rate “similarity” is a fuzzy concept. If bombing half the country is a disaster and bombing a whole country succeeds, is bombing half a country “similar” to bombing a whole country? (Actually, you usually end up compressing all the dispute over intervention into a dispute over how similar two cases are.)
And it’s generally a bad idea to put on a numerical scale things that you can’t actually measure numerically. It gives a false appearance of accuracy and precision, like a company executive who wants to see figures for his company improve but doesn’t actually care where the figures come from.
Also, “level of improvement created” is subject to noise. It is possible for an improvement to fail for reasons unrelated to the effectiveness of the intervention, like if the country gets hit by a meteor the next day (or more realistically, gets invaded or attacked the next day).
Basically one huge problem here is that there isn’t enough data compared to the number of variables involved.
Not to mention that this is a problem in what Taleb would call extremistan, i.e., the distribution of possible outcomes from intervening, or not-intervening, are fat-tailed and include a lot of rare possibilities that haven’t yet shown up in the data at all.
-Bill Gates, quoting someone else
-Elon Musk in the same vid