Rationality Quotes March 2014
Another month has passed and here is a new rationality quotes thread. The usual rules are:
Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
Do not quote yourself.
Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you’d like to revive an old quote from one of those sources, please do so here.
No more than 5 quotes per person per monthly thread, please.
-- Betty Medsger
Tears in my eyes… awesome beyond words. The fact that this wasn’t fictional makes it brilliant. The fact that the burglars were acting against an overreaching institution is just icing on the cake. It’s been some time since I heard a story that warmed my heart so much.
How did they know they considered leaving a thank-you note? Did they confess afterwards?
Yes.
Well. Sort of. Not exactly.
They were stealing government files on domestic surveillance in order to leak them—the quote comes from the journalist who was their press contact (see the link beneath the quote).
-- Paul Graham, The Acceleration of Addictiveness
This aged like fine wine.
Thank you for linking to the piece where the quote was drawn. Great article!
BTW, here is Eliezer’s article on the same topic.
Bruce Schneier, expert in security systems
Could you link the source of the quote?
It’s in chapter 16 of Liars and Outliers, which AFAIK has no URL.
Found it with Google.
“Consider the people who routinely disagree with you. See how confident they look while being dead wrong? That’s exactly how you look to them.”—Scott Adams
I sense an asymmetry when both sides express mere high confidence in their primary conclusions, but they keep on claiming I’ve got some sort of absolute faith.
-- John R. Anderson, Lynne M. Reder & Herbert A. Simon: Applications and Misapplications of Cognitive Psychology to Mathematics Education
Jeremy Clarkson
Yes, but on net primitive tribes (at least in the short-run) seem to be made worse off from contact with technologically advanced civilizations.
Sure, but that doesn’t change the fact that they could learn a lot from us. Indeed, if that weren’t so, they
a) wouldn’t be primitive and b) wouldn’t be (especially and unusually) harmed by the contact.
Even after controlling for harmful or exploitative behavior by the advanced civilizations?
Yes because of germs.
Depends which primitive tribes. Amerindians died from European diseases and Europeans died from African diseases.
And germs.
Are the ideas harmful?
Some religious views might be.
I’m pretty sure most uncontacted tribes had their own religious weirdness.
Good point, but that doesn’t fall under the “a lot to learn from” rubric. Your general point about contact is likely true.
Pretend I’m a fantastic in-person teacher but if you get near me you will die of some disease. Do you have a lot to learn from me?
As I said, your general point about net value of contact is correct.
Yes. Use a correspondence course...
And if you can’t read?
My reply was not entirely literal; in short, information and disease do not all follow the same vectors. A sterilized phone would be helpful, but of course then you could say what if you don’t know how to use a phone...
So, it’s positive-sum. But anyway, who cares what a primitive tribe learns? We, surely, are the center and purpose of the universe; our gain, however small or large, is the important thing.
Warren Buffett
-- Bertrand Russell, In Praise of Idleness
I once attempted to sum this up on another forum by saying, “Careerism is a subgoal stomp.” Russell, of course, better expresses the broader point that the subgoal stomp of maximizing productivity has almost entirely replaced all discussion of what kind of terminal goals individuals and societies should have.
The problem with spending money—and consumption in general—is the opportunity cost. If by doing something you are not maximizing some value, then it would be better if you did something else.
The same criticism could be also made about production: if you work hard and make profit by creating value, but by doing something else you could create even more value, then it would be better if you did the other thing.
Perhaps the difference is that on the production side people at least try to maximize (of course with all the human irrationality involved), but on the consumption side we often forget to think about it. So we need to be reminded there more.
Not some value—a lot of people are maximizing the wrong values.
One of the reasons why this whole thing is so complicated is that in reality people very rarely optimize for a single value. They optimize for a set of values which usually isn’t too coherent and the weights for these values tend to fluctuate...
Rick and Morty.
The other thing is that he probably can’t know whether you can hide.
After Rick and Morty hid, he started looking for them in a way that made it pretty clear that he knew they were hiding. It didn’t stop him from saying that they couldn’t hide.
-Venkatesh Rao
Megan McArdle quoting or paraphrasing Jim Manzi.
[Edited in response to Kaj’s comment.]
10% isn’t that bad as long as you continue the programs that were found to succeed and stop the programs that were found to fail. Come up with 10 intelligent-sounding ideas, obtain expert endorsements, do 10 randomized controlled trials, get 1 significant improvement. Then repeat.
Unfortunately we don’t really have the political system to do this.
But I have this great idea that will change that!
...Oh.
Unfortunately, governments are really bad at doing this.
Humans in general are very bad at this. The only reason capitalism works is that the losing experiments run out of money.
That’s a very powerful reason.
True, but that doesn’t mean we’re laboring in the dark. It just means we’ve got our eyes closed.
Unfortunately, the people involved have an incentive to keep them closed.
I don’t think that’s really relevant to the original quote.
It depends on how many completely ineffectual programs would demonstrate improvement versus current practices.
Some relevant links:
“The Iron Law Of Evaluation And Other Metallic Rules”, Rossi 1987
“The Efficacy of Psychological, Educational, and Behavioral Treatment: Confirmation From Meta-Analysis”, Lipsey & Wilson 1993
“Randomized Controlled Trials Commissioned by the Institute of Education Sciences Since 2002: How Many Found Positive Versus Weak or No Effects?”, Coalition for Evidence-Based Policy 2013 (excerpts)
“One Hundred Years of Social Psychology Quantitatively Described”, Bond et al 2003
This is a higher rate than I’d expected. It implies that current policies in these three fields are not really thoroughly thought out, or at least not to the extent that I had expected. It seems that there is substantial room for improvement.
I would have expected perhaps one or two percent.
Remember that programs will not even be tested unless there are good reasons to expect improvement over current protocol. Most programs that are explicitly considered are worse than those that are tested, and most possible programs are worse than those that are explicitly considered. Therefore we can expect that far, far fewer than ten percent of possible programs would yield significant improvements.
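To make that base-rate argument concrete, here is a minimal back-of-the-envelope sketch (every number in it is an illustrative assumption, not data from the thread): even if only 1% of possible programs truly work, a moderately discriminating expert filter is enough for roughly 10% of the programs that actually reach a trial to work.

```python
# Back-of-the-envelope sketch; all numbers below are assumptions.
base_rate = 0.01        # fraction of possible programs that truly work
sensitivity = 0.50      # P(experts endorse | program works)
false_endorse = 0.045   # P(experts endorse | program doesn't work)

# P(program works | endorsed and therefore tested), by Bayes' rule.
p_endorsed = base_rate * sensitivity + (1 - base_rate) * false_endorse
p_works_given_tested = base_rate * sensitivity / p_endorsed
print(f"{p_works_given_tested:.2f}")  # prints 0.10
```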
That is true. However, there is a second filtering process, after filtering by experts; and that is what I will refer to as filtering by experiment (i.e. we’ll try this, and if it works we keep doing it, and if it doesn’t we don’t). Evolution is basically a mix of random mutation and filtering by experiment, and it shows that, given enough time, such a filter can be astonishingly effective. (That time can be drastically reduced by adding another filter—such as filtering-by-experts—before the filtering-by-experiment step.)
The one-to-two percent expectation that I had was a subconscious expectation of the comparison of the effectiveness of the filtering-by-experts in comparison to the filtering-by-experiment over time. Investigating my reasoning more thoroughly, I think that what I had failed to appreciate is probably that there really hasn’t been enough time for filtering-by-experiment to have as drastic an effect as I’d assumed; societies change enough over time that what was a good idea a thousand years ago is probably not going to be a good idea now. (Added to this, it likely takes more than a month to see whether such a social program actually is effective or not; so there hasn’t really been time for all that many consecutive experiments, and there hasn’t really been a properly designed worldwide experimental test model, either).
That’s one possible explanation.
Another possible explanation is that there is a variety of powerful stakeholders in these fields and the new social programs are actually designed to benefit them and not whoever the programs claim to help.
Remember, you expect 5% to give a statistically significant result just by chance...
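A minimal simulation sketch of that 5% figure (the sample sizes and the choice of a two-sample t-test are my assumptions, picked for illustration): run many trials of a program with zero true effect and count how often it clears p < 0.05 anyway.

```python
# Simulate null-effect RCTs and count spurious "significant" results.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
n_trials, n_per_arm = 10_000, 200

false_positives = sum(
    ttest_ind(rng.normal(0, 1, n_per_arm),   # control arm
              rng.normal(0, 1, n_per_arm)    # treatment arm, no true effect
              ).pvalue < 0.05
    for _ in range(n_trials)
)
print(false_positives / n_trials)  # comes out near 0.05 by construction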
That’s only true of the programs which can be expected to produce no detriments, surely?
I think the quote is from Jim Manzi rather than Megan McArdle, given that McArdle starts the article with
and later on in the article it says
suggesting that the whole article after the first paragraph is a quote (or possibly paraphrase).
-- C.S. Lewis, The Screwtape Letters
This is better than the other Screwtape quote, but—given the example of Ayn Rand—I think Lewis still gets causality backwards where smart Marxists were concerned. I think they started by being right about God and “materialism” when most people were insistently wrong (or didn’t care about object-level truth). This gave them an inflated view of their own intelligence and the explanatory power of Marxism.
I admit, I get horribly mind-killed whenever I realize I’m reading something by C.S. Lewis, especially anything from The Screwtape Letters. That’s because years ago, the arguments in this book were used against me by a girl I was dating, as a means to end our relationship (me being non-religious); she herself had been convinced by her friends and family that we should break up.
That said, I was able to read this and appreciate it more clearly if I substituted the quote like so:
If we are attempting to spread good rationality around, would it be efficient not to try to convince people that rationality is “true”, but instead to promote good rationality by saying that rationality is “strong, stark, or courageous—that it is the philosophy of the future”?
So you propose to spread rationality by encouraging irrationality?
Even assuming that this will work — that is, not just get people to buy into rationality (that part is simple) but actually become more rational, after this initial dose of irrational motivation — what do you suggest we do when our new recruits turn around and go “Hey, wait a tick; you guys got me into this through blatantly irrational arguments! You cynically and self-servingly pandered to my previously-held biases to get me on your side! You tricked me, you bastards!”? Grin and say “worked, didn’t it”?
That seems to be what the quote is arguing.
The quote is spoken by a devil, who’s deliberately seeking to destroy and devour a person…
The speaker does not himself believe that materialism is true; he is giving advice on how to make another believe a falsehood.
And do you think it’s a good idea?
We aren’t trying to promote the idea that rationality is true. We are trying to promote that it is useful.
More accurately, we have defined “rational” to mean “useful”, and when we argue that something is rational, we are arguing that it’s useful.
If you are, you are being irresponsible because you are not checking what people are going to do with this useful thing.
I can’t know for sure that it won’t come back to bite me, but I suspect helping people generally tends to be helpful. There are things that are easier to use to cause harm than to help, like nuclear weaponry, but in general helpful things seem to have improved humanity’s standard of living, and made them care more about morality.
I don’t actually think that teaching rationality is dangerous. I think that LW expects it to have an edifying effect, to change values for the better. So it is the claim of purely instrumental rationality that is the problem.
The only senses in which you wouldn’t say it’s true are those in which that would be a type error.
We are trying to promote the idea that it is useful because it is true.
While it is true that you shouldn’t be a materialist just because it’s fashionable, there’s a fine line between saying “you shouldn’t be a materialist just because it’s fashionable” and “my opponents are just materialists because it’s fashionable”. The second is a straw man argument, and given that this is CS Lewis putting words in the mouth of Satan, I read this as the straw man argument. Needless to say, a straw man argument is not a good rationalist quote.
That sounds to me like you’re assuming that Lewis wrote the book so that he could have the devil say strawmannish things, in order to mock the devil. Which is not the case at all—the demon writing the letters is much more similar to MoR!Quirrell, displaying a degree of rationality-mixed-with-cynicism which it uses to point out ways by which the lives of humans can be made miserable, or by which humans make their own lives miserable. Much of it can be read as a treatise on various biases and cognitive mistakes to avoid, made more compelling by them being explained by someone who wants those mistakes to be exploited for actively harming people.
I read that quote as saying that the Devil (or a demon) deceives people by making them believe those things, not that the Devil believes these things himself. That’s how demons behave, they lie to people. This one lies to people about why one should be a materialist and the people fall for it. The point is not to mock the demon, who in the quote is acting as a liar rather than a materialist, but to mock materialists themselves by implying that they are materialists for spurious reasons.
Of course, Lewis has plausible deniability. One can always claim he’s not attributing anything to materialists in general—you’re supposed to infer that; it’s not actually stated.
Edit: Also, remember when Lewis wrote that. 1942 wasn’t like today, when it’s possible to say you don’t believe in the supernatural and (if you live in the right area) not suffer too many consequences except not ever being able to run for political office. Any materialist at the time who claimed he was courageous could easily be just responding to persecution, not claiming that that was his reason for being a materialist. Mocking materialists for that would be like mocking gay pride parades today on the grounds that pride is a sin and a form of arrogance—pride in a vacuum is, but pride in response to someone telling you you’re shameful isn’t.
The demon is not just lying at random—the demon is lying with the purpose of getting a certain reaction (in this case, getting the human to subscribe to the philosophy of materialism). The original quote is advice on how to use the human’s cognitive biases against him, in order to better achieve that goal.
The point of the quote isn’t materialism. That could be replaced with any other philosophy, quite easily. The point of the quote is that, for many people, subscribing to a philosophy isn’t about whether that philosophy is true at all; it’s more about whether that philosophy is popular, or cool, or daring.
The point isn’t to mock the demon, or the materialist. The point is to highlight a common human cognitive mistake.
You correctly describe what the quote literally says, but there’s a fine line between “I’m just writing a story which requires that these particular materialists be biased” and “I’m accusing materialists in general of being biased like this”. The former is often a way for authors to hint at the latter without saying it.
I could easily write a story where the Devil tempts Jews into baking matzohs using the blood of Christian babies. I could then argue that I’m not really accusing any Jews except my fictional characters of anything, and that this is simply a story about how people can do bad things for bad reasons. But you would be completely justified in not believing me when I say that.
Lewis is not accusing materialists in general of having that bias—Lewis is accusing humans in general of having that bias. The idea that humans have that bias, and that it can be exploited to convince a human to subscribe to a given philosophy independently of whether that philosophy is true, is rather the point of the quote.
Well, I think it is both.
Lewis described how humans can subscribe to a given philosophy independently of its correctness. That part is completely rational.
But he didn’t use a random example to describe this bias. -- Just look at all the other books he wrote; they all deal with the same topic. His choice of the Devil is not the same as e.g. Tolkien’s choice of elves. Tolkien wrote fiction, but Lewis wrote fiction as a propaganda tool. Tolkien didn’t believe in elves, but Lewis did believe in the Devil. -- Therefore it seems to me very likely that he wanted his readers to think about this specific example instead of using this kind of reasoning generally.
In other words, his work is the equivalent of a hypothetical LW article: “Top 10 cognitive biases that Republicans have, and how they influence their voting”. (Assume that the author wrote a dozen articles on Republicans, and none about anything else.) Cognitive biases: okay. Selective attention to one specific group: not okay.
Since the author thought materialism was thoroughly false, this constitutes one of the mildest attacks on the character of materialists he could have written, short of just omitting the topic altogether. Thinking your opponents are wrong because they were misled by epistemic sleight of hand (from malevolent ageless invisible all-seeing schemers, no less) is nicer than thinking they’re wrong due to stupidity or sheer contrarian defiance.
The Screwtape Letters is a hundred-page collection of mistakes to be avoided. It seems silly to assume hostile intent behind this particular passage, rather than reading it in the same “don’t be misled in this particular way” spirit as the rest of the text.
That’s actually one of the reasons I’m more inclined to interpret it that way. If Lewis thought there were also materialists with superb reasoning abilities who believed in materialism by correctly exercising those reasoning abilities, I wouldn’t think a passage about poorly reasoned materialists was meant to be a generalization.
Hypothesis: Lewis was well aware that there are materialists who are good people. Yet his religion forced him to think badly of materialists. He handled this cognitive dissonance by thinking badly of materialists in one of the weakest ways possible. In other words, he knew too much to be able to believe that all materialists are power-hungry maniacs, but he couldn’t avoid at least politely thinking they were all materialists because of everyday human foibles.
It’s like Lewis’s beliefs about homosexuality. He was forced by his religion to believe that homosexuality is a sin and that all gay people should abstain from sex, but he tried to be as polite to them as he could within the confines of these beliefs and did not write about how gays are a menace to our children.
Why do you think he didn’t think this? I’m having a hard time not seeing this exchange as you projecting negativity onto Lewis, when he was writing about fully general cognitive biases and weaknesses with compassion towards all humans who share those biases.
If he thought that materialists became materialists by reasoning correctly, he would either have been a materialist, or would have taken one of a particularly narrow set of positions (such as “materialism is based on correct reasoning, and how something can be false and correctly reasoned at the same time is one of God’s mysteries” or “materialists are only materialists because they start with different premises from me, but correctly reason from those premises”) which as far as I know he didn’t. (Or else taken no opinion on materialism, which he wasn’t going to do.)
I don’t think he was mocking, but I do think he was correct. I claim that it’s perfectly true that most materialists today are materialists for spurious, non-object-level reasons. The same goes for all other widespread philosophies. People in general are biased and also don’t care about philosophical truth much.
I think the non-object-level reasons that the devil names are interesting.
I think few New Atheists care about whether atheism is strong or courageous. They care more about the fact that it’s what the intelligent people believe, and they also want to be intelligent.
I suspect that most members of the Democratic Party are Democrats for spurious reasons too. But a Republican who lists a bunch of human foibles and writes a scenario that specifically names Democrats as being subject to them is probably attacking Democrats, at least in passing, not just attacking human beings.
Don’t let yourself be mindkilled. Arguments aren’t soldiers.
Focus on the true things you can say about the world.
I am tempted to reply to this with “May the Force be with you”, but instead I’ll ask “just what are you trying to say?” You just gave me a reply which consists entirely of slogans, with no hint as to how you think they apply.
I think the argument I made was fairly obvious, but let me break it down.
You care about who’s attacking whom. If you are in that mindset, arguments are soldiers. You treat the argument that there are atheists who are atheists because it’s cool to be an atheist as a foreign soldier that has to be fought. A foreign soldier that doesn’t play according to the rules.
Those considerations don’t matter if you want to decide whether there are atheists who are motivated by the coolness of being an atheist. If you care about truth, you want to have true beliefs about how much atheists are motivated by the coolness factor of atheism. It doesn’t matter for this discussion whether that argument is fair. What matters is whether it’s true.
I also want to have true beliefs about Lewis and about Lewis’s writings. Whether the statement is an attack matters for those beliefs.
Whether the statement is an attack also matters for whether it is a rationalist quote. A proper rationalist quote should only be about its apparent subject and should not try to sneak in such an attack under the radar.
It is possible to produce an endless sequence of statements with truth values (or to go through literature and extract an endless sequence of prewritten statements with truth values). Nobody has the time to evaluate all of them; we must pick and choose between them.
The implicit attack “all materialists are materialists because they are flawed humans who make mistakes” also has a truth value, and this truth value is something that I can care about just as much as the truth value of the statement’s literal words. Pointing out the attack is not ignoring truth in favor of something else, it’s recognizing that there’s something else there whose truth may be in question as well.
4 is wrong. The demon is talking about THIS GUY. The subject (or object as appropriate) is “Your man”, “He”, “him”, throughout. Neither the demon nor Lewis is talking about people who really think things through, nor implying that they don’t exist.
Neither the demon nor Lewis is saying words which literally read like that, but words have implications beyond the literal.
Ehhh. The consistency of the pronoun usage is so strong that I would expect him to have generalized somewhere if he (either one) meant it.
The class he’s a part of is ‘philosophically weak borderline Christians’, not ‘Materialists’. After all, the guy isn’t a materialist. And if you’re a philosophically weak borderline Christian, the easiest route to materialism is indeed how useful it is, not a philosophical argument, because these folks don’t give a whit about philosophy.
Judging authors by a single quote without knowing the context is a bad idea. Rousseau begins one of his works by saying “Man is born free, and everywhere he is in chains.”
Taken on its own you might think that Rousseau is somehow criticizing that man is in chains. He isn’t. He advocates that man be chained by the social contract. I really had a hard time with ideas like this because I had read the book and my history teacher hadn’t, so discussing Rousseau was really hard.
Why do you care about Lewis?
We are a group of smart people. There’s nothing wrong with a quote having multiple layers of meaning and saying something in addition to its apparent subject.
On LW we care about rationality. Having accurate beliefs about what makes people become atheists is useful for that purpose.
On the other hand having accurate beliefs about CS Lewis is less important.
The truth value of that statement matters a great deal, but that in no way implies that we shouldn’t talk about that statement or that it has no place on LW.
You didn’t attack it on the grounds that it’s wrong and that there is evidence that it’s wrong, but on the grounds that it’s an unfair attack.
See filtered evidence. It is completely possible to mislead people by giving them only true information… but only those pieces of information which support the conclusion you want them to make.
If you had a perfect superhuman intelligence, perhaps you could give it a dozen pieces of information about why X is wrong and zero information about why Y is wrong, and yet the superintelligence might conclude: “Both X and Y are human political sides, so I will just take this generally as evidence that humans are often wrong, especially when discussing politics. Because humans are so often wrong, it is very likely that the human who is giving this information to me is blind to the flaws of one side (which in this specific case happens to be Y), so all this information is only very weak evidence for X being worse than Y.”
But humans don’t reason like this. Give them a dozen pieces of information about why X is wrong, and zero information about why Y is wrong; in the next chapter give them a dozen pieces of information about why Y is good and zero information about why X is good… and they will consider this strong evidence that X is worse than Y. -- And Lewis most likely understands this.
I doubt that any LW member would take all of his information about the value of atheism from Lewis. If you let yourself be convinced that atheism is wrong by reading Lewis, then your belief in atheism was very weak in the first place.
I have a hard time imagining pushing anyone on LW into a crisis of faith about atheism from which he wouldn’t come out with a better belief system than he started with. If someone discovers that he actually follows atheism because it’s cool and works through his issues, he might end up following atheism for better reasons.
I agree that this attack would not convince a typical LW-er, but I would claim that an unconvincing attempt to mislead with the truth is still an attempt to mislead with the truth and as such is ineligible to be a good rationality quote.
This quote taken together with the basic LW wisdom can be useful. But taken separately from any LW context, it would probably just push the reader a little towards religion.
In other words, a LW reader has no problem imagining a complementary and equally valid quote (actually he probably already did something similar before reading Lewis) like:
Why would you want to do that?
The effects of little pushes are complicated. Vaccination is about little pushes.
The difference is that many Christians already argue that one should be Christian because it’s the moral thing to do and it’s socially beneficial. Christians don’t engage in the same behavior of trying to make people think it’s true that atheists do.
Christian missionaries actually do a lot of socially beneficial work in order to convince people of Christianity.
As far as “traditional wisdom” goes, it gets interesting. I don’t think that Christianity grows in China because the Chinese consider it traditional wisdom.
Mormons argue that the fact that the religion grows as fast at it does is a sign that it’s true. They don’t see it as a religion of the past but as one of the future.
Despite talking about ancient wisdom New Age folks use the word ‘new’ as part of their brand.
I think it’s very useful to understand why certain movements win, and to move past your first stereotypes and actually seek understanding. Even Hitler, who wanted to bring back traditional values, was very clever in painting the status quo as going to end soon and the future as either his nationalism or communism.
He made it the cool philosophy that the young people at the universities wanted to follow before he had success with elections.
Your understanding of 1942 is amazingly flawed. No-one in the developed world was persecuted for being a materialist at that time, but plenty were for their religion. Moreover, the fashionable belief at the time was dialectical materialism, and part of the claim made for it, by dialectical materialists themselves, was that it was the philosophy of the future.
Well, my first thought was Bertrand Russell having his appointment at the City College of New York revoked, which was around 1940, although that was mostly because of his beliefs about sex (which are still directly related to his disbelief in religion). Religion classes in public schools were legal until 1948, and compulsory school prayer was legal until 1963. “In God We Trust” was declared the national motto of the US in 1956.
Lewis’ point of reference is the UK, not the US. I don’t know how much that changes the picture.
I think the US counts as part of “the developed world”, however.
Like Salemicus said, none of those things are persecutions. The closest of your examples is the Bertrand Russell affair, but even you admit that wasn’t over his materialism.
By way of contrast, there were in fact places in the developed world during the 1930s-1940s where one could be persecuted for not being a materialist. And by persecuted, I mean religious people were being semi-systematically arrested and/or executed (not necessarily in that order).
Saying “people in this time period are persecuted for their religion” implicitly limits it to Western democracies unless you specifically are talking about something else. It’s like claiming that “in the 1980s, women weren’t allowed to vote”. That’s literally true, because there are countries where in the 1980s (or even today) women could not vote, but it’s not what most people would mean by saying such a thing.
Furthermore, the existence of laws implies persecution. If school prayer is compulsory, that means that people in schools are punished for not praying or have to pray against their will for fear of punishment. That’s what “compulsory” means.
(Besides, if you’re going to interpret it that way. I could point out that in countries like Saudi Arabia, people could be killed for not believing in God, and that this wasn’t any better in the 1930′s in most of those countries.)
Spain was certainly a democracy at the time these persecutions were happening; granted, it was engaged in a civil war, but it was the democratic side whose partisans were doing the persecuting. Furthermore, “Western democracies” wasn’t a stable category during the period in question, so it was perfectly reasonable for a religious person living in a Western democracy to worry that his country would stop being democratic shortly.
So can you cite an example of someone being imprisoned or executed for refusing to engage in school prayer?
I didn’t say people were imprisoned or executed; I used it as an example of persecution. It certainly was that. You may think that persecution only means being imprisoned or executed, but I don’t agree with that.
Bertrand Russell wasn’t a materialist; he believed in Universals. I think you are confusing “materialist” with “people I agree with”.
If accepting universals made one not a materialist, that would rule out some of the great Australian materialists, such as David Armstrong. Thus, that would clearly be a non-standard use of the label “materialist.” Perhaps there are details of Russell’s account of universals which are not shared by Armstrong’s which make it anti-materialist, but you don’t specify any. I know that Russell’s views changed over the years, which of course complicates things, but he certainly didn’t believe in spooky souls, and most of the doctrines of his I can think of which seem to be in possible tension with materialism are either susceptible to varying interpretations or matters he changed his mind on at different points or both.
So given that none of these are examples of people being persecuted for their materialism, can I take it that you agree?
Being proud of something that is actually shameful strikes me as particularly sinful.
How about being ashamed of something that is actually prideworthy?
If someone is a materialist just because it’s fashionable, that’s trouble. Lewis may be wrong on whether or not the Church is ‘true,’ but I don’t think Lewis is wrong on calling out compartmentalization and inconsistency rather than thinking about whether or not doctrines are true or false.
That’s a rather uncharitable reading.
-- John Stuart Mill
That last sentence makes it sound like it makes you selfish and hard-hearted. I noticed it could be interpreted to mean that it will make you at least one, which seems more accurate. If you desire to be kind-hearted and help others, you will either have to accept that you can’t help others if you’re kind-hearted, or stop trying to help others. You don’t have to be selfish and you don’t have to be hard-hearted, but you do have to be selfish or hard-hearted.
Also, given that Mill is considered the father of Utilitarianism, it seems unlikely that he was preaching egoism.
— Hermann Weyl
(quoted in Science And Sanity, by Alfred Korzybski, of “the map is not the territory” fame)
Avatar: The last airbender
Context: Aang (“A”) is a classic Batman’s Rule (never kill) hero, as a result of his upbringing in Air Nomad culture. It appears to him that he must kill someone in order to save the world. He is the only one who can do it, because he’s currently the one and only avatar. Yangchen (“Y”) is the last avatar to have also been an Air Nomad, and has probably faced similar dilemmas in the past. Aang can communicate with her spirit, but she’s dead and can’t do things directly anymore.
The story would have been better if Aang had listened to her advice, in my opinion.
May I make a general request to people posting quotes? Please include not just the author’s name but sufficient information to enable a reader to find the relevant quote. This doesn’t necessarily have to be full MLA format; but a title, journal or book name if from a print source, page number or URL, and date would be helpful. Hyperlinked URLs are excellent if available but do not substitute for the rest of this information since these threads will likely outlive the location of some of the sources.
Doing so enables the reader not just to get a brief hit of rationality but to say, “Gee, that’s interesting. I’d like to learn more,” and read further in the source.
In fact, why don’t we add a fifth bullet point to the header:
Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the original source of the quote
That’s what archive.org is for. (Okay, it’s not perfectly reliable, but...)
If you want to avoid that problem, whenever you post a link you should submit it to archive.org or archive.is.
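For anyone who wants to automate that habit, here is a minimal sketch using the Wayback Machine’s public “Save Page Now” endpoint (the redirect behavior noted in the comments is how it typically works; treat the details as assumptions rather than a stable API):

```python
# Sketch: request a Wayback Machine snapshot of a URL before citing it.
import requests

def archive_url(url: str) -> str:
    # The /save/ endpoint triggers a crawl; following redirects usually
    # lands on the archived snapshot (assumed behavior, not guaranteed).
    resp = requests.get("https://web.archive.org/save/" + url, timeout=60)
    resp.raise_for_status()
    return resp.url

print(archive_url("http://example.com/"))
```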
“This is the essence of intuitive heuristics: when faced with a difficult question, we often answer an easier one instead, usually without noticing the substitution.”—Daniel Kahneman
--H. L. Mencken
Related:
“Luck plays a large role in every story of success; it is almost always easy to identify a small change in the story that would have turned a remarkable achievement into a mediocre outcome.”—Daniel Kahneman
It’s from a book: Thinking, Fast and Slow.
-The Wise Man in Darkside, a radio play by Tom Stoppard
This quote reminded me of a quote from an anime called Kaiji, although your quote is much more succinct.
Yukio Tonegawa in Kaiji
More succinctly...
-- John Lennon, “Beautiful Boy (Darling Boy)”
This is not a drill. Therefore, make sure you have drills for the really important bits.
And bits for the really important drills.
And make sure that the bit is properly secured and the chuck key is removed before operating the drill.
And when someone tries to be the wall that stands in your way, you’ll have something that will open a hole in them every time: your drill.
(In a related matter, if there’s a wall in your way, smash it down. If there isn’t a path, carve one yourself.)
BUT keep Chesterton’s Fence in mind: if you don’t know why there is a wall in your way, don’t go blindly smashing it down. It might be there for a reason. First make absolutely sure you know why the wall exists in the first place; only then may you proceed with the smashing.
--Kamina, in Tengen Toppa Gurren Lagann
Just who the hell do you think we are!?
I like the sentiment, but you do realize that breaking walls has costs.
Dear Journal,
My time among the Earthlings has yielded a strange new insight. These creatures are so spiritually ignorant that they have never even heard of the most sacred text, Tengen Toppa Gurren Lagann (wthdytia). They may not even have yet acquired the Power of the Spiral at all. This explains much about the existential terror in which most of them live their lives, and it will be a pleasure to show them the Right Way.
May this transmission reach the homeworld in good time, Eli
I. This Is Not A Game.
II. Here And Now, You Are Alive.
-- Om and many other gods, Small Gods, Terry Pratchett
Funny. I came up with almost the exact same line:
Eric Raymond
I agree with this. It’s also a good quote. However, there is an important caveat: it is possible to get caught up in an echo chamber of equally crazy people. For extreme examples, consider the Lyndon LaRouche crowd, Scientology, or the latest resurgence of the Protocols of the Elders of Zion. So just because not everybody believes you’re crazy does not imply that you are, in fact, not crazy.
I think it’s important to distinguish between “crazy” and “irrational”. Many crazy people are very rational. For instance, a fair portion of LW’s users, including myself, have experienced some form of temporary or chronic mental illness; that’s often exactly the impetus that gets someone to distrust their System 1 thinking and spend effort on deliberately becoming more rational.
Instrumentally, sure. Epistemically, I don’t think so.
I might actually expect the opposite. Mental illness is conventionally defined in terms of a marked impairment of everyday cognition compared to the general population, i.e. instrumental irrationality. Yet relatively few mental disorders, from my reading, seem to have direct effects on an epistemic level.
That sounds very strange to me. The standard things like schizophrenia, paranoia, etc. are characterized precisely by a “wrong” picture of reality. If a paranoiac is unwilling to venture out to buy groceries, that’s not a failure of instrumental rationality—if there actually were people outside his door who want to kill him (as he believes), his behavior would be perfectly rational.
Even depression has a strong epistemic component.
Sure, but I don’t think I’d describe that as a deficiency of epistemic rationality. People dealing with disorders like paranoia clearly have unusually strong biases (or other issues, in the case of schizophrenia) to deal with, but exceptional epistemic rationality consists of compensating well for your biases, not of not having any in the first place.
Someone who’s irrationally predisposed to interpret others’ behavior as hostility, and who nonetheless strives with partial success to overcome that predisposition, is displaying better epistemic rationality skills than Joe Sixpack despite worse instrumental outcomes.
I am confused.
If a paranoiac has “unusually strong biases” but is exceptionally good at compensating for them, he would not be diagnosed with paranoia and would not be considered mentally ill.
My understanding of epistemic rationality is pretty simple: it is the degree to which your mental model matches the reality, full stop. It does not care how bad your biases are or how good are you at overcoming them, all that matters is the final result.
I also don’t think full-blown clinical paranoia is a “bias”—I think it is exactly a wrong picture of reality that fails epistemic rationality.
I think you’re modeling epistemic rationality as an externally assessed attribute and I’m modeling it as a skill.
Not really. I am modeling epistemic rationality as a sum total of skill, and biases, and willingness to look for quality evidence, and ability to find such evidence, etc. It is all the constituent parts which eventually produce the final model-of-the-world.
And that final model-of-the-world is what you called “an externally assessed attribute”, but it is the result of epistemic rationality, not the thing itself.
So… you’d have a skill modifier plus or minus an ability modifier, and paranoiacs have a giant unrelated penalty?
awesomequotes4u.com
On the contrary, honesty, conscientiousness, being law-abiding, etc. have powerful reputational effects. This is easily seen by the converse; look, for example, at the effect a criminal record has on chance of getting a job.
This quote only gets any mileage by equivocating on the meaning of fair. What the quote is really saying is: “If you expect the world to fulfil even modest dreams just because you try not to be a jerk, expect disappointment.” But said like that, it loses all its seemingly deep wisdom. In fact, of course, if you personally fulfilled even some modest dream of a large proportion of the people on earth, you would be wealthy beyond the dreams of lucre.
Most of the comment is great, but
this part seems like a Just World Fallacy. You can start a chain of cause and effect that will make billions of people a bit happier, and yet someone else may take the reward.
But I agree that on average making a lot of people happy is a good way to get wealthy.
I see the quote as warning against a certain kind of naivety. I’m known as a trustworthy person and it’s brought me many advantages—people have happily loaned me large sums of money, for example, and I’ve been employed in high-trust-requiring positions. But I have cooperated in Prisoner’s Dilemma-type situations when I really should have realized the other guy was going to defect. In one case, he’d told me he was a narcissist and a Slytherin, and I still thought he’d keep our agreement. I lost a lot.
It always struck me that “fair” is one of the most misused words we have. What we mean when we say “fairness” is a sense that socially-constructed games have fixed rules leading to predictable outcomes, when some notion of a social contract or other ethical framework is exercised. If you enter a game with no rules, what would it even mean to expect a fair reward for fair play?
Scott Aaronson in reply to Max Tegmark replying to Scott’s review of Max’s book. He goes on:
(Emphasis mine.)
This sounds similar to the view that is sometimes called the fragility of deduction. It was why John Stuart Mill distrusted “long chains of logical reasoning” and according to Paul Samuelson it is why “Marshall treated such chains as if their truth content was subject to radioactive decay and leakage.”
And that is why the long chains of logical reasoning used in the UFAI argument should not be regarded as terminating in conclusions of near certainty or high probability.
You could say that about anything.
Maybe, but it would not be very painful in many cases. In most cases, people who put forward highly conjunctive arguments don’t put them forward as urgent, near certainties which require immediate and copious funding. Moreover, most audiences have enough common sense to treat implications as lossy.
MIRI/LW presents an unusual set of circumstances which is worth pointing out.
That chain of reasoning isn’t very long.
It is when you supply the unstated assumptions.
Please elaborate.
-Neal Stephenson, Cryptonomicon
-- Francis Bacon, Novum Organum
“Intelligence is not only the ability to reason; it is also the ability to find relevant material in memory and to deploy attention when needed.”—Daniel Kahneman
“Therefore, this kind of experiment can never convince me of the reality of Mrs Stewart’s ESP; not because I assert Pf=0 dogmatically at the start, but because the verifiable facts can be accounted for by many alternative hypotheses, every one of which I consider inherently more plausible than Hf, and none of which is ruled out by the information available to me.
Indeed, the very evidence which the ESP’ers throw at us to convince us, has the opposite effect on our state of belief; issuing reports of sensational data defeats its own purpose. For if the prior probability for deception is greater than that of ESP, then the more improbable the alleged data are on the null hypothesis of no deception and no ESP, the more strongly we are led to believe, not in ESP, but in deception. For this reason, the advocates of ESP (or any other marvel) will never succeed in persuading scientists that their phenomenon is real, until they learn how to eliminate the possibility of deception in the mind of the reader. As (5.15) shows, the reader’s total prior probability for deception by all mechanisms must be pushed down below that of ESP.”
E. T. Jaynes, Probability Theory: The Logic of Science (§5.2.2)
I found this (and the preceding bit) noteworthy on two points; first in the obvious mathematical respect that explains the relationship between favored hypotheses and less favored hypotheses which are both supported by data;
Second, by the realization that researchers favoring ESP most likely fail to apprehend the hypothesis that they are testing, with respect to their critics. In the case in question, they collected 37,100 predictions, which seems a little excessive considering it had essentially no persuasive power for skeptics.
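Jaynes’s mathematical point can be shown with a toy Bayes update (a minimal sketch; the priors and likelihoods are made-up illustrative numbers, not Jaynes’s): when deception starts out more probable than ESP, data that are astronomically improbable under the null hypothesis mostly raise the posterior on deception.

```python
# Toy update: sensational data raises P(deception), not P(ESP).
priors = {"chance": 0.99, "deception": 0.0099, "esp": 0.0001}
likelihoods = {           # P(observed hit rate | hypothesis), assumed
    "chance": 1e-10,      # wildly improbable under the null
    "deception": 0.5,     # cheating explains the data easily
    "esp": 0.5,           # so would genuine ESP
}

evidence = sum(priors[h] * likelihoods[h] for h in priors)
posterior = {h: priors[h] * likelihoods[h] / evidence for h in priors}
print(posterior)  # deception lands near 0.99, esp near 0.01, chance near 0
```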
Steve Sailer
-- Napoleon Bonaparte.
Jay A. Labinger
Machiavelli
Anonymous commenter
David Burns in The Feeling Good Handbook (page 67), on the “acceptance paradox”. (The book has been shown in controlled trials to be effective at improving the condition of depressed people.)
I feel like the claim made about the source is strong enough that it should have a link to a study.
It has, sort of. It’s in the foreword of the recent edition of the book.
In general the book is well known on LW and I’m just reiterating a fact that’s already established by other people. At the moment the LW search finds 431 hits for the search “feeling good handbook”.
Quotes from the Screwtape Letters have not been terribly well-received in this thread. So, perversely, I decided I had to take a turn:
-- The demon Screwtape, on how best to tempt a human being to destruction.
The existence of souls notwithstanding, Screwtape is clearly right: if you are charitable to almost everybody—except for those you see every day!—then you are not practicing the virtue of charity and are ill-served to imagine otherwise. You cannot fantasize good mental habits into being; they must be acted upon.
Who does more good with their life—the person who contributes a large amount of money to efficient charities while avoiding the people nearby, or the person who ignores anyone more than 100 miles away while being nice to his mother, his employer, and the man he meets in the train?
If he actually donates the money then the charity is not constrained to fantasy. By the miracle of the world banking network, people thousands of literal miles away can be brought as close as the sphere of action. Those concentric rings are measured in frequency and impactfulness of interaction, not physical distance.
What Screwtape is advocating is that he simply intend to donate the money once GiveWell publishes a truly definitive report (which they never will). Or better, that he feel great compassion for people so many steps removed that he could not possibly do anything for them (perhaps the people of North Korea, who are beyond the reach of most charities due to government interdiction).
Yeah, that’s been confusing: I meant this principle of charity.
A tricky question.
The obvious, and trivially true, answer is that he who does both does more good than either. But that’s not what you asked.
So. It can be hard to compare the two options when considering the actions of a single person, since the beneficiaries of the actions do not overlap. Therefore I shall employ a simple heuristic; I shall assume that the option which does the most good when one person does it is also the option that does the most good when everyone does it.
So, the first option; everyone (who can afford it) makes large donations to efficient charities, while everyone avoids those nearby and is unpleasant when forced to deal with someone else directly.
If I make a few assumptions about the effectiveness (and priorities) of the charities and the sum of the donations, I find myself considering a world where everyone is sufficiently fed, clothed, sheltered, medically cared for and educated. However, the fact that everyone is unpleasant to everyone else leads to everyone being grumpy, irritated, and mildly unhappy.
Considering the second option; charitable donations drastically decrease, but everyone is pleasant and helpful to everyone they meet face-to-face. In this possible world, there are people who go hungry, naked, homeless. But probably fewer than in our current world; because everyone they meet will be helpful, aiding if they can in their plight. And because everyone’s pleasant and tries to uplift the mood of those they meet, a large majority of people consider themselves happy.
This assumption seems trivially false to me, and despite being labeled as a mere ‘heuristic’, it is the crucial step in your argument. Can you explain why I should take it seriously?
Well, for most choices between “is this good?” and “is this bad?” the assumption is true. For example, is it good for me to drop my chocolate wrapper on the street instead of finding a rubbish bin? If I assume everyone were to do that, I get the idea of a street awash in chocolate wrappers, and I consider that reason enough to find a rubbish bin.
Furthermore, and more importantly, the aim here is not to produce an argument that one action is better than the other in a single, specific case; rather, it is to produce a general principle (whether it is generally better to be charitable to those nearby, or to those further away).
And if option A is generally better than option B, then I think it is very probable that universal application of A will remain better than universal application of B; and vice versa.
When you ask what it’s like if everyone were to “do that”, the answer you get is going to be determined by how you define “that”. For instance, if everyone were to drop chocolate wrappers on the lawn of your annoying neighbor, you might be happy. So is it okay to drop the wrapper on your neighbor’s lawn?
It’s tempting to reply to this by saying “‘doing the same thing’ means removing all self-serving qualifiers, so the correct question is whether you would like it if people dropped wrappers wherever they wanted, not specifically on your neighbor’s lawn”. This reply doesn’t work, because there are plenty of situations where you want the qualifier—for instance, putting criminals in jail when the qualifier “criminal” excludes yourself.
(And what’s your stance on homosexuality? If everyone were to do that, humanity would be extinct.)
I do need to be careful to define “that” as a generally applicable rule. In this case, the generally applicable rule would be, is it okay to drop chocolate wrappers on the lawn of people one finds annoying?
So I need to consider the world in which everyone drops chocolate wrappers on the lawn of people they find annoying. Considering this, the chances of someone dropping a wrapper on my lawn becomes dependent on the probability that someone will find me annoying.
So, in short, I can put as many qualifiers on the rule as I like. However, I have to be careful to attach my qualifiers to the true reason for my formulation of the rule; I cannot select the rule “it is acceptable to drop chocolate wrappers on that exact specific lawn over there” without referencing the process by which I chose that exact specific lawn.
I can’t attach a qualifier to a specific person; but I can attach a qualifier to a specific quality, like being annoying, when considering a proposal.
Well, what’s your stance on forcing homosexuals to breed heterosexually to save humanity from extinction? Or forcing all homosexual women and a few men chosen by lottery, since we have an overabundance of sperm?
I don’t think people should be forced to breed, but I wasn’t arguing that people should be forced to breed; I was pointing out that the above argument (would you like it if everyone did that) means that it is wrong to refuse to breed. Pointing out that an argument I oppose leads to an uncomfortable conclusion is a reductio ad absurdum and does not mean that I endorse that conclusion myself.
And if everyone were to breed, that would exacerbate the problems that are due to overpopulation. This implies that there should be some process that one follows when deciding whether or not to breed; it should be a process that has a nonzero chance of making the decision either to breed or not.
Exactly what the optimal process is, I do not know.
Yvain in these two old blog posts of his makes the case that it’s not clear that a world with grumpy people is worse than a world with hungry people.
You are correct. It is by no means clear which is better.
Why the Hell would I want to practice the virtue of charity? If anything, I want to help people. And hating people from a foreign country could be an excellent way to do damage!
I’m sorry, my original post was not quite precise. I meant charity in the sense of the Principle of Charity, not charitable contributions. If you prefer, substitute “kind” for “charitable”; it’s not quite the same but illustrates the point just as well.
Keep in mind, we’re talking about the damage you do to yourself. Hating people you’ve never met is not a very efficient way to damage yourself. Much better is to hate people you know intimately and see every day. That way you can practice your vices efficiently, and will have as many opportunities as possible to act them out.
Applying the principle of charity to people you know but not to foreigners is a well-known failure mode that produced Soviet atrocities against Germans, to use Lewis’ own example. And again, here is Lewis applying the principle of charity to a transformation-happy Sith Lord with a mind-altering book after writing these allegedly helpful quotes. He seems to genuinely not see the parallels between borderline-self-insert Coriakin and his most famous villain.
Lewis is explicitly writing religious propaganda. It’s not coincidence that his advice would have his mostly-Christian readers focus on charity to Christian authority figures before Soviets, or even Church critics they don’t personally know.
The three examples given in the quote are:
Which of those are Christian authority figures?
To the women in the Magdalen laundries, that would be their employer. But if you genuinely insist that because Lewis said “every day” rather than every week, he couldn’t have meant a priest or vicar (or one of those supervising nuns) except in the case of those readers who actually do see one every day, then I’m going to insist he didn’t want to help women because he said “he”. Or do you want to argue this quote has nothing to say about people who fall in between the most “immediate” and the “remote circumference”?
Also, we just had another quote from the same source in which Lewis used the narrow form of this principle to attack atheists—or anyone who doubts that “his” neighbors belong to the body of an otherworldly entity, for what Lewis implies are poor reasons. (I agree they aren’t the best reasons—those would start with the absurdly low prior.)
Are you going to respond to my argument that this habit of trusting the familiar hurt Lewis’ rationality?
No, of course not. It has nothing to do with the quote I posted.
You’re clearly suffering and I don’t want to just blow you off, but in this thread you’ve almost exclusively responded to things I didn’t write. I am not the proper object for your anger with Lewis and Christianity, and I’m done engaging with you.
To self-modify, perhaps?
Except, with that attitude you won’t. You’ll sit around telling yourself how virtuous you are for liking people you’ve never met, while being a misanthrope to everyone you personally know. Furthermore, if (or when) you meet one of the foreign people you supposedly love, you’ll wind up being a misanthrope to them as well.
Really? How does you, personally, hating people from a foreign country do damage?
And why would I care about that if my donations produce a giant net benefit? When did I even claim to love anyone?
If you don’t love people, why would your utility function include a term for their wellbeing?
Where did he claim that his utility function included a term for the stranger’s well-being?
I think it’s implicit from when he said “if anything, I want to help people”, and when he described donations that are efficient at helping anonymous strangers as “produc[ing] a giant net benefit”.
If Stalin didn’t love his own people, why would he mildly prefer not to throw them at Hitler?
The people of the Soviet Union were a resource that Stalin had a great amount of control over, so even if he was perfectly selfish and uncaring toward his people, he would prefer to preserve that resource. The selfish benefit to be gained from saving a life in Africa is negligible, and is vastly outweighed by the cost.
I suppose I should clarify that I’m taking “love” in this context to basically mean the same thing as “like”—in many contexts it’s not just a mere difference of degree, but in the context of “loving humanity” and such phrases I think it probably is.
Of course, it’s a lot harder to be charitable to the man on the train—or worse, to one’s own exploitative employer—than to one’s mother. For one thing, the wider the circle of charity, the deeper the pit to fill with one’s efforts!
Which was precisely why, during my internship last summer, I eventually just picked one homeless guy and gave him my loose $1 bills every day during my commute.
-- George Lakoff, Progressives Need to Use Language That Reflects Moral Values
-- Dan Geer
(rationality applicability: antifragility & disjunctive prediction vs. optimization for conjunctive prediction)
-- The Book of Mormon (Alma 30.24-28)
Edit: I’m mildly surprised by the reactions to this quote. The thing I find interesting about it is that Joseph Smith was apparently sufficiently familiar with Voltairesque anti-Christian ideas that he could relay them coherently and with some gusto. This goes some way towards passing the ideological Turing test.
I’m hardly an expert on the Book of Mormon, but this quote surprised me so I googled it. It appears to be an accurate quote but is not fully attributed. As best I can make out, the speaker is the antichrist (or some such evil character; not sure on the exact mythology in play here).
Failure to note that means this quote gives either an incorrect view of the Book of Mormon, or of the significance of the text, or both.
When quoting fiction, I recommend identifying both the character and the author. E.g.
--Korihor in The Book of Mormon (Alma 30.24-28); Joseph Smith, 1830
Having said all that, it’s still a damn good rationality quote.
I would think it’s bad publicity for us to explicitly note a resemblance to antichrist-type characters.
Unless we’re trying to appeal to contrarians.
Considering how much hating on religion there already is around here, I don’t think there’s much left to lose on that front.
Ah, of course, because it’s more important to signal one’s pure, untainted epistemic rationality than to actually get anything done in life, which might require interacting with outsiders.
Upvoted because that really is a failure mode worth keeping in mind, but I don’t think it’s responsible for the attitude towards religion around here; I think that’s a plain old founder effect.
This is a failure mode I worry about, but I’m not sure ironic atheist re-appropriation of religious texts is going to turn off anyone we had a chance of attracting in the first place. Will reconsider this position if someone says, “oh yeah, my deconversion process was totally slowed down by stuff like that from atheists,” but I’d be surprised.
“Pick a side and stick with it, supporting your friends and bashing your enemies at every cost-effective opportunity” is the dominant strategy in factional politics much the same way tit-for-tat is dominant in iterative prisoner’s dilemma. Generosity to strangers and mercy to enemies are so heavily encouraged because in the absence of that encouragement they’re the rare, virtuous exception.
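Since the comparison is doing the argumentative work here, a minimal sketch of why tit-for-tat earns that reputation in the iterated prisoner’s dilemma, assuming the standard payoff matrix; the strategy names and round count are purely illustrative:

```python
# Toy iterated prisoner's dilemma. Tit-for-tat cooperates first,
# then copies whatever the opponent did on the previous round.
PAYOFFS = {  # (my_move, their_move) -> my_score
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def always_cooperate(opponent_history):
    return "C"

def play(a, b, rounds=100):
    """Total scores for strategies a and b over repeated play."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = a(hist_b), b(hist_a)  # each sees the other's past
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

for rival in (always_defect, always_cooperate, tit_for_tat):
    print(rival.__name__, play(tit_for_tat, rival))
```

Against always_defect it loses only the first round; against cooperators it locks into full mutual cooperation, which is the analogue of “supporting your friends and bashing your enemies” paying off.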
Yes, obviously we have to interact with outsiders. That’s what makes them outsiders, rather than meaningless hypothetical aliens beyond our light-cone. The question is, should we be interacting with organized religion by trying to ally with, or at least avoid threatening, the people in charge? Or by threatening them so comprehensively that (figuratively speaking) we destroy their armies and take their cattle for our own?
The antichrist is a hypothetical figure who poses the greatest possible ideological threat, an exploit against which the overwhelming majority of Christianity’s (worldly) resources and personnel cannot be secured. The popular theory is that this individual’s public actions would trigger the ultimate ‘evaporative cooling’ event. Everybody who doesn’t really believe, everybody who just checks “christian” on the census form and shows up to church for the social network and the pancakes, will stop doing so. In short, the sanity waterline would rise.
That’s nice, but I’m Jewish ;-). Or in other words, the very nature of an “antichrist” pins you to opposing one specific kind of religion, and also pins you to moral positions you probably don’t want to take. It’s the ultimate sin of privileging the hypothesis: you’ve assumed it’s a Christian world you have to persuade away from their Christianity.
(In real life, I would argue the greatest utility to be gained from deconversions right now is in the Muslim world, where one currently finds the greatest amount of religious violence over the smallest differences. You could tell me to go become the Anti-Muhammad, but again, I’m already Jewish.)
Remember, the Antichrist is also puppy-kickingly evil. You don’t hate puppies, do you? Then why are you signing up for a role that outright requires you to kick them?
Are you sure there’s not some other evil villain you’d prefer to be?
Jane Austen, Sense and Sensibility
It’s pretty remarkable to detect yourself in that kind of mistake; most people are very good at finding confirming evidence for whatever judgments they’ve made about people, and ignoring any contrary indications.
Yes, this is a good point—we generally don’t realise that we are (self-)deceived, so we can’t even begin to think about where we went wrong.
Of course, Elinor Dashwood is something of an authorial stand-in, so it’s not really surprising that she’s incredibly wise and perspicacious like that.
Yes. This can happen, but what you describe is more common. Once a coworker was sure that I was an ultra-right-wing militiaman, based on one indirect, misleading bit of evidence, ignoring all else.
Merely declaring my general political alignment and my support for and opposition to various candidates was totally inadequate. I had to explicitly enumerate several political positions to get him to adjust, and even then he seized on the nuances to try to interpret them as my secretly being a right-wing nut.
I think those mistakes usually happen for an entirely different reason. New people remind us of ones we’ve already met, and we unconsciously “fill in the blanks” in what we know about the new person with what we know about the person they resemble, or with some kind of average-ish judgement about the group of comparable people we know.
-- C.S. Lewis, The Screwtape Letters
This appears to be saying “anyone who follows my religion and still looks stupid isn’t a True Christian”. How this is a rationality quote is beyond me.
It’s pointing out that people tend to judge the validity of a cause by the superficial appearance of the supporters of that cause. Quotes that point out common fallacies count as rationality quotes to me.
-- David Burns, The Feeling Good Handbook, p. 69 (the book has been shown in controlled trials to improve the condition of depressed people)
Winston Churchill
Edwin Lyngar at Salon
I don’t think certainty is an emotion in the first place. The emotion that people who are certain feel is confidence.
Even then I think it’s frequently better than anger.
Confidence in oneself, or confidence in someone or something external? The two feel quite different to me, subjectively—something like a feeling of lightness and elevation vs. groundedness and solidity, although English doesn’t have a very good vocabulary for this sort of thing.
Bernard and Sir Humphrey are British government functionaries in the comedy show ‘Yes Minister’
Bernard: If it’s our job to carry out government policies, shouldn’t we believe in them?
Sir Humphrey: Oh, what an extraordinary idea! I have served 11 governments in the past 30 years. If I’d believed in all their policies, I’d have been passionately committed to keeping out of the Common Market, and passionately committed to joining it. I’d have been utterly convinced of the rightness of nationalising steel and of denationalising it and renationalising it. Capital punishment? I’d have been a fervent retentionist and an ardent abolitionist. I’d have been a Keynesian and a Friedmanite, a grammar school preserver and destroyer, a nationalisation freak and a privatisation maniac, but above all, I would have been a stark-staring raving schizophrenic!
-- David Burns, The Feeling Good Handbook, p. 126 (the book has been shown in controlled trials to improve the condition of depressed people)
Upvoted. I don’t know if this is true; indeed, I suspect it is definitely partially false. But, I don’t think it is entirely false. It’s interesting and has made me think.
It’s the position of someone at the heart of the evidence-based framework of Cognitive Behavior Therapy.
Even if you happen to disagree and think you understand how emotions work better than the people with the credentials, I think it’s still useful to know where you differ from them.
I read David Burns’s book because it’s the book most recommended on LW when it comes to dealing with emotions.
I’m not quoting someone from a New Age background but someone who actually is trying to teach people to get rid of irrational beliefs by recognizing their mental distortions.
According to the research summarized by Yvain, Cognitive Behavioral Therapy does not seem to be more strongly supported as effective than Freudian psychoanalysis. Rather, its proponents conducted the research demonstrating its effectiveness relative to placebo earlier than other schools did, and it thereby gained a reputation as evidence-based.
Not quite.
In Cognitive Behavior Therapy, there is a strong focus on measuring the depression level of the person who’s coming to the sessions. Cognitive therapists try out different approaches to treating depression and believe in making decisions based on which interventions succeed in reducing someone’s depression score.
A Freudian would tell you that he doesn’t really care about the score, only that the person feels better and succeeds in dealing with his childhood traumas.
David Burns wants every patient to fill out a questionnaire at every session measuring how depressed the person is at that time. He wants psychologists who don’t succeed in reducing their clients’ scores to have no excuse, because they are faced with hard numbers.
Given that this is Burns’s approach to looking at the world, there is going to be less inferential distance in getting Burns understood by LessWrongers than if I had picked someone who could also produce results but who doesn’t really care about gathering academic evidence for what works.
Don’t confuse people who produce results by looking at published evidence and who base their practice on that evidence with people who just produce results.
The gist of the research though, is that while historically Freudian therapists have taken this approach, more recently many Freudian therapists have actually started performing the same sort of research on the effects of their therapy that Cognitive Behavioral Therapists have, and received essentially the same results. It’s not that there isn’t evidence to support Cognitive Behavioral Therapy working, but rather, more recent evidence seems to suggest that when they do gather the relevant data, other forms of therapy appear to work equally well. So it appears that the things about Cognitive Behavioral Therapy that actually work are not things that are specific to Cognitive Behavioral Therapy, but rather, are general qualities of therapy provided by a trained practitioner.
No, the average Freudian therapist doesn’t focus on measurement on a scale, but rather takes a more holistic approach.
Yes, you can run a controlled test where you pit Freudian therapists against Cognitive Behavioral Therapists and find no difference in efficacy, but that doesn’t mean that both camps value published evidence and improvement on a numerical score the same way.
“Valuing published evidence” doesn’t necessarily equate to having a more correct theoretical framework though. Reality doesn’t grade for sentiment.
Cognitive behavioral therapists might on average be more relatable to Less Wrong members than Freudian therapists, but that doesn’t mean that prescriptions rooted in a cognitive behavioral model of the human mind are more likely to be correct or helpful.
Yes. But if even the theory that’s most likely to be relatable to Less Wrong members gets voted down because the whole emotion business is just too strange, that’s sad.
-- James Dean
I find this a useful quote to keep in mind when I’m experiencing mental states that I don’t want to experience.
What’s that from?
I made it up based on Eliezer’s saying. I’m much too Gryffindor/Sunshine to call it world optimization.
My bad. Deleting.
“As I fear not a child with a weapon he cannot lift, I will never fear the mind of a man who does not think.”
Words of Radiance, Brandon Sanderson, page 795
Both the metaphor and its literal application only make sense if “cannot” and “does not” mean “never”, and they really don’t.
While I’d never fear the mind of a man who literally is in a coma and doesn’t think at all, I’d have plenty of reason to fear the mind of a man whose ability to think is merely limited. He can be a stupid moral reasoner and a clever killer at the same time.
I recall one Sherlock Holmes story where Holmes said that he had a lot of trouble predicting the actions of idiots; for an intelligent man, Holmes could work out what actions he would take in a given situation, but an idiot could do anything.
Of course, this presumes that one knows the goals of said intelligent agent.
In that case, though, you’re afraid of the man’s axe more than his mind.
No, because it works as metaphor too.
It’s certainly possible to convince people of a proposition that is false; consider Eliezer doing the AI-box experiment. In the AI-box example, Eliezer understands that the proposition is false, but that need not be true in general; you can be bad at thinking in one way (and thus not understand that some proposition is false) but good at thinking in other ways (and thus be able to readily convince other people of the false proposition).
In other words, instead of a stupid reasoner and a clever killer, a stupid reasoner and a clever convincer.
-- Sri Aurobindo (1872-1950), Savitri—A Legend and a Symbol
Professor Utonium, realizing he has a problem
Nassim Taleb
Really curious about the “supposedly” in this. Does Nassim not actually believe or endorse this view?
I read it as him endorsing the empirical proposition, but not the value proposition. That’s the point of the second sentence: he wouldn’t say “these are good friends to lose” if there weren’t the potential to lose friends.
Nassim doesn’t advocate becoming totally uncompromising toward BS as a technique for changing the people with whom you hang out.
He doesn’t advocate optimizing your tactics that way to get better friends. He rather advocates authenticity: don’t hide by avoiding calling out BS just because you are afraid of offending other people.
If being authentic means calling out BS, then do it. If being authentic for you means not confronting other people about the BS they talk, then don’t.
Taleb is responding to the common claim that one shouldn’t call out BS because one will thus lose friends.
“Failure always brings something valuable with it. I don’t let it leave until I extract that value. I have a long history of profiting from failure. My cartooning career, for example, is a direct result of failing to succeed in the corporate environment.”—Scott Adams
“The Six Filters for Truth:
*Personal experience (Human perceptions are iffy.)
*Experience of people you know (Even more unreliable.)
*Experts (They work for money, not truth.)
*Scientific studies (Correlation is not causation.)
*Common sense (A good way to be mistaken with complete confidence.)
*Pattern recognition (Patterns, coincidence, and personal bias look alike.)”
-- Scott Adams
“The antifragile loves randomness and uncertainty, which also means—crucially—a love of errors, a certain class of errors. Antifragility has a singular property of allowing us to deal with the unknown, to do things without understanding them—and do them well. Let me be more aggressive: we are largely better at doing than we are at thinking, thanks to antifragility. I’d rather be dumb and antifragile than extremely smart and fragile, any time.”—Nassim Taleb
Nassim Taleb
What does this mean?
My interpretation is that having an explanation for something is useless if you can’t actually make it happen. And even if you don’t fully understand how something works, it’s good to be able to use it.
For example, I would much rather be able to use a computer than know how it works.
Also, if you can’t do it, that calls into question whether your explanation is actually valid. Anyone can explain something, so long as they’re not required to actually make the explanation useful.
So we could rephrase as: “If I really understand X’s, I can build one, but if I kind of understand X’s, I can at least use one”?
I interpret it as related to expert-at versus expert-on. If you assume that an expert-on is always an expert-at, then someone explaining something they can’t do is clearly not an expert.
I’m not sure that assumption is true, though I could believe it’s a useful rule of thumb.
Think of Steve Jobs vs. the business school professor who wrote a book about entrepreneurship.
I don’t know, but it sounds similar to “It’s smarter to be lucky than it’s lucky to be smart.”
-- José Manuel Rodriguez Delgado
--Edward Said, Beginnings: Intention and Method, p. 75, 1985
The first sentence seems excellent to me, and probably true of a great many things. I have trouble focusing on the rest, and more trouble making sense out of it. Anyone care to take a crack at a summary?
“The fragilista falls for the Soviet-Harvard delusion, the (unscientific) overestimation of the reach of scientific knowledge. Because of such delusion, he is what is called a naive rationalist, a rationalizer, or sometimes just a rationalist, in the sense that he believes that the reasons behind things are automatically accessible to him. And let us not confuse rationalizing with rational—the two are almost always exact opposites. Outside of physics, and generally in complex domains, the reasons behind things have had a tendency to make themselves less obvious to us, and even less to the fragilista.”—Nassim Taleb
Ernest Gellner
Nassim Taleb
Why not?
The problem with a disease is the symptoms. If you can cure the disease, all the symptoms will go away, but if you can only cure the more harmful symptoms, that will generally be almost as good.
Not really. The word “symptoms” generally means “noticeable signs” and with some diseases the first visible sign is sudden death.
Taleb is really talking about treating the cause and not visible manifestations.
--Scott Adams, Interview with Julia Galef, February 10, 2014
— Paul Krugman, “Sergeant Friday Was Not A Fox”
“Just looking at the data like a scientist” does not give you magic scientist powers. Models of the world are what allow you to predict it, without need for magic scientist vision.
Adams doesn’t elaborate on this point, but I read him as saying, if you’ve actually measured things and taken data that goes to your point, then your model is more likely to be correct.
For example, suppose a model says that raising the minimum wage reduces employment. That’s a pretty common model in economics and it can be backed up with a lot of math. However I would not find that alone convincing. On the other hand, if an economist goes out into the world and looks at what actually happened when the minimum wage was raised, that would be more convincing. If they can figure out a way to do an experiment in which, for example, 5 nearby towns raise their minimum wage, 5 keep it the same, and another 5 lower it, that would be even more convincing.
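As a toy sketch of how that hypothetical 15-town comparison might be analyzed: every number below is a simulated placeholder (the per-group effects are pure assumptions for illustration), and the point is only the shape of the analysis, comparing average employment changes across groups rather than trusting the model alone:

```python
# Simulated version of the hypothetical experiment: 5 towns raise the
# minimum wage, 5 keep it the same, 5 lower it; compare mean changes.
import random
import statistics

random.seed(0)

def simulated_change(group_effect):
    """Percent change in employment: town-level noise plus a group effect."""
    return random.gauss(0.0, 2.0) + group_effect

# Hypothetical group effects (the unknown quantity a real study estimates).
groups = {"raised": -1.0, "unchanged": 0.0, "lowered": +1.0}

for name, effect in groups.items():
    changes = [simulated_change(effect) for _ in range(5)]
    print(f"{name:10s} mean change: {statistics.mean(changes):+.2f}% "
          f"(sd {statistics.stdev(changes):.2f})")
```

With only five towns per group, town-level noise can easily swamp a small effect, which is part of why such experiments are convincing only when done carefully and at sufficient scale.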
Another example: consider a model that says:
*heart disease kills people
*heart disease is correlated with high cholesterol
*eggs contain lots of cholesterol
Those three statements are reasonably well established and backed up by data. However, if you throw in a model that says dietary cholesterol causes in-body cholesterol, and in-body cholesterol causes heart disease, and therefore eating eggs reduces life expectancy, you’ve jumped way beyond what the data supports. On the other hand, if you compare the levels of all-cause morbidity among people who eat eggs and people who don’t or, better yet, do a multiyear controlled experiment in which the only diet variation between groups is that some people eat eggs and others don’t, the answers you get are far more likely to be correct.
Here’s another one: you have lots of detailed calculations that say if you smash two protons together at .999999c relative velocity, and you do it a few million times, then you’ll see certain particles show up in the debris with very precise probabilities. Only when you run the experiment, you discover that the fractions of different particles you see don’t quite match what you expected because there’s an additional resonance you didn’t know about and didn’t include in the model.
In other words, empirical data beats mere models. Models can be self-consistent and plausible, but not fully reflect the real world. Models that go beyond what the data says run the risk of assuming causal connections that don’t exist (dietary cholesterol to in-body cholesterol) or missing factors outside the model (maybe eggs do increase the risk of heart disease but reduce the risk of cancer) that are more important.
Of course all these experiments are really hard to do, and take years of time and millions, even billions, of dollars, so often we muddle along with seriously flawed models instead. However we need to remember that models are just models, not data, and be reasonably skeptical of their recommendations. In particular, if we’re about to do something really expensive and difficult like changing a nation’s dietary preferences based on nothing more than a model, maybe we should step back and spend the money and the time needed to collect real data before we go full speed ahead.
Fair enough—political conditioning has caused me to assume that any non-specialist saying “don’t trust models, just ‘look at the data’,” is the victim of some sort of anti-epistemology.
In context, it’s less likely that that’s the case, but I still think this quote is painting with much too wide a brush.
I would argue that it is this political conditioning itself that is the anti-epistemology.
I don’t suppose you could contribute substance rather than just accusation?
Please, please, kids, stop fighting! Maybe Eugine_Nier & elharo are right about the necessity of looking at the world to decide whether a model’s true, and maybe Manfred & fezziwig have a point about observations and their interpretation not being cleanly separable from the use of models.
Prediction is going beyond the data, so a model that never goes beyond the data isn’t going to be much use.
Climate change models incorporated data, so they are not purely theoretical like the economic model you mentioned.
I … think he’s talking about basic correlation, statistical analysis, that sort of thing?
(I enjoy Scott’s writing, but I didn’t upvote the grandparent.)
What? Scientists do use models. Assuming charitably that he’s not mistaken or bullshitting about what scientists do, by “model” he must mean something different—what?
I’m fairly certain that’s actually horrible advice. It boils down to “substitute your judgement over professionals on those problems that are hardest.”
More like “discount status on problems where expertise is a poor predictor of accuracy”.
I think this steelman is not quite true to the spirit of the original. The contrast he draws between “using a model” and “looking at data like a scientist” is especially strange. One wonders what he thinks of meteorology.
Well, given that meteorologists (or at least climate scientists) have for the last couple of decades been predicting warming and sea-level rises that never seem to occur, I’d say it’s a good example.
Carl Sagan
That seems like an attempt to attach emotional connotations to basic physical facts. A rationality quote..?
Feeling Rational.
(I didn’t upvote the quote, but I think that’s what Kawoomba was going for)
Carl Sagan
Nassim Taleb
“There is a love that equals in its power the love of man for woman and reaches inwards as deeply. It is the love of a man or a woman for their world. For the world of their center where their lives burn genuinely and with a free flame.”
Mervyn Peake, Titus Groan
It’s a nice quote but what has it to do with rationality?
It’s easy to assume that my own feelings towards things either do or should map to the feelings of others. LW example, monster aliens recognizing the inherent beauty of women in torn dresses.
I like this quote for emphasizing that others might love, or fear, or hate, or covet, whatever. I regard the notion of where someone sits down to eat as a casual decision, but in prison (or high school), it can have great emotional weight and be fraught with consequences. If someone bites his thumb at me, I might scoff, but Tybalt might call for a duel.
I’ll also confess to a personal bias for the quote. I’m something of a hermit, and take a fierce joy in my, I dunno, groove (in the Emperor’s New Groove sense). Positive depictions of loners are rare enough that I’m perhaps a bit of a sucker for a quote that even hints at one.
-- Dr. Ray Peat
-- Dr. Ray Peat
Reverse stupidity is not intelligence.
It’s not reversed stupidity. It’s an antiprediction.
… which does not make it correct, of course.
Well, yeah, if an authority says that the Hubble constant is 68 km/s/Mpc and you interpret that in such a way that if the Hubble constant is 67 km/s/Mpc or 69 km/s/Mpc the authority is wrong, then you’d better assume the authority is wrong—though that’s not a very informative assumption and anyway in the real world authorities often make claims more vague (i.e. whose negation is less vague) than that.
Well, sure, you have a relatively high opinion of authority. But even a vague claim needs the expert to be significantly better than chance to be promoted to your attention.
(And, well, this guy isn’t a physicist. He’s involved in medicine. There’s less likely to be clear-cut experimental proof the expert can simply point out in order to overcome your prior. Instead, you get a huge, enormous pile of famous cases where The Experts Screwed Up.)
But there are also many, many cases where the experts didn’t screw up, which are less famous for obvious reasons. Believing doctors are always wrong sounds kind of extreme to me; sometimes they are right. (Otherwise, how come life expectancy has increased so much in the past century?)
Is this reverse stupidity? It’s a demonstrably false statement, but I think it’s a useful heuristic to compensate for a bias we are prone to, allowing you to then collect evidence and evaluate the situation rationally. It might help overcome the also demonstrably false ‘prior belief’ that the authorities are always correct, which prevents people from ever expending energy to confirm or question them.
Guess I’d better start driving without a seatbelt, smoking cigarettes, drinking while pregnant, avoiding healthy food and exercise, having unprotected sex with strangers …
He’s not literally saying to believe this, but to consider this idea to enable you to then look at the evidence yourself.
The problem is so many people hold a ‘prior’ that the authorities are always right, it becomes possible for wrong ideas to become entrenched, and never seriously reinvestigated.
Nobody got the Nobel Prize in medicine for showing that smoking causes cancer, because of the way the authorities dealt with the issue at the time that discovery was made.
The current policy of requiring warning labels on cigarettes doesn’t seem to be very evidence-based and might cause more harm than good.
That’s begging the question. Different people have different opinions about what’s healthy, and when you read a bit on LW you will find that plenty of people disagree with the official canon.
That’s also a pretty broad category.
Instead of doing your three 30-minute jogs and otherwise sitting in front of your computer, it might be better to get a walking desk than to damage your ankles by jogging.
It’s not quite clear that the current philosophy of what exercise is supposed to be is optimal.
You’re missing the point. “Authorities are always wrong” is not only demonstrably false (I could have listed a dozen other examples), it practically invites the reader to think that reversed stupidity is intelligence. We follow the wisdom of the majority because we don’t have the cognitive capacity to reason through every problem as if it were new, and by doing what the majority does in scenarios where we’re not domain experts, we’re at least guaranteed a “not-terrible” outcome, even if it’s sub-optimal.
I think you are heavily misreading the intent of the quote, if that’s what you take from it.
The quote basically teaches the kind of relationship that Feynman had with authority. Don’t believe in it just because they say so, but demand evidence and follow where the evidence leads you.
Feynman might have benefited from brushing his teeth more frequently but he still left the legacy he did because of his relationship to authority.
Saying “always wrong” is too strong if that was the intent of the quote. It would be a better quote if the author said “could be wrong”.
Fair enough, I disagree with it less now that I’ve read it through again.
My interpretation is that this quote is aimed at people who do have the cognitive capacity to reason through specific problems that are important to them, but are failing to do so because they put too much trust in authorities.
I suspect you meant to type something else. :-)
He is not wrong!
Corrected.
I googled Ray Peat, and he is someone with rather definite views about nutrition and biochemistry. Can I go against his advice in the other quote to read a lot of technical stuff, and ask those on LW who have done so, to say how they judge his ideas?
His ideas are all based on the Association-Induction hypothesis, which is a little-known and iconoclastic theory of cell biology; however, it seems to have a strong experimental basis.
His writing seemed crazy to me at first (almost like schizophrenic word salad, despite my graduate-level training in biology), but I’ve spent much of the last year studying the papers he cites, and I cannot find any mistakes in his reasoning yet. It’s seeming more and more reasonable, but I think it’s better to use his writings to find new ideas about basic biology rather than just follow his health recommendations without understanding them.
I’m planning a post on the Association-Induction hypothesis. Its status is very similar to that of timeless physics: it’s largely ignored and unknown, yet it is researched seriously by a small number of academics.
Ernest Gellner
-- Ender’s Game. I thought this had been posted before, but I couldn’t find it anywhere.
Needs more context. You and I know what this quote refers to; others might not.
EDIT: Here’s a non-Tweeted version of the quote. It is used again later in the book, but to quote that scene would be a spoiler.
Confirm that I don’t know what this means.
He’s fighting a force which has foolishly employed fortifications partially made of insulating feathers.
Also, the portcullis of this fortification is not open.
Also, he is choosing a coordinate system in which said fortification is in the -Z direction.