Rationality Quotes May 2016
Another month, another rationality quotes thread. The rules are:
Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.
Post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
Do not quote yourself.
Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you’d like to revive an old quote from one of those sources, please do so here.
No more than 5 quotes per person per monthly thread, please.
In response to the Quora question “What are some important, but uncomfortable truths that many people learn when transitioning into adulthood?”
Every person is responsible for their own happiness—not their parents, not their boss, not their spouse, not their friends, not their government, not their deity.
One day we will all die, and 999 out of 1,000 people will be remembered by nobody on earth within a hundred years of that date.
Practically all of the best opportunities (in business, in romance, etc) are only offered to people who already have more than they need.
The idea that you will be happy after you make X amount of dollars is almost certainly an illusion.
The idea that you will be happy after you meet [some amazing person] is almost certainly an illusion.
For most people, death is pretty messy and uncomfortable.
When you don’t possess leverage (go look up “BATNA”), people will take advantage of you, whether they mean to or not.
Almost everybody is making it up as they go along. Also, many (most?) people are incompetent at their jobs.
When talking about their background and accomplishments, almost everybody is continually overstating their abilities, impact, relevance, and contributions.
Physical beauty decays.
Compared to others, certain ethnicities and races (and genders, and sexual orientations, and so on) are just plain royally f*cked from the day they’re born.
Bad things constantly happen to good people. Good things constantly happen to bad people.
Very few people will ever give you 100% candid, honest feedback.
People are constantly making enormous life decisions (marriage, children, etc) for all of the wrong reasons.
Certain people—some of whom are in positions of enormous power—just do not give a damn about other human beings. A certain head of state in Syria comes to mind.
Often, the most important and consequential moments of our lives (chance encounter, fatal car accident, etc) happen completely at random and seemingly for no good reason.
Your sense of inhabiting a fully integrated reality is an illusion, and a privilege. Take the wrong drug, suffer a head injury, or somehow trigger a latent psychotic condition like schizophrenia—and your grip on reality can be severed in an instant. Forever.
From Patrick Mathieson
This long list needs a post scriptum: Very few people manage to accomplish this transition :-/
I’d also say that your ability to care about other people, along with overall sanity, will diminish under constant stress. That’s why “Preserve own sanity” is #1 on my “rules to be followed in case of sudden world domination” list and something I need to stay aware of even in my current (and normally not that stressful or important) job.
I’ll try to reframe the ones that hit me the hardest:
One day we will all die, and 999 out of 1,000 people will be remembered by nobody on earth within a hundred years of that date.
The duration of our relevance to others is just one of many dimensions of relevance. And, does the duration of the meaningful experience have diminishing returns?
Practically all of the best opportunities (in business, in romance, etc) are only offered to people who already have more than they need.
The idea that you will be happy after you make X amount of dollars is almost certainly an illusion.
The idea that you will be happy after you meet [some amazing person] is almost certainly an illusion.
For most people, death is pretty messy and uncomfortable.
Discomfort doesn’t have to be distressful; it can be eustressful. Same with mess.
When you don’t possess leverage (go look up “BATNA”), people will take advantage of you, whether they mean to or not.
But not everyone will. You can select your social circle (or at least, most people reading this can) to choose those who are normatively non-exploitative. Make friends with deontologists!
Almost everybody is making it up as they go along. Also, many (most?) people are incompetent at their jobs.
When talking about their background and accomplishments, almost everybody is continually overstating their abilities, impact, relevance, and contributions.
Physical beauty decays.
Compared to others, certain ethnicities and races (and genders, and sexual orientations, and so on) are just plain royally f*cked from the day they’re born.
Very few people will ever give you 100% candid, honest feedback.
People are constantly making enormous life decisions (marriage, children, etc) for all of the wrong reasons.
Certain people—some of whom are in positions of enormous power—just do not give a damn about other human beings. A certain head of state in Syria comes to mind.
Often, the most important and consequential moments of our lives (chance encounter, fatal car accident, etc) happen completely at random and seemingly for no good reason.
Your sense of inhabiting a fully integrated reality is an illusion, and a privilege. Take the wrong drug, suffer a head injury, or somehow trigger a latent psychotic condition like schizophrenia—and your grip on reality can be severed in an instant. Forever.
Edit: To this I can also refer you to the weakness of strength. My pain sacrificially broadens the minds of others to be grateful for their sanity.
Thank you for sharing this James_Miller and author Patrick Mathieson.
False hopes are more dangerous than fears.
J. R. R. Tolkien, The Children of Húrin
Well, an evolutionary purpose of fear and our reactions to it is to protect us from dangers. It would be doing a bad job if it wasn’t at least a little bit helpful.
...Newton’s monochromator turned out to be not perfect enough, despite him using a collimator with a narrow slit; fluorescence in homogeneous lighting Newton did not notice, and the principle was saved. We see here a not-so-rare case of the imperfection of experiment facilitating the development of science. It is hard to imagine the confusion in optical ideas if Stokes’s shift had been discovered in the XVIIth century.
S. I. Vavilov, The principles and hypotheses of Newton’s optics (a kind of introduction to two of Newton’s optical memoirs that he translated into Russian), 1927.
Fuck-you money is not “you will be happy”, it’s “you will be able to remove one particular cause of unhappiness”.
Well, quite a few particular causes of unhappiness. (But not all of them, I agree.)
I don’t think it makes sense to see all Western achievements as coming out of the Western analytical mindset. Western society has always been diverse and has contained people with different mindsets.
The Greeks had steam engines before the invention of the modern scientific method, so I don’t see how the modern scientific method was a requirement for the invention of steam engines.
I do grant that the scientific method has an important effect on the way our modern technology works, but I think you get into problems when you start to claim that all of our modern technology is due to the scientific method and analytic thinking.
Actually, you brought the invention of the steam engine into the conversation.
And, while the Greeks invented a rudimentary steam engine, the ancient Greek engine was not really of any practical use. A commercially viable steam engine was not developed until much later. Developing a steam engine that could be used reliably, safely and efficiently for transportation, etc., required scientific knowledge of thermodynamics, the behavior of gases, metallurgy, etc.
Which is what led to my question about why the author thinks that the Eastern mode of thought is superior to the Western mode of thought for “understanding the contribution knowledge makes to the technical accomplishment of our civilization”. When I phrased the question, I did not mean it in an argumentative sense, I actually meant I am interested to hear his thoughts on the subject—which is one of the reasons I intend to read the book.
I spoke about the invention of the steam engine as a means for pumping water out of mines. The Greeks never tried to use it for that purpose.
I don’t think that Thomas Newcomen had much scientific knowledge of thermodynamics. Most of thermodynamics developed after there were already commercial steam engines.
I think knowledge about metallurgy at the time wasn’t mainly scientific but based on trade craft. You had smiths who learned it by being smiths and who then passed it down to an apprentice.
True, but you and two other people pointed out that the Greeks had invented the steam engine as if that somehow invalidated something that I had said.
This is not really true. Scientific work in the area of thermodynamics had been done in the 17th century by Denis Papin, Otto von Guericke, Robert Boyle, Thomas Savery and others. Some of this work was directly applicable to steam engines.
I think it is likely that Newcomen was familiar with Savery’s earlier work on steam engines, at a minimum. And, whereas you are focused on the invention (or reinvention, in Newcomen’s case) of the steam engine, I think that the ongoing development of the steam engine is at least as relevant. The development of the steam engine continued well past the end of Newcomen’s life—the late 18th century engines and the 19th century steam engines used on trains, ships and in industry were much improved over the versions produced by Newcomen—and many of these improvements came about from scientific knowledge in the areas of gas laws, thermodynamics, etc.
This is largely true, particularly in the 18th century. But as noted above, the steam engine continued to be developed and improved throughout the 19th century. Some of these improvements were made possible by improved materials (metals), and by the latter half of the 19th century, metallurgy was becoming more scientific, particularly with regard to improvements in steel production.
19th century steel manufacturing also gave a big boost to the steam engine industry in an indirect manner—quality steel greatly improved the strength and longevity of railroad tracks and trestles, leading to increased use of rail and increased demands for more powerful and more efficient steam engines. Since the quote that started this conversation was about the “technical accomplishment of our civilization” and “the ingenuity of the inventions, the range and density of technical mediation, the multiplicity of artifactual interfaces in a global technoscientific economy”, I think that it is useful to look at how various industries (such as the steel and railroad industry) affected the later development of the steam engine rather than focusing exclusively on its early commercialization by Newcomen.
I agree with your general point. A lot of science was needed to create the transistor. But in this particular case, the design of Newcomen’s engine is very simple, needs no science beyond the notion of ‘hot water expands’, and could certainly be comprehended and built by the ancient Greeks and Romans. Of course, it may be that the ancients didn’t use (or use up) enough coal to be in need of a mine water pumping solution, and a steam locomotive is rather more complex.
Barry Allen in Vanishing into Things
What reason is there to think that Allen is correct when he says that the “contemplative, logocentric approach” is a poor match for understanding the relationship between knowledge and technology? In the passage you quote, he makes a number of claims that seem (at best) extremely doubtful. Does he justify them elsewhere?
(Perhaps he—or you—might consider this a fruitlessly contemplative and logocentric question, too much concerned with evidence, warrant and justification. Too bad.)
Let’s take the best computer programmer. Imagine he tries to write down all his important knowledge in a book: every statement he believes he can justify as true.
Then he gives the book to a person of equal IQ who has never programmed.
How much of the expert’s knowledge gets passed down through this process? I grant that some knowledge gets passed down, but I don’t think all of it does. The expert programmer has what’s commonly called “unconscious competence”.
Allen might call that kind of knowledge part of the best knowledge of our civilization. It’s crucial knowledge for our technological progress.
But to get back to the main point: accepting that the contemplative, logocentric approach has flaws is not simply a matter of scrutinizing the approach itself but of demonstrating alternatives.
This seems to be a complicated, abstruse way of saying “reading statements of knowledge doesn’t thereby convey practical skills”.
If I explain one paradigm in the concepts of another paradigm, that by its nature leads to complicated and abstruse ways of making a statement.
But in this case the claim is more general. There are cases where the programmer can describe a heuristic that he uses to make decisions without being able to point to a statement of justified veracity.
Google, for example, wants to give its managers good management skills. To do that, they give them checklists of what they are supposed to do when faced with a new recruit. Laszlo Bock of Google’s People Operations credits the email containing that checklist with a resulting productivity improvement of 25% from new recruits coming up to speed faster.
You don’t need to understand the justification for a checklist item to be able to profit from following a ritual that goes through all the items on the checklist. Following a ritualistic checklist would be knowledge in the Chinese sense, where there’s a huge emphasis on following proper protocols, but it wouldn’t be seen as knowledge in the Western philosophical tradition.
But why does it matter? What harm can come from thinking that knowledge is about demonstrable truths? If generating knowledge is about generating demonstrable truths, you can use the patent system to effectively reward knowledge creation.
I don’t understand your point. The western tradition is perfectly capable of talking about the knowledge that following this checklist results in a measurable 25% improvement. So you must mean something else but I don’t know what.
Nobody knows everything at the same time. The knowledge is split between the person following the checklist and the one who designed it. That doesn’t make it a different kind of knowledge. And if the person who designed it just tested lots of random variations and has no idea why this one works, or if the designer is dead and didn’t pass on his ideas, then there is less knowledge, but it’s still the same kind of knowledge.
The programmer is a paradigm case. He works with very well defined logical or mathematical models of code execution. But he constantly relies on the correct functioning of a myriad other pieces of software and hardware. He doesn’t know in full detail why he has to talk to these other things the way he does; he just memorizes a great deal of API details which are neither arbitrary nor clearly self-evident, and trusts that the hardware designers knew what they were doing.
So when you say:
It seems to me that almost everything the programmer ever does can be framed this way. Suppose I know that under high contention I should switch to a different lock implementation; but I don’t know how the two implementations actually work, so I don’t know why each one is better in a different case. I also don’t know where exactly the cutoff is, because it’s hard to measure, so there’s an indeterminate zone in between where I’m not sure what to use; so I have a heuristic that uses an arbitrary cutoff.
Is this a heuristic that has no “justified veracity”, or is this a kind of knowledge where I can prove (with benchmarks) that the heuristic leads to good results, with an underlying model (map) of ‘lock A has less contention overhead, but lock B takes less time to acquire’?
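To make the lock example concrete, here is a minimal sketch of such a heuristic in Python. The SpinLock and BlockingLock wrappers and the contention cutoff are hypothetical illustrations for this discussion, not a claim about any particular library; the point is only that the heuristic can be written down, benchmarked, and argued about.

```python
import threading


class SpinLock:
    """Cheap to acquire when uncontended; wastes CPU under heavy contention."""
    def __init__(self):
        self._lock = threading.Lock()

    def acquire(self):
        # Busy-wait instead of sleeping.
        while not self._lock.acquire(blocking=False):
            pass

    def release(self):
        self._lock.release()


class BlockingLock:
    """Higher fixed acquisition cost, but behaves better under contention."""
    def __init__(self):
        self._lock = threading.Lock()

    def acquire(self):
        self._lock.acquire()

    def release(self):
        self._lock.release()


# Arbitrary cutoff chosen from benchmarks rather than from a model of the
# lock internals -- exactly the "indeterminate zone" described above.
CONTENTION_CUTOFF = 8


def choose_lock(expected_contenders: int):
    """Heuristic: cheap lock at low contention, scalable lock otherwise."""
    if expected_contenders < CONTENTION_CUTOFF:
        return SpinLock()
    return BlockingLock()
```

Whether one calls the cutoff a mere heuristic or benchmark-justified knowledge, it is at least explicit enough to write down and test, which is the question the comment above is asking.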
I don’t think knowing the API is sufficient for being a good programmer. The productivity difference Google sees between a 10x programmer and a normal programmer is not about the 10x programmer having memorized more API calls.
Simply teaching someone about API calls and about the specifics of the cutoff between different lock implementations isn’t going to make them a 10x programmer.
Part of being a good programmer is knowing when to check in your code and when it makes sense to write additional tests for it. One check that some people at the local LW dojo use is to ask themselves: “Would I be surprised if my changes crash the program?” If their System 1 wouldn’t be surprised, they go and spend additional time writing tests or cleaning up the code.
That heuristic is knowledge that can be verbalized but that moves farther away from justified veracity. You can go a step further and talk about how to pass down the System 1 sense of surprise from one programmer to another.
If you go and study computer science you won’t find classes on developing an appropriately calibrated sense of surprise. It’s not the kind of knowledge that professors of computer science work to create.
How many checklists have you used in the last week? How many of the things you do follow strict checklists? How many serious philosophers deal with the issues surrounding checklists?
I think there is societal resistance to adopting more checklists.
In Google’s case they have hard data to justify their checklist, but a lot of checklists in use don’t have hard data backing them up and are still useful.
To go meta: of course the ideas that I try to express on LW can be expressed in the English language. Trying to express ideas that can’t be expressed in English doesn’t make any sense.
Of course it isn’t (but it is necessary). I didn’t mean to imply that it was. But I do think this example generalizes to almost all the other things that a very good programmer needs to do.
That heuristic is knowledge that’s able to be verbalized but that moves farther away from justified veracity.
Why do you think so? To me (as a programmer) heuristics about when to check what feel perfectly knowable and able to be verbalized. To be sure, they would take a lot of words. Maybe more importantly, they’re highly entangled with many other things a programmer needs to know and do. But I don’t see what would make them less justified or less explicit, just more complex.
It’s a truism that you can’t gain habits of thought, or mental heuristics, just by abstractly understanding and memorizing a bunch of facts; that’s just not how humans learn things.
That doesn’t necessarily mean there’s a lot of information in the heuristics that isn’t contained in the dry facts. You can’t get the heuristics by practicing without being aware of the facts. If you can’t explain why you act out the heuristics you do in terms of the facts you learned, or if you can’t verbalize what heuristics you’re acting on, that is more likely to be a failure of introspection, rather than evidence that your mind developed extra incommunicable knowledge the facts didn’t imply.
Because they’re studying computer science, not programming.
Yes, if you look at software engineering, its state of formal education is quite bad compared to some other engineering professions. I even have a good idea of the historical causes of this. But that doesn’t mean programming can’t be taught or even that nobody learns it well formally, just that most programmers don’t, as a social fact. They’re encouraged to experiment and self-teach; they start working as soon as someone will pay them, which is much earlier than ‘when they’ve mastered programming’; they influence one another; and the industry on average doesn’t have a lot of quality control, quality standards or external verification, just ‘ship it once it’s ready’.
No checklists that I can think of. I have no idea what philosophers en masse spend their time on, serious or otherwise.
Checklists are a specific solution which need to be justified wrt. specific problems, most of which have alternative solutions. I don’t think ‘not using checklists’ is a good proxy for ‘not doing a job as well as possible’ without considering alternatives and the details of the job involved. At least, as long as you’re talking about explicit checklists consulted by humans, and not generalized automated processes that reify dependencies in a way that doesn’t let you proceed without completing the “checklist” items.
Going back to your general argument, are you saying that Eastern philosophical traditions are better at getting people to use checklists (or other tools) without understanding them, while Western ones encourage people not to use things they don’t understand explicitly?
In Confucianism, a wise person is a person who follows the proper rituals for every occasion (as the book argues). I think checklists do define rituals. A person who values following rituals is thus more likely to accept a checklist and follow it.
Culturally, there’s a sense that asking a Western doctor to use a checklist means assuming he’s not smart enough to do the right thing. I don’t think that exists to the same extent in China.
Before germ theory, Western doctors refused to wash their hands because they didn’t see the point of cleanliness as a value. I need to do a bit of research to get data about Chinese medicine, but from what I have seen of Ayurvedic medicine, they do tons of saucha rituals that are about producing cleanliness, like tongue scraping.
I think you can easily describe to me a System 2 heuristic that you use to decide when to check more. I don’t think you can easily describe how you feel the emotion of surprise that exists at the System 1 level. Transferring triggers of the emotion of surprise from one person to another is hard.
I would say it’s because the relevant professors see issues of algorithm design as higher status than asking themselves when programmers should recheck their code. It seems no computer science professor has taken the time to set up a study testing whether teaching programmers to type faster increases their programming output. That’s because mathematical knowledge gets seen as more pure and more worthy. It has to do with the kind of knowledge that’s valued.
Mathematical proofs can provide strong justification and are thus higher status than messy experiments about teaching programming that can be confounded by all sorts of factors.
This leads to a misallocation of intellectual resources.
Checklists are known to be very helpful with certain things, even if the relevant professions (e.g. doctors) don’t always widely recognize this. On the other hand, why should I wash my hands if you can’t give me a reason for cleanliness, neither theoretical (germ theory) nor empirical (it reduces disease incidence)?
Ideally, we should value checklists and rituals as a tool, but also require there to be good reasons for rituals, and trust that those who institute or choose the rituals know what they’re doing. We should also be open to changing rituals, sometimes quickly, as new evidence comes in.
Maybe Eastern traditions achieve a better social balance than Western ones on this matter; I wouldn’t know.
I think everyone agrees on this. Humans can’t fully learn new behaviors just through abstract knowledge without practice.
I would say it’s because most CS professors don’t really care about programming, and certainly not about typing speed. Programming isn’t computer science! CS is a branch of applied math. The professors don’t care about misallocation of intellectual resources across different fields, because they’ve already chosen their own field. You’d see the same problems if electrical engineers all studied physics instead, and picked up all the missing knowledge outside of formal education.
There are dedicated software engineering majors; some of them are even good (or at least better at teaching programming than CS ones), but numerically they produce far fewer graduates.
At the time the hand-washing conflict happened, there wasn’t much evidence-based medicine.
Today there is some evidence for checklists improving medical outcomes but they don’t get easily adopted.
I think there’s decent evidence that combining hypnosis and anesthetic drugs is an improvement over just using anesthetic drugs.
I think the ability to be surprised by the right things is reasonably called knowledge and not only behavior.
According to Google, some of their programmers are 10x as productive as the average. Can a dedicated software engineering major teach the knowledge required to reach that level reliably? I don’t think so. I don’t think it even gets to 2x.
Is there any software engineering major that has tested whether it produces better programmers if it also teaches typing? I don’t think so.
This is all true, but it’s a rather far jump from here to ‘and a culture permeated by Eastern philosophy handles this better, controlling for the myriad unrelated differences, and accounting for whatever advantages Western philosophy may or may not have.’
I agree.
Google hires programmers who are already 10x as productive as the average. It doesn’t hire average programmers and train them to be 10x as productive using checklists or anything else. Maybe it hires programmers 9x as productive as the average and then helps them improve, but that’s a lot harder to measure than a whole order of magnitude improvement.
If you’re asking whether there exist two different institutions with software engineering majors, where the graduates of one are 2x as good as those of the other, or 2x better than the industry average, then the answer is clearly yes.
If you’re asking the same, but want to control for incoming freshman quality (i.e. measure the actual improvement due to teaching), then you hit the problem that there are no RCTs and there’s no control group (other than those who don’t go to college at all). There’s also no way to make two test groups of college students not learn anything ‘on the side’ from the Internet or from their friends, or to do so in the same way. So it’s really hard to measure anything on the scale of a whole major.
Lots of people have measured interventions on the scale of a single course. Some of them may help (like typing); in fact I hope some of them do help, otherwise the whole major would only give you credentials. I’m not disputing this, but I also don’t see the relation between there being some useful skills that aren’t explicit knowledge (in this case, motor skills everyone has explicit knowledge about) and grand claims about societal or philosophical differences.
I’m a programmer, and the only part of college that was useful in my field was the freshman “intro to coding” courses. Six months in I was able to do the job I was hired for out of college.
College is a racket.
http://lesswrong.com/lw/h6b/explicit_and_tacit_rationality/
I read the source before reading the quote and was expecting a quote from The Flash.
I just now looked up Vanishing into Things on Amazon and it looks quite interesting. Have you read the book in its entirety? What are your thoughts about it?
I haven’t yet finished it.
I bring it up because many people here still equate knowledge with justified truth, rather than seeing that as only one form of knowledge.
Being clear about the fact that there are different ways of knowing is very important for the quest of rationality. The example of Chinese philosophy is relatively benign and doesn’t trigger mindkilling reflexes the way that postcolonial thought does.
The Chinese also actually act based on their idea of knowledge, which makes it more believable. As China becomes culturally more influential, it’s also useful to understand their thought better.
The book sounds interesting. When I read your quote from the book, I initially misinterpreted it as an anti-philosophy comment of the sort one occasionally encounters, but after reading the blurb for the book on Amazon I realized the quote was contrasting Eastern vs. Western thought.
One thing I am curious about - if the Eastern mode of thought is really superior to the Western mode of thought for “understanding the contribution knowledge makes to the technical accomplishment of our civilization”, how does the author explain the fact that the scientific method, the industrial revolution, and (to use his words), “the multiplicity of artifactual interfaces in a global technoscientific economy” grew out of the Western intellectual tradition?
However, I do think that there are interesting differences between the traditional Eastern way of thinking and the traditional Western way of thinking, and that each has its unique strengths. An interesting book on this topic is The Geography of Thought by Richard Nisbett; it points out the differences between Eastern and Western thought without really painting one as “better” than the other. Note that Nisbett’s book is aimed at a general audience whereas I suspect that Allen’s may be aimed at an academic audience.
I’d be interested in hearing your thoughts about Allen’s book once you’ve finished reading it. I’m putting it on my “to read” list, but I’m not sure when I’ll get to it.
Whether the industrial revolution came out of the intellectual tradition is up for debate. If you take Henry Ford as one of the core people of the industrial revolution, Ford didn’t go to university. I think most of the knowledge that made Ford successful wasn’t about him believing justified true statements but was of a more implicit nature.
The people who invented the steam engine also didn’t have university degrees. They were rather tradesmen who relied on mechanical skill for their inventions. Western intellectuals didn’t concern themselves with optimal systems for pumping water out of mines the way Thomas Newcomen did.
The Industrial Revolution was pretty much complete decades before Henry Ford was born. Newcomen is much more to the point.
http://www.serotonindealer.com
If you simply the university to dimensions of space and time, I guess that could be true. This quote got me to really stretch to see its truth.
“simply the university” ⇒ “simplify the universe”?
Yes, thanks for catching my mistake :) Upvote for you!