The Most Important Thing You Learned
My current plan does still call for me to write a rationality book—at some point, and despite all delays—which means I have to decide what goes in the book, and what doesn’t. Obviously the vast majority of my OB content can’t go into the book, because there’s so much of it.
So let me ask—what was the one thing you learned from my posts on Overcoming Bias, that stands out as most important in your mind? If you like, you can also list your numbers 2 and 3, but it will be understood that any upvotes on the comment are just agreeing with the #1, not the others. If it was striking enough that you remember the exact post where you “got it”, include that information. If you think the most important thing is for me to rewrite a post from Robin Hanson or another contributor, go ahead and say so. To avoid recency effects, you might want to take a quick glance at this list of all my OB posts before naming anything from just the last month—on the other hand, if you can’t remember it even after a year, then it’s probably not the most important thing.
Please also distinguish this question from “What was the most frequently useful thing you learned, and how did you use it?” and “What one thing has to go into the book that would (actually) make you buy a copy of that book for someone else you know?” I’ll ask those on Saturday and Sunday.
PS: Do please think of your answer before you read the others’ comments, of course.
“the map is not the territory” has stuck in my mind as one of the over-arching principles of rationality. it reinforces the concept of self-doubt, implies one should work to make their map conform more closely to the territory, and is invaluable when one believes they have hit a cognitive wall. there are no walls, just the ones drawn on your map.
the post, “mysterious answers to mysterious questions” is my favorite post that dealt with this topic, though it has been reiterated (and rightly so) over a multitude of postings.
link: http://www.overcomingbias.com/2007/08/mysterious-answ.html
I second Tim’s post. Mysterious Answers and the “map vs territory” analogy have had a huge influence on my thinking
“Newcomb’s Problem and Regret of Rationality” is one of my favorites. For all the excellent tools of rationality that stuck with me, this is the one that most globally encompassed Eliezer’s general message: that rationality is about success, first and foremost, and if whatever you’re doing isn’t getting you the best outcome, then you’re not being rational, even if you appear rational.
“A rationalist should win”. Very high-level meta-advice and almost impossible to directly apply, but it keeps me oriented.
I agree that, on average, improvements in rationality lead to more winning, but I’m not convinced that every improvement in rationality does. It seems possible that a non-trivial number make winning harder.
I was late to vote, to put it mildly, but nonetheless: The Power of Intelligence. This is probably what impressed me the most and changed my Spock-like attitude towards intelligence. From memory: a gun is stronger than the brain? As if people were born with guns. Social skills are more important than intelligence? As if charisma lived in the kidneys. Money is more powerful than the mind? As if it grew on trees. And people ask how an AI could make money when all it has is a mind. A million years ago, soft creatures roamed the savanna, and you would have called it absurd to claim that they, and not the lions, would come to rule the planet. Soft creatures have no armor, claws, or poison. How could they work metal when they don’t breathe fire? If you say they will split the atomic nucleus, that is simply nonsense; they are not even radioactive. And evolution will not have time to do the work, because no one can reproduce fast enough to get all these results in just a million years. But the brain is more dangerous than nuclear weapons, because the brain produces nuclear weapons, and things more dangerous still. Look at the difference between a man and a monkey, and then tell me what artificial intelligence can’t do: cure all diseases and old age, invent a unified field theory, solve the Millennium Problems, create nanotechnology, colonize other galaxies. Do you really think it can do so little?
P.S. I chose this post not only because it is impressive as well as important, but also because, unlike the others, it unites and conveys many important ideas at once: the false separation of social skills from intellect, and the notion that the intellectual character must lack them; the capabilities of AI, the enormous speed of its development, and a degree of influence comparable to the emergence of a new species; the correct intuition about intelligence as the difference between a man and a monkey, not between Einstein and a peasant; the understanding that intelligence is stronger than any technology, because intelligence creates technology; and the possibility of a good future alongside the dangers of AI.
Your explanation / definition of intelligence as an optimization process. (Efficient Cross-Domain Optimization)
That was a major “aha” moment for me.
The most important thing I can recall is conservation of expectation. In particular, I’m thinking of Making Beliefs Pay Rent and Conservation of Expected Evidence. We need to see a greater commitment to deciding in advance which direction new evidence will shift our beliefs.
Most frequently referenced concepts:
Mind projection fallacy and “The map is not the territory.”
“The opposite of stupidity is not intelligence.”
Engines of Cognition was the final thing I needed to assimilate the idea that nothing is free, and that intelligence does not magically allow one to do anything: it has costs and limitations, and it obeys the second law of thermodynamics. Or rather, that they both obey the same underlying principle.
http://www.overcomingbias.com/2008/02/second-law.html
The most important thing I learned from Overcoming Bias was to stop viewing the human mind as a blank slate, ideally a blank slate, an approximation to a blank slate, or anything with properties even slightly resembling blankness or slateness. The rest is just commentary—admittedly very, very good commentary.
The posts I associate with this are everything on evolutionary psychology such as Godshatter (second most important thing I learned: study evolutionary psychology!), the free will series, the “ghost in the machine” and “ideal philosopher of perfect emptiness” series, and the Mind Projection Fallacy.
The biggest “aha” post was probably the one linking thermodynamics to beliefs ( The Second Law of Thermodynamics, and Engines of Cognition, and the following one, Perpetual Motion Beliefs ), because it linked two subjects I knew about in a surprising and interesting way, deepening my understanding of both.
Apart from that, “Tsuyoku Naritai” was the one that got me hooked, though I didn’t really “learn” anything by it—I like the attitude it portrays.
I agree about Engines of Cognition. It got me really interested in the parallels between information theory and thermodynamics and led me to start reading a lot more about the former, including the classic Jaynes papers. I think it gave me a deeper understanding of why e.g. the Carnot limit holds, and led me to read about the interesting discovery that the thermodynamic availability (extractable work) of a system is equal to its Kullback-Leibler divergence (a generalization of informational entropy) from its environment.
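For reference, the identity I have in mind (stated from memory, so treat the exact constants and conventions as my assumption rather than a quote) is that the nonequilibrium free energy exceeds its equilibrium value by the relative entropy to the equilibrium distribution,

$$F[p] - F[p_{\mathrm{eq}}] = k_B T \, D_{\mathrm{KL}}(p \,\|\, p_{\mathrm{eq}}),$$

which is what bounds the work extractable from a system that is out of equilibrium with its environment.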
Second for me would have to be Artificial Addition, which helped me understand why attempts to “trick” a system into displaying intelligence are fundamentally misguided.
“Obviously the vast majority of my OB content can’t go into the book, because there’s so much of it.”
I know this is not what you asked for, but I’d like to vote for a long book. I feel that the kind of people who will be interested by it (and readers of OB) probably won’t be intimidated by the page count, and I know that I’d really like to have a polished paper copy of most of the OB material for future reference. The web just isn’t quite the same.
In short: Something that is Godel, Escher, Bach-like in length probably wouldn’t be a problem, though maybe there are good reasons to keep it shorter beyond “there is too much material”.
I’m going to have to choose “How to Convince Me That 2 + 2 = 3.” It did quite a lot to illuminate the true nature of uncertainty.
http://www.overcomingbias.com/2007/09/how-to-convince.html
The ideas in it are certainly not the most important, but another really striking post for me is “Surprised by Brains.” The lines “Skeptic: Yeah? Let’s hear you assign a probability that a brain the size of a planet could produce a new complex design in a single day. / Believer: The size of a planet? (Thinks.) Um… ten percent.” in particular are really helpful in fighting biases that cause me to regard conservative estimates as somehow more virtuous.
Taboo is very useful in discussion, I believe.
A while back, I posted on my blog two lists with the posts I considered the most useful on Overcoming Bias so far.
If I just had to pick one? That’s tough, but perhaps burdensome details. The skill of both cutting away all the useless details from predictions, and seeing the burdensome details in the predictions of others.
An example: Even though I was pretty firmly an atheist before, arguments like “people have received messages from the other side, so there might be a god” wouldn’t have appeared structurally in error. I would have questioned whether or not people really had received messages from the dead, but not the implication. Now I see the mistake—“there’s something after death” and “there is a supernatural entity akin to the traditional Christian god” may be hypotheses that are traditionally (in this culture) associated with the same memeplex, but as hypotheses they’re entirely distinct.
I would vote for “Burdensome Details” as well.
This is to nominate “The Bottom Line” / “A Rational Argument.”
I second this one, also as related to Making Beliefs Pay Rent: what you think and what you present as argument needs to be valid, needs to actually carry the evidential strength it claims to have. Failure to abide by this principle results in empty or actively stupid thoughts.
Hard to pick a favourite, of course, but there’s a warning against confirmation bias that cautions us against standing firm, to move with the evidence like grass in the wind, that has stuck with me.
On the general discussions of what sort of book I want, I want one no more than a couple of hundred pages long which I can press into the hands of as many of my friends as possible. One that speaks as straightforwardly as possible, without all the self-aggrandizing eastern-guru type language...
A near-tie. Either:
(1) The Bottom Line, or
(2) Realizing there’s actually something at stake that, like, having accurate conclusions really matters for (largely, Eliezer’s article on heuristics and biases in global catastrophic risks, which I read shortly before finding OB), or
(3) Eliezer’s re-definition of humility in “12 virtues”, and the notion in general that I should aim to see how far my knowledge can take me, and to infer all I can, rather than just aiming to not be wrong (by erring on the side of underconfidence).
(1) wasn’t a new thought for me, but I wasn’t applying it consistently, and Eliezer’s meditations on it helped. (2) and (3) more or less were new to me. I’ve gotten the most out of some of the most basic OB content, and probably continue to get the most out of reflecting on it.
I’m going to go with “Knowing About Biases Can Hurt People”, but only because I got the Mind Projection Fallacy straight from Jaynes.
The most important thing for me, basically, was the morality sequence and in particular The Moral Void. I was worrying heavily about whether any of the morals I valued were justified in a universe that lacked Intrinsic Meaning. The Morality sequence (and Nietzsche, incidentally) helped me internalize that it’s OK after all to value certain things— that it’s not irrational to have a morality— that there’s no Universal Judge condemning me for the crime of parochialism if I value myself, my friends, humanity, beauty, knowledge, etc— and that even my flight from value judgments was the result of a slightly more meta value judgment.
Seems probable to me that many potential readers aren’t currently too worried about the Moral Void, but those who are need a pretty substantial push in this direction.
“Shut up and multiply.”
I refuse to name just one thing. I can’t rank a number of ideas by how important they were relative to each other, they were each important in their own right. So, to preserve the voting format, I’ll just split my suggestions into several comments.
Some notes in general. During the first year I partially misinterpreted some of your essays, but after I got a better grasp of the underlying ideas, I saw many of the essays as not contributing any new knowledge. This is not to say that the essays were unimportant: they act as exercises, exploring the relevant ideas in excruciating detail, which makes them ideal for forming a solid intuitive understanding of those ideas, a level of ownership over habits of thought without which it hardly makes sense to bother learning them. Focusing attention on each of the explored facets of rationality allows you to think about extending and adapting them to your own background. At the same time, I think the verbosity in your writing should be significantly reduced.
I too would like to support more brevity in your writings—but maybe that just isn’t your style.
Overcoming Bias: Thou Art Godshatter: understanding how intricate human psychology is, and how one should avoid inventing simplistic Fake Utility Functions for human behavior. I used to make this mistake. Also relevant: Detached Lever Fallacy, how there’s more to other mental operations than meets the eye.
Prices or Bindings? and, to a lesser extent (although with a simpler formal statement), Newcomb’s Problem and The True Prisoner’s Dilemma: they show just how insanely alien the rational thing to do can be, even when it’s directed at your own cause. You may need to conscientiously refrain from preventing the world’s destruction, not take free money, and trade a billion human lives for one paperclip.
The Simple Truth followed by A Technical Explanation of Technical Explanation, given some familiarity with probability theory, formed my basic understanding of the Bayesian perspective on probability as a quantity of belief. The most confusing point of the Technical Explanation, the one involving a tentacle, was amended in the post about antiprediction on OB. It’s very important to get this argument early on, as it forms the language for thinking about knowledge.
When I first read “The Simple Truth,” I didn’t really get it. I realized just how much I didn’t get it when I re-read it after reading some of the sequences. I think it would work best as a review-of-what-you-just-learned rather than as an introduction.
Righting a Wrong Question: how everything you observe calls for understanding, how even utter confusion or a lie can communicate positive knowledge. There are always causes behind any apparent confusion, so if the situation doesn’t make sense in the way it’s supposed to be interpreted, you can always step back and see how it really works, even if you are not supposed to look at the situation that way. For example, don’t trust your thought; instead, catch your own mind in the process of making a mistake.
There are no genuine mysteries, only things that I am ignorant or confused about.
The most important and useful thing I learned from your OB posts, Eliezer, is probably the mind-projection fallacy: the knowledge that the adjective “probable” and the adverb “probably” always make an implicit reference to an agent (usually the speaker).
Honorable mention: the fact that there is no learning without (inductive) bias.
It’s hard to answer this question, given how much of your philosophy I have incorporated wholesale into my own, but I think it’s the fundamental idea that there are Iron Laws of evidence, that they constrain exactly what it is reasonable to believe, and that no mere silly human conceit such as “argument” or “faith” can change them even in the millionth decimal place.
The most important thing I learned may have been how to distinguish actual beliefs from meaningless sounds that come out of our mouths. Beliefs have to pay the rent. (http://www.overcomingbias.com/2007/07/making-beliefs-.html)
If my priors are right, then genuinely new evidence is a random walk. Especially: when I see something complicated I think is new evidence and think the story behind it is obviously something confirming my beliefs in every particular, I need to be very suspicious.
http://www.overcomingbias.com/2007/08/conservation-of.html
http://www.overcomingbias.com/2007/09/conjunction-fal.html
http://www.overcomingbias.com/2007/09/rationalization.html
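A compact way to state the point behind those links (my own sketch of the standard identity, not a quote from the posts): conservation of expected evidence says

$$P(H) = P(H \mid E)\,P(E) + P(H \mid \neg E)\,P(\neg E),$$

so your current credence already equals the expectation of your future credence. Any predictable drift means you should have updated already; whatever movement remains is unpredictable, which is the sense in which genuinely new evidence behaves like a random walk.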
I didn’t get your point here, could you elaborate (re “evidence is a random walk”).
#1: Teacher’s Password (http://www.overcomingbias.com/2007/08/guessing-the-te.html)
I happened to have a young child about to enter elementary school when I read that, and it crystalized my concern about rote memorization. I forced many fellow parents to read the essay as well.
I realize you mostly care about #1, but just for more data: #2 I’d probably put the Quantum Physics sequence, although that is a large number of posts, and the effect is hard to summarize in a few pages.
For #3 I liked (within evolution) that we are adaptation-executers, not fitness-maximizers.
I’ve been enjoying the majority of OB posts, but here’s the list of ideas I consider the most important for me:
Intelligence as a process steering the future into a constrained region.
The map / territory distinction.
The use of probability theory to quantify the degree of belief.
Is this to be a book that somebody could give to their grandmother and expect the first page to convince her that the second is worth reading?
The Wrong Question sequence was amazing. One of the very unintuitive sequences that greatly improved my categorization methods. Especially with the ‘Disguised Queries’ post.
Your debunking of philosophical zombieism really stuck with me. I don’t think I’ve ever done a faster 180 on my stance on a philosophical argument.
The most important thing I learned was the high value of the outside perspective. It is something that I strive to deploy deliberately through getting into intentional friendships with other aspiring rationalists at Intentional Insights. We support each other’s ability to achieve goals in life through what we came to call a goal buddy system, providing an intentional outside perspective on each other’s thinking about life projects and priorities.
definitely “materialism”...especially the idea that there are no ontologically basic mental entities.
That whole post is good, but that idea is due to Richard Carrier.
The most important thing for me, is the near-far bias—even though that’s a relatively recent “discovery” here, it still resonates very well with why I argue with people about things, and why people who I respect argue with each other.
The Blegg / Rube series, which I’ll still list as separate from...
The Map / Territory distinction
An Alien God
All things that, if pushed with the right questions, I’d have come to on my own, but all three put very beautifully.
Every Cause Wants To Be A Cult, Science as Attire, The Simple Truth
That clear thinking can take you from obvious but wrong to non-obvious but right, and on issues of great importance. That we frequently incur great costs just because we’re not really nailing things down.
Looking over the list of posts, I suggest the ones starting with ‘Fake’
Shut up and do the impossible, and its dependencies.
The concept of fighting a rearguard action against the truth.
The series of post about the “free will”. I was always a determinist but somehow refused to think about “free will” in detail, holding a belief that determinism and free will are compatible for some mysterious reason. OB helped me to see things clearly (now it seems all pretty obvious).
I vote for “Conservation of Expected Evidence.” The essential answer to supposed evidence from irrationalists.
Second place, either “Occam’s Razor” or “Decoherence is Falsifiable and Testable” for the understandable explanation of technical definitions of Occam’s Razor.
The intuitive breakthrough for me was realizing that given a proposition P and an argument A that supports or opposes P, then showing that A is invalid has no effect on the truth or falsehood of P, and showing that P is true has no effect on the validity of A. This is the core of the “knowing biases can hurt you” problem, and while it’s obvious if put in formal terms, it’s counterintuitive in practice. The best way to get that to sink in, I think, is to practice demolishing bad arguments that support a conclusion you agree with.
That sort of makes sense if what you mean is “whatever we humans think about A has no effect on the truth or falsehood of P in a Platonic sense” but surely showing that A is invalid ought to change how likely you think that P is true?
Similarly, if P is actually true, a random argument that concludes with “P is true” is more likely to be valid than a random argument that concludes with “P is false”. So showing P is true ought to make you think that A is more or less likely to be valid depending on its conclusion.
(Given that this comment was voted up to 3 and nobody gave a counterargument, I wonder if I’m missing something obvious.)
I wrote that two years ago, and you’re right that it’s imprecise in a way that makes it not literally true. In particular, if a skilled arguer gives you what they think is the best argument for a proposition, and the argument is invalid, then the proposition is likely false. What I was getting at, I think, is that my intuition used to vastly overestimate the correlation between the validity of arguments encountered and the truth of propositions they argue for, because people very often make bad arguments for true statements. This made me reject things I shouldn’t have, and easily get sidetracked into dealing with arguments too many layers removed from the interesting conclusions.
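A toy calculation of how modest that update can be (the numbers here are invented for illustration, not from any post):

```python
# Toy Bayes update: learning that one argument for P is invalid
# should shift credence in P only modestly when bad arguments
# for true propositions are common.
prior_p = 0.5                 # prior credence that P is true
p_invalid_if_true = 0.6       # chance a random argument for P is invalid, given P is true
p_invalid_if_false = 0.8      # chance it is invalid, given P is false

posterior_p = (p_invalid_if_true * prior_p) / (
    p_invalid_if_true * prior_p + p_invalid_if_false * (1 - prior_p)
)
print(round(posterior_p, 2))  # 0.43: evidence against P, but far from a refutation
```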
Ok, that makes a lot more sense. Thanks for the clarification.
3 is still a small number. If it were 10+ then you should worry. I’m confused by this too.
The nearest correct idea I can think of to what Jim actually said, is that if you have a proposition P with an associated credence based on the available evidence, then finding an additional but invalid argument A shouldn’t affect your credence in P. The related error is assuming that if you argue with someone and are able to demolish all their arguments, that this means that you are correct, and giving too little weight to the possibility that they are a bad arguer with a true opinion. Jim, is that close to what you meant?
EDIT: Whoops, didn’t see Jim’s response. But it looks like I guessed right. I’ve also made the related error in the past, and this quote from Black Belt Bayesian was helpful in improving my truth-finding ability:
“You cannot rely on anyone else to argue you out of your mistakes; you cannot rely on anyone else to save you; you and only you are obligated to find the flaws in your positions”
It wasn’t much of an “aha!” moment; when I first read it, I thought something along the lines of “Of course higher standards are possible, but if no one can find flaws in your argument, you’re doing pretty well.” But the more I thought about it, the more I realized that EY made a good point. I later stumbled upon flaws in my long-standing arguments that I had overlooked, yet no one had called me on them.
Not only was the standard lower than I had previously realized, but it is entirely possible for someone to 1) not believe you, 2) not be able to put their refutation into words, and 3) still be right.
http://www.overcomingbias.com/2008/09/refutation-prod.html
The big problem with relying on someone else to save you is “Why would they bother?”. No one is likely to be as motivated to find mistakes in your beliefs as you are (or at least as you should be).
I’ve been reading OB for a comparatively short time, so I haven’t yet been through the vast majority of your posts. But “The Sheer Folly of Callow Youth” really puts in perspective the importance of truth-seeking and why it’s necessary.
Quote: “Of this I learn the lesson: You cannot manipulate confusion. You cannot make clever plans to work around the holes in your understanding. You can’t even make “best guesses” about things which fundamentally confuse you, and relate them to other confusing things. Well, you can, but you won’t get it right, until your confusion dissolves. Confusion exists in the mind, not in the reality, and trying to treat it like something you can pick up and move around, will only result in unintentional comedy. Similarly, you cannot come up with clever reasons why the gaps in your model don’t matter. You cannot draw a border around the mystery, put on neat handles that let you use the Mysterious Thing without really understanding it—like my attempt to make the possibility that life is meaningless cancel out of an expected utility formula. You can’t pick up the gap and manipulate it.”
Link: http://www.overcomingbias.com/2008/09/youth-folly.html
How to make sense out of metaethics. I would particularly name The Meaning of Right.
For me this is a tough question since I’ve been reading your stuff for nearly 10 years now, but thinking of only OB I’d have to say it was the quantum physics stuff, but only because I had encountered essentially everything else in one form or another already, so your writing was just refining the way of presenting what I had already generally learned from you.
Clearing up my meta-ethical confusion regarding utilitarianism. From The “Intuitions” Behind “Utilitarianism”:
Realizing that the expression of any set of values must inherently “sum to 1” was quite an abrupt and obviously-true-in-retrospect revelation.
This is really from times before OB, and might be all too obvious, but the most important thing I’ve learned from your writings (so far) is bayesian probability. I had come in touch with the concept previously, but I didn’t understand it fully or understand why it was very important until I read your early explanatory essays on the topic. When you write your book, I’m sure that you will not neglect to include really good explanations of these things, suitable for people who have never heard of them before, but since no one else has mentioned it in this thread so far, I thought I might.
1) I learned to reconcile my postmodernist trends with physical reality. Sounds cryptic? Well let’s say I learned to appreciate science a little more than I did.
2) I learned to think more “statistically” and probabilistically—though I didn’t learn to multiply.
3) Winning is also a pretty good catch-word for an attitude of mind—and maybe a better title than less-wrong.
4) Oh! - And I stopped buying a lottery ticket.
5) Absence of evidence is evidence of absence—the symmetry of confirming and disconfirming evidence
6) Something I didn’t learn was the long list of cognitive biases. It’s ok to know about them—but I don’t think they are very useable in practice. The one I like best is “overconfidence”.
7) Something else I didn’t learn or understand was your stance on ethics. I am going to rush in and pull the child off the rails as well—but all else is muddled mud to me.
“Thou art Godshatter”—this was one of the first posts I read, and it made the entire heuristics and biases program feel more immediate / compelling than before
Expecting Short Inferential Distances
The ‘Shut up and do the impossible’ sequence.
Newcomb’s problem.
Godshatter.
Einstein’s arrogance.
Joy in the merely real.
The Cartoon Guide to Löb’s Theorem.
Science isn’t strict enough.
The bottom line.
Well, I’d say the most important thing I learned was to be less confident when taking a stand on controversial topics. So to that end, I’ll nominate
Twelve Virtues of Rationality
Politics is the Mind-Killer
Thanks for the link to the list—I keep forgetting that exists. And thanks again to Andrew Hay for making it.
That said, I don’t think I would say I learned anything from your OB posts, at least about rationality. I think I did learn about young Eliezer and possibly about aspiring rationalists in general. If that’s a reasonable topic, then I’d have to suggest something in the “Young Eliezer” sequence, possibly My Wild and Reckless Youth.
There are several variations on the questions you’re asking that I think I could find answers to:
“Which post do you think other people should read so that they will learn something?” (That might be the same as your third question) The Failures of Eld Science
“Which post did you enjoy the most?” Three Worlds Collide—if that counts as 1 post
“Which post do you recommend to people most frequently?” Zombies: the Movie
“Which post do you refer to most frequently in philosophical discussions?” Sorting Pebbles into Correct Heaps
It seems any ‘favorite’ type question will turn up fiction from me.
Philosophers are notably bad at following directions.
I liked philosophy before OB, so I knew you were supposed to question everything. OB revealed new things to question, and taught me to expect genuine answers.
In fact, I’d say that OB reinforced in a more concrete way the belief I got from Wittgenstein that not all questions are meaningful (in particular, the ones for which there cannot be “genuine answers”).
“I suspect that most existential angst is not really existential. I think that most of what is labeled ‘existential angst’ comes from trying to solve the wrong problem”, from Existential Angst Factory.
I don’t know about “most important”, but the one post that really stuck in my mind was Archimedes’s Chronophone. I spent a while thinking about that one.
Just did a quick search of this page and it didn’t turn up… so, by far, the most memorable and referred-to post I’ve read on OB is Crisis of Faith.
Did practicing the Crisis of Faith technique cause you to change your mind about anything?
I really can’t think of any one single thing. Part of it is I think I hadn’t yet “dehindsightbiased” myself, (still haven’t, except now sometimes I can catch myself as it’s happening and say “No! I didn’t know that before, stop trying to pretend that I did.”)
Another part is that lots of posts helped crystallize/sharpen notions I’d been a bit fuzzy on. Part of it is just, well, the total effect.
Stuff like the Evolution sequence and so on were useful to me too.
If I had to pick one thing that stands out in my mind though, I guess I’d have to say the consciousness sequence. Specifically, making it much easier for me to imagine the day that it could be explained (REALLY explained, in the sense of “ooooh, now it really does make sense”) in terms of perfectly ordinary stuff.
Bits and pieces I’d thought out on my own, but, again, you brought home the point strongly.
That’s the best I can think of as far as specific things. The rest, well, it’s the effect of all of it on me, rather than any single one that I can point to.
EDIT: oh, really really important thing: your definition of, well, definitions. I.e., the whole clusters in thingspace, natural boundaries around them, etc.
The idea that the purpose of the law is to provide structure for optimization.
I’m not sure this is the most important thing I’ve learned yet, but it’s the only really ‘aha’ moment I’ve had in the admittedly small sample I’ve been able to catch up on thus far.
I find I think about this most often as I contemplate the effect traffic laws and implements have in shaping my 20 minute optimization exercise in getting to work each morning.
I’m not sure I’ve “learned” anything. You’ve largely convinced me that we don’t really “know” anything but rather have varying degrees of belief, but I believed that to some degree before reading this site and am not 100% convinced of it now.
The most important thing I can think of that I would have said is almost certainly wrong before and that I’d say is probably right now is that it is legitimate to multiply the utility of a possible outcome by its probability to get the utility of the possibility.
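A minimal worked instance of that multiplication (with numbers invented purely for illustration): for a gamble with a 10% chance of an outcome worth 100 utilons and a 90% chance of an outcome worth 0,

$$\mathbb{E}[U] = 0.10 \times 100 + 0.90 \times 0 = 10,$$

so the possibility as a whole is valued at 10 utilons when weighed against certain alternatives.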
Intelligence as a blind optimization process shaping the future—esp. in comparison with evolution—and how the effect of our built-in anthropomorphism makes us see intelligence as existing, when in fact, ALL intelligence is blind. Some intelligence processes are just a little less blind than others.
(Somewhat offtopic, but related: some studies show that the number of “good” ideas produced by any process is linearly proportional to the TOTAL number of ideas produced by that process… which suggests that even human intelligence searches blindly, once we go past the scope of our existing knowledge and heuristics.)
I’m going to echo CatDancer: for me the most valuable insight was that a little information goes a very long way. From the example of the simulated beings breaking out to the Bayescraft interludes to the few observations and lots of cogitations in Three Worlds Collide to GuySrinivasan’s random-walk point, I’ve become more convinced that you can get a surprising amount of utility out of a little data; this changes other beliefs like my assessment of how possible AI rapid takeoff is.
Generalising from one example.
Making Beliefs Pay Rent
The explanation of Bayes Theorem and pointer to E. T. Jaynes. It gave me a statistics that is useful as well as rigorous, as opposed to the gratuitously arcane and not very useful frequentist stuff I was exposed to in grad school.
Second would be the quantum mechanics posts—finally an understandable explanation of the MW interpretation.
Priors as Mathematical Objects: a prior is not something arbitrary, a state of lack-of-knowledge, nor can sufficient evidence turn an arbitrary prior into a precise belief. A prior is the whole algorithm for what to do with evidence, and a bad prior can easily turn evidence into stupidity.
P.S. I wonder if this post was downvoted exclusively because of Eliezer’s administrative remark, and not because of its content.
Vlad, if you’re going to do this, at least do it as replies to your original comment!
Right. I moved other comments under the original one.
I’m going to break with the crowd here.
I don’t think that the Overcoming Bias posts, even cleaned up, are suitable for a book on how to be rational. They are something like a sequence of diffs of a codebase as it was developed. You can get a feel of the shape of the codebase by reading the diffs, particularly if you read them steadily, but it’s not a great way to communicate the shape.
A book probably needs more procedures on how to behave rationally:
- How to use likelihood ratios
- How to use utility functions
- Dutch Books: what they are and how to avoid them
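For instance, a minimal sketch of the first item, a likelihood-ratio update in odds form (the function names and numbers here are my own illustration, not drawn from any OB post):

```python
def update_odds(prior_odds, likelihood_ratio):
    """Bayes' theorem in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

def odds_to_probability(odds):
    return odds / (1 + odds)

# Example: a test that comes back positive 90% of the time when the hypothesis
# is true and 10% of the time when it is false has a likelihood ratio of 9.
# Starting from a 1% prior (odds of 1:99):
posterior = odds_to_probability(update_odds(1 / 99, 0.9 / 0.1))
print(round(posterior, 3))  # 0.083
```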
The posts are amazing, well connected, and very detailed. I think one of your best insights was to condense these biases into the words of your Confessor:
“[human] rationalists learn to discuss an issue as thoroughly as possible before suggesting any solutions. For humans, solutions are sticky...We would not be able to search freely through the solution space, but would be helplessly attracted toward the ‘current best’ point, once we named it. Also, any endorsement whatever of a solution that has negative moral features, will cause a human to feel shame—and ‘best candidate’ would feel like an endorsement. To avoid feeling that shame, humans must avoid saying which of two bad alternatives is better than the other.”
Any understanding of what it means to be rational must come to terms with the treacherous nature of the mind; the myriad number of traps that hold us back and the lack of any one true principle.
http://www.overcomingbias.com/2009/02/super-happy-people.html