What is wrong with “Traditional Rationality”?
In several places in the sequences, Eliezer writes condescendingly about “Traditional Rationality”. The impression given is that Traditional Rationality was OK in its day, but that today we have better varieties of rationality available.
That is fine, except that it is unclear to me just what the traditional kind of rationality included, and it is also unclear just what it failed to include. In one essay, Eliezer seems to be saying that Traditional Rationality was too concerned with process, whereas it should have been concerned with winning. In other passages, it seems that the missing ingredient in the traditional version was Bayesianism (a la Jaynes). Or sometimes, the missing ingredient seems to be an understanding of biases (a la Kahneman and Tversky).
In this essay, Eliezer laments that being a traditional rationalist was not enough to keep him from devising a Mysterious Answer to a mysterious question. That puzzles me because I would have thought that traditional ideas from Peirce, Popper, and Korzybski would have been sufficient to avoid that error. So apparently I fail to understand either what a Mysterious Answer is or just how weak the traditional form of rationality actually is.
Can anyone help to clarify this? By “Traditional Rationality”, does Eliezer mean to designate a particular collection of ideas, or does he use it more loosely to indicate any thinking that is not quite up to his level?
I don’t think there’s a single defining point of difference, but I tend to think of it as the difference between the traditional social standard of having beliefs you can defend and the stricter individual standard of trying to believe as accurately as possible.
The How to Have a Rational Discussion flowchart is a great example of the former: the question addressed there is whether you are playing by the rules of the game. If you are playing by the rules and can defend your beliefs, great, you’re OK! This is how we are built to reason.
X-rationality emphasizes having accurate beliefs over having defensible beliefs. If you fail to achieve a correct answer, it is futile to protest that you acted with propriety. Instead of asking “does this evidence allow me to keep my belief or oblige me to give it up?”, it asks “what is the correct level of confidence for me to have in this idea given this new evidence?”
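That updating rule can be made concrete with Bayes' theorem. A minimal sketch (the numbers are invented purely for illustration):

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Invented numbers: a 30% prior in hypothesis H, and evidence that is
# twice as likely if H is true as it is if H is false.
updated = posterior(0.30, 0.80, 0.40)
print(f"{updated:.3f}")  # confidence rises from 0.30 to about 0.462
```

The defensibility standard asks whether holding the 0.30 was permissible; the accuracy standard asks what number the new evidence now obliges.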
Excellent summary. This goes really well with Oscar_Cunningham’s list in his comment to this post.
Eliezer uses “Traditional Rationality” to mean something like “Rationality, as practised by scientists everywhere, especially the ones who read Feynman and Popper”. It refers to the rules that scientists follow.
A surely incomplete list of deficiencies:
The practitioners only use it within some small domain.
Maybe they even believe that one can only be rational in this domain.
Designed to work for groups, not for individuals. Telling someone to use Science to become smart is like telling them to use Capitalism to become rich.
It doesn’t tell you how to create hypotheses, only how to test them.
Imprecise understanding of probability and knowledge (which are the same thing).
Bizarre fetishisation of “falsification”.
Failure to concentrate on the important problems.
Focus on logical fallacies—rejecting argument from authority, etc., and ignoring Aumann.
Excellent additions to the list.
Thx. Seems like a very good summary.
Traditional rationality goes back to Aristotle and is something that both Feynman and Popper rejected. Among other things, traditional rationality is:
instrumentalist (thinks theories are only instruments for making predictions)
empiricist (thinks all knowledge comes from evidence)
foundationalist (thinks knowledge requires foundations)
inductivist (thinks knowledge can be induced by repeated observations)
essentialist (thinks things have essences; emphasizes “what is” questions)
justificationist (thinks knowledge must be justified)
Popper rejected all of the above, so calling his philosophy “traditional rationality” is highly misleading. Bayesianism, on the other hand, is firmly in the tradition of Aristotle.
Popperian philosophy is not a set of rules; Popper emphasized that the truth is not manifest and that there is no road to truth.
Well, no. Popperism has been applied in many domains, including morality (which empiricist, instrumentalist Bayesianism is pretty much silent about). See, for example, David Deutsch’s “Taking Children Seriously”. Also see his new book The Beginning of Infinity. As a point of logic, observing that something has so far been used only in a small domain is not an argument that it can’t be used outside that domain.
This just betrays a lack of familiarity with Popperism.
Popperian philosophy is important for individuals because it is about how knowledge is created and everybody creates knowledge. Again, I refer you to Deutsch. Also, since you mentioned capitalism, Popperian philosophy offers explanations for why capitalism is good, and it can do so because economics is another domain in which it applies :)
Lol. That is a criticism of Bayesianism, not Popperism. Bayesianism is about assigning probabilities to hypotheses, not about how to create new hypotheses. In Popperism, we don’t just want to create hypotheses, we want to create explanatory knowledge. Talking about hypotheses is just another sign of instrumentalism. Popperism says that explanatory knowledge arises as conjectural solutions to problem situations. Bayesianism says knowledge is induced from data, which, as Popper argued, is impossible. (And this is a very hard thing for people to get their heads around, because the memes of traditional rationality have seeped into all aspects of most people’s thinking. Popper really is different.)
It is an utter debasement of knowledge to say it is all probability. In what sense does a probability correspond to an explanation? How can you reduce the content of an explanation to a probability?
It has now been pointed out repeatedly in these forums that Popperism is not falsificationism. Bayesians: please pay attention.
Huh? What important problems did Popper and Feynman not concentrate on?
As others have said in the subcomments, this disputes the definition. We can have a debate about what should be called “traditional rationality”, but it is not the discussion we are having now. The original post instead asked what “traditional rationality” precisely means in Yudkowsky’s texts and what is wrong with that.
Does it mean that there are no rules in Popperianism? Can a Popperian scientist do whatever he wishes? (Yudkowsky’s critique was that the rules of traditional rationality—as defined by him—are rather insufficient, that TR allows one to hold many beliefs that Bayesianism rejects. So, even if it is the case that Popper = “anything goes”, the critique would apply.)
In the sense that each explanation has associated a probability.
Definitely not something you should include in your comment if you want your interlocutors to respond unemotionally.
As prase said, you’ve been confused by the specific term used—the “Traditional Rationality” that EY was talking about isn’t the actual human being that was Karl Popper, but the pop-culture version of Popper which has been a major influence on the thinking of most scientifically-literate people of the modern era.
To make an analogy: if someone asked me what “Romeo” and “Juliet” meant in Taylor Swift’s song “Love Story”, my answer would be quite inaccurate as a description of the play—because the “Romeo” and “Juliet” in the song aren’t the two love-besotted idiots in the play, they’re the stereotypical young lovers of pop culture.
You say Eliezer is just talking about the pop-culture version of Popper, rather than actual Popperian philosophy. So he knows the difference right? He knows that pop-culture contains a lot of myths about Popper right? I don’t think so. Eliezer’s criticisms are actually directed at Popper, but he doesn’t understand Popper, only some pop-culture version.
Here is an example from my wild and reckless youth:
This is directed at Popper. It shows that Eliezer doesn’t know that criticism and explanation are major components of Popperian philosophy and that rather than spending 30 years trying to test a “silly idea”, a Popperian would criticize it to see if it stands up as a good explanation. The idea is presumed to be silly, so it would not stand up and the scientist can get on with enjoying the next 30 years. If Eliezer recognized it was just a pop-culture cartoon he would have said so and he would have differentiated that from the actual Popper. He didn’t.
What makes you say that?
The issue of criticism and explanation is raised in A Prodigy of Refutation, but note that Eliezer never brings up Popper at all, only the expectations that the common culture of traditional rationalists imposed on him.
It’s the sort of thing people who don’t know much about Popperian philosophy say when they try to criticize him. People who know a lot about Popper encounter the same myths time and again. Here the myth is Popperism is falsificationism.
Eliezer doesn’t mention explanation in the link you gave.
Let me see if I understand this. Many people criticise Popper for being X. Eliezer criticises X. Therefore Eliezer criticises Popper.
I’m afraid I don’t follow the chain of logic here at all.
I think the problem is that Eliezer mentions Popper by name, in the vicinity of X, thereby encouraging an association between Popper and X. I don’t have quotes handy but I did see a quote like that cited in the last day or two.
Eliezer has mentioned Popper by name in a number of places and said that “Previously, the most popular philosophy of science was probably Karl Popper’s falsificationism”. See: http://yudkowsky.net/rational/bayes
So he thinks (or did think) Popperism is falsificationism. He doesn’t realize he is criticizing a pop-culture myth.
He is also wrong about the popularity of Popper.
(BTW, this rate filtering is a pain. I’m now aware of three people, including myself, who are critical of Bayesianism and who have zero karma. Does this happen a lot?)
Seems to have just happened recently. Though similar things have happened before, I’m sure.
To try to see why you were at 0 points, I looked through the first two pages of your comments. Sorry if this advice is unsolicited, but I think there are some things you could fix.
Downvoted thing one: “Aristotle invented the idea of induction. It is a major false idea in philosophy, one that Less Wrong subscribes to. If you disagree, please show me a criticism of induction in the sequences.”
Reasons for getting downvoted: Not being charitable (i.e. doing your homework even when the other person seems wrong) leading to a fairly false equivalence between different things called “induction.” Demand that someone else show you a specific piece of evidence that you could find as easily as they.
2: “Good criticisms here, yet downvoted to −3. Do LWer’s really want to be less wrong?”
Reasons for getting downvoted: Fairly obvious, this didn’t work, try to do something more effective next time.
3: This long comment.
Things you could do better in this comment: Stick close to a few key points rather than trying to argue against everything—if you’d just posted the response to the first quote you would have communicated much better despite saying less. In fact arguing against everything is generally a bad sign, since (charity here) you should start out working from the assumption that the other person is partially right. You come across as too attached to one “big idea” and not sensitive enough to context because you bring Popper into your replies to points (e.g. his second one) that had nothing to do with Popper. If you’re feeling confrontational, try to not let it show through in the post—win by being better than the other person at this sort of argumentation, and don’t start any of your replies with “Lol.”
You might also focus on making witty, insightful, or helpful posts, but it’s harder for me to say how to make things go right.
I actually don’t care about karma—I’m not posting to get good karma. Neither is curi. Disagreements should be resolved by discussion and by criticism, not by voting. I was just wondering how many people who disagree with Bayesianism end up with 0 karma on LW and whether that isn’t a bias? BTW, how do you know the reason something got downvoted?
With regard to your comments:
I have not found something on LW arguing that induction is impossible, the Popperian position. I have read a bunch of stuff here (done some homework) and it seems to me to be in the inductivist tradition of Aristotelian philosophy. I know other people who say the same thing and LW’ers that I have talked to seem incredulous that induction is impossible. So if you claim not to be in this mainstream tradition, I don’t see how that can be and asking for material I cannot find is reasonable.
That wasn’t an attempt to get upvotes. It was a comment to curi, who I know.
If I just commented on the first quote, people would have accused me of disputing the definition (which they did anyway—oh well). The “rules followed by scientists” refers to “traditional philosophy”, by which Eliezer/Oscar mean Popper. Some commenters think Eliezer is only criticizing pop-culture. That is not so: he is criticizing Popper, and there are other posts where he makes this explicit. So Popper has everything to do with this.
You said not to start any replies with “lol”. Popperians will try doing different things in conversation to see how the other person reacts. Are they concerned with style over substance? Do they place too much emphasis on emotional reactions? Are they conformists? I wasn’t doing that in this instance, but by enforcing rigid standards of communication you lose knowledge. curi talks more about this in his threads.
Karma is not a method of resolving disagreements here, it’s a feedback mechanism. If your comments are being heavily downvoted, it lets you know that people are finding something objectionable about them. Ideally we would like to be able to resolve disagreements here by discussion or experiment, but not all discussion is fruitful, and when a debate persists without a useful exchange of information or changing of opinions, then many people are going to want to see less of it.
This reads to me as “I don’t care about karma, just about knowledge that can be derived from karma.” These two positions seem to be, for all practical purposes, indistinguishable.
Also, for #1, AFAIK bayesians do not seek knowledge in the platonic sense.
If you are interested in communicating ideas, running experiments on your audience is probably not helpful to your goals. Moreover, just because someone is “concerned with style over substance” or is a “conformist” does not mean they have nothing useful to offer.
Moreover, in most internet conversations, the vast majority of readers are people who will never comment. If you have any interest in getting them to listen, coming across as rude, or unnecessarily obnoxious will not endear you to them.
It can be. Conventional social rules often mask disagreements and are designed to do that. If you stick to the social rules, the truth can take longer to come out.
I agree, but I didn’t say that.
I think stating the truth about things is enough not to endear yourself to a lot of people, so trying to endear yourself to them isn’t going to help.
I’m pretty sure it’s a mistake to lump together everyone who says induction is possible as “the mainstream tradition”.
They are all in the justificationist tradition, which is mainstream.
By that same logic, I could say “Popper is in the non-quantitative tradition, which is mainstream (in contrast to Bayesian epistemology)”. Reflecting one aspect of the mainstream, even a particularly important one, is still not sufficient for actually being mainstream.
You’re just arguing terminology. I don’t know what for. I was explaining what Brian meant.
Oops, I misread his “this mainstream tradition” as “the mainstream tradition”. Apologies.
There are a variety of issues going on here. Manfred pointed out many of them. There’s another issue here, which is that you’ve had an influx of users, all of whom are arguing for essentially the same set of positions and not doing it very well, with a bit of rudeness thrown in. One of the three is being particularly egregious, and I suspect that there may be some spill-over in attitude from that user’s behavior towards how people are voting about you. I will note that in the threads responding to the various Popperian criticisms, various LW regulars are willing to say when another LWian has said something they think is wrong. It might help to distinguish yourselves if you were willing to point out when you think the others are wrong. For example, you haven’t posted at all in this thread. Do you agree with everything he has said there? If you disagree will you say so or do you feel a need to stay silent to protect a fellow member of your tribal group?
For what it is worth, I’m not a Bayesian. I think that Bayesianism has deep problems especially surrounding 1) the difficulty of where priors come from 2) the difficulty of meaningfully making Bayesian estimates about abstract systems. I’ve voiced those concerns before here, and many of those comments have been voted up. Indeed, I recently started a subthread discussing a problem with the Solomonoff prior approach which has been voted up.
I agree with curi that the Conjunction Fallacy does not exist. But if I disagreed I would say so—Popperians don’t hold back from criticism of each other. If my criticism hit its mark, then curi would change his mind and I know that because I participate in Popperian forums that curi participates in. That said, most Popperians I know think along similar lines; I see more disagreement among Bayesians about their philosophy here.
Your thread is about a technical issue and I think Bayesians are more comfortable discussing these sort of things.
He’s not doing a very good job making that case. Do you think you can do a better job?
Also, let’s go through some of his other claims in that thread. I’m curious which you agree with: Do you agree with the rest of what he has to say in that thread when he claims that “bad pseudo-scientific research designed to prove that people are biased idiots”? Do you agree with him that there is a deliberate “agenda” within the cognitive bias research “which has a low opinion of humans” which is “treating humans like dirt, like idiots”?
Do you agree with his claim that the conjunction fallacy is a claim about all thought about conjunctions and not some conjunctions?
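For reference, the conjunction rule the fallacy is measured against is a theorem that holds for every pair of events, whatever the joint distribution. A minimal sketch (the joint probabilities are invented for illustration):

```python
# An arbitrary joint distribution over two events A and B
# (invented numbers; the four probabilities sum to 1).
joint = {(True, True): 0.10, (True, False): 0.30,
         (False, True): 0.25, (False, False): 0.35}

p_a = sum(p for (a, _), p in joint.items() if a)  # P(A) = 0.40
p_a_and_b = joint[(True, True)]                   # P(A and B) = 0.10

# P(A and B) <= P(A) holds for any joint distribution,
# since every (A and B) outcome is also an A outcome.
assert p_a_and_b <= p_a
```

The empirical claim under dispute is a separate question: under which conditions people's judgments violate this inequality, not whether the inequality itself holds.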
Do you agree with his claim that “”Probability estimate” is a technical term which we can’t expect people to know? Do you agree with his implicit claim that this should apply even to highly educated people who work as foreign policy experts?
I don’t know if there is a deliberate agenda and I wouldn’t have stated things so baldly (and that might just be a hangup on my part). Let’s look at the Tversky and Kahneman paper that curi cited. The first sentence says:
So in the very first sentence, the authors have revealed a low opinion of humans. They think humans have a condition, although they don’t explain what it is, only that uncertainty is part of it.
Later in the paper, they say:
So inference and judgement are governed by heuristics, genetic in origin (though this is just implied and the authors do nothing to address it). It’s not that humans come up with explanations and solve problems, it’s not that we are universal knowledge creators, it’s that we use heuristics handed down to us from our genes and we must be alerted to biases in them in order to correct them, otherwise we make systematic errors. So, again, a low opinion of humans. And we don’t do induction—as Popper and others such as Deutsch have explained, induction is impossible, it’s not a way we reason.
curi noted the authors also say:
So they admit bias.
Yes. It is based on inductivist assumptions about how people think, as the quote above illustrates. They disregard the importance of explanations and they think humans do probabilistic reasoning using in-born heuristics and that these are universal.
Do you think foreign policy experts use probabilities rather than explanations?
Um, I think you are possibly taking a poetic remark too seriously. If they had said “uncertainty is part of everyday life” would you have objected?
Heuristics are not necessarily genetic. They can be learned. I see nothing in their paper that implies that they were genetic, and having read a fair amount of what both T & K wrote, there’s no indication that I saw that they strongly thought that any of these heuristics were genetic.
Ok. This confuses me. Let’s say that humans use genetic heuristics: how is that a low opinion? Moreover, how does that prevent us from being universal knowledge creators? You also seem to be conflating whether something is a good epistemology with whether a given entity uses it. Whether humans use induction and whether induction is a good epistemological approach are distinct questions.
This seems close to, if anything, Christian apologists saying how if humans don’t have souls then everything is meaningless. Do you see the connection here? Just because humans have flaws doesn’t make humans terrible things. We’ve split the atom. We’ve gone to the Moon. We understand the subtle behavior of the prime numbers. We can look back billions of years in time to the birth of the universe. How does thinking we have flaws mean one has a low opinion of humans?
I’m curious, when a psychologist finds a new form of optical illusion, do you discount it in the same way? Does caring about that or looking for those constitute a low opinion of humans?
That’s a tortured reading of the sentence. The point is that they wanted to see if humans engaged in conjunction errors. So they constructed situations where, if humans were using the representativeness heuristic or similar systems, the errors would be likely to show up. This is, from the perspective of Popper in LScD, a good experimental protocol, since if it didn’t happen, it would be a serious blow to the idea that humans use a representativeness heuristic to estimate likelihood. They aren’t admitting “bias”—their point is that since their experimental constructions were designed to maximize the opportunity for a representativeness heuristic to show up, they aren’t a good estimate for how likely these errors are to occur in the wild.
So it seems to me that you are essentially saying that you disagree with their experimental evidence on philosophical grounds. If your evidence disagrees with your philosophy the solution is not to deny the evidence.
In some contexts, yes. For example, foreign policy experts working with economists or financial institutions sometimes will make probability estimates for them to work with. But let’s say they never do. How is that at all relevant to the questions at hand? Do you really think that the idea of estimating a probability is so strange and technical that highly educated individuals shouldn’t be expected to understand what is being asked of them? And yet you think that Tversky had a low opinion of humans? Moreover, even if they did have trouble understanding what was meant, do you expect that trouble would, by sheer coincidence, produce exactly the pattern of apparent bias one would expect given the conjunction fallacy?
You can read “human condition” as a poetic remark, but choosing such a phrase to open a scientific paper is imprecise and vague, and the fact that they chose it reveals something of the authors’ bias, I think.
No, Tversky and Kahneman have not specifically said here whether the heuristics in question are genetic or not. Don’t you think that’s odd? They’re just saying we do reasoning using heuristics, but not explaining anything. Yet explanations are important; from these everything else follows.
That they think the heuristics are genetic is an inference, and googling around I see that researchers in this field talk about “evolved mental behaviour”, so I think the inference is correct. It means that some ideas we hold can’t be changed, only worked around, and that these ideas are part of us even though we did not voluntarily take them onboard. So we involuntarily hold unchangeable ideas that we may or may not agree with and that may be false. It’s leading towards the idea we are not autonomous agents in the world, not fully human. The idea that we are universal knowledge creators means that all of our ideas can be changed and improved on. If there are flaws in our ideas, we discard them once the flaws are discovered.
With regard to induction, epistemology tells us that it is impossible, therefore no creature can use it. Yes, I disagree with the experimental evidence on philosophical grounds; the philosophy is saying the evidence is wrong, that the researchers made mistakes. curi has given some theories about the mistakes the researchers made, so it does indeed seem as though the evidence is wrong.
I have no problem with the idea that probabilities help solve problems. Probabilities arise as predictions of theories, so are important. But probability has nothing to do with the uncertainty of theories, which can’t be quantified, and no role in epistemology whatsoever. It’s taking an objective physical concept and applying it in a domain it doesn’t belong. I could go on, but you mention LScD, so I presume you know some of these ideas, right? Have you read Conjectures and Refutations or Deutsch?
Well said. And btw about “human condition” at first I thought you might be overreacting to the phrase, from your previous comments here, but I found your email very convincing and I think you have it right. I think “poetic remark” is a terrible excuse—it’s merely a generic denial that they meant what they said. With the implicit claim that: this is unrepresentative, and they were right the rest of the time. The apologist doesn’t argue this claim, or even state it plainly; it’s just the subtext.
The way you explain how their work pushes in the direction of denying we’re fully human, via attacking our autonomy (and free will, I’d add), is nice.
One thing I disagree with is the presumption that an LScD reader would know what you mean. You’re so much more advanced than just the content of LScD. You can’t expect someone to fill in the blanks just from that.
It’s not an agenda in the sense of a political agenda (though it does have some connections to political ideas), nor a conspiracy, nor a consciously intended and promoted agenda.
But, they have a bunch of unconscious ideas—a particular worldview—which informs how they approach their research and, because they do not use the rigor of science which prevents such things, their worldview/agenda biases all their results.
The proper rigor of science includes things like describing the experimental procedure in your paper so mistakes can be criticized and it can be repeated without introducing unintended changes, and having a “sources of error” section where you discuss all the ways your research might be wrong. When you leave out standard parts of science like those, and other more subtle ones, you get unscientific results. The scientific method, as Feynman explained, is our knowledge about how not to fool ourselves (i.e. it prevents our conclusions from being based on our biases). When you don’t use it, you get wrong, useless and biased results by default.
One of the ways this paper goes wrong is that it doesn’t pay enough attention to the correct interpretation of the data. Even if the data was not itself biased—which they openly admit it is—their interpretation would be A) problematic and B) not argued for by the data itself (interpretations of data are never argued for by the data itself, but must be considered as a separate and philosophical issue!)
If you try enough, you can get people to make mistakes. I agree with that much. But what mistake are the people making? That’s not obvious, but the authors don’t seriously discuss the matter. For example, how much of the mistake people are making is due to miscommunication—that they read the question they are asked as having a meaning a bit different than the literal meaning the researchers consider the one true meaning? The possibility that the entire phenomenon they were observing, or part of it, is an aspect of communication not biases about probability is simply not addressed. Many other issues of interpretation of the results aren’t addressed either.
They simply interpret the experimental data in a way in line with their biases and unconscious agendas, and then claim that empirical science has supported their conclusions.
Yes, I agree, and the ideas are not all unconscious either. What do you think the worldview is? I’m guessing the worldview has ideas in it like animals create knowledge, but not so much as people, and that nature (genes) influence human thought leading to biases that are difficult to overcome and to special learning periods in childhood. It’s a worldview that denies people their autonomy isn’t it? I guess most researchers looking at this stuff would be politically left, be unaware of good philosophy, and have never paid close attention to issues like coercion.
Yes.
I think they would sympathize with Haldane’s “queerer than we can suppose” line (quoted in BoI) and the principle of mediocrity (in BoI).
There’s something subtle but very wrong with their worldview that has to do with the difference between problem finding and problem solving. These people are not bubbling with solutions.
A lot of what they are doing is excusing faults. Explaining faults without blaming human choices. Taking away our responsibility and our ability to be responsible. They like to talk about humans being influenced—powerless and controlled—but small and subtle things. This connects with the dominant opinion on Less Wrong that morality does not exist.
They have low standards. They know their “science” is biased, but it’s good enough for them anyway. They don’t expect, and strive for, better. They think people are inherently parochial—including themselves, who they consider only a little less so—and they don’t mind.
Morality can’t exist without explanations, btw, and higher level concepts. Strong empiricism and instrumentalism—as dominate Less Wrong—destroy it pretty directly.
They would not like Ayn Rand. And they would not like Deutsch.
http://nobelprize.org/nobel_prizes/economics/laureates/2002/kahnemann-lecture.pdf
They take for granted that rationality is bounded and then seek out ways to show it, e.g. by asking people to use their intuition and then comparing that intuition against math—a dirty trick, with a result easily predictable in advance, in line with the conclusion they assumed in advance. Rationality is bounded—they knew that since college merely by examining their own failings—and they’re just researching where the bounds are.
EDIT
http://www.timeshighereducation.co.uk/story.asp?sectioncode=26&storycode=415636
That’s another aspect of it. It’s the same kind of thing. If you establish how biased we are, then our success or failure is dependent not on us—human ideas and human choices—but parochial details like our environment and whether it happens to be one our biases will thrive in or not.
First: bear in mind that Popper was brought up by Oscar Cunningham—EY has probably mentioned him at some point, but not often, and never in the essay you quoted from.
Second: Familiarity with the pop culture idea in no wise implies familiarity with the real thing—more often the opposite.
Most bad explanations, even scientific ones, don’t/shouldn’t get tested at all. DD explained this in FoR and BoI both.
Could you justify why you consider Aristotle a Bayesian, or why you think Bayes is in the Aristotelian tradition, other than because all rationality is in the tradition of Aristotle (in the West, anyway)? Honestly, it just sounds like name-dropping to me. I like Aristotle, and his work on logic is important, so when I see Aristotle cited I usually expect something more substantial.
-3 for this great information and no replies (in particular, no criticisms). wow...
I didn’t downvote, but I suspect those who did did so because Oscar listed features of traditional rationality, which he described as being things that scientists (specifically, philosophically-literate ones) did, and Brian interpreted this as a description of Popperianism. Oscar wasn’t trying to describe a philosophy, he was trying to describe the habits of scientists, so a rant about Popper was inappropriate.
If you would pay attention, Oscar wrote
which is why Brian took it to be about Popper and also Feynman
Oscar said that scientists who read Popper did certain things.
Brian wrote 400 words arguing that Popper did not do those things, which is beside the point unless he was assuming that Popper fell into the category of “scientists who had read Popper.”
This in no way contradicted anything that Oscar wrote.
Just because someone mentions a noun in their text does not mean they’re writing about that noun. You seem to be suffering from the halo effect: interpreting any negative sentence that mentions Popper as an attack on Popper. Consider: what are the odds that it would be you who would notice that a post had been unfairly downvoted, despite being helpful? Small—LW gets thousands of hits, and hundreds of commenters. On the other hand, your commenting thus was far more likely under the affect heuristic hypothesis.
Although I too now have a sort of negative gut reaction to the defenses of Popper, as a result of reading through the discussions below curi’s recent posts, I think that this reply is a bit unfair. Saying “scientists who read Popper do X” weakly implies that Popper really suggests X, or at least that it is easy to mistakenly derive X from his work; it certainly is a statement about Popper, even if an indirect one.
is literally true, but belongs to the class of literal-interpretation nitpickery so often found in traditional debates. It is logically possible that Popper had in no way suggested the things his followers were doing, but it is not probable and we should not be interested in mere logical possibility.
Although understandable, that was a mistake. It could have been avoided by paying closer attention to the general topic rather than associating concepts based on the previous post. Also, a habit of giving people the benefit of the doubt would have helped prevent interpreting Oscar’s post as, for example, an attack on Popper.
It did take a while for someone to point out the chief problem, I see. But Brian started his grandparent comment by disputing a definition. Moreover, he did so while arguing for a position that (he tells us) rejects essences and de-emphasizes “what is” questions. Brian goes on to “lol” at a claim he gives no obvious sign of understanding.
I’m really writing this to ask if you’ve already answered the question at the end of this comment, or shown some other advantage Popper has over Bayes, in a way that I wouldn’t have seen when I scanned your posts.
This is not a novel contribution; it’s based on a confusion about the position we hold here. The traditional response to comments like this is “read the sequences,” because a lot of effort was put into them so we wouldn’t have to spend time sorting out comments like this. But the sequences are a lot of reading, and it’s not always fair to expect people to do that much work to participate in a conversation, so we don’t say that as much anymore. Still, when a person persistently doesn’t show the commitment to catching up on the information everyone else in the discussion is already operating on, a lot of people will just get frustrated and downvote.
I did not downvote, but I am going to suggest that Brian Scurfield read the sequences if he’s going to participate further. That’s why they were written after all; to bring people up to speed to the point where they’re able to meaningfully participate.
The comment I responded to made factual errors about Popper and Feynman and attributed to them positions they did not hold. I don’t need to read the sequences to point that out.
The comment referred to characteristics of modern individuals, particularly scientists, who identify as rational individuals, and associate themselves with the intellectual traditions of Feynman and Popper. If you read the sequences, you would receive examples which show exactly to what these criticisms refer.
could you link a page in the sequences with a (high quality) criticism of Popper?
No.
There are many pages in the sequences devoted to addressing mistakes made by individuals who identify as rational, who associate themselves with the traditions of modern science, and showing how to do better (not just arguments for a procedure that alleges to be more epistemically sound but how it produces better results in the real world.)
In devoting ourselves to the procedures that produce the best tangible results, we have found no reason to take a particular interest in producing criticisms of Popper. If Critical Rationalism distinguished itself as an epistemology that produced exceptional real world results, then matters would be different.
Hmm...I had something else written here, but had a thought causing me to be less certain of what I wrote. I do think Popper should be criticized by someone on this site, to point out what is wrong with his epistemology.
I agree that the whole Popper debate has passed the point of being silly; I’m ashamed to have continued to participate in it so far past the point where it was clear that further headway was unlikely to be made. I dispute the allegation of bad scholarship though.
The purpose isn’t to criticize the authors, but how the specified people behave. What the authors actually say is irrelevant; the criticisms of the people specified by the reference to “traditional rationalists” would be equally applicable whether Popper and Feynman’s writings on epistemology were complete nonsense or identical to what Eliezer is arguing.
There are, of course, wide selections of views encompassed in mainstream philosophy and traditional rationality, but the differences between them are only salient to the discussion if they distinguish them from the qualities that are being referenced.
I apologize; I edited my comment after submitting it. I did realize the issue of relevance, and I also think my criticism was unfair, in that I think the critique of “Traditional Rationality” is meant to be a methodological critique. I think the critique is very much in terms of valuing process (even a particular scholarly process) over results, which was also part of your point.
I guess I’m very much used to the scholarship process, and I’m not entirely clear on what “Traditional Rationality” ultimately is meant to imply, other than finding clues on various pages. I shouldn’t have expressed my confusion as disagreement.
David Deutsch is an accomplished scientist/philosopher who is in the intellectual tradition of Feynman and Popper (but not of Aristotle). He has none of the characteristics Oscar mentions. Oscar has no idea what people like David Deutsch are like. Plus Deutsch has achieved way more than anybody here, but so much for being a Popperian eh?
I would question how you know what everyone here has achieved.
Anyway, Deutsch may have an impressive list of accomplishments but I wouldn’t say he surpasses Newton. Does that mean he should give up on Popper and go back to 16th century Christianity?
Well, for one thing, I know that you haven’t built an AI or come anywhere near close and I know that because your philosophy is empiricist, instrumentalist, and inductivist, all mistakes that prevent your project getting off the ground. Deutsch, on the other hand, invented universal quantum computation, founded Taking Children Seriously (which anyone interested in AI really ought to know about), wrote a best seller, pioneered universal constructor theory, advanced Popperian philosophy, and has a new book just out. So, unless someone here has done something significant outside AI that I don’t know about, I rest my claim.
Depends what counts as an AI, but no, we haven’t built an AGI. Nor are we currently trying to. Nor has Deutsch, and none of his actual achievements are even close to being of similar difficulty, so this is an unreasonably high bar.
For someone whose philosophy is supposed to avoid circularity, you have a bad tendency to resort to “you’re wrong because you’re wrong, so there” arguments.
Less impressive than universal laws of motion. Once again, why aren’t you a 16th century Christian like Newton?
Many very irrational people have founded political movements
Many very irrational people have written popular books
See above.
Bottom line, you’re arguing from authority, which I thought was a big no-no for Popper.
Do they usually let irrational cranks into the Royal Society? Honest question. I know that some crap people get Nobel prizes (though that’s the peace and economics prizes, which are politicized. I don’t know if anyone awful got the physics prize).
Probably, I’m not qualified to comment. Since I don’t think Deutsch is a crank that’s fairly irrelevant.
I pointed those out not to argue from authority but to point out what a scientist in the tradition of Popper and Feynman has done, and it is a world away from what was suggested scientists in that tradition do.
Taking Children Seriously is not a political movement; it is a philosophy.
Many irrational people have founded those as well.
Looking into it, it looks very similar to something Hanson came up with independently.
Also, read the sequences before you make accusations about their content! Yudkowsky is a big fan of Feynman; he doesn’t view himself as refuting Feynman but as building upon him. He is happy to say that traditional rationality (his definition, not yours) is a great thing (he does not say the same of Aristotle). He merely points out that it is not enough to prevent many people, including himself, from believing stupid things. One of the running themes of the sequences is high standards.
Do you have a link to Hanson? Taking Children Seriously is trickier than it appears, so new people often mistakenly think it is similar to ideas they heard before or were thinking before.
Cool that we all like Feynman. But Feynman was in the Popperian tradition, so I don’t see how Yudkowsky could be building on Feynman when he says he is “dethroning Popper”. Can you point me to a place in the sequences where Feynman is discussed?
Yes, it is easy to fool ourselves, as Feynman said. That’s why you need a philosophy that focuses on finding errors and correcting them, as Popperism does. You’re always going to make mistakes—the truth is not obvious after all—but it is through our mistakes that we progress, so be relentless in uncovering mistakes, make your mistakes fast, and celebrate them!
Can I make a serious request?
Please, try really hard to cut back on the ‘us versus them’ mentality. Earlier in this thread someone tried to explain what Yudkowsky means when he says ‘traditional rationality’, and in the process he mentioned that such people often cite Popper, which they do (not every Popperian shares your views). Now you are saying that someone is not allowed to say ‘I think Feynman was a really smart guy; he had a lot of good advice, and reading his books as a kid set me down a very good path’ without accepting everything Popper said.
Where can I find these many people who mention Popper?
How come they find you guys, but don’t manage to find any of the Popperian meeting places online that Brian, I, and others we know frequent? Do they have any websites where they post Popperian related material? I’ve done plenty of searches for such things. I don’t think there’s very much by people I don’t know.
I have not looked into it closely, they may or may not have their own website. My philosophy teacher claims to be a Popperian, but he sounds nothing like you or Brian, he does place a lot of emphasis on the whole ‘black swan white swan, falsification is possible but confirmation isn’t’ stuff.
Many of the people I am referring to are more casual fans than you and Brian, they may have read a few of his books or maybe just some secondary texts. They probably haven’t seriously looked into the details or underlying principles, and they definitely haven’t looked into the alternatives. When questioned about philosophy of science, Popper is their fall-back option.
FYI most people with casual knowledge of Popper have read summaries rather than Popper’s books (and, if anything, just read LScD and maybe OSE). In general secondary sources are unreliable and introduce many errors. In the case of Popper in particular the situation is much worse than usual and the secondary sources are jam packed with myths.
There are several reasons for this:
1) Popper questioned some deeply ingrained common-sense cultural assumptions. People have a hard time grasping what his position even is, and grasping that those assumptions aren’t laws of nature and can be questioned.
2) Popper pissed some people off by criticizing them, in particular Marxists. Marxists played a major role in spreading myths about Popper, and they have few moral qualms about abandoning high-quality scholarship.
3) Popper somewhat associated with some people he didn’t agree with. In particular the Vienna Circle. They published some of his work and took an interest in it. This encouraged the myth that Popper agreed with their main program, which he never did.
4) Some members of the Vienna Circle tried to understand Popper on their own terms. Two major mistakes they made were:
A) they reinterpreted Popper’s criterion of demarcation between science and non-science (which is: science is stuff where empirical observations are relevant and used) as a criterion of meaningfulness. That is, they took it to mean non-science was meaningless. That is in line with their other philosophy, but Popper never thought anything like that.
B) they mistakenly took Popper’s ideas about falsification as a replacement for confirmation, instead of recognizing them as a different kind of thing.
Due to issues like these, people with a casual acquaintance with Popper aren’t really Popperians. They don’t get it. One has to study him more closely to get past issues like this, as well as the difficulty of the material (Popper solved major philosophical problems that many others failed to solve. It’s not that easy to understand.)
My friend Rafe Champion (http://www.the-rathouse.com/) has a particular interest in this. He takes new philosophy books, especially ones used by schools, and checks what they say about Popper. The answer is basically always: not much, and most of it wrong. Yudkowsky’s comments on Popper at http://yudkowsky.net/rational/bayes are representative of the mistakes found in most general overview philosophy books.
If one has a manufacturing process that often produces below-specification products, it seems odd to suggest that one should develop an extensive inspection and testing process.
One should really just develop a better manufacturing process.
(in this analogy, traditional rationalism + science is the old manufacturing process, Popperism is the extensive inspection process, and Bayesianism is the new manufacturing process)
All epistemologies purport to be good for developing knowledge. We dispute the notion here that Popperism is as good at arriving at true conclusions in practice. You tend to run into errors with regards to privileging the hypothesis in particular, and to privileging clever arguers whose conclusions are not tied to the evidence.
If you want to further discuss this matter, please read the sequences, which were written to provide people with the shared base of knowledge necessary to hold fruitful discussions here, so that we wouldn’t have to keep providing constant corrections, explanations and clarifications.
If you want to dispute that the criticisms we have with regards to Popper’s epistemology are legitimate, please do so after reading the sequences; it will help you understand why we’ve made them, and encourage others to take your arguments seriously. Otherwise you would be wasting your time.
If Yudkowsky hates Popper and likes Feynman, then either
1) he likes some individual, narrow aspects of feynman (perfectly fine and unobjectionable)
2) he hasn’t understood what feynman is about
3) i have misunderstood feynman, badly
Agree so far? Do you think it’s number 1?
I don’t think he hates Popper. I will resist the urge to answer this question, because I can’t and shouldn’t speak for Eliezer.
My own opinion is that both Popper and Feynman were intelligent, and far more rational than the average person, or even the average scientist, especially for their time. Both of them pushed rationality forwards, but with the introduction of Bayesian epistemology it can be pushed further still, and for the first time made rigorous. It is not their fault that they were born too early to see this happen, but this doesn’t mean we should prevent it from happening out of respect for them.
As Eliezer said, “heroes are milestones to tick off in your rear-view mirror”.
I don’t see how
… could be interpreted as making claims about Popper or Feynman, or attributing any positions to them. Oscar’s writing was quite clear and understandable.
You really don’t see how that could be done, even with the usage of words such as “especially”?
Read the context. Oscar makes a set of claims about scientists, especially those who read Popper and Feynman. Such scientists, apparently, make a fetish of falsification, operate only in a small domain, don’t explain how knowledge is created, etc. Well, those sorts of things are not in the tradition of Popper and Feynman, and if there are scientists who do that, and who have read Popper and Feynman, then they did not understand what they read. Not only is Oscar’s comment rude to the tradition of Popper and Feynman; he doesn’t understand that tradition.
Right, and if that’s the case, then Oscar’s characterization was correct, and not attributing any positions to Feynman and Popper.
Oscar was just summarizing Eliezer (with caveats like “something like”), so it seems a bit of a waste of time to attack his summary in detail, when instead you could just find which of Eliezer’s writings Oscar formed that impression from, and point out any errors at their source.
My vague recollection of Eliezer’s position would be something like “Here are the kind of mistakes that I made, that listening to Feynman didn’t prevent, and that scientists still make”. But again, that’s just my vague summary, no point in trying to take it apart.
Accurately understanding a work is no prerequisite to being influenced by it.
most of brian’s post was about stuff he knows about (e.g. popper). it was correcting mistaken comments about that topic. “read the sequences” is a stupid response to that.
But it’s based on misunderstandings of what we’re actually talking about, which he would not hold had he read the sequences.
His first statement “Traditional rationality goes back to Aristotle and is something that both Feynman and Popper rejected” is irrelevant because it’s not addressing what anyone else in the conversation is talking about. Oscar Cunningham clarified what Eliezer was talking about, and Eliezer’s commentary on it is elucidated in the sequences, and if Brian Scurfield had read them, he could have dispensed with the remainder of his post as well.
if you would pay any attention, you would notice that oscar wrote
and that brian’s reply was relevant to that.
your comment is plainly factually false.
If he was disputing the definition, then his comment was irrelevant. What he was describing was not what was under discussion.
Oscar made a list of things that were allegedly in the tradition of Feynman and Popper. He was wrong about those. Furthermore, Feynman and Popper are in a different tradition to Aristotle, which is conventionally called “traditional rationality”. Oscar says that Bayesianism is not “traditional rationality”, meaning it is not Popperism, but it is firmly in the mainstream tradition of Aristotle: it is conventional traditional rationality.
It really isn’t.
Aristotle is one of the few philosophers who is explicitly named and criticized in the sequences.
Aristotle invented the idea of induction. It is a major false idea in philosophy, one that Less Wrong subscribes to. If you disagree, please show me a criticism of induction in the sequences.
Reversed stupidity is not intelligence. Just because we think Aristotle was wrong about some things doesn’t mean we are obliged to disagree with him about everything.
Also, I’m pretty sure Aristotle did not invent induction. He may have been the first person to call it that, but he didn’t invent the concept, which probably predates writing.
Thinking about it, dogs are capable of induction, which suggests that no human invented it at all, in the same way that no human invented sensory perception.
No, he invented it. Read Popper’s The World of Parmenides. (BTW, Popper took the trouble to learn ancient Greek)
Then explain how my dog figured out that if she sits when I say ‘sit’ I give her food.
I’m pretty sure she’s never read Aristotle.
Aristotle may have codified induction, he may have taken credit for it, but he didn’t invent it.
Bit of a waste of time for someone so important, no? Is there anything he gained from that which he couldn’t have gained from a translation?
Translation quality is, in general, terrible. By “terrible” I mean not nearly good enough for philosophy where some precision and detail matters. I’ve read 5 different translations of Xenophanes’ fragments. They are all significantly different, and they change the meaning.
BTW I’m not even sure if an English translation of Xenophanes existed yet when Popper learned Greek. Lesher’s book was published in 1992. Of course Popper was fluent in German, but the German translators are in general significantly worse, and the German language is not good for philosophy. Once Popper learned English he stopped doing philosophy in German saying it was much worse for it.
Popper did his own translations of some texts and published criticisms of other translations which had got it wrong. He gives good, persuasive arguments about why he has it right. Some of the people replied, and you can read their view of the matter and judge for yourself who had it right (Popper :).
To do good translations of philosophers, you have to not just know the language but also have some understanding of the philosophy. That’s the main reason Popper was able to do better than other translators who knew the language better than him. Popper came up with good explanations about what the people were trying to say, while others focussed on words too directly.
About the dog: you’re correct that on the theory that both people and animals do induction all the time, it must have predated Aristotle. So if Popper is wrong about his major ideas, he’s wrong about this one too; but if not, then your argument wouldn’t hold here. On our theory that induction is a substantive philosophical idea, never actually done by anyone but merely a misconception, it had to be invented. And Aristotle is the best candidate for who invented it, as Popper explained.
One thing to consider is: if induction predates aristotle, which philosophers predating aristotle are in the inductivist tradition? In my reading, they are all different in their attitudes, assumptions and outlooks. Xenophanes is a good example of this (who Aristotle disliked). If you can’t find any induction in the presocratics, then saying it was popular since prehistory wouldn’t really make sense.
The dog is just enacting programs encoded in its genes by evolution (uniquely among animals, dog genes contain knowledge of human memes). Dogs can’t create knowledge, humans are the only animal that have that capacity, and the evolutionary process that created the knowledge in a dog’s genes is not an inductive process.
Popper learnt ancient Greek because he knew that translations are often wildly inaccurate and they are inaccurate because all translations are interpretations. He liked to get his facts correct.
Dogs do not have ‘knowledge’ in their genes. What they do have is pattern-matching capabilities. If they see a pattern enough times, they start expecting it to occur more often.
This same pattern matching goes on in the brains of humans, with the difference that the patterns it can spot are more sophisticated. Without it we would never have invented science or technology, and for that matter we would never have survived in the ancestral environment.
If the first three people to wander into the swamp get eaten by crocodiles, and you don’t consider this a valid argument for not walking into the swamp, then your genes won’t be present in the next generation.
I take it you have a subjectivist conception of knowledge. Is that right?
If they considered something else a good argument for the same conclusion, then that argument wouldn’t work (had to do induction or die). Agreed?
In some ways, Eliezer is too hard on Traditional Rationalists (TRists). In the “wild and reckless youth” essay, which you cite, he focuses on how TR didn’t keep him from privileging a hypothesis and wasting years of his life on it.
But TR, as represented by people like Sagan and Feynman, does enjoin you to believe things only on the basis of good evidence. Eliezer makes it sound like you can believe whatever crazy hypothesis you want, as long as it’s naturalistic and in-principle-falsifiable, and as long as you don’t expect others to be convinced until you deliver good evidence. But there are plenty of TRists who would say that you ought not to be convinced yourself until your evidence is strong.
However, Eliezer still makes a very good point. This injunction doesn’t get you very far if you don’t know the right way to evaluate evidence as “strong”, or if you don’t have a systematic method for synthesizing all the different evidences to arrive at your conclusion. This is where TR falls down. It gives you an injunction, but it leaves too much of the details of how to fulfill the injunction up to gut instinct. So, Eliezer will be contributing something very valuable with his book.
Plus, the focus in TR on whether you ought to be convinced makes it seem like belief is binary. Under TR, you’re always asking yourself, “Is there enough evidence yet so that we ought to be convinced?” TRists will talk about probabilities and error bars, but there is an incomplete acknowledgement of the fact that you ought not to think in terms of thresholds of belief at all.
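As a loose illustration of that last point (my own sketch, not from this thread), Bayesian updating treats belief as a continuous probability that shifts with each piece of evidence, so there is never a threshold where one suddenly becomes “convinced”:

```python
# Hypothetical illustration: belief as a continuous probability updated by
# evidence, rather than a binary "convinced / not convinced" switch.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return P(H | E) given prior P(H) and likelihoods P(E | H), P(E | ~H)."""
    numerator = likelihood_if_true * prior
    evidence = numerator + likelihood_if_false * (1 - prior)
    return numerator / evidence

belief = 0.5  # start undecided
# Suppose each observation is 4x as likely if the hypothesis is true.
for _ in range(3):
    belief = bayes_update(belief, 0.8, 0.2)

print(round(belief, 3))  # belief rises smoothly: 0.8, then ~0.941, then ~0.985
```

The numbers (0.5 prior, 0.8 vs. 0.2 likelihoods) are invented for the example; the point is only that confidence changes by degrees as evidence accumulates.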
I just started listening to THIS (perhaps 15min of it on my drive to work this morning), and EY has already mentioned a little about traditional rationality vs. where he is now with respect to reading Feynman. I’m not sure if he’ll talk more about this, but Luke’s page does have as a bullet point of the things covered:
so perhaps he’ll continue in detail about this. Offhand, all I can specifically remember is that at one point he encountered someone who thought that multiple routes to solving a problem might lead to different, but correct, answers. Then he read Jaynes, who said that this was all wrong: all mathematically valid routes to a real solution should and will converge.
Also, there’s a whole series of posts on EY’s “Coming of Age” HERE, but maybe you knew that and still aren’t satisfied.
Re: The podcast—the relevant bit is about 4 minutes in.
And is that all he says about it? Or is there any more later?
One relevant attempt at a definition:
All of those are problems with traditional rationality, and Eliezer has critiqued traditional rationality for all of them. Traditional rationality should have helped Eliezer more than it did, except for one thing: his mysterious answer was falsifiable, but not until we’d developed the technology to test it, which wouldn’t happen for decades.
I’ve done some work on the wiki page, but was unsure how much info to add. Should I just combine my and ciphergoth’s replies and put it up there? Help is appreciated.