First: bear in mind that Popper was brought up by Oscar Cunningham—EY has probably mentioned him at some point, but not often, and never in the essay you quoted from.
Second: Familiarity with the pop culture idea in no wise implies familiarity with the real thing—more often the opposite.
As prase said, you’ve been confused by the specific term used—the “Traditional Rationality” that EY was talking about isn’t the actual human being that was Karl Popper, but the pop-culture version of Popper which has been a major influence on the thinking of most scientifically-literate people of the modern era.
To make an analogy: if someone asked me what “Romeo” and “Juliet” meant in Taylor Swift’s song “Love Story”, my answer would be quite inaccurate as a description of the play—because the “Romeo” and “Juliet” in the song aren’t the two love-besotted idiots in the play, they’re the stereotypical young lovers of pop culture.
You say Eliezer is just talking about the pop-culture version of Popper, rather than actual Popperian philosophy. So he knows the difference, right? He knows that pop culture contains a lot of myths about Popper, right? I don’t think so. Eliezer’s criticisms are actually directed at Popper, but he doesn’t understand Popper, only some pop-culture version.
Here is an example from my wild and reckless youth:
The way Traditional Rationality is designed, it would have been acceptable for me to spend 30 years on my silly idea, so long as I succeeded in falsifying it eventually, and was honest with myself about what my theory predicted, and accepted the disproof when it arrived, et cetera.
This is directed at Popper. It shows that Eliezer doesn’t know that criticism and explanation are major components of Popperian philosophy and that rather than spending 30 years trying to test a “silly idea”, a Popperian would criticize it to see if it stands up as a good explanation. The idea is presumed to be silly, so it would not stand up and the scientist can get on with enjoying the next 30 years. If Eliezer recognized it was just a pop-culture cartoon he would have said so and he would have differentiated that from the actual Popper. He didn’t.
What makes you say that?
The issue of criticism and explanation is raised in A Prodigy of Refutation, but note that Eliezer never brings up Popper at all, only the expectations that the common culture of traditional rationalists imposed on him.
It’s the sort of thing people who don’t know much about Popperian philosophy say when they try to criticize him. People who know a lot about Popper encounter the same myths time and again. Here the myth is that Popperism is falsificationism.
Eliezer doesn’t mention explanation in the link you gave.
Let me see if I understand this. Many people criticise Popper for being X. Eliezer criticises X. Therefore Eliezer criticises Popper.
I’m afraid I don’t follow the chain of logic here at all.
I think the problem is that Eliezer mentions Popper by name, in the vicinity of X, thereby encouraging an association between Popper and X. I don’t have quotes handy but I did see a quote like that cited in the last day or two.
Eliezer has mentioned Popper by name in a number of places and said that “Previously, the most popular philosophy of science was probably Karl Popper’s falsificationism”. See: http://yudkowsky.net/rational/bayes
So he thinks (or did think) Popperism is falsificationism. He doesn’t realize he is criticizing a pop-culture myth.
He is also wrong about the popularity of Popper.
(BTW, this rate filtering is a pain. I’m now aware of three people, including myself, who are critical of Bayesianism and who have zero karma. Does this happen a lot?)
Seems to have just happened recently. Though similar things have happened before, I’m sure.
To try to see why you were at 0 points, I looked through the first two pages of your comments. Sorry if this advice is unsolicited, but I think there are some things you could fix.
Downvoted thing one: “Aristotle invented the idea of induction. It is a major false idea in philosophy, one that Less Wrong subscribes to. If you disagree, please show me a criticism of induction in the sequences.”
Reasons for getting downvoted: Not being charitable (being charitable here meaning doing your homework even when the other person seems wrong), leading to a fairly false equivalence between different things called “induction.” Also, demanding that someone else show you a specific piece of evidence that you could find as easily as they could.
2: “Good criticisms here, yet downvoted to −3. Do LWers really want to be less wrong?”
Reasons for getting downvoted: Fairly obvious. This didn’t work; try to do something more effective next time.
3: This long comment.
Things you could do better in this comment: Stick close to a few key points rather than trying to argue against everything—if you’d just posted the response to the first quote you would have communicated much better despite saying less. In fact arguing against everything is generally a bad sign, since (charity here) you should start out working from the assumption that the other person is partially right. You come across as too attached to one “big idea” and not sensitive enough to context because you bring Popper into your replies to points (e.g. his second one) that had nothing to do with Popper. If you’re feeling confrontational, try to not let it show through in the post—win by being better than the other person at this sort of argumentation, and don’t start any of your replies with “Lol.”
You might also focus on making witty, insightful, or helpful posts, but it’s harder for me to say how to make things go right.
I actually don’t care about karma—I’m not posting to get good karma. Neither is curi. Disagreements should be resolved by discussion and by criticism, not by voting. I was just wondering how many people who disagree with Bayesianism end up with 0 karma on LW, and whether that isn’t a bias. BTW, how do you know the reason something got downvoted?
With regard to your comments:
I have not found anything on LW arguing that induction is impossible, the Popperian position. I have read a bunch of stuff here (done some homework) and it seems to me to be in the inductivist tradition of Aristotelian philosophy. I know other people who say the same thing, and LWers that I have talked to seem incredulous that induction is impossible. So if you claim not to be in this mainstream tradition, I don’t see how that can be, and asking for material I cannot find is reasonable.
That wasn’t an attempt to get upvotes. It was a comment to curi, who I know.
If I had just commented on the first quote, people would have accused me of disputing the definition (which they did anyway—oh well). The “rules followed by scientists” refers to “traditional philosophy”, by which Eliezer/Oscar mean Popper. Some commenters think Eliezer is only criticizing pop culture. That is not so: he is criticizing Popper, and there are other posts where he makes this explicit. So Popper has everything to do with this.
You said not to start any replies with “lol”. Popperians will try doing different things in conversation to see how the other person reacts. Are they concerned with style over substance? Do they place too much emphasis on emotional reactions? Are they conformists? I wasn’t doing that in this instance, but by enforcing rigid standards of communication you lose knowledge. curi talks more about this in his threads.
I actually don’t care about karma—I’m not posting to get good karma. Neither is curi. Disagreements should be resolved by discussion and by criticism, not by voting.
Karma is not a method of resolving disagreements here, it’s a feedback mechanism. If your comments are being heavily downvoted, it lets you know that people are finding something objectionable about them. Ideally we would like to be able to resolve disagreements here by discussion or experiment, but not all discussion is fruitful, and when a debate persists without a useful exchange of information or changing of opinions, then many people are going to want to see less of it.
I actually don’t care about karma—I’m not posting to get good karma. Neither is curi. Disagreements should be resolved by discussion and by criticism, not by voting. I was just wondering how many people who disagree with Bayesianism end up with 0 karma on LW, and whether that isn’t a bias. BTW, how do you know the reason something got downvoted?
This reads to me as “I don’t care about karma, just about knowledge that can be derived from karma.” These two positions seem to be, for all practical purposes, indistinguishable.
Also, for #1, AFAIK Bayesians do not seek knowledge in the Platonic sense.
You said not to start any replies with “lol”. Popperians will try doing different things in conversation to see how the other person reacts. Are they concerned with style over substance? Do they place too much emphasis on emotional reactions? Are they conformists?
If you are interested in communicating ideas, running experiments on your audience is probably not helpful for your goals. Moreover, just because someone is “concerned with style over substance” or is a “conformist” does not mean they have nothing useful to offer.
Moreover, in most internet conversations, the vast majority of readers are people who will never comment. If you have any interest in getting them to listen, coming across as rude or unnecessarily obnoxious will not endear you to them.
If you are interested in communicating ideas, running experiments on your audience is probably not helpful for your goals.
It can be. Conventional social rules often mask disagreements and are designed to do that. If you stick to the social rules, the truth can take longer to come out.
Moreover, just because someone is “concerned with style over substance” or is a “conformist” does not mean they have nothing useful to offer.
I agree, but I didn’t say that.
Moreover, in most internet conversations, the vast majority of readers are people who will never comment. If you have any interest in getting them to listen, coming across as rude or unnecessarily obnoxious will not endear you to them.
I think stating the truth about things is enough not to endear yourself to a lot of people, so trying to endear yourself to them isn’t going to help.
I have not found anything on LW arguing that induction is impossible, the Popperian position. I have read a bunch of stuff here (done some homework) and it seems to me to be in the inductivist tradition of Aristotelian philosophy. I know other people who say the same thing, and LWers that I have talked to seem incredulous that induction is impossible. So if you claim not to be in this mainstream tradition, I don’t see how that can be, and asking for material I cannot find is reasonable.
I’m pretty sure it’s a mistake to lump together everyone who says induction is possible as “the mainstream tradition”.
They are all in the justificationist tradition, which is mainstream.
By that same logic, I could say “Popper is in the non-quantitative tradition, which is mainstream (in contrast to Bayesian epistemology)”. Reflecting one aspect of the mainstream, even a particularly important one, is still not sufficient for actually being mainstream.
You’re just arguing terminology. I don’t know what for. I was explaining what Brian meant.
Oops, I misread his “this mainstream tradition” as “the mainstream tradition”. Apologies.
There are a variety of issues going on here. Manfred pointed out many of them. Another issue is that there has been an influx of users, all of whom are arguing for essentially the same set of positions, not doing it very well, and with a bit of rudeness thrown in. One of the three is being particularly egregious, and I suspect that there may be some spill-over in attitude from that user’s behavior towards how people are voting about you. I will note that in the threads responding to the various Popperian criticisms, various LW regulars are willing to say when another LWian has said something they think is wrong. It might help to distinguish yourselves if you were willing to point out when you think the others are wrong. For example, you haven’t posted at all in this thread. Do you agree with everything he has said there? If you disagree, will you say so, or do you feel a need to stay silent to protect a fellow member of your tribal group?
For what it is worth, I’m not a Bayesian. I think that Bayesianism has deep problems, especially surrounding 1) the difficulty of saying where priors come from and 2) the difficulty of meaningfully making Bayesian estimates about abstract systems. I’ve voiced those concerns before here, and many of those comments have been voted up. Indeed, I recently started a subthread discussing a problem with the Solomonoff prior approach which has been voted up.
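As a toy illustration of the first difficulty (the numbers here are invented for the example, not taken from the discussion): two agents who agree on the likelihoods but start from different priors reach very different conclusions from the same evidence. With a likelihood ratio Pr(E|H)/Pr(E|not-H) = 10, Bayes’ theorem in odds form gives
posterior odds = prior odds × likelihood ratio,
so a prior of Pr(H) = 0.5 (odds 1:1) yields posterior odds 10:1, i.e. Pr(H|E) ≈ 0.91, while a prior of Pr(H) = 0.01 (odds 1:99) yields 10:99, i.e. Pr(H|E) ≈ 0.09. The update rule itself says nothing about where the prior should come from.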
I agree with curi that the Conjunction Fallacy does not exist. But if I disagreed I would say so—Popperians don’t hold back from criticism of each other. If my criticism hit its mark, then curi would change his mind; I know that because I participate in Popperian forums that curi participates in. That said, most Popperians I know think along similar lines; I see more disagreement among Bayesians about their philosophy here.
Your thread is about a technical issue and I think Bayesians are more comfortable discussing these sort of things.
I agree with curi that the Conjunction Fallacy does not exist.
He’s not doing a very good job making that case. Do you think you can do a better job?
Also, let’s go through some of his other claims in that thread. I’m curious which you agree with:
Do you agree with the rest of what he has to say in that thread, where he claims that it is “bad pseudo-scientific research designed to prove that people are biased idiots”? Do you agree with him that there is a deliberate “agenda” within the cognitive bias research “which has a low opinion of humans” and which is “treating humans like dirt, like idiots”?
Do you agree with his claim that the conjunction fallacy is a claim about all thought about conjunctions and not some conjunctions?
Do you agree with his claim that “probability estimate” is a technical term which we can’t expect people to know? Do you agree with his implicit claim that this should apply even to highly educated people who work as foreign policy experts?
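For reference, the identity behind the term “conjunction fallacy” is a standard fact of probability theory: a conjunction can never be more probable than either of its conjuncts, since for any events A and B,
Pr(A and B) = Pr(A) × Pr(B | A) ≤ Pr(A).
Judging a conjunction to be more probable than one of its own conjuncts is the error the experiments claim to elicit; the dispute below is over the experiments and their interpretation, not over this identity.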
Do you agree with the rest of what he has to say in that thread, where he claims that it is “bad pseudo-scientific research designed to prove that people are biased idiots”? Do you agree with him that there is a deliberate “agenda” within the cognitive bias research “which has a low opinion of humans” and which is “treating humans like dirt, like idiots”?
I don’t know if there is a deliberate agenda and I wouldn’t have stated things so baldly (and that might just be a hangup on my part). Let’s look at the Tversky and Kahneman paper that curi cited. The first sentence says:
Uncertainty is an unavoidable aspect of the human condition.
So in the very first sentence, the authors have revealed a low opinion of humans. They think humans have a condition, although they don’t explain what it is, only that uncertainty is part of it.
Later in the paper, they say:
Our studies of inductive reasoning have focused on systematic errors because they are diagnostic of the heuristics that generally govern inference and judgement.
So inference and judgement are governed by heuristics, genetic in origin (though this is just implied and the authors do nothing to address it). It’s not that humans come up with explanations and solve problems, it’s not that we are universal knowledge creators, it’s that we use heuristics handed down to us from our genes and we must be alerted to biases in them in order to correct them, otherwise we make systematic errors. So, again, a low opinion of humans. And we don’t do induction—as Popper and others such as Deutsch have explained, induction is impossible, it’s not a way we reason.
curi noted the authors also say:
Our problems, of course, were constructed to elicit conjunction errors, and they do not provide an unbiased estimate of the prevalence of these errors.
So they admit bias.
Do you agree with his claim that the conjunction fallacy is a claim about all thought about conjunctions and not some conjunctions?
Yes. It is based on inductivist assumptions about how people think, as the quote above illustrates. They disregard the importance of explanations and they think humans do probabilistic reasoning using in-born heuristics and that these are universal.
Do you agree with his claim that “probability estimate” is a technical term which we can’t expect people to know? Do you agree with his implicit claim that this should apply even to highly educated people who work as foreign policy experts?
Do you think foreign policy experts use probabilities rather than explanations?
Uncertainty is an unavoidable aspect of the human condition.
So in the very first sentence, the authors have revealed a low opinion of humans. They think humans have a condition, although they don’t explain what it is, only that uncertainty is part of it.
Um, I think you are possibly taking a poetic remark too seriously. If they had said “uncertainty is part of everyday life” would you have objected?
So inference and judgement are governed by heuristics, genetic in origin (though this is just implied and the authors do nothing to address it).
Heuristics are not necessarily genetic. They can be learned. I see nothing in their paper that implies these heuristics were genetic, and having read a fair amount of what both T & K wrote, I saw no indication that they strongly thought any of these heuristics were genetic.
It’s not that humans come up with explanations and solve problems, it’s not that we are universal knowledge creators, it’s that we use heuristics handed down to us from our genes and we must be alerted to biases in them in order to correct them, otherwise we make systematic errors. So, again, a low opinion of humans. And we don’t do induction—as Popper and others such as Deutsch have explained, induction is impossible, it’s not a way we reason.
Ok. This confuses me. Let’s say that humans use genetic heuristics; how is that a low opinion? Moreover, how does that prevent us from being universal knowledge creators? You also seem to be conflating whether something is a good epistemology with whether a given entity uses it. Whether humans use induction and whether induction is a good epistemological approach are distinct questions.
If anything, this seems close to Christian apologists saying that if humans don’t have souls then everything is meaningless. Do you see the connection here? Just because humans have flaws doesn’t make humans terrible things. We’ve split the atom. We’ve gone to the Moon. We understand the subtle behavior of the prime numbers. We can look back billions of years in time to the birth of the universe. How does thinking we have flaws mean one has a low opinion of humans?
I’m curious, when a psychologist finds a new form of optical illusion, do you discount it in the same way? Does caring about that or looking for those constitute a low opinion of humans?
Our problems, of course, were constructed to elicit conjunction errors, and they do not provide an unbiased estimate of the prevalence of these errors.
So they admit bias.
That’s a tortured reading of the sentence. The point is that they wanted to see if humans engaged in conjunction errors. So they constructed situations where, if humans were using the representativeness heuristic or similar systems, the errors would be likely to show up. This is, from the perspective of Popper in LScD, a good experimental protocol, since if it didn’t happen, it would be a serious blow to the idea that humans use a representativeness heuristic to estimate likelihood. They aren’t admitting “bias”; their point is that since their experimental constructions were designed to maximize the opportunity for a representativeness heuristic to show up, they aren’t a good estimate of how likely these errors are to occur in the wild.
Yes. It is based on inductivist assumptions about how people think, as the quote above illustrates. They disregard the importance of explanations and they think humans do probabilistic reasoning using in-born heuristics and that these are universal.
So it seems to me that you are essentially saying that you disagree with their experimental evidence on philosophical grounds. If your evidence disagrees with your philosophy, the solution is not to deny the evidence.
Do you agree with his claim that “probability estimate” is a technical term which we can’t expect people to know? Do you agree with his implicit claim that this should apply even to highly educated people who work as foreign policy experts?
Do you think foreign policy experts use probabilities rather than explanations?
In some contexts, yes. For example, foreign policy experts working with economists or financial institutions sometimes will make probability estimates for them to work with. But let’s say they never do. How is that at all relevant to the questions at hand? Do you really think that the idea of estimating a probability is so strange and technical that highly educated individuals shouldn’t be expected to understand what is being asked of them? And yet you think that Tversky had a low opinion of humans? Moreover, even if they did have trouble understanding what was meant, do you expect that this would, by sheer coincidence, produce exactly the pattern of apparent bias one would expect given the conjunction fallacy?
You can read “human condition” as a poetic remark, but choosing a phrase like that to open a scientific paper is imprecise and vague, and the fact that they chose it reveals something of the authors’ bias, I think.
No, Tversky and Kahneman have not specifically said here whether the heuristics in question are genetic or not. Don’t you think that’s odd? They’re just saying we do reasoning using heuristics, but not explaining anything. Yet explanations are important; from them everything else follows.
That they think the heuristics are genetic is an inference, and googling around I see that researchers in this field talk about “evolved mental behaviour”, so I think the inference is correct. It means that some ideas we hold can’t be changed, only worked around, and that these ideas are part of us even though we did not voluntarily take them on board. So we involuntarily hold unchangeable ideas that we may or may not agree with and that may be false. It’s leading towards the idea that we are not autonomous agents in the world, not fully human. The idea that we are universal knowledge creators means that all of our ideas can be changed and improved on. If there are flaws in our ideas, we discard them once the flaws are discovered.
With regard to induction, epistemology tells us that it is impossible; therefore no creature can use it. Yes, I disagree with the experimental evidence on philosophical grounds; the philosophy is saying the evidence is wrong, that the researchers made mistakes. curi has given some theories about the mistakes the researchers made, so it does indeed seem as though the evidence is wrong.
I have no problem with the idea that probabilities help solve problems. Probabilities arise as predictions of theories, so they are important. But probability has nothing to do with the uncertainty of theories, which can’t be quantified, and it has no role in epistemology whatsoever. It’s taking an objective physical concept and applying it in a domain where it doesn’t belong. I could go on, but you mention LScD, so I presume you know some of these ideas, right? Have you read Conjectures and Refutations or Deutsch?
Well said. And btw, about “human condition”: at first, from your previous comments here, I thought you might be overreacting to the phrase, but I found your email very convincing and I think you have it right. I think “poetic remark” is a terrible excuse—it’s merely a generic denial that they meant what they said, with the implicit claim that this is unrepresentative and they were right the rest of the time. The apologist doesn’t argue this claim, or even state it plainly; it’s just the subtext.
The way you explain how their work pushes in the direction of denying we’re fully human, via attacking our autonomy (and free will, I’d add), is nice.
One thing I disagree with is the presumption that an LScD reader would know what you mean. You’re so much more advanced than just the content of LScD. You can’t expect someone to fill in the blanks just from that.
It’s not an agenda in the sense of a political agenda (though it does have some connections to political ideas), nor a conspiracy, nor a consciously intended and promoted agenda.
But they have a bunch of unconscious ideas—a particular worldview—which informs how they approach their research and, because they do not use the rigor of science which prevents such things, their worldview/agenda biases all their results.
The proper rigor of science includes things like describing the experimental procedure in your paper so mistakes can be criticized and it can be repeated without introducing unintended changes, and having a “sources of error” section where you discuss all the ways your research might be wrong. When you leave out standard parts of science like those, and other more subtle ones, you get unscientific results. The scientific method, as Feynman explained, is our knowledge about how not to fool ourselves (i.e. it prevents our conclusions from being based on our biases). When you don’t use it, you get wrong, useless and biased results by default.
One of the ways this paper goes wrong is that it doesn’t pay enough attention to the correct interpretation of the data. Even if the data were not itself biased—which they openly admit it is—their interpretation would be A) problematic and B) not argued for by the data itself (interpretations of data never are argued for by the data itself, but must be considered as a separate and philosophical issue!)
If you try enough, you can get people to make mistakes. I agree with that much. But what mistake are the people making? That’s not obvious, and the authors don’t seriously discuss the matter. For example, how much of the mistake people are making is due to miscommunication—that they read the question they are asked as having a meaning a bit different from the literal meaning the researchers consider the one true meaning? The possibility that the entire phenomenon they were observing, or part of it, is an artifact of communication rather than of biases about probability is simply not addressed. Many other issues of interpretation of the results aren’t addressed either.
They simply interpret the experimental data in a way in line with their biases and unconscious agendas, and then claim that empirical science has supported their conclusions.
It’s not an agenda in the sense of a political agenda (though it does have some connections to political ideas), nor a conspiracy, nor a consciously intended and promoted agenda.
But they have a bunch of unconscious ideas—a particular worldview—which informs how they approach their research
Yes, I agree, and the ideas are not all unconscious either. What do you think the worldview is? I’m guessing the worldview has ideas in it like: animals create knowledge, but not so much as people; and nature (genes) influences human thought, leading to biases that are difficult to overcome and to special learning periods in childhood. It’s a worldview that denies people their autonomy, isn’t it? I guess most researchers looking at this stuff would be politically left, be unaware of good philosophy, and have never paid close attention to issues like coercion.
Yes.
I think they would sympathize with Haldane’s “queerer than we can suppose” line (quoted in BoI) and the principle of mediocrity (in BoI).
There’s something subtle but very wrong with their worldview that has to do with the difference between problem finding and problem solving. These people are not bubbling with solutions.
A lot of what they are doing is excusing faults. Explaining faults without blaming human choices. Taking away our responsibility and our ability to be responsible. They like to talk about humans being influenced—powerless and controlled—by small and subtle things. This connects with the dominant opinion on Less Wrong that morality does not exist.
They have low standards. They know their “science” is biased, but it’s good enough for them anyway. They don’t expect, or strive for, better. They think people are inherently parochial—including themselves, whom they consider only a little less so—and they don’t mind.
Morality can’t exist without explanations, btw, and higher-level concepts. Strong empiricism and instrumentalism—which dominate Less Wrong—destroy it pretty directly.
They would not like Ayn Rand. And they would not like Deutsch.
http://nobelprize.org/nobel_prizes/economics/laureates/2002/kahnemann-lecture.pdf
Together, we explored the psychology of intuitive beliefs and choices and examined their bounded rationality.
They take for granted that rationality is bounded and then seek out ways to show it, e.g. by asking people to use their intuition and then comparing that intuition against math—a dirty trick, with a result easily predictable in advance, in line with the conclusion they assumed in advance. Rationality is bounded—they have known that since college, merely by examining their own failings—and they’re just researching where the bounds are.
EDIT
http://www.timeshighereducation.co.uk/story.asp?sectioncode=26&storycode=415636
Why did the jump to universality occur in our Western society and not elsewhere? Deutsch rejects the explanations of Karl Marx, Friedrich Engels and Jared Diamond that the dominance of the West is a consequence of geography and climate
That’s another aspect of it. It’s the same kind of thing. If you establish how biased we are, then our success or failure is dependent not on us—human ideas and human choices—but parochial details like our environment and whether it happens to be one our biases will thrive in or not.
First: bear in mind that Popper was brought up by Oscar Cunningham—EY has probably mentioned him at some point, but not often, and never in the essay you quoted from.
Second: Familiarity with the pop culture idea in no wise implies familiarity with the real thing—more often the opposite.
Most bad explanations, even scientific ones, don’t/shouldn’t get tested at all. DD explained this in FoR and BoI both.