My guess is that you are pretty close to the truth with your “best rebuttal contest” and “fun” hypotheses. You are participating in a forum with several hundred people. You can’t expect that all of them will share your own austere tastes in intellectual entertainment.
Yes, but at the same time, people who enjoy that should realize they might be damaging LW’s very good signal-to-noise ratio. I certainly was guilty of that by repeatedly replying.
At the risk of sounding glib, one man’s signal is another man’s noise. About half of those threads consisted of reasonably good and interesting arguments. And the other half included some links to good ideas expressed less shallowly.
Yes, there are good and interesting arguments there, as well as good links. I think many people on LW underestimate the difficulty of communicating, especially communicating new ideas. Learning is difficult, it takes time and effort, and there will be many misunderstandings. curi, myself, and other Popperians have a very different philosophy to the one here at LW. It can take a lot of discussion with someone, often where seemingly no progress is made, before they begin to understand something like why support is impossible.
Another point is that the traditions here are actually not conducive to truth-seeking discussions, especially where there is fundamental disagreement. There is this voting system, for one thing. It’s designed to get people into line and to score posts. It encourages you to write to get good karma, but writing to get good karma shouldn’t be a goal in any truth-seeking discussion. It places too much emphasis on standards and style, but dissent, when it is expressed, may not conform to the standards and style one expects, yet still be truthful. It is supposed to be some kind of troll detector, and a way of labelling trolls, so that people can commit the fallacy of judging ideas by their source. The whole thing, in short, is authoritarian.
Yes, there are good and interesting arguments there, as well as good links.
I must apologize for my lack of clarity. You have apparently taken me as a “fellow traveler”. Sorry. The “good and interesting arguments” that I made reference to were written by the anti-Popper ‘militia’ rather than the Popperian ‘invaders’. I meant to credit the invaders only with “good links”.
It can take a lot of discussion with someone, often where seemingly no progress is made, before they begin to understand something like why support is impossible.
What I have found discouraging in this regard is that you and Curi have been so hopelessly tone-deaf in coming to understand just why your doctrines have been so difficult to sell. Why (and in what sense) people here think that support is possible.
That folks who pay lip service to Popper seem to treat their own positions as given by authority and measure ‘progress’ by how much the other guy changes his mind.
Another point is that the traditions here are actually not conducive to truth-seeking discussions, especially where there is fundamental disagreement. There is this voting system for one thing. … The whole thing, in short, is authoritarian.
Probably the most frustrating thing about these episodes is how little you guys have learned from your experience here, and how much of what you think you have learned is incorrect. People can engage in persistent disagreement with the local orthodoxy without losing karma. People can occasionally speak too impolitely without losing excessive karma. I serve as a fairly good example of both. There are many other examples. It is actually fairly easy. All you have to do is to expend as much effort in reacting to what other people say as in trying to get your own points across. In trying to understand what they are saying, rather than reacting negatively.
Curi’s posting against the conjunction fallacy was a perfect example of how not to do things here—rising to the level of self-parody. Attacking non-existent beliefs in a doctrine he clearly didn’t understand. How could an intelligent person possibly have become so deluded about what his interlocutors understood by the “conjunction fallacy”? How could he have thought it worthwhile to use sock puppets to raise his karma enough to make that posting? Was he really trying to engage in instruction and consciousness raising? Or was he just trying to assert intellectual superiority?
In a sense, it is a shame that the posting can no longer be seen, because it was so perfect as a negative example.
Occasional trollish behavior does not brand one as inherently and essentially trollish. IMHO, our Popperian friends are exhibiting sufficiently good behavior in this conversation as to deserve being treated with a modicum of respect.
Curi’s posting against the conjunction fallacy was a perfect example of how not to do things here—rising to the level of self-parody. [...] In a sense, it is a shame that the posting can no longer be seen, because it was so perfect as a negative example.
That’s this post—which is still somewhat visible.
I would be happy to discard the hypothesis should it be falsified. Or even if a more plausible hypothesis were to be brought to my attention. So let me ask. Do you know why your karma jumped sufficiently to make that conjunction fallacy posting? I would probably accept your explanation if you have a plausible one.
My karma was fluctuating a lot. It got up around 90 three times, and back to 0 each time. It also had many smaller upticks, getting to the 20-50 range.
The reason it stopped fluctuating is the conjunction fallacy post: the −330 from that one post keeps my karma solidly negative.
Well, since you don’t offer a plausible hypothesis, I have no reason to discard my plausible one. But I will repeat the question, since you failed to answer it. Do you know why your karma jumped sufficiently for you to make that posting? That is, for example, do you know that someone (whether or not they wear socks) was systematically upvoting you?
I think I was systematically upvoted at least once, maybe a few times. I also think I was systematically downvoted at least once. I often saw my karma drop 10 or 20 points in a couple of minutes (not due to a front page post), sometimes more. shrug. Wasn’t paying much attention.
You’re really going to assume as a default—without evidence—that I’m a lying cheater rather than show a little respect? Maybe some people liked my posts.
To the extent I’ve shown them to non-Bayesians I’ve received nothing but praise and agreement, and in particular the conjunction fallacy post was deeply appreciated by some people.
I think I was systematically upvoted at least once, maybe a few times.
Upvoted (systematically?) for honesty. ETA: And the sock-puppet hypothesis is withdrawn.
You’re really going to assume as a default—without evidence—that I’m a lying cheater rather than show a little respect? Maybe some people liked my posts.
I liked some of your posts. As for accusing you of being a “lying cheater”, I explicitly did not accuse you of lying. I suggested that you were being devious and disingenuous in the way you responded to the question.
the conjunction fallacy post was deeply appreciated by some people.
Really???? By people who understand the conjunction fallacy the way it is generally understood here? Because, as I’m sure you are aware by now, the idea is not that people always attach a higher probability to (A & B) than they do to (B) alone. It is that they do it (reproducibly) in a particular kind of context C which makes (A | C) plausible and especially when (B | A, C) is more plausible than (B | C).
So what you were doing in that posting was calling people idiots for believing something they don’t believe—that the conjunction fallacy applies to all conjunctions without exception. You were exhibiting ignorance in an obnoxious fashion. So, of course you got downvoted. And now we have you and Scurfield believing that we here are just unwilling to listen to the Popperian truth. Are you really certain that you yourselves are trying hard enough to listen?
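The probabilistic claim above is easy to check with a quick numeric sketch. The numbers below are made up purely for illustration (not taken from any study): a context C that makes A plausible, and makes B far more likely given A than otherwise.

```python
# Illustrative numbers only: a context C where A is plausible and
# B is much more plausible given A than given not-A.
p_A_given_C = 0.9        # P(A | C): A is plausible in context C
p_B_given_AC = 0.8       # P(B | A, C): B is likely once A is assumed
p_B_given_notA_C = 0.1   # P(B | ~A, C): B is unlikely otherwise

# P(A & B | C) by the product rule; P(B | C) by total probability.
p_AB_given_C = p_A_given_C * p_B_given_AC
p_B_given_C = p_AB_given_C + (1 - p_A_given_C) * p_B_given_notA_C

print(round(p_AB_given_C, 2))  # 0.72
print(round(p_B_given_C, 2))   # 0.73
# The conjunction can never beat the single event, whatever numbers we pick:
assert p_AB_given_C <= p_B_given_C
```

So even when the setup makes the conjunction feel representative, P(A & B | C) stays at or below P(B | C); the intuitive judgment, not the mathematics, is what the experiments probe.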
The “good and interesting arguments” that I made reference to were written by the anti-Popper ‘militia’ rather than the Popperian ‘invaders’. I meant to credit the invaders only with “good links”.
Both. I particularly liked curi’s post linking to the Deutsch lecture. But, in the reference in question, here, I was being cute and saying that half the speeches in the conversation (the ones originating from this side of the aisle) contained good and interesting arguments. This is not to say that you guys have never produced good arguments, though the recent stuff—particularly from curi—hasn’t been very good. But I was in agreement with you (about a week ago) when you complained that curi’s thoughtful comments weren’t garnering the upvotes they deserved. I always appreciate it when you guys provide links to the Critical Rationalism blog and/or writings by Popper, Miller, or Deutsch. You should do that more often. Don’t just provide a bare link; (assertion, argument, plus link to further argument) is a winning combination.
I have some sympathy with you guys for three reasons:
By people who understand the conjunction fallacy the way it is generally understood here?
Well, no, they have a different way of thinking about it than you. They disagree with you and agree with me.
Your equivocations about what the papers do and don’t say do not, in my mind, make anything better. You’re first of all defanging the paper as a way to defend it—when you basically deny it made any substantive claims you’re making it easy to defend against anything—and that’s not something, I think, that its authors would appreciate.
So what you were doing in that posting was calling people idiots for believing something they don’t believe—that the conjunction fallacy applies to all conjunctions without exception.
You’ve misunderstood the word “applies”. It applies in the sense of being relevant, not happening every time.
The idea of the conjunction fallacy is that there is a mistake people sometimes make and it has something to do with conjunctions (in general, not a special category of them). It is supposed to apply to all conjunctions in the sense that whenever there is a conjunction it could happen.
What I think is: you can trick people into making various mistakes. The ones from these studies have nothing to do with conjunctions. The authors simply designed it so all the mistakes would look like they had something to do with conjunctions, but actually they had other causes such as miscommunication.
Can you see how, if you discovered a way to miscommunicate with people to cause them to make mistakes about many topics (for example), you could use it selectively, e.g. by making all the examples you publish have to do with conjunctions, and can you see how, in that case, I would be right that the conjunction fallacy is a myth?
Well, no, they have a different way of thinking about it than you. They disagree with you and agree with me.
Your equivocations about what the papers do and don’t say do not, in my mind, make anything better. You’re first of all defanging the paper as a way to defend it—when you basically deny it made any substantive claims you’re making it easy to defend against anything—and that’s not something, I think, that its authors would appreciate.
I’m pretty sure you are wrong here, but I’m willing to look into it. Which paper, exactly, are you referring to?
Incidentally, your reference to my “equivocations about what the papers do and don’t say” suggests that I said one thing at one point, and a different thing at a different point.
Did I really do that? Could you point out how/where?
Can you see how, if you discovered a way to miscommunicate with people to cause them to make mistakes about many topics (for example), you could use it selectively, e.g. by making all the examples you publish have to do with conjunctions, and can you see how, in that case, I would be right that the conjunction fallacy is a myth?
I can see that if I discovered a way to cause people to make mistakes, I would want to publish about it. If the clearest and most direct demonstration (that the thinking I had caused actually was a mistake) were to exhibit the mistaken thinking in the form of a conjunction, then I might well choose to call my discovery the conjunction fallacy. I probably would completely fail to anticipate that some guy who had just had a very bad day (in failing to convince a Bayesian forum of his Popperian brilliance) would totally misinterpret my claims.
They said people have bounded rationality in the Nobel Prize paper. In the 1983 paper they were more ambiguous about their conclusions.
Besides their papers, there is the issue of what their readership thinks. We can learn about this from websites like Less Wrong and Wikipedia and what they say it means. I found they largely read stronger claims into the papers than were actually there. I don’t blame them: the papers hinted at the stronger claims on purpose, but avoided saying too much for fear of being refuted. But they made it clear enough what they thought and meant, and that’s how Less Wrong people understand the papers.
The research does not even claim to demonstrate the conjunction fallacy is common—e.g. they state in the 1983 paper that their results have no bearing (well, only a biased bearing, they say) on the prevalence of the mistakes. Yet people here took it as a common, important thing.
It’s a little funny though, b/c the researchers realize that in some senses their conclusions don’t actually follow from their research, so they have to be careful what they say.
By equivocate I meant making ambiguous statements, not changing your story. You probably don’t regard them as ambiguous. Different perspective.
Everything is ambiguous in regards to some issues, and which issues we regard as the important ones determines which statements we regard as ambiguous in any important way.
I think if your discovery has nothing to do with conjunctions, it’s bad to call it the conjunction fallacy, and then have people make websites saying it has to do with the difference in probability between A&B and A, and that it shows people estimate probability wrong.
Your comments about me having a bad day are silly and should be withdrawn. Among other things, I predicted in advance what would happen and wasn’t disappointed.
Your comments about me having a bad day are silly and should be withdrawn. Among other things, I predicted in advance what would happen and wasn’t disappointed.
“Not disappointed”, huh? You are probably not surprised that people call you a troll, either. :)
As for your claims that people (particularly LessWrongers) who invoke the Conjunction Fallacy are making stronger claims than did the original authors—I’ll look into them. Could you provide two actual (URL) links of LessWrong people actually doing that to the extent that you suggested in your notorious posting?
The moral? Adding more detail or extra assumptions can make an event seem more plausible, even though the event necessarily becomes less probable.
This moral is not what the researchers said; they were careful not to say much in the way of a conclusion, only hint at it by sticking it in the title and some general remarks. It is an interpretation added on by Less Wrong people (as the researchers intended it to be).
This moral contains equivocation: it says it “can” happen. The Less Wrong people obviously think it’s more than “can”: it’s pretty common and worth worrying about, not a one-in-a-million thing.
The moral, if taken literally, is pretty vacuous. Removing detail or assumptions can also make an event seem more plausible, if you do it in particular ways. Changing ideas can change people’s judgment of them. Duh.
It’s not meant to be taken that literally. It’s meant to say: this is a serious problem, it’s meaningful, it really has something to do with adding conjunctions!
The research is compatible with this moral being false (if we interpret it to have any substance at all), and with it only being a one in a million event.
You may think one in a million is an exaggeration. But it’s not. The size of the set of possible questions the researchers selected from was … I have no idea, but surely far more than trillions. How many would have worked? The research does not and cannot say. The only things that could tell us that are philosophical arguments or perhaps further research.
Our problems, of course, were constructed to elicit conjunction errors, and they do not provide an unbiased estimate of the prevalence of these errors.
In this paper, the researchers clearly state that their research does not tell us anything about the prevalence of the conjunction fallacy. But Less Wrongers have a different view of the matter.
They also state that it’s not arbitrary conjunctions which trigger the “conjunction” fallacy, but ones specifically designed to elicit it. But Less Wrong people are under the impression that people are in danger of doing a conjunction fallacy when there is no one designing situations to elicit it. That may or may not be true; the research certainly doesn’t demonstrate it is true.
Note: flawed results can tell you something about prevalence, but only if you estimate the amount of error introduced by the flaws. They did not do that (and it is very hard to do in this case). Error estimates of that type are difficult and would need to be subjected to peer review, not made up by readers of the paper.
There is no plausible way that the students could have misinterpreted this question because of ambiguous understandings of phrases involving “probability”.
The research does not claim the students could not have misinterpreted. It suggests they wouldn’t have—the Less Wronger has gotten the idea the researchers wanted him to—but they won’t go so far as to actually say miscommunication is ruled out, because it’s not. Given that they were designing their interactions with their subjects on purpose in a way to get people to make errors, miscommunication is actually a very plausible interpretation, even in cases where the not-very-detailed writeup of what they actually did fails to explicitly record any blatant miscommunication.
The issue is that when a person intuitively tries to make a judgement on probabilities, their intuition gives them the wrong result, because it seems to use a heuristic based on representativeness rather than actual probability.
This also goes beyond what the research says.
Also note that he forgot to equivocate about how often this happens. He didn’t even put in a “sometimes” or a “more often than never”. But the paper doesn’t support that.
In short how is the experimental setting so different that we should completely ignore experimental results? If you have a detailed argument for that, then you’d actually be making a point.
He is apparently unaware that it differs from normal life in that in normal life people’s careers don’t depend on tricking you into making mistakes which they can call conjunction fallacies.
When this was pointed out to him, he did not rethink things but pressed forward:
So, your position is that it is completely unrealistic to try to trick people, because that never happens in real life?
But when you read about the fallacy in the Less Wrong articles about it, they do not state “only happens in the cases where people are trying to trick you”. If it only applies in those situations, well, say so when telling it to people so they know when it is and isn’t relevant. But this constraint on applicability, from the paper, is simply ignored most of the time to reach a conclusion the paper does not support.
I understand this criticism when applied to the Linda experiment, but not when it is applied to the color experiment. There was no “trickery” here.
Here’s someone from Less Wrong denying there was any trickery in one of the experiments, even though the paper says there was.
It addresses the question of whether that kind of mistake is one that people do sometimes make; it does not address the question of how often people make that kind of mistake in practice.
Note the heavy equivocation. He retreats from “always” which is a straw man, to “sometimes” which is heavily ambiguous about how often. He has in mind: often enough to be important; pretty often. But the paper does not say that.
BTW the paper is very misleading in that it says things like, in the words of a Less Wronger:
65% still chose sequence 2 despite it being a conjunction of sequence 1 with another event
The paper is full of statements that sound like they have something to do with the prevalence of the conjunction fallacy. And then one sentence admitting all those numbers should be disregarded. If there is any way to rescue those numbers as legitimate at all, it was too hard for the researchers and they didn’t attempt it. (I don’t mean to criticize their skill here; I couldn’t do it either. Too hard.)
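As an aside, the bare arithmetic behind the quoted dice sentence is uncontroversial and easy to check. Here is a minimal sketch, assuming the usual setup of a die with four green faces and two red ones; the exact sequences are illustrative assumptions on my part, not quoted from the paper.

```python
# Assumed setup: a die with four green faces and two red,
# so P(G) = 2/3 and P(R) = 1/3 per roll.
p = {"G": 2 / 3, "R": 1 / 3}

def seq_prob(seq):
    """Probability of rolling exactly this sequence of colors."""
    prob = 1.0
    for face in seq:
        prob *= p[face]
    return prob

seq1 = "RGRRR"
seq2 = "GRGRRR"  # seq1 with one extra roll prepended: a conjunction

# The longer sequence is the conjunction of seq1 with one more event,
# so it is necessarily less probable, even though it "looks" more
# representative of a die that favors green.
assert abs(seq_prob(seq2) - p["G"] * seq_prob(seq1)) < 1e-12
assert seq_prob(seq2) < seq_prob(seq1)
```

The percentage of subjects choosing the longer sequence is a separate, empirical matter; the arithmetic only settles which choice is correct.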
It is difficult for me to decide how to respond to this. You are obviously sincere—not trolling. The “conjunction fallacy” is objectionable to you—an ideological assault on human dignity which must be opposed. You see evidence of this malevolent influence everywhere. Yet I see it nowhere. We are interpreting plain language quite differently. Or rather, I would say that you are reading in subtexts that I don’t think are there.
To me, the conjunction fallacy is something like one of those optical illusions you commonly find in psychology books. “Which line is longer?” Our minds deceive us. No, a bit more than that. One can set up trick situations in which our minds predictably deceive us. Interesting. And definitely something that psychologists should look at. Maybe even something that we should watch for in ourselves, if we want to take our rationality to the next level. But not something which proves some kind of inferiority of human reason; something that justifies some political ideology.
I suggested this ‘deflationary’ take on the CF, and your initial response was something like “Oh, no. People here agree with me. The CF means much more than you suggest.” But then you quote Kahneman and Tversky:
Our problems, of course, were constructed to elicit conjunction errors, and they do not provide an unbiased estimate of the prevalence of these errors.
Yes, you admit, K&T were deflationary, if you take their words at face value, but you persist that they intended their idea to be taken in an inflated sense. And you claim that it is taken in that sense here. So you quote LessWrong:
Adding more detail or extra assumptions can make an event seem more plausible.
I emphasized the can because you call attention to it. Yes, you admit, LessWrong was deflationary too, if you take the words at face value. But again you insist that the words should not be taken at face value—that there is some kind of nefarious political subtext there. And as proof, you suggest that we look at what the people who wrote comments in response to your postings wrote.
All I can say is that here too, you are looking for subtexts, while admitting that the words themselves don’t really support the reading you are suggesting:
Here’s someone from Less Wrong denying there was any trickery in one of the experiments, even though the paper says there was.
It addresses the question of whether that kind of mistake is one that people do sometimes make; it does not address the question of how often people make that kind of mistake in practice.
Note the heavy equivocation. He retreats from “always” which is a straw man, to “sometimes” which is heavily ambiguous about how often. He has in mind: often enough to be important; pretty often.
You are making a simple mistake in interpretation here. The vision of the conjunction fallacy that you attacked is not the vision of the conjunction fallacy that people here were defending. It is not all that uncommon for simple confusions like this one to generate inordinate amounts of sound and fury. But still, this one was a doozy.
Bounded rationality is what they believe they were studying.
Our unbounded rationality is the topic of The Beginning of Infinity.
I don’t think the phrase “bounded rationality” means what you think it means. “Bounded rationality” is a philosophical term that means rationality as performed by agents that can only think for a finite amount of time, as opposed to unbounded rationality which is what an agent with truly infinite computational resources would perform instead. Since humans do not have infinite time to think, “our unbounded rationality” does not exist.
Where do they define it to have this technical meaning?
I’m not sure if your distinction makes sense (can you say what you mean by “rationality”?). One doesn’t need infinite time to be fully rational, in the BoI worldview. I think they conceive of rationality itself—and also which bounds are worth attention—differently. btw surely infinite time has no relevance to their studies, in which people spent 20 minutes or whatever. or if you’re comparing with agents that do infinite thinking in 20 minutes, well, that’s rather impossible.
Where do they define it to have this technical meaning?
They don’t, because it’s a pre-existing standard term. Its Wikipedia article should point you in the right direction. I’m not going to write you an introduction to the topic when you haven’t made an effort to understand one of the introductions that’s already out there.
Attacking things that you don’t understand is obnoxious. If you haven’t even looked up what the words mean, you have no business arguing that the professionals are wrong. You need to take some time off, learn to recognize when you are confused, and start over with some humility this time.
But wikipedia says it means exactly what I thought it meant:
Bounded rationality is the idea that in decision making, rationality of individuals is limited by the information they have, the cognitive limitations of their minds, and the finite amount of time they have to make decisions.
Let me quote the important part again:
the cognitive limitations of their minds
So everything I said previously stands.
BoI says there are no limits on human minds, other than those imposed by the laws of physics on all minds, and also the improvable-without-limit issue of ignorance.
My surprise was due to you describing the third clause alone. I didn’t think that was the whole meaning.
No, you’re still confused. Unbounded rationality is the theoretical study of minds without any resource limits, not even those imposed by the laws of physics. It’s a purely theoretical construct, since the laws of physics apply to everyone and everything. This conversation started with a quote where K&T used the phrase “bounded rationality” to clarify that they were talking about humans, not about this theoretical construct (which none of us really care about, except sometimes as an explanatory device).
Our “unbounded rationality” is not the type you are talking about. It’s about the possibility of humans making unlimited progress. We conceive of rationality differently than you do. We disagree with your dichotomy, approach to research, assumptions being made, etc...
So, again, there is a difference in world view here, and the “bounded rationality” quote is a good example of the anti-BoI worldview found in the papers.
The phrase “unbounded rationality” was first introduced into this argument by you, quoting K&T. The fact that Beginning of Infinity uses that phrase to mean something else, while talking about a different topic, is completely irrelevant.
I’m just going to quickly say that I endorse what jimrandomh has written here. The concept of “bounded rationality” is a concept in economics/psychology going back to Herbert Simon. It is not, as you seem to think, an ideology hostile to human dignity.
I’m outta here! The depth and incorrigibility of your ignorance rivals even that of our mutual acquaintance Edser. Any idiot can make a fool of himself, but it takes a special kind of persistence to just keep digging as you are doing.
If the source of your delusion is Deutsch—well, it serves you right for assuming that any physicist has enough modesty to learn something about other fields before he tries to pontificate in them.
If the source of your delusion is Deutsch—well, it serves you right for assuming that any physicist has enough modesty to learn something about other fields before he tries to pontificate in them.
I find nothing in Deutsch that supports curi’s confusion.
Here is a passage from Herbert A. Simon: The Bounds of Reason in Modern America:
Simon’s exploration of how people acquire and employ heuristics led him to posit that there are two kinds of heuristics: “weak” universal ones and “strong” domain-specific ones, the latter depending heavily on the development of domain-specific memory structures. In addition, Simon increasingly emphasized that both heuristics and memory structures evolved through interaction with the environment. The mind was an induction machine. This process of induction produced evolutionary growth patterns, giving both heuristics and memories a common treelike organization. Thus, for Simon, Darwin’s evolutionary tree of life became not only a decision tree but also the tree of knowledge.
Do you think this is accurate? If so, can you see how a Popperian would think it is wrong?
They don’t know anything. They just make up stuff about people they haven’t read. We have read half the stuff they’ve tried to cite more than them. Then they down vote instead of answering.
i checked out the Mathematical Statistics book today that one of them said had a proof. it didn’t have it. and there was that other guy repeatedly pressuring me to read some book he’d never read, based on his vague idea of the contents and the fact that the author was a professional.
they are in love with citing authority and judging ideas by sources (which they think make the value of the ideas probabilistically predictable with an error margin). not so much in love with the notion that if they can’t answer all the criticisms of their position they’re wrong. not so much in love with the notion that popper published criticisms no bayesian ever answered. not so much in love with the idea that, no, saying why your position is awesome doesn’t automatically mean everything else ever is worse and can safely be ignored.
They don’t know anything. They just make up stuff about people they haven’t read. We have read half the stuff they’ve tried to cite more than them. Then they down vote instead of answering.
While making false accusations like this does get you attention, it is also unethical behavior. We have done our very best to help you understand what we’re talking about, but you have a pattern where every time you don’t understand something, you skip the step where you should ask for clarification, you make up an incorrect interpretation, and then you start attacking that interpretation.
Almost every sentence you have above is wrong. Against what may be my better judgment I’m going to comment here because a) I get annoyed when my own positions are inaccurately portrayed and b) I hope that showing that you are wrong about one particular issue might convince you that you are interpreting things through a highly emotional lens that is causing you to misread and misremember what people have to say.
and there was that other guy repeatedly pressuring me to read some book he’d never read, based on his vague idea of the contents and the fact that the author was a professional.
I presume you are referring to my remarks.
Let’s go back to the context.
You wrote:
It’s fine if most people haven’t read Popper. But they should be able to point to some Bayesian somewhere who did, or they should know at least one good argument against a major Popperian idea
and then wrote:
One particular thing I would be interested in is a Bayesian criticism of Popper. Are there any? By contrast (maybe), Popper did criticize Bayesian epistemology in LScD and elsewhere.
I haven’t read it myself but I’ve been told that Earman’s “Bayes or Bust” deals with a lot of the philosophical criticisms of Bayesianism as well as giving a lot of useful references. It should do a decent job in regards to the scholarly concerns.
There are many different ideas out there and, practically speaking, too many for people to have to deal with every single one. Sure, making incorrect arguments is bad. And making arguments against strawmen is very bad. But people don’t have time to actually research every single idea out there or even know which ones to look at. Now, I think that Popper is important enough and has relevant enough points that he should be on the short list of philosophers that people can grapple with at least to some limited extent.
You then wrote:
The number of major ideas in epistemology is not very large. After Aristotle, there wasn’t very much innovation for a long time. It’s a small enough field you can actually trace ideas all the way back to the start of written history. Any professional can look at everything important. Some Bayesian should have. Maybe some did, but I haven’t seen anything of decent quality.
Note that this comment of yours is the first example where the word “professional” shows up.
As to a professional, I already referred you to Earman. Incidentally, you seem to be narrowing the claim somewhat. Note that I didn’t say that the set of major ideas in epistemology isn’t small; I referred to the much larger class of philosophical ideas (although I can see how that might not be clear from my wording). And that set is indeed very large. However, I think that your claim about “after Aristotle” is both wrong and misleading. There’s a lot of thought about epistemological issues in both the Islamic and Christian worlds during the Middle Ages. Now, you might argue that that’s not helpful or relevant since it gets tangled up in theology and involves bad assumptions. But that’s not to say the material doesn’t exist. And that’s before we get to non-Western stuff (which admittedly I don’t know much about at all).
(I agree when you restrict to professionals, and have already recommended Earman to you.)
Which you stated you had not read. I have rather low standards for recommendations of things to read, but “I never read it myself” isn’t good enough.
I don’t agree with “restrict to professionals”. How is it to be determined who is a professional? I don’t want to set up arbitrary, authoritative criteria for dismissing ideas based on their source.
My final comment on that issue was then:
Earman is a philosopher and the book has gotten positive reviews from other philosophers. I don’t know what else to say in that regard.
And asking about your comment about professionals.
Hrrm? You mentioned professionals first. I’m not sure why you are now objecting to the use of professionals as a relevant category.
You never replied to that question. I’ve never mentioned Earman in the about 20 other comments to you after that exchange. This doesn’t seem like “repeatedly pressuring me to read some book he’d never read”. Note also that I never once discussed Earman being a “professional” until you brought up the term.
You appear to be reading conversations through an adversarial lens in which people with whom you disagree must be not just wrong but evil, stupid, or slavishly adhering to authority. This is a common failure mode of conversation. But we’re not evil mutants. Humans like to demonize the ones they disagree with, and it isn’t productive. It is a sign that one is unlikely to be engaging with the actual content. At that point, one needs to either radically change approach or just spend some time doing something else and wait for the emotions to cool down.
Now, you aren’t the only person here with that issue; there’s some reaction by some of the regulars here in the same way, but that’s to a large extent a reaction against what you’ve been doing. So, please leave for some time; take a break. Think carefully about what you intend to get out of Less Wrong and whether or not anyone here has any valid points. And ask yourself, does it really seem likely to you that not a single person on LW would ever have made a point to you that turned out to be correct? Is it really that likely that you are right about every single detail?
If it helps, imagine we weren’t talking about you but some other person. That person wanders over to a forum they’ve never been to, and has extensive discussions about technical issues where that person disagrees with the forum regulars on almost everything, including peripherally related issues. How likely would you be to say that that person had everything right, even if you took for granted that they were correct about the overarching primary issues? So, if the person never agrees that they were wrong about even the tiniest of issues, do you think it is more likely that the person actually got everything right or that they just aren’t listening?
(Now, everyone who thinks that this discussion is hurting our signal to noise ratio please go and downvote this comment. Hurting my karma score is probably the most efficient method of giving me some sort of feedback to prevent further engagement.)
You did not even state whether the Earman book mentions Popper. Because, I took it, you don’t know. So … wtf? That’s the best you can do?
note that, ironically, your comment here complaining that you didn’t repeatedly pressure is itself pressure, and not for the first time...
And ask yourself, does it really seem likely to you that not a single person on LW would ever have made a point to you that turned out to be correct?
When they have policies like resorting to authoritarian, ad hominem arguments rather than substance ones … and attacking merits such as confidence and lack of appeasement … then yes. You’re just asking me to concede because one person can’t be right so much against a crowd. It’s so deeply anti-Ayn-Rand.
Let me pose a counter question: when you contradict several geniuses such as Popper, Deutsch, Feynman (e.g. about caring what other people think), and Rand, does that worry you? I don’t particularly think you should say “yes” (maybe mild worry causing only an interest in investigating a bit more, not conceding). The point is your style of argument can easily be used against you here. And against most people for most stuff.
BTW did you know that improvements on existing ideas always start out as small minority opinions? Someone thinks of them first. And they don’t spread instantly.
Why shouldn’t I be right most of the time? I learned from the best. And changed my mind thousands of times in online debates to improve even more. You guys learned the wrong stuff. Why should the number of you be important? Lots of people learning the same material won’t make it correct.
Hurting my karma score is probably the most efficient method of giving me some sort of feedback to prevent further engagement.
Can’t you see that that’s irrational? Karma scores are not important, arguments are.
Let’s do a scientific test. Find someone impartial to take a look at these threads, and report how they think you did. To avoid biasing the results, don’t tell them which side you were on, and don’t use the same name, just say “There was recently a big argument on Less Wrong (link), what do you think of it?” and have them email back the results.
I can do that, if you can agree on a comment to link to. It’ll double-blind it in a sense—I do have a vague opinion on the matter but haven’t been following the conversation at all over the last couple days, so it’d be hard for me to pick a friend who’d be inclined to, for example, favor a particular kind of argument that’s been used.
That is true of one of the two people I have in mind. The other has participated on less than five occasions. I have definitely shown articles from here to the latter and possibly to the former, but neither of them have done any extended reading here that I’m aware of (and I expect that I would be if they had). Also, neither of them are the type of person who’s inclined to participate in LW-type places, and the former in particular is inclined toward doing critical analysis of even concepts that she likes. (Just not in a very LW-ish way.)
I’d suggest using the primary thread about the conjunction fallacy and then the subthread on the same topic with Brian. Together that makes a long conversation with a fair number of participants.
How am I supposed to find someone impartial? Most people are justificationists. Most Popperians I know will recognize my writing style (plus, having shown them plenty already, I already know they agree with me, and I don’t attribute that to bias about the source of each post). The ones who wouldn’t recognize my writing style are just the ones who won’t want to read all this and discuss—it’s by choosing not to read much online discussion that they stayed that way.
EDIT: what does it even mean for someone to be neutral? neither justificationist nor popperian? are there any such people? what do they believe? probably ridiculous magical thinking! or hardcore skepticism. or something...
How am I supposed to find someone impartial? Most people are justificationists.
How about someone who hasn’t read much online discussion, and hasn’t thought about the issue enough to take a side? There are lots of people like that. It doesn’t have to be someone you know personally; a friend-of-a-friend or a colleague you don’t interact with much will do fine (although you might have to bribe them to spend the time reading).
Surely you can find someone who seems like they should be impartial and who’s smart enough to follow an online discussion. They don’t have to understand everything perfectly, they’re only judging what’s been written. Maybe a distant relative who doesn’t know your online alias?
If you pick a random person on the street, and you try to explain Popper to them for 20 minutes—and they are interested enough to let you have those 20 minutes—the chances they will understand it are small.
This doesn’t mean Popper was wrong. But it is one of many reasons your test won’t work well.
Right, it takes a while to understand these things. But they won’t understand Bayes, either; but they’ll still be able to give some impression of which side gave better arguments, and that sort of thing.
How about a philosopher who works on a topic totally unrelated to this one?
Have you ever heard what Popper thinks of the academic philosophy community?
No, an academic philosopher won’t do.
This entire exercise is silly. What will it prove? Nothing. What good is a sample size of 1 for this? Not much. Will you drop Bayesianism if the person sides against you? Of course not.
How about a physicist, then? They’re generally smart enough to figure out new topics quickly, but they don’t usually have cause to think about epistemology in the abstract.
The point of the exercise is not just to judge epistemology, it’s to judge whether you’ve gotten too emotional to think clearly. My hypothesis is that you have, and that this will be immediately obvious to any impartial observer. In fact, it should be obvious even to an observer who agrees with Popper.
How about asking David Deutsch what he thinks? If he does reply, I expect it will be an interesting reply indeed.
In fact, it should be obvious even to an observer who agrees with Popper.
But I’ve already shown lots of this discussion to a bunch of observers outside Less Wrong (I run the best private Popperian email list). Feedback has been literally 100% positive, including some sincere thank yous for helping them see the Conjunction Fallacy mistake (another reaction I got was basically: “I saw the conjunction fallacy was crap within 20 seconds. Reading your stuff now, it’s similar to what I had in mind.”) I got a variety of other positive reactions about other stuff. They’re all at least partly Popperian. I know that many of them are not shy about disagreeing with me or criticizing me—they do so often enough—but when it comes to this Bayesian stuff none of them has any criticism of my stance.
Not a single one of them considers it plausible my position on this stuff is a matter of emotions.
I’m skeptical of the claim about Deutsch. Why not actually test it? A Popperian should try to poke holes in his ideas. Note also that you seem to be missing Jim’s point: Jim is making an observation about the level of emotionalism in your argument, not the correctness. These are distinct issues.
As another suggestion, we could take a bunch of people who have epistemologies that are radically different from either Popper or Bayes. From my friends who aren’t involved in these issues, I could easily get an Orthodox Jew, a religious Muslim, and a conspiracy theorist. That also handles your concern about a sample size of 1. Other options include other Orthodox Jews including one who supports secular monarchies as the primary form of government, and a math undergrad who is a philosophical skeptic. Or if you want to have some real fun, we could get some regulars from the Flat Earth Forum. A large number of them self-identify as “zetetic” which in this context means something like only accepting evidence that is from one’s own sensory observation.
Not a single one of them considers it plausible my position on this stuff is a matter of emotions.
Jim is not suggesting that your position is “a matter of emotions”- he’s suggesting that you are being emotional. Note that these aren’t the same thing. For example, one could hypothetically have a conversation between a biologist and a creationist about evolution and the biologist could get quite angry with the creationist remaining calm. In that case, the biologist believes what they do due to evidence, but they could still be unproductively emotional about how they present that evidence.
You’ve already been told that the point of Jim’s remark was about the emotionalism in your remarks, not about the correctness of your arguments. In that context, your sibling comment is utterly irrelevant. The fact that you still haven’t gotten that point is of course further evidence of that problem. We are well past the point where there’s any substantial likelihood of productive conversation. Please go away. If you feel a need to come back, do so later, a few months from now, when you are confident you can do so without insulting people or deliberately trolling, and are willing to actually listen to what people here have to say.
A) your ignorance
B) your negative assumptions to fill in the gaps in knowledge. you don’t think I’m a serious person who has reasons for what he says, and you became skeptical before finding out, just by assumption.
And you’re wrong.
Maybe by pointing it out in this case, you will be able to learn something. Do you think so?
Here’s two mild hints:
1) I interviewed David Deutsch yesterday
2) I’m in the acknowledgements of BoI
Here you are lecturing me about how a Popperian should try to poke holes in his ideas, as if I hadn’t. I had. (The hints do not prove I had. They’re just hints.)
I still don’t understand what you think the opinions of several random people will show (and now you are suggesting people that A) we both think are wrong (so who cares what they think?) B) are still justificationists). It seems to me the opinions of, say, the flat earth forum, should be regarded as pretty much random (well, maybe not random if you know a lot about their psychology, which I don’t want to study) and not a fair judge of the quality of arguments!
A large number of them self-identify as “zetetic” which in this context means something like only accepting evidence that is from one’s own sensory observation.
Also I want to thank you for continuing to post. You are easily the most effective troll that LW has ever had, and it is interesting and informative to study your techniques.
Yes. Someone with a worldview similar to mine will be more inclined to agree with me in all the threads. Someone with one I think is rather bad … won’t.
This confuses me. Why should whether someone is a justificationist impact how they read the thread about the conjunction fallacy? Note by the way that you seem to be misinterpreting Jim’s point. Jim isn’t saying that one should have an uninvolved individual decide who is correct. Jim’s suggestion (which granted may not have been explained well in his earlier comment) is to focus on seeing if anyone is behaving too emotionally rather than calmly discussing the issues. That shouldn’t be substantially impacted by philosophical issues.
No offense, but you haven’t demonstrated that you understand Bayesian epistemology well enough to classify it. I read the Wikipedia page on Theory of Justification, and it pretty much all looks wrong to me. How sure are you that your Justificationist/Popperian taxonomy is complete?
BTW induction is a type of justificationism. Bayesians advocate induction. The end.
I have no confidence the word “induction” means the same thing from one sentence to the next. If you could directly say precisely what is wrong with Bayesian updating, with concrete examples of Bayesian updating in action and why they fail, that might be more persuasive, or at least more useful for diagnosis of the problem wherever it may lie.
For example, you update based on selective observation (all observation is selective).
And you update theories getting attention selectively, ignoring the infinities (by using an arbitrary prior, but not actually using it in the sense of applying it infinitely many times, just estimating what you imagine might happen if you did).
Why don’t you—or someone else—read Popper. You can’t expect to fully understand it from discussion here. Why does this community have no one who knows what they are talking about on this subject? Who is familiar with the concepts instead of just having to ask me?
For example, you update based on selective observation (all observation is selective).
Since all observation is selective, then the selectivity of my observations can hardly be a flaw. Therefore the flaw is that I update. But I just asked you to explain why updating is flawed. Your explanation is circular.
And you update theories getting attention selectively, ignoring the infinities (by using an arbitrary prior, but not actually using it in the sense of applying it infinitely many times, just estimating what you imagine might happen if you did).
I don’t know what that means. I’m not even sure that
you update theories getting attention selectively
is idiomatic English. And
ignoring the infinities
needs to be clarified, I’m sure you must realize. Your parenthetical does not clear it up. You write:
by using an arbitrary prior, but not actually using it
Using it but not actually using it. Can you see why that is hard to interpret? And then:
in the sense of applying it infinitely many times
My flaw is that I don’t do something infinitely many times? Can you see why this begs for clarification? And:
just estimating what you imagine might happen if you did
I’ve lost track: is my core flaw that I am estimating something? Why?
Why don’t you—or someone else—read Popper.
I’ve read Popper. He makes a lot more sense to me than you do. He is not cryptic. He is crystal clear. You are opaque.
You can’t expect to fully understand it from discussion here. Why does this community have no one who knows what they are talking about on this subject? Who is familiar with the concepts instead of just having to ask me?
You seem to be saying that for me to understand you, first I have to understand Popper from his own writing, and that nobody here seems to have done that. I’m not sure why you’re posting here if that’s what you believe.
If you’ve read Popper (well) then you should understand his solution to the problem of induction. How about you tell us what it is, rather than complaining my summary of what you already read needs clarifying.
If you’ve read Popper (well) then you should understand his solution to the problem of induction. How about you tell us what it is, rather than complaining my summary of what you already read needs clarifying.
You are changing the subject. I never asked you to summarize Popper’s solution to the problem of induction. If that’s what you were summarizing just now then you ignored my request. I asked you to critique Bayesian updating with concrete examples.
Now, if Popper wrote a critique of Bayesian updating, I want to read it. So tell me the title. But he must specifically talk about Bayesian updating. Or, if that is not available, anything written by Popper which mentions the word “Bayes” or “Bayesian” at all. Or that discusses Bayes’ equation.
There was nothing in Open Society and its Enemies, Objective Knowledge, Popper Selections, or Conjectures and Refutations, nor in any of the books on Popper that I located. The one exception I found was in The Logic of Scientific Discovery, and the mentions of Bayes are purely technical discussion of the theorem (with which Popper, reasonably enough, has no problem), with no criticism of Bayesians. In summary, I found no critiques by Popper of Bayesians after going through the indices of Popper’s main works as you recommended. I did find mention of Bayes, but it was a mention in which Popper did not criticize Bayes or Bayesians.
Nor were Bayes or Bayesians mentioned anywhere in David Deutsch’s book The Beginning of Infinity.
So I return to my earlier request:
I asked you to critique Bayesian updating with concrete examples.
I have repeatedly requested this, and in reply been given either condescension, or a fantastically obscure and seemingly self-contradictory response, or a major misinterpretation of my request, or a recommendation that I look to Popper, which recommendation I have followed with no results.
I think your problem is that you don’t understand what the issues at stake are, so you don’t know what you’re trying to find.
You said:
anything written by Popper which mentions the word “Bayes” or “Bayesian” at all. Or that discusses Bayes’ equation.
But then when you found a well known book by Popper which does have those words, and which does discuss Bayes’ equation, you were not satisfied. You asked for something which wasn’t actually what you wanted. That is not my fault.
You also said:
You are changing the subject. I never asked you to summarize Popper’s solution to the problem of induction.
But you don’t seem to understand that Popper’s solution to the problem of induction is the same topic. You don’t know what you’re looking for. It wasn’t a change of topic. (Hence I thought we should discuss this. But you refused. I’m not sure how you expect to make progress when you refuse to discuss the topic the other guy thinks is crucial to continuing.)
Bayesian updating, as a method of learning in general, is induction. It’s trying to derive knowledge from data. Popper’s criticisms of induction, in general, apply. And his solution solves the underlying problem rendering Bayesian updating unnecessary even if it wasn’t wrong. (Of course, as usual, it’s right when applied narrowly to certain mathematical problems. It’s wrong when extended out of that context to be used for other purposes, e.g. to try to solve the problem of induction.)
So, question: what do you think you’re looking for? There is tons of stuff about probability in various Popper books including chapter 8 of LScD titled “probability”. There is tons of explanation about the problem of induction, and why support doesn’t work, in various Popper books. Bayesian updating is a method of positively supporting theories; Popper criticized all such methods and his criticisms apply. In what way is that not what you wanted? What do you want?
So for example I opened to a random page in that chapter and found, p 183, start of section 66, the first sentence is:
Probability estimates are not falsifiable.
This is a criticism of the Bayesian approach as unscientific. It’s not specifically about the Bayesian approach in that it applies to various non-Bayesian probabilistic approaches (whatever those may be. can you think of any other approaches besides Bayesian epistemology that you think this is targeted at? How would you do it without Bayes’ theorem?). In any case it is a criticism and it applies straightforwardly to Bayesian epistemology. It’s not the only criticism.
The point of this criticism is that to even begin the Bayesian updating process you need probability estimates which are created unscientifically by making them up (no, making up a “prior” which assigns all of them at once, in a way vague enough that you can’t even use it in real life without “estimating” arbitrarily, doesn’t mean you haven’t just made them up).
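To make that concrete, here is a minimal sketch (my own illustrative numbers, nothing from Popper or this thread) of how the posterior depends on the made-up prior:

```python
# Illustrative sketch: the same evidence, updated by Bayes' theorem under
# two different priors, yields very different posteriors. The likelihoods
# (0.9, 0.3) and the priors are made up for illustration.

def posterior(prior_h, likelihood_h, likelihood_not_h):
    """P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    p_e = likelihood_h * prior_h + likelihood_not_h * (1 - prior_h)
    return likelihood_h * prior_h / p_e

# Same evidence both times: P(E|H) = 0.9, P(E|~H) = 0.3.
print(posterior(0.5, 0.9, 0.3))   # prior 0.5  -> posterior 0.75
print(posterior(0.01, 0.9, 0.3))  # prior 0.01 -> posterior ~0.029
```

The arithmetic itself is uncontroversial; the issue is where the 0.5 or the 0.01 comes from in the first place.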
EDIT: read the first 2 footnotes in section 81 of LScD, plus section 81 itself. And note that the indexer did not miss this but included it...
Bayesian updating, as a method of learning in general, is induction. It’s trying to derive knowledge from data.
Only in a sense so broad that Popper can rightly be accused of the very same thing. Bayesians use experience to decide between competing hypotheses. That is the sort of “derive” that Bayesians do. But if that is “deriving”, then Popper “derives”. David Deutsch, who you know, says the following:
But, in reality, scientific theories are not ‘derived’ from anything. We do not read them in nature, nor does nature write them into us. They are guesses – bold conjectures. Human minds create them by rearranging, combining, altering and adding to existing ideas with the intention of improving upon them. We do not begin with ‘white paper’ at birth, but with inborn expectations and intentions and an innate ability to improve upon them using thought and experience. Experience is indeed essential to science, but its role is different from that supposed by empiricism. It is not the source from which theories are derived. Its main use is to choose between theories that have already been guessed. That is what ‘learning from experience’ is.
I direct you specifically to this sentence:
Experience is indeed essential to science, but its role is different from that supposed by empiricism. It is not the source from which theories are derived. Its main use is to choose between theories that have already been guessed.
This is what Bayesians do. Experience is what Bayesians use to choose between theories which have already been guessed. They do this using Bayes’ Theorem. But look back at the first sentence of the passage:
But, in reality, scientific theories are not ‘derived’ from anything.
Clearly, then, Deutsch does not consider using the data to choose between theories to be “deriving”. But Bayesians use the data to choose between theories. Therefore, as Deutsch himself defines it, Bayesians are not “deriving”.
The point of this criticism is that to even begin the Bayesian updating process you need probability estimates which are created unscientifically by making them up
Yes, the Bayesians make them up, but notice that Bayesians therefore are not trying to derive them from data—which was your initial criticism above. Moreover, this is not importantly different from a Popperian scientist making up conjectures to test. The Popperian scientist comes up with some conjectures, and then, as Deutsch says, he uses experimental data to “choose between theories that have already been guessed”. How exactly does he do that? Typical data does not decisively falsify a hypothesis. There is, just for starters, the possibility of experimental error. So how does one really employ data to choose between competing hypotheses? Bayesians have an answer: they choose on the basis of how well the data fits each hypothesis, which they interpret to mean how probable the data is given the hypothesis. Whether he admits it or not, the Popperian scientist can’t help but do something fundamentally the same. He has no choice but to deal with probabilities, because probabilities are all he has.
The Popperian scientist, then, chooses between theories that he has guessed on the basis of the data. Since the data, being uncertain, does not decisively refute either theory but is merely more, or less, probable given the theory, then the Popperian scientist has no choice but to deal with probabilities. If the Popperian scientist chooses the theory that the data fits best, then he is in effect acting as a Bayesian who has assigned to his competing theories the same prior.
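To spell out the equal-prior case in that last sentence, here is a minimal sketch (illustrative numbers, not anyone’s actual model):

```python
# With equal priors, Bayesian choice between two already-guessed theories
# reduces to comparing likelihoods: how probable the data is under each.
# All numbers are made up for illustration.

likelihood = {
    "theory_A": 0.8,  # P(data | theory A)
    "theory_B": 0.2,  # P(data | theory B)
}
prior = {"theory_A": 0.5, "theory_B": 0.5}  # same prior for both

# Unnormalized posteriors; with equal priors, the ranking of these scores
# is just the ranking of the likelihoods.
score = {t: prior[t] * likelihood[t] for t in prior}
best = max(score, key=score.get)
print(best)  # theory_A
```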
Do you understand DD’s point, made in both his books, that the majority of the time theories are rejected without testing? Testing is only useful when dealing with good explanations.
Do you understand that data alone cannot choose between the infinitely many theories consistent with it, which reach a wide variety of contradictory and opposite conclusions? So Bayesian Updating based on data does not solve the problem of choosing between theories. What does?
Do you understand DD’s point, made in both his books, that the majority of the time theories are rejected without testing? Testing is only useful when dealing with good explanations.
Bayesians are also seriously concerned with the fact that an infinity of theories are consistent with the evidence. DD evidently doesn’t think so, given his comments on Occam’s Razor, which he appears to be familiar with only in an old, crude version, but I think that there is a lot in common between his “good explanation” criterion and parsimony considerations.
We aren’t “seriously concerned” because we have solved the problem, and it’s not particularly relevant to our approach.
We just bring it up as a criticism of epistemologies that fail to solve the problem… Because they have failed, they should be rejected.
You haven’t provided details about your fixed Occam’s razor, a specific criticism of any specific thing DD said, a solution to the problem of induction (all epistemologies need one of some sort), or a solution to the infinity of theories problem.
Probability estimates are essentially the bookkeeping which Bayesians use to keep track of which things they’ve falsified, and which things they’ve partially falsified. At the time Popper wrote that, scientists had not yet figured out the rules for using probability correctly; the stuff he was criticizing really was wrong, but it wasn’t the same stuff people use today.
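A toy sketch of what that bookkeeping looks like (my own made-up numbers): repeated evidence that fits a hypothesis poorly drives its probability toward zero, with no single decisive refutation ever needed.

```python
# Toy example of "partial falsification": each observation is five times
# more probable under the alternative, so repeated updates drive the
# hypothesis's probability toward (but never exactly to) zero.

def update(p_h, likelihood_h, likelihood_alt):
    p_e = likelihood_h * p_h + likelihood_alt * (1 - p_h)
    return likelihood_h * p_h / p_e

p = 0.5  # start undecided
for _ in range(5):
    p = update(p, 0.1, 0.5)  # the data fits the hypothesis poorly
print(p)  # about 0.0003: effectively, though never formally, falsified
```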
At the time Popper wrote that, scientists had not yet figured out the rules for using probability correctly; the stuff he was criticizing really was wrong, but it wasn’t the same stuff people use today.
Is this true? Popper wrote LScD in 1934. Keynes and Ramsey wrote about using probability to handle uncertainty in the 1920s although I don’t think anyone paid attention to that work for a few years. I don’t know enough about their work in detail to comment on whether or not Popper is taking it into account although I certainly get the impression that he’s influenced by Keynes.
According to the wikipedia page, Cox’s theorem first appeared in R. T. Cox, “Probability, Frequency, and Reasonable Expectation,” Am. Jour. Phys., 14, 1–13, (1946). Prior to that, I don’t think probability had much in the way of philosophical foundations, although they may’ve gotten the technical side right. And correct use of probability for more complex things, like causal models, didn’t come until much later. (And Popper was dealing with the case of science-in-general, which requires those sorts of advanced tools.)
The English version of LScD came out in 1959. It wasn’t a straight translation; Popper worked on it. In my (somewhat vague) understanding he changed some stuff or at least added some footnotes (and appendices?).
Anyway Popper published plenty of stuff after 1946 including material from the LScD postscript that got split into several books, and also various books where he had the chance to say whatever he wanted. If he thought there was anything important to update he would have. And for example probability gets a lot of discussion in Popper’s replies to his critics, and Bayes’ theorem in particular comes up some; that’s from 1974.
So for example on page 1185 of the Schilpp volume 2, Popper says he never doubted Bayes’ theorem but that “it is not generally applicable to hypotheses which form an infinite set”.
How can something be partially falsified? It’s either consistent with the evidence or contradicted. This is a dichotomy. To allow partial falsification you have to judge in some other way which has a larger number of outcomes. What way?
Probability estimates are essentially the bookkeeping which Bayesians use to keep track of which things they’ve falsified, and which things they’ve partially falsified.
You’re saying you started without them, and come up with some in the middle. But how does that work? How do you get started without having any?
the stuff he was criticizing really was wrong, but it wasn’t the same stuff people use today.
Changing the math cannot answer any of his non-mathematical criticisms. So his challenge remains.
(It’s subject to limitations that do not constrain the Bayesian approach, and as near as I can tell, is mathematically equivalent to a non-informative Bayesian approach when it is applicable, but the author’s justification for his procedure is wholly non-Bayesian.)
I think you mixed up Bayes’ Theorem and Bayesian Epistemology. The abstract begins:
By representing the range of fair betting odds according to a pair of confidence set estimators, dual probability measures on parameter space called frequentist posteriors secure the coherence of subjective inference without any prior distribution.
They have a problem with a prior distribution, and wish to do without it. That’s what I think the paper is about. The abstract does not say “we don’t like bayes’ theorem and figured out a way to avoid it.” Did you have something else in mind? What?
I had in mind a way of putting probability distributions on unknown constants that avoids prior distributions and Bayes’ theorem. I thought that this would answer the question you posed when you wrote:
It’s not specifically about the Bayesian approach in that it applies to various non-Bayesian probabilistic approaches (whatever those may be; can you think of any other approaches besides Bayesian epistemology that you think this is targeted at?)
Can’t you see that that’s irrational? Karma scores are not important, arguments are.
Karma has little bearing on the abstract truth of an argument, but it works pretty well as a means of gauging whether or not an argument is productive in the surrounding community’s eyes. It should be interpreted accordingly: a higher karma score doesn’t magically lead to a more perfect set of opinions, but paying some attention to karma is absolutely rational, either as a sanity check, a method of discouraging an atmosphere of mutual condescension, or simply a means of making everyone’s time here marginally more pleasant. Despite their first-order irrelevance, pretending that these goals are insignificant to practical truth-seeking is… naive, at best.
Unfortunately, this also means that negative karma is an equally good gauge of a statement’s disruptiveness to the surrounding community, which can give downvotes some perverse consequences when applied to people who’re interested in being disruptive.
Would you mind responding to the actual substance of what JoshuaZ said, please? Particularly this paragraph:
If it helps, imagine we weren’t talking about you but some other person. That person wanders over to a forum they’ve never been to, and has extensive discussions about technical issues where that person disagrees with the forum regulars on almost everything, including peripherally related issues. How likely would you be to say that that person had everything right, even if you took for granted that they were correct about the overarching primary issues? So, if the person never agrees that they were wrong about even the tiniest of issues, do you think it is more likely that the person actually got everything right or that they just aren’t listening?
You did not even state whether the Earman book mentions Popper. Because, I took it, you don’t know. so … wtf? that’s the best you can do?
My apologies. I would have thought it was clear that when one recommends a book one is doing so because it has material relevant to what one is talking about. Yes, he discusses various ideas due to Popper.
note that, ironically, your comment here complaining that you didn’t repeatedly pressure is itself pressure, and not for the first time...
I think there may be some definitional issues here. A handful of remarks suggesting one read a certain book would not be what most people would call pressure. Moreover, given the negative connotations of pressure, it worries me that you find that to be pressure. Conversations are not battles. And suggesting that one might want to read a book should not be pressure. If one feels that way, it indicates that one might be too emotionally invested in the claim.
And ask yourself, does it really seem likely to you that not a single person on LW would ever have made a point to you that turned out to be correct?
When they have policies like resorting to authoritarian, ad hominem arguments rather than substance ones …
And is that what you see here? Again, most people don’t seem to see that. Take an outside view; is it possible that you are simply too emotionally involved?
Let me pose a counter question: when you contradict several geniuses such as Popper, Deutsch, Feynman (e.g. about caring what other people think), and Rand, does that worry you? I don’t particularly think you should say “yes” (maybe mild worry causing only an interest in investigating a bit more, not conceding). The point is your style of argument can easily be used against you here. And against most people for most stuff.
I don’t have reason to think that Rand is a genius. But, let’s say I did, and let’s say the other three are geniuses. Should one, in that context, be worried that their opinions contradict my own? Let’s use a more extreme example: Jonathan Sarfati is an accomplished chemist, a highly ranked chess master, and a young earth creationist. Should I be worried by that? The answer, I think, is yes. But, at the same time, even if I were only vaguely aware of Sarfati’s existence, would I need to read everything by him to decide that he’s wrong? No. In this case, having read some of their material, and having read your arguments for why I should read it, it is falling more and more into the Sarfati category. It wouldn’t surprise me at all if there’s something I’m wrong about that Popper or Deutsch gets right, and if I read everything Popper had to say and read everything Deutsch had to say and didn’t come away with any changed opinions, that should cause me to doubt my rationality. Ironically, BoI was already on my intended reading list before interacting with you. It has now dropped farther down on my priority list because of these conversations.
Can’t you see that that’s irrational? Karma scores are not important, arguments are.
I don’t think you understood why I made that remark. I have an interest in cooperating with other humans. We have a karma system for among other reasons to decide whether or not people want to read more of something. Since some people have already expressed a disinterest in this discussion, I’m explicitly inviting them to show that disinterest so I know not to waste their time. If it helps, imagine this conversation occurring on a My Little Pony forum with a karma system. Do you see why someone would want to downvote in that context a Popper v. Bayes discussion? Do you see how listening in that context to such preferences could be rational?
My apologies. I would have thought it was clear that when one recommends a book one is doing so because it has material relevant to what one is talking about. Yes, he discusses various ideas due to Popper.
That’s ambiguous. Does he discuss Popper directly? Or, say, some of Popper’s students? Does the book have, say, more than 20 quotes of Popper, with commentary? More than 10? Any detailed, serious discussion of Popper? If it does, how do you know that, having not read it? Is the discussion of Popper tainted by the common myths?
Some Less Wrongers recommend stuff that doesn’t have the specific stuff they claim it does have, e.g. the Mathematical Statistics book. IME it’s quite common that recommendations don’t have what they are supposed to even when people directly claim it’s there.
Take an outside view; is it possible that you are simply too emotionally involved?
You seem to think I believe this stuff because I’m a beginner; that’s an insulting non-argument. I already know how to take outside views, I am not emotionally involved. I already know how to deal with such issues, and have done so.
None of your post here really has any substance. It’s all psychological and meta topics you’ve brought up. Maybe I shouldn’t have fed you any replies at all about them. But that’s why I’m not answering the rest.
That’s ambiguous. Does he discuss Popper directly? Or, say, some of Popper’s students? Does the book have, say, more than 20 quotes of Popper, with commentary?
Please don’t move goalposts and pretend they were there all along. You asked whether he mentioned Popper, to which I answered yes. I don’t know the answers to your above questions, and they really don’t have anything to do with the central points, the claims that I “pressured” you to read Earman based on his being a professional.
Take an outside view; is it possible that you are simply too emotionally involved?
You seem to think I believe this stuff because I’m a beginner; that’s an insulting non-argument. I already know how to take outside views, I am not emotionally involved. I already know how to deal with such issues, and have done so.
I’m also confused about where you are getting any reason to think that I think that you believe what you do because you are a “beginner”; it doesn’t help matters that I’m not sure what you mean by that term in this context.
But if you are very sure that you don’t have any emotional aspect coming into play then I will try to refrain from suggesting otherwise until we get more concrete data such as per Jim’s suggestion.
I have never been interested in people who mention Popper but people who address his ideas (well). I don’t know why you think I’m moving goalposts.
There’s a difference between “Outside View” and “outside view”. I certainly know what it means to look at things from another perspective which is what the non-caps one means (or at least that’s how I read it).
I am not “very sure” about anything; that’s not how it works; but I think pretty much everything I say here you can take as “very sure” in your worldview.
You did not even state whether the Earman book mentions Popper. Because, I took it, you don’t know. so … wtf? that’s the best you can do?
My apologies. I would have thought it was clear that when one recommends a book one is doing so because it has material relevant to what one is talking about. Yes, he discusses various ideas due to Popper.
That’s ambiguous. Does he discuss Popper directly? Or, say, some of Popper’s students? Does the book have, say, more than 20 quotes of Popper, with commentary? More than 10? Any detailed, serious discussion of Popper? If it does, how do you know that, having not read it? Is the discussion of Popper tainted by the common myths?
--
Moving the goalposts, also known as raising the bar, is an informal logically fallacious argument in which evidence presented in response to a specific claim is dismissed and some other (often greater) evidence is demanded. In other words, after an attempt has been made to score a goal, the goalposts are moved to exclude the attempt. This attempts to leave the impression that an argument had a fair hearing while actually reaching a preordained conclusion.
Stop being a jerk. He didn’t even state whether it mentions Popper, let alone whether it does more than mention. The first thing wasn’t a goal post. You’re being super pedantic but you’re also wrong. It’s not charming.
i checked out the Mathematical Statistics book today that one of them said had a proof. it didn’t have it. and there was that other guy repeatedly pressuring me to read some book he’d never read, based on his vague idea of the contents and the fact that the author was a professional.
Links? I ask not to challenge the veracity of your claims but because I am interested in avoiding the kind of erroneous argument you’ve observed.
The concept of “bounded rationality” is a concept in economics/psychology going back to Herbert Simon. It is not, as you seem to think, an ideology hostile to human dignity.
How does that prevent it from being a mistake? With consequences? With appeal to certain moral views?
Bounded rationality is not an ideology, it’s a definition. Calling it a mistake makes as much sense as calling the number “4” a mistake; it is not a type of thing which can be correct or mistaken.
I’m curious why this has been downvoted. Does it mean people don’t believe you? I think this is another illustration of why the voting system sucks. Your protestations of innocence are now hidden and no-one is fronting up with an argument. I know you have been upvoted, and I upvoted a bunch of posts myself once because I was annoyed that you were getting systematically downvoted to, I think, rate-limit your posts (which you were annoyed about too) and to hide them (which makes it a pain reading threads). Oh, the sort of silliness the kharma system leads to!
I’m curious why this has been downvoted. Does it mean people don’t believe you?
I presume it was a reaction to the argument that Perplexed was a troll. Had curi made the same protestation of innocence without adding the personal attack, it might not have been downvoted at all.
How could he have thought it worthwhile to use sock puppets to raise his karma enough to make that posting? Was he really trying to engage in instruction and consciousness raising? Or was he just trying to assert intellectual superiority?
Trollish is a good description of this, so I think curi’s comment was accurate. And, note, the comment in which the above insult appears was upvoted.
Trollish is a good description of this, so I think curi’s comment was accurate.
Whether Perplexed was trolling depends on Perplexed’s intent. Perplexed probably believed what he was saying, so he probably did not intend to accuse curi unjustly. Had he known curi was innocent and then said it anyway in order to upset him, then he would have been trolling.
The issue isn’t whether he honestly believed it but that he came up with a rather harsh claim without evidence.
He did have evidence. Perhaps what you mean is that there were alternative hypotheses explaining the same evidence. But the existence of alternative hypotheses does not stop the evidence being evidence. There are always alternative hypotheses, and yet there is evidence. Therefore the existence of alternatives does not prevent something from being evidence.
Consider the following three hypotheses:
A) Nobody was systematically voting up your posts.
B) You were systematically voting up your own posts.
C) Somebody else was systematically voting up your posts.
The evidence includes the fact that after having your karma voted down to oblivion, your karma shot up for no obvious reason, allowing you to post a main article to the front page. This evidence decreases the probability of hypothesis A, and simultaneously increases the probabilities of B and C, by Bayesian updating. I don’t know what the Popperian account is precisely, but since it is not completely insane, I believe it leads to roughly the same scientific result, though stated in Popperian terms, e.g. without talking about the probability of the hypothesis. The evidence is in any case inconsistent with A, and consistent with both B and C, or less consistent with A than with B or C, however you want to put it, and thus is evidence for each as contrasted with A.
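For concreteness, the update just described can be sketched numerically. The priors and likelihoods below are invented purely for illustration; nothing in the thread fixes their actual values. The only point is the mechanics: evidence that is unlikely under A but likely under B and C shifts probability mass from A to B and C.

```python
# Illustrative only: all numbers are made up, not measurements of anything.

# Prior beliefs over the three hypotheses (before seeing the karma spike).
priors = {"A": 0.6, "B": 0.2, "C": 0.2}

# P(evidence | hypothesis): an unexplained karma spike is unlikely if
# nobody was upvoting (A), and likely under either upvoting story (B, C).
likelihoods = {"A": 0.05, "B": 0.8, "C": 0.8}

# Bayes' theorem: P(H | E) = P(E | H) * P(H) / P(E),
# where P(E) is the total probability of the evidence.
evidence = sum(likelihoods[h] * priors[h] for h in priors)
posteriors = {h: likelihoods[h] * priors[h] / evidence for h in priors}

# A falls sharply; B and C rise by the same factor, so the evidence
# favors each of them over A without distinguishing between them.
for h in sorted(posteriors):
    print(h, round(posteriors[h], 3))
```

With these particular numbers, A drops from 0.6 to under 0.1 while B and C each rise to roughly 0.46, which is the shape of the update regardless of the exact figures chosen.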
As I recall, you yourself proposed hypothesis C. Do you claim that you proposed a hypothesis without evidence, and that this was a bad thing? For that would seem to follow from your own logic. An alternative hypothesis consistent with your own experience is that you did it yourself and just forgot, call this C1. C1 is highly improbable, but to say such a thing is nothing other than to say that the prior of C1 is extremely low, which is Bayesian thinking, which I understand you disapprove of.
In any case, your complaint that he offered a conjecture in advance of evidence is odd coming from a Popperian. Deutsch the Popperian writes that conjectures are not “derived” from anything. They are guesses—bold conjectures, so he writes. Experience is not the source from which conjectures are derived. Its main use is to choose between conjectures that have already been guessed. So he writes. But now you complain about a conjecture offered—so you claim—in advance of evidence!
And whence comes this new appreciation of yours for social niceties such as holding one’s tongue and not blurting out uncomfortable truths, or, since we are after all fallible (as Popper says), uncomfortable guesses at the truth? Did you not claim that we should all be beyond such social niceties, such sacrifices of the bold pursuit of truth to mere matters of “style”? Did you not yourself explain that you deliberately post in a style that does not pull any punches, and that anyone who complains merely proves that they are not worthy of your time? So why the apparent change of heart?
As I recall, you yourself proposed hypothesis C. Do you claim that you proposed a hypothesis without evidence, and that this was a bad thing? For that would seem to follow from your own logic.
Of course, they presumably had additional evidence regarding the truth or falsehood of B (leaving aside highly unlikely scenarios like them performing acts of which they were unaware), which other people don’t have. So the situation isn’t quite symmetrical.
(Not disagreeing with your main point. Or, in fact, engaging with it at all. Just nitpicking.)
You agree that’s a bad way to approach discussion, right?
Maybe I used to agree, but maybe a week of posts like this have gradually persuaded me, through persuasive arguments, otherwise. Quoting from you:
There were a few people I inspired to flame me. I know I provoked them. I didn’t actually do anything that deserves being flamed. But I broke etiquette some. It’s not a surprising result. Flaming me for some of the things I did is pretty normal. (Btw a few of the flames were deleted or edited a bit after being posted.) Some people would regard that as disaster. I regard it as success: I stopped speaking to those people. If I’d been super polite they might have pretended to have a civil discussion with me for longer while having rather irrational thoughts going through their head. The more they hide emotional reactions (for example), while actually having them, the more discussion can go wrong for unstated reasons.
But now you complain:
very rude
Oh great master of the breaking of etiquette. I am confused!
Then when swimmer963 wrote:
I hate confrontation
You responded:
You could change this. It’s not human nature. It’s not your genes. It’s a cultural bias. A very common one. And it’s important because criticism is the main tool by which we learn. When all criticism has to be made subtle, indirect, formal, filled with equivocation about whether the person stating it really means it, or various other things, then it slows down learning a lot.
Have you fallen into cultural bias?
Then when swimmer963 wrote:
I have this annoying tendency to care about anything that anyone says to or about me.
You responded:
You know, Feynman had this problem. He got over it. Maybe reading his books would help you. One of them is titled like “What do you care what other people think?”
Is it time for you to re-read this book?
I am confused. If you want others to respect some very basic ground rules of etiquette, then why did you preach the opposite only days ago?
It’s not that I care what he thinks, I just think posting crazy factual libels is a bad idea.
I never said one should ignore all etiquette of all types. There’s objective limits on what’s a good or bad idea in this area.
I’m not sure what you hope to gain by arguing this point with me. Do you just want to make me concede something in a debate or are you hoping to learn something?
I already know that etiquette is important, and why. I am pointing out that you also know that it is important when you are the target of a breach. So, even the one who preaches that etiquette be set aside in the bold pursuit of truth, does not really believe it, not when it’s his own ox being gored.
You have all along been oblivious to the reactions of others—admittedly so, proudly so. You have argued that your social obliviousness is a virtue, an intellectual strength, because such things as etiquette are mere obstacles in the road to reality. But this is mistaken, and even you intuit that it is, once the shoe is on the other foot—that is, once you stand in the place of those that you have antagonized and exasperated for an entire week. Your philosophy of antagonism serves no purpose but to justify your own failure or refusal to treat others well.
I don’t need you to concede anything. What I’ve done here is put together the pieces and given you a chance to respond.
I never said one should ignore all etiquette of all types. There’s objective limits on what’s a good or bad idea in this area.
Uh huh. Yet you offer no guidelines as to what is allowed, and what is off limits, after a week of preaching and practicing. Or to be more precise, you do offer a guide of sorts: you are off limits, and everyone else is fair game. What you are inclined to break, is okay to break. What you, for whatever reason, don’t break, nobody else must break.
What I have found discouraging in this regard is that you and Curi have been so hopelessly tone-deaf in coming to understand just why your doctrines have been so difficult to sell. Why (and in what sense) people here think that support is possible.
You’re not giving any reasons here why you think we are “tone-deaf” other than you think we have not explained, for example, why LW people think support is possible. But that’s not tone-deafness. Tone deafness is a complaint not about the substance of ideas but about how well you have expressed those ideas with respect to some norm: it is essentially a complaint about style, and it would seem you want us to pay attention to kharma. We think rational people ought to be able to look beyond those things. Right? With regard to support, we explained and gave examples but, briefly, just to illustrate again, here is a quote from Where Recursive Justification Hits Bottom:
Everything, without exception, needs justification.
Perhaps it is you that has been deaf?
We know that Popperian philosophy is a hard sell and we know that in order to sell it we have to combat lies and myths that have been spread around. Popper himself spent a lot of time doing that. Some of those myths are right here at LW, in the sequences, and we gave examples. But, again, deafness—has anybody admitted that there are these mistakes? Is that deafness a reaction to our “tone-deafness” and is that rational?
All you have to do is to expend as much effort in reacting to what other people say as in trying to get your own points across. In trying to understand what they are saying, rather than reacting negatively.
I agree, but rather than just asserting I haven’t been, perhaps you should illustrate with some examples.
People can engage in persistent disagreement with the local orthodoxy without losing karma. People can occasionally speak too impolitely without losing excessive karma.
You seem to care about kharma. If one thinks that kharma is an authoritarian mistake, as I do, then how much respect do you think I should have?
That folks who pay lip service to Popper seem to treat their own positions as given by authority and measure ‘progress’ by how much the other guy changes his mind.
That you think we are paying lip service is reacting negatively and rather hostile don’t you think? Also I do want to hear some good arguments against Popper. Unfortunately most arguments, including those here, are to do with the myths—such as Popper is falsificationism, and come from people that don’t know Popper well or got their information from second-hand sources.
How could he have thought it worthwhile to use sock puppets to raise his karma enough to make that posting?
Why do you think he did that? You’re just making things up here.
Attacking non-existent beliefs in a doctrine he clearly didn’t understand
Tversky and Kahneman believe in such ideas as “evolved mental behaviour” and “bounded rationality”. These beliefs exist right enough. If you read The Beginning of Infinity by David Deutsch you will see arguments against these sorts of things. You’re arguing by assertion again and haven’t carefully looked at the substance.
Please give up on us. We’re obviously not as careful and rational thinkers as you are. Utterly hopeless. We’re stuck in our realm of imagined cognitive biases. Go find something productive to do. Please go away.
First, in response to your first paragraph, my complaint about ‘tone-deafness’ was not intended as a complaint about style. It was a complaint about your failure to do well at the listening half of conversation. A failure to tailor your arguments to the responses you receive. A failure to understand the counterarguments. My complaint may be wrong and unjustified, but it is definitely not a complaint about style.
But, speaking of style, you suggest:
We think rational people ought to be able to look beyond [issues of style]. Right?
Well, I can see the attractiveness of that slogan, but we tend to think of it a bit differently here. Here, we think that rational people ought to be able to fix any rough edges in their ‘style’ that prevent them from communicating their ideas successfully. We don’t believe that it makes sense to place the entire onus of adjustment on the listener. And we especially don’t believe that only one side has the onus of listening.
Perhaps it is you that has been deaf?
Perhaps. But that’s enough about me. Let’s talk about you. :) As you may have noticed, responding to an attack with a counterattack usually doesn’t achieve very much here.
You seem to care about kharma. If one thinks that kharma is an authoritarian mistake, as I do, then how much respect do you think I should have?
I guess that would depend on how interested you are in having me listen to your ideas.
I do want to hear some good arguments against Popper. Unfortunately most arguments, including those here, are to do with the myths—such as Popper is falsificationism, and come from people that don’t know Popper well or got their information from second-hand sources.
You are probably right. And that presents you with a problem. How do you induce people to come to know Popper well? How do you tempt them to get their information from some non-second-hand source?
Now I’m sure you guys have given long and careful thought to this problem and have developed a plan. But if you should discover that things are not going well, I have some ideas that might help. Which is simply that you might consider producing some discussion postings consisting mostly of long quotes from Popper and his most prominent disciples, with only short glosses from yourselves.
Tversky and Kahneman believe in such ideas as “evolved mental behaviour” and “bounded rationality”. These beliefs exist right enough. If you read The Beginning of Infinity by David Deutsch you will see arguments against these sorts of things.
Hmmm. There is something here I just don’t understand. Why all this hostility to what seems to me to be the fairly uncontroversial realization that people are often less good at reasoning than we would like them to be? It is almost as if you had religious or political objections to some evil doctrine. Do you think it would be possible to enlighten me as to why it seems to you that the stakes are so high with this issue?
As for reading Deutsch, I intend to. I don’t think I have ever had a book recommended to me so many times before it is even published in this country.
As for reading Deutsch, I intend to. I don’t think I have ever had a book recommended to me so many times before it is even published in this country.
Somehow I was able to buy it in the Amazon Kindle store for about $18, but the highlight feature is not working properly. My introduction to Deutsch was several years ago with The Fabric of Reality, in which he defends the Everett interpretation, among other things. At that point he became a must-read author (which means I find him worth reading, not that I agree fully with him), one of only a handful. (Daniel Dennett is another). If you want to read Deutsch now, The Fabric of Reality is immediately available. As I recall, it’s a mix of persuasive arguments and dubious arguments.
Tversky and Kahneman believe in such ideas as “evolved mental behaviour” and “bounded rationality”. These beliefs exist right enough. If you read The Beginning of Infinity by David Deutsch you will see arguments against these sorts of things. You’re arguing by assertion again and haven’t carefully looked at the substance.
I’d be very curious to see where anything Tversky wrote contains the phrase “evolved mental behavior”; as I explained to you, T&K have classically been pretty agnostic about where these biases and heuristics are coming from. That other people in the field might think that they are evolved is a side issue. I can’t speak as strongly about Kahneman, but I’d be surprised to see that phrase used in any joint paper of the two.
But there’s a more serious issue here which I pointed out to you earlier and you are still missing: You cannot let philosophy override evidence. When evidence and your philosophy contradict, philosophy must lose. No matter how good my philosophical arguments are, they cannot withstand empirical data. If my philosophy says the Earth is flat, my philosophy is bad, since the evidence is overwhelming that the Earth is not flat. If my philosophy requires a geocentric universe, then my philosophy is bad. If my philosophy requires irreducible mental entities then my philosophy is bad. And if my philosophy requires humans to be perfect reasoners then my philosophy is bad.
As long as you keep insisting that your philosophical desires about what humans should be override the evidence of what humans are you will not be doing a good job understanding humans or the rest of the universe.
And to be blunt, as long as you keep making this sort of claim, people here are going to not take you seriously. So please go elsewhere. We don’t have much to say to each other.
No, as far as I know, they don’t use the phrase “evolved mental behaviour”, but I didn’t say they did, only that they believe in such things. That they do is evident here:
“From its earliest days, the research that Tversky and I conducted was guided by the idea that intuitive judgments occupy a position – perhaps corresponding to evolutionary history – between the automatic operations of perception and the deliberate operations of reasoning.”
Read the wording closely. To me it indicates they don’t have a good explanation for these heuristics, or, if they do have an explanation, it is vague so that it is consistent with both evolved and not-evolved. But they don’t have a problem with evolved. I also gave you other arguments in my comments to you in our other discussion.
Why are you continuing this when you’ve already sarcastically told me to go away?
Edit: This wikipedia page says “Cognitive biases are instances of evolved mental behavior”. Do you think that is an accurate description of what cognitive biases are supposed to be? Is there any controversy about whether they are evolved or not?
Yes, I understood that, but my question was about why you wrote:
With regard to support, we explained and gave examples but, briefly, just to illustrate again, here is a quote from “Where Recursive Justification Hits Bottom”
So, apparently, what was illustrated was that Eliezer was not a good and faithful disciple of Popper when he wrote that. I’m a bit surprised you thought that needed illustration.
ETA: Or maybe you meant that your ability to dredge up that quote illustrates that you have been paying attention to whether and why LesWrongers believe support is possible. Yeah, that makes more sense, is more charitable, and is the interpretation I’ll go with.
Ok, with that out of the way, I will respond to your long great-grandfather comment (above) directly.
Tversky and Kahneman believe in such ideas as “evolved mental behaviour” and “bounded rationality”. These beliefs exist right enough. If you read The Beginning of Infinity by David Deutsch you will see arguments against these sorts of things.
That sounds pretty awful. Bounded rationality is a standard concept; surely if you argue against it, you are confused or don’t understand it properly. I’m not sure what an “evolved mental behaviour” is, but that sounds pretty uncontroversial too. Watching Deutsch on video, at about 28:00 he uses the term “bounded rationality” to refer to something different, so this seems like a simple confusion based on different definitions of terms.
If you are going to assume that people are confused for arguing against “standard concepts” or because you think something is uncontroversial, then that is just argument from authority.
The supposed heuristics which Herbert Simon and others propose, which give rise to our alleged cognitive biases, are held by them to have evolved via biological evolution, to be based on induction, and to be bounded. Hard-coded processes based on induction that can generate some knowledge but not all knowledge go against the ideas that Deutsch discusses in The Beginning of Infinity. For one thing, induction is impossible and doesn’t happen anywhere, including in human brains. For another, knowledge creation is all or nothing: a machine that can generate some knowledge can generate all knowledge (the jump to universality), so halfway houses like these heuristics would be very difficult to engineer; they would keep jumping to universality. And, for another, human knowledge and reasoning are memetic, not genetic, and there are no hard-coded reasoning rules.
This is just an argument over the definition of the phrase “bounded rationality”. Let’s call these two definitions BR1 and BR2. The definition that timtyler, Kahneman and Tversky, and I are using is BR1; the definition that you, curi, and David Deutsch use is BR2.
BR1 means “rationality that is performed using a finite amount of resources”. Think of this as bounded-resource rationality. All rationality done in this universe is BR1, by definition, because you only get a limited amount of time and memory to think about things. This definition does not contain any claims about what sort of knowledge BR1 can or can’t generate. A detailed theory of BR1 would say things like “solving this math problem requires at least 10^9 operations”. More commonly, people refer to BR1 to distinguish it from things like AIXI, which is a mathematical construct that can theoretically figure out anything given sufficient data, but which is impossible to construct because it contains several infinities. A mind with universal reasoning is BR1 if it only has a finite amount of time to do it in.
BR2 means “rationality that can generate some types of knowledge, but not others”. Think of this as bounded-domain rationality. Whether this exists at all depends on what you mean by “knowledge”. For example, if you have a computer program that collects seismograph data and predicts earthquakes, you might say it “knows” where earthquakes will occur; this would make it a BR2. If you say that this sort of thing doesn’t count as knowledge until a human reads it from a screen or printout, then no BR2s exist.
BR1 is a standard concept, but as far as I know BR2 is unique to Deutsch’s book Beginning of Infinity. BR1 exists, tautologically from its definition. Whether BR2 exists or not depends on how you define some other things, but personally I don’t find BR2 illuminating so I see no reason to take a stance either way on it.
I’m pretty sure there’s a similar issue with the definition of “induction”. I know of at least two definitions relevant to epistemology, but neither of them seems to make sense in context so I suspect that Deutsch has come up with a third. Could you explain what Deutsch uses the word induction to mean? I think that would clear up a great deal of confusion.
All your points are wrong, though. Induction has been discussed to death already. Computation universality doesn’t mean intelligent systems evolve without cognitive biases, and the fact that human cultural knowledge is memetic doesn’t mean there are not common built-in biases either. The human brain is reprogrammable to some extent, but much of the basic pattern-recognition circuitry has a genetically specified architecture.
Many of the biases in question are in the basic psychology textbooks—this is surely not something that is up for debate.
The conjunction fallacy: explanations of the Linda problem by the theory of hints
Empirical research has shown that in some situations, subjects tend to assign a probability to a conjunction of two events that is larger than the probability they assign to each of these two events. This empirical phenomenon is traditionally called the conjunction fallacy. One of the best-known experiments used to demonstrate the conjunction fallacy is the Linda problem introduced by Tversky and Kahneman in 1982. They explain the “fallacious behavior” by their so-called judgemental heuristics. These heuristics have been criticized heavily as being far “too vague to count as explanations”. In this article, it is shown that the “fallacious behavior” in the Linda problem can be explained by the so-called theory of hints.
Why do you argue from authority saying things like something surely cannot be up for debate because it’s in all the textbooks? curi and I are fallibilists: nothing is beyond question.
You say you’re a fallibilist, but you’re actually falling into the failure mode described in this article. Suppose you’ve got a question with positions A and B, with a bunch of supporting arguments for A and a bunch of supporting arguments for B. Some of those arguments for each side will be wrong, or ambiguous, or inapplicable; that’s what fallibilism predicts, and I think we all agree with that.
Suppose there are 3 valid and 3 invalid arguments for A, and 3 valid and 3 invalid arguments for B. Now suppose someone decides to get rid of any of the arguments that are invalid, but they happen to think A is better. Most people will end up attacking all the arguments for B, but they won’t look as closely at the arguments for A. After they’re finished, they’ll have 3 valid and 3 invalid arguments for A, and 3 valid arguments for B, which looks like a preponderance of evidence in favor of A, but it isn’t.
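The pruning asymmetry described above can be sketched in a few lines of code. Everything here (the function name, the boolean encoding) is my own illustration, not anything from the thread:

```python
# Two positions each start with 3 valid and 3 invalid supporting arguments.
# A motivated reviewer prunes invalid arguments only from the side they
# disfavor, which makes the favored side *look* better supported.

def surviving_counts(a_args, b_args, scrutinize_a=False, scrutinize_b=True):
    """Count arguments surviving scrutiny.

    Each argument list holds booleans: True = valid, False = invalid.
    Scrutinized sides lose their invalid arguments; unscrutinized sides
    keep everything, valid or not.
    """
    keep_a = [v for v in a_args if v] if scrutinize_a else list(a_args)
    keep_b = [v for v in b_args if v] if scrutinize_b else list(b_args)
    return len(keep_a), len(keep_b)

a = [True, True, True, False, False, False]  # 3 valid, 3 invalid for A
b = [True, True, True, False, False, False]  # 3 valid, 3 invalid for B

print(surviving_counts(a, b))                    # (6, 3): A looks twice as supported
print(surviving_counts(a, b, scrutinize_a=True)) # (3, 3): symmetric scrutiny
```

The apparent 6-to-3 advantage for A is entirely an artifact of where the scrutiny was applied, which is the point of the comment above.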
Now read the abstract of that paper you linked again. That paper disagrees with where K&T draw the boundary between questions that trigger the conjunction fallacy and questions that don’t, and describes the underlying mechanism that produces it differently. It does not claim that the conjunction fallacy doesn’t exist.
Empirical research has shown that in some situations, subjects tend to assign a probability to a conjunction of two events that is larger than the probability they assign to each of these two events. This empirical phenomenon is traditionally called the conjunction fallacy. One of the best-known experiments used to demonstrate the conjunction fallacy is the Linda problem introduced by Tversky and Kahneman in 1982. They explain the “fallacious behavior” by their so-called judgemental heuristics. These heuristics have been criticized heavily as being far “too vague to count as explanations”. In this article, it is shown that the “fallacious behavior” in the Linda problem can be explained by the so-called theory of hints. source
It seems as though they acknowledge the conjunction fallacy and are proposing different underlying mechanisms to explain how it is produced.
Why do you argue from authority saying things like something surely cannot be up for debate because it’s in all the textbooks? curi and I are fallibilists: nothing is beyond question.
If you want to argue with psychology 101, fine, but if you do it in public, without experimental support, and with a dodgy theoretical framework derived from computation universality, things are not going to go well.
If citing textbooks is classed as “arguing from authority”, one should point out that such arguments are usually correct.
They have put fallacious behaviour in quotes to indicate that they don’t agree the fallacy exists. I could be wrong, however, as I am just going from the abstract and maybe the authors do claim it exists. However they seem to be saying it is just an artifact of hints. I’ll need to read the paper to understand better. Maybe I’ll end up disagreeing with the authors.
Textbook arguments are often wrong. Consider quantum physics and the Copenhagen Interpretation for example. And one way of arguing against CI is from a philosophical perspective (it’s instrumentalist and a bad explanation).
I looked through the whole paper and don’t think you’re wrong.
I don’t agree with the hints paper in various respects. But it disagrees with the conjunction fallacy and argues that conjunction isn’t the real issue and the biases explanation isn’t right either. So certainly there is disagreement on these issues.
If citing textbooks is classed as “arguing from authority”, one should point out that such arguments are usually correct.
Do you mean in the context of arguments in textbooks? This seems like a very weak claim, given how frequently some areas change. Indeed, psychology is an area where what an intro-level textbook would both claim to be true and would even discuss as relevant topics has changed drastically in the last 60 years. For example, in a modern psychology textbook the primary discussion of Freud will be to note that most of his claims fell into two broad categories: untestable or demonstrably false. Similarly, even experimentally derived claims about some things (such as how children learn) have changed a lot in the last few years, as more clever experimental design has done a better job separating issues of planning and physical coordination from babies’ models of reality. Psychology seems to be a bad area to make this sort of argument.
Do you mean in the context of arguments in textbooks?
Yes.
This seems like a very weak claim, given how frequently some areas change.
It is weak, in that it makes no bold claims, and merely states what most would take for granted—that most of the things in textbooks are essentially correct.
has anybody admitted that there are these mistakes?
Some did, while others didn’t.
The ones admitting it said all epistemologies have those flaws, and it’s impossible to do anything about it. When told that one already exists they just dismissed that as impossible instead of being interested in researching whether it succeeds. Or sometimes they took an attitude similar to EY: it’s a flaw and maybe we’ll fix it some day but we don’t know how to yet and the attempts in progress don’t look promising. (Why doesn’t the Popperian attempt in particular look promising? Why hasn’t it already succeeded? No comment given.)
And they dismissed it without even knowing of any scholarly work by anyone on their side which makes their point for them. As far as they know, no one from their side ever refuted Popper in depth, having carefully read his books. And they are OK with that.
@lip service—anyone who cares to can find criticisms of Popper on my blog on a variety of subjects. This is just accusing sources of ideas of bias as a way to dismiss them, without even doing basic research about whether these claims are true (let alone explaining why source should be used to determine quality of substance).
Previously, the most popular philosophy of science was probably Karl Popper’s falsificationism—this is the old philosophy that the Bayesian revolution is currently dethroning.
You can even formalize Popper’s philosophy mathematically
Popper’s dictum that an idea must be falsifiable
Karl Popper’s insight that falsification is stronger than confirmation,
Karl Popper’s idea that theories can be definitely falsified,
Has anybody here said, yes, these are myths and should be retracted?
I do not trust that they are accurate. Consequently I discount them when I encounter them. I am currently reading The Beginning of Infinity (which is hard to obtain in the US, as it is not to be published until summer, though inexplicably I was able to buy it for the Kindle, and, equally inexplicably, my extensive highlights from the book are not showing up on my highlights page at Amazon), and trust Deutsch much more on the topic of Popper. I trust Popper still more on the topic of Popper, and I read the essay collection Objective Knowledge a few weeks ago.
I do not trust myself on the topic of Popper, which is why I will not declare these to be myths, as such a statement would presuppose that I am trustworthy.
Occasionally you make valid points and this is one of them. I agree that most of what you’ve quoted above is accurate. In general, Eliezer is somewhat sloppy when it comes to historical issues. Thus, I’ve pointed out here before problems with the use of phlogiston as an example of an unfalsifiable theory, as well as other essentially historical issues.
So we should now ask: should Eliezer read any Popper? Well, I’d say he should read LScD, and I’ve recommended Popper to people here before (along with Kuhn and Lakatos). But there’s something to note: I estimate that the chance that any regular LW reader is going to read any of Popper has gone down drastically in the last 1.5 weeks. I will let you figure out why I think that, and leave it to you to figure out whether that’s a good thing or not.
LScD is not the correct book to read if you want to understand Popper’s philosophy. C&R and OK are better choices.
What do you mean “along with” Kuhn and Lakatos? They are dissimilar to Popper.
Popper’s positions aren’t important as historical issues but because there is an epistemology that matters today which he explained. It’s not historical sloppiness when Eliezer dismisses a rival theory using myths; it’s bad scholarship in the present about the ideas themselves (even if he didn’t know the right ideas, why did he attack a straw man instead of learning better ideas, improving the ideas himself, or refraining from speaking?)
BTW I emailed Eliezer years ago to let him know he had myths about Popper on his website and he chose not to fix it.
What do you mean “along with” Kuhn and Lakatos? They are dissimilar to Popper
As in they are people worth reading.
LScD is not the correct book to read if you want to understand Popper’s philosophy. C&R and OK are better choices.
You’ve asserted this before. So far no one here including myself has seen any reason from what you’ve said to think that. LScD has some interesting points but is overall wrong. I fail to see why at this point reading later books based on the same notions would be terribly helpful. Given what you’ve said here, my estimate that there’s useful material there has gone downwards.
LScD is Popper’s first major work. It is not representative. It is way more formalistic than Popper’s later work. He changed on purpose and said so.
He changed his mind about some stuff from LScD; he improved on it later. LScD is written before he understood the justificationism issue nearly as well as he did later.
LScD engages with the world views of his opponents a lot. It’s not oriented towards presenting Popper’s whole way of thinking (especially his later way of thinking, after he refined it).
The later books are not “based on the same notion”. They often take a different approach: less logic, technical debate, more philosophical argument and explanation.
Since you haven’t read them, you really ought to listen to experts about which Popper books are best instead of just assuming, bizarrely, that the one you read which the Popper experts don’t favor is his best material. We’re telling you it’s not his best material; don’t judge him by it. It’s ridiculous to dismiss our worldview based on the books we’re telling you aren’t representative, while refusing to read the books we say explain what we’re actually about.
It’s ridiculous to dismiss our worldview based on the books we’re telling you aren’t representative, while refusing to read the books we say explain what we’re actually about.
I’m not dismissing your worldview based on books that aren’t representative. Indeed, earlier I told you that what you were saying especially in regards to morality seemed less reasonable than what Popper said in LScD.
The later books are not “based on the same notion”. They often take a different approach: less logic, technical debate, more philosophical argument and explanation.
So you are saying that he does less of a job making his notions precise and using careful logic? Using more words and less formalism is not making more philosophical argument; it is going back to the worst parts of philosophy. I don’t know what you think my views are, but whatever your model is of me, you might want to update it or replace it if you think the above was something that would make me more inclined to read a text. Popper is clearly quite smart and clever, and there’s no question that there’s a lot of bad or misleading formalism in philosophy, but the general trend is pretty clear that philosophers who are willing to use formalism are more likely to have clear ideas.
in regards to morality seemed less reasonable than what Popper said in LScD.
He changed his mind to the same kind of view I have, FYI.
So you are saying that he does less of a job making his notions precise and using careful logic?
He changed his mind about what types of precision matter (in what fields). He is precise in different ways: better explanations which get issues more precisely right; less formality; fewer attempts to use math to address philosophical issues. It’s not that he pays less attention to what he writes later, it’s just that he uses the attention for somewhat different purposes.
I don’t know what you think my views are, but whatever your model is of me, you might want to update it or replace it if you think the above was something that would make me more inclined to read a text.
I’m just explaining truths; I’m not designing my statements to have an effect on you.
but the general trend is pretty clear that philosophers who are willing to use formalism are more likely to have clear ideas.
I’m not sure about this trend; no particular opinion either way. Regardless, Popper isn’t a trend, he’s a category of his own.
It places too much emphasis on standards and style, but dissent, when it is expressed, may not conform to the standards and style one expects, yet still be truthful.
Yes, the truth can be rude, and this is why mere rudeness is sometimes mistaken for truth. But most rudeness is not truth, because rudeness is easy and truth is hard.
One of the differences is: when they screw up because they don’t know this stuff, we’ll explain some.
When they think we’re screwing up, they downvote to invisibility, remove normal site links to the post, say “read the sequences” (which don’t specifically address the points of disagreement), or just plain stop engaging (e.g. the guy who said I was in invisible-dragon territory, and therefore safe to ignore without the possibility of missing any important ideas, instead of discussing my point. Basically, since my ideas fail some of his criteria, mostly due to his misunderstanding, he ignores them. And he thinks that is safe and not closed-minded!)
Yes, but at the same time, people who enjoy that should realize they might be damaging LW’s very good signal to noise ratio. I certainly was guilty of that by repeatedly replying.
At the risk of sounding glib, one man’s signal is another man’s noise. About half of those threads consisted of reasonably good and interesting arguments. And the other half included some links to good ideas expressed less shallowly.
Yes, there are good and interesting arguments there, as well as good links. I think many people on LW underestimate the difficulty of communicating, especially communicating new ideas. Learning is difficult, it takes time and effort, and there will be many misunderstandings. curi, myself, and other Popperians have a very different philosophy to the one here at LW. It can take a lot of discussion with someone, often where seemingly no progress is made, before they begin to understand something like why support is impossible.
Another point is that the traditions here are actually not conducive to truth-seeking discussions, especially where there is fundamental disagreement. There is this voting system for one thing. It’s designed to get people into line and to score posts. It encourages you to write to get good karma, but writing to get good karma shouldn’t be a goal in any truth-seeking discussion. It places too much emphasis on standards and style, but dissent, when it is expressed, may not conform to the standards and style one expects, yet still be truthful. It is supposed to be some kind of troll detector, and a way of labelling them, so that people can commit the fallacy of judging ideas by their source. The whole thing, in short, is authoritarian.
I must apologize for my lack of clarity. You have apparently taken me as a “fellow traveler”. Sorry. The “good and interesting arguments” that I made reference to were written by the anti-Popper ‘militia’ rather than the Popperian ‘invaders’. I meant to credit the invaders only with “good links”.
What I have found discouraging in this regard is that you and Curi have been so hopelessly tone-deaf in coming to understand just why your doctrines have been so difficult to sell. Why (and in what sense) people here think that support is possible. That folks who pay lip service to Popper seem to treat their own positions as given by authority and measure ‘progress’ by how much the other guy changes his mind.
Probably the most frustrating thing about these episodes is how little you guys have learned from your experience here, and how much of what you think you have learned is incorrect. People can engage in persistent disagreement with the local orthodoxy without losing karma. People can occasionally speak too impolitely without losing excessive karma. I serve as a fairly good example of both. There are many other examples. It is actually fairly easy. All you have to do is to expend as much effort in reacting to what other people say as in trying to get your own points across. In trying to understand what they are saying, rather than reacting negatively.
Curi’s posting against the conjunction fallacy was a perfect example of how not to do things here—rising to the level of self-parody. Attacking non-existent beliefs in a doctrine he clearly didn’t understand. How could an intelligent person possibly have become so deluded about what his interlocutors understood by the “conjunction fallacy”? How could he have thought it worthwhile to use sock puppets to raise his karma enough to make that posting? Was he really trying to engage in instruction and consciousness raising? Or was he just trying to assert intellectual superiority?
In a sense, it is a shame that the posting can no longer be seen, because it was so perfect as a negative example.
Please don’t feed the trolls within a discussion about feeding trolls.
:) , but …
Occasional trollish behavior does not brand one as inherently and essentially trollish. IMHO, our Popperian friends are exhibiting sufficiently good behavior in this conversation as to deserve being treated with a modicum of respect.
I agree. Hate the sin, not the sinner.
That’s this post - which is still somewhat visible.
I did not use sock puppets. I do not have any additional accounts here.
Given you are insulting me with 100% imaginary factual claims which you have no evidence for, who is the troll, really?
I would be happy to discard the hypothesis should it be falsified. Or even if a more plausible hypothesis were to be brought to my attention. So let me ask. Do you know why your karma jumped sufficiently to make that conjunction fallacy posting? I would probably accept your explanation if you have a plausible one.
My karma was fluctuating a lot. It got up around 90 three times, and back to 0 each time. It also had many smaller upticks, getting to the 20-50 range.
The reason it stopped fluctuating is the conjunction fallacy post: that −330 from that one post keeps my karma solidly negative.
Well, since you don’t offer a plausible hypothesis, I have no reason to discard my plausible one. But I will repeat the question, since you failed to answer it. Do you know why your karma jumped sufficiently for you to make that posting? That is, for example, do you know that someone (whether or not they wear socks) was systematically upvoting you?
How should I know?
I think I was systematically upvoted at least once, maybe a few times. I also think I was systematically downvoted at least once. I often saw my karma drop 10 or 20 points in a couple of minutes (not due to a front-page post), sometimes more. shrug. Wasn’t paying much attention.
You’re really going to assume as a default—without evidence—that I’m a lying cheater rather than show a little respect? Maybe some people liked my posts.
To the extent I’ve shown them to non-Bayesians, I’ve received nothing but praise and agreement, and in particular the conjunction fallacy post was deeply appreciated by some people.
Upvoted (systematically?) for honesty. ETA: And the sock-puppet hypothesis is withdrawn.
I liked some of your posts. As for accusing you of being a “lying cheater”, I explicitly did not accuse you of lying. I suggested that you were being devious and disingenuous in the way you responded to the question.
Really???? By people who understand the conjunction fallacy the way it is generally understood here? Because, as I’m sure you are aware by now, the idea is not that people always attach a higher probability to (A & B) than they do to (B) alone. It is that they do it (reproducibly) in a particular kind of context C which makes (A | C) plausible and especially when (B | A, C) is more plausible than (B | C).
So what you were doing in that posting was calling people idiots for believing something they don’t believe—that the conjunction fallacy applies to all conjunctions without exception. You were exhibiting ignorance in an obnoxious fashion. So, of course you got downvoted. And now we have you and Scurfield believing that we here are just unwilling to listen to the Popperian truth. Are you really certain that you yourselves are trying hard enough to listen?
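The probability rule at issue in this exchange can be sketched in a few lines. The events and the weights below are invented for illustration, not taken from Tversky and Kahneman’s experiments:

```python
# For ANY events A and B, P(A and B) <= min(P(A), P(B)). The "conjunction
# fallacy" is the finding that, in certain contexts, subjects' stated
# probabilities violate this inequality.

# Toy "worlds" as (feminist, bank_teller) pairs, with made-up weights
# summing to 1. Any other non-negative weights would do just as well.
worlds = {
    (True, True): 0.05,   # feminist bank teller
    (True, False): 0.45,  # feminist, not a bank teller
    (False, True): 0.10,  # bank teller, not a feminist
    (False, False): 0.40, # neither
}

p_teller = sum(w for (f, t), w in worlds.items() if t)
p_teller_and_feminist = sum(w for (f, t), w in worlds.items() if t and f)

# The conjunction can never be more probable than either conjunct,
# whatever weights were chosen above.
assert p_teller_and_feminist <= p_teller
print(round(p_teller, 2), round(p_teller_and_feminist, 2))  # 0.15 0.05
```

The inequality holds for every assignment of weights, which is why the claim is about how people in the experiments *judge* such probabilities, not about the mathematics itself.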
Which is it?
Both. I particularly liked curi’s post linking to the Deutsch lecture. But, in the reference in question, here, I was being cute and saying that half the speeches in the conversation (the ones originating from this side of the aisle) contained good and interesting arguments. This is not to say that you guys have never produced good arguments, though the recent stuff—particularly from curi—hasn’t been very good. But I was in agreement with you (about a week ago) when you complained that curi’s thoughtful comments weren’t garnering the upvotes they deserved. I always appreciate it when you guys provide links to the Critical Rationalism blog and/or writings by Popper, Miller, or Deutsch. You should do that more often. Don’t just provide a bare link, but (assertion, argument, plus link to further argument) is a winning combination.
I have some sympathy with you guys for three reasons:
I’m a big fan of Gunter Wachtershauser, and he was a friend and protege of Popper.
You guys have apparently taken over the task of keeping John Edser entertained. :)
I think you guys are pretty much right about “support”. In fact, I said something very similar recently in my blog.
I haven’t spoken to John Edser since Matt killed the CR list ><
He could have just given it away, but no, he wanted to kill it...
Well, no, they have a different way of thinking about it than you. They disagree with you and agree with me.
Your equivocations about what the papers do and don’t say do not, in my mind, make anything better. You’re first of all defanging the paper as a way to defend it: when you basically deny it made any substantive claims, you’re making it easy to defend against anything, and that’s not something, I think, that its authors would appreciate.
You’ve misunderstood the word “applies”. It applies in the sense of being relevant, not happening every time.
The idea of the conjunction fallacy is there is a mistake people sometimes make and it has something to do with conjunctions (in general, not a special category of them). It is supposed to apply to all conjunctions in the sense that whenever there is a conjunction it could happen.
What I think is: you can trick people into making various mistakes. The ones from these studies have nothing to do with conjunctions. The authors simply designed it so all the mistakes would look like they had something to do with conjunctions, but actually they had other causes such as miscommunication.
Can you see how, if you discovered a way to miscommunicate with people to cause them to make mistakes about many topics (for example), you could use it selectively, e.g. by making all the examples you publish have to do with conjunctions, and can you see how, in that case, I would be right that the conjunction fallacy is a myth?
I’m pretty sure you are wrong here, but I’m willing to look into it. Which paper, exactly, are you referring to?
Incidentally, your reference to my “equivocations about what the papers do and don’t say” suggest that I said one thing at one point, and a different thing at a different point. Did I really do that? Could you point out how/where?
I can see that if I discovered a way to cause people to make mistakes, I would want to publish about it. If the clearest and most direct demonstration (that the thinking I had caused actually was a mistake) were to exhibit the mistaken thinking in the form of a conjunction, then I might well choose to call my discovery the conjunction fallacy. I probably would completely fail to anticipate that some guy who had just had a very bad day (in failing to convince a Bayesian forum of his Popperian brilliance) would totally misinterpret my claims.
They said people have bounded rationality in the Nobel Prize paper. In the 1983 paper they were more ambiguous about their conclusions.
Besides their papers, there is the issue of what their readership thinks. We can learn about this from websites like Less Wrong and wikipedia and what they say it means. I found they largely read stronger claims into the papers than were actually there. I don’t blame them: the papers hinted at the stronger claims on purpose, but avoided saying too much for fear of being refuted. But they made it clear enough what they thought and meant, and that’s how Less Wrong people understand the papers.
The research does not even claim to demonstrate the conjunction fallacy is common—e.g. they state in the 1983 paper that their results have no bearing (well, only a biased bearing, they say) on the prevalence of the mistakes. Yet people here took it as a common, important thing.
It’s a little funny though, b/c the researchers realize that in some senses their conclusions don’t actually follow from their research, so they have to be careful what they say.
By equivocate I meant making ambiguous statements, not changing your story. You probably don’t regard them as ambiguous. Different perspective.
Everything is ambiguous in regards to some issues, and which issues we regard as the important ones determines which statements we regard as ambiguous in any important way.
I think if your discovery has nothing to do with conjunctions, it’s bad to call it the conjunction fallacy, and then have people make websites saying it has to do with the difference in probability between A&B and A, and that it shows people estimate probability wrong.
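For reference, the bare probability rule that both sides of this argument accept can be sketched in a few lines. The numbers below are invented purely for illustration; they are not taken from the papers or from anyone in the thread:

```python
# The probability rule behind the "conjunction fallacy" label:
# for any events A and B, P(A and B) <= P(A), since the conjunction
# can only hold in cases where A holds. Numbers are illustrative only.
p_a = 0.3           # P(A), e.g. some statement about a person
p_b_given_a = 0.5   # P(B | A), a further detail conditional on A

p_a_and_b = p_a * p_b_given_a  # P(A and B) = P(A) * P(B | A)

# Adding the extra detail B can never raise the probability above P(A).
assert p_a_and_b <= p_a
print(p_a, p_a_and_b)
```

The dispute in the thread is not over this arithmetic, but over how often, and under what conditions, people’s intuitive judgments violate it.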
Your comments about me having a bad day are silly and should be withdrawn. Among other things, I predicted in advance what would happen and wasn’t disappointed.
“Not disappointed”, huh? You are probably not surprised that people call you a troll, either. :)
As for your claims that people (particularly LessWrongers) who invoke the Conjunction Fallacy are making stronger claims than did the original authors—I’ll look into them. Could you provide two actual (URL) links of LessWrong people actually doing that to the extent that you suggested in your notorious posting?
http://lesswrong.com/lw/ji/conjunction_fallacy/
This moral is not what the researchers said; they were careful not to say much in the way of a conclusion, only hint at it by sticking it in the title and some general remarks. It is an interpretation added on by Less Wrong people (as the researchers intended it to be).
This moral contains equivocation: it says it “can” happen. The Less Wrong people obviously think it’s more than “can”: it’s pretty common and worth worrying about, not a one in a million.
The moral, if taken literally, is pretty vacuous. Removing detail or assumptions can also make an event seem more plausible, if you do it in particular ways. Changing ideas can change people’s judgment of them. Duh.
It’s not meant to be taken that literally. It’s meant to say: this is a serious problem, it’s meaningful, it really has something to do with adding conjunctions!
The research is compatible with this moral being false (if we interpret it to have any substance at all), and with it only being a one in a million event.
You may think one in a million is an exaggeration. But it’s not. The size of the set of possible questions the researchers selected from was … I have no idea, but surely far more than trillions. How many would have worked? The research does not and cannot say. The only things that could tell us that are philosophical arguments or perhaps further research.
The research says this stuff clearly enough:
http://www.econ.ucdavis.edu/faculty/nehring/teaching/econ106/readings/Extensional%20Versus%20Intuitive.pdf
In this paper, the researchers clearly state that their research does not tell us anything about the prevalence of the conjunction fallacy. But Less Wrongers have a different view of the matter.
They also state that it’s not arbitrary conjunctions which trigger the “conjunction” fallacy, but ones specifically designed to elicit it. But Less Wrong people are under the impression that people are in danger of committing the conjunction fallacy even when no one is designing situations to elicit it. That may or may not be true; the research certainly doesn’t demonstrate it is true.
Note: flawed results can tell you something about prevalence, but only if you estimate the amount of error introduced by the flaws. They did not do that (and it is very hard to do in this case). Error estimates of that type are difficult and would need to be subjected to peer review, not made up by readers of the paper.
Here’s various statements by Less Wrongers:
http://lesswrong.com/lw/56m/the_conjunction_fallacy_does_not_exist/
The research does not claim the students could not have misinterpreted. It suggests they wouldn’t have—the Less Wronger has gotten the idea the researchers wanted him to—but they won’t go so far as to actually say miscommunication is ruled out, because it’s not. Given that they were designing their interactions with their subjects on purpose in a way to get people to make errors, miscommunication is actually a very plausible interpretation, even in cases where the not very detailed writeup of what they actually did fails to explicitly record any blatant miscommunication.
This also goes beyond what the research says.
Also note that he forgot to equivocate about how often this happens. He didn’t even put a “sometimes” or a “more often than never”. But the paper doesn’t support that.
He is apparently unaware that it differs from normal life in that in normal life people’s careers don’t depend on tricking you into making mistakes which they can call conjunction fallacies.
When this was pointed out to him, he did not rethink things but pressed forward:
But when you read about the fallacy in the Less Wrong articles about it, they do not state “only happens in the cases where people are trying to trick you”. If it only applies in those situations, well, say so when telling it to people so they know when it is and isn’t relevant. But this constraint on applicability, from the paper, is simply ignored most of the time to reach a conclusion the paper does not support.
Here’s someone from Less Wrong denying there was any trickery in one of the experiments, even though the paper says there was.
Note the heavy equivocation. He retreats from “always” which is a straw man, to “sometimes” which is heavily ambiguous about how often. He has in mind: often enough to be important; pretty often. But the paper does not say that.
BTW the paper is very misleading in that it says things like, in the words of a Less Wronger:
The paper is full of statements that sound like they have something to do with the prevalence of the conjunction fallacy. And then one sentence admitting that all those numbers should be disregarded. If there is any way to rescue those numbers as legitimate at all, it was too hard for the researchers and they didn’t attempt it (I don’t mean to criticize their skill here. I couldn’t do it either. Too hard.)
It is difficult for me to decide how to respond to this. You are obviously sincere—not trolling. The “conjunction fallacy” is objectionable to you—an ideological assault on human dignity which must be opposed. You see evidence of this malevolent influence everywhere. Yet I see it nowhere. We are interpreting plain language quite differently. Or rather, I would say that you are reading in subtexts that I don’t think are there.
To me, the conjunction fallacy is something like one of those optical illusions you commonly find in psychology books. “Which line is longer?” Our minds deceive us. No, a bit more than that. One can set up trick situations in which our minds predictably deceive us. Interesting. And definitely something that psychologists should look at. Maybe even something that we should watch for in ourselves, if we want to take our rationality to the next level. But not something which proves some kind of inferiority of human reason; something that justifies some political ideology.
I suggested this ‘deflationary’ take on the CF, and your initial response was something like “Oh, no. People here agree with me. The CF means much more than you suggest.” But then you quote Kahneman and Tversky:
Yes, you admit, K&T were deflationary, if you take their words at face value, but you persist that they intended their idea to be taken in an inflated sense. And you claim that it is taken in that sense here. So you quote LessWrong:
I emphasized the can because you call attention to it. Yes, you admit, LessWrong was deflationary too, if you take the words at face value. But again you insist that the words should not be taken at face value—that there is some kind of nefarious political subtext there. And as proof, you suggest that we look at what the people who wrote comments in response to your postings wrote.
All I can say is that here too, you are looking for subtexts, while admitting that the words themselves don’t really support the reading you are suggesting:
You are making a simple mistake in interpretation here. The vision of the conjunction fallacy that you attacked is not the vision of the conjunction fallacy that people here were defending. It is not all that uncommon for simple confusions like this one to generate inordinate amounts of sound and fury. But still, this one was a doozy.
Optical illusions are neat, but that’s just not what these studies mean to, e.g., their authors:
http://nobelprize.org/nobel_prizes/economics/laureates/2002/kahnemann-lecture.pdf
Bounded rationality is what they believe they were studying.
Our unbounded rationality is the topic of The Beginning of Infinity. Different worldview.
I don’t think the phrase “bounded rationality” means what you think it means. “Bounded rationality” is a philosophical term that means rationality as performed by agents that can only think for a finite amount of time, as opposed to unbounded rationality which is what an agent with truly infinite computational resources would perform instead. Since humans do not have infinite time to think, “our unbounded rationality” does not exist.
Where do they define it to have this technical meaning?
I’m not sure if your distinction makes sense (can you say what you mean by “rationality”?). One doesn’t need infinite time to be fully rational, in the BoI worldview. I think they conceive of rationality itself—and also which bounds are worth attention—differently. btw surely infinite time has no relevance to their studies in which people spent 20 minutes or whatever. or if you’re comparing with agents that do infinite thinking in 20 minutes, well, that’s rather impossible.
They don’t, because it’s a pre-existing standard term. Its Wikipedia article should point you in the right direction. I’m not going to write you an introduction to the topic when you haven’t made an effort to understand one of the introductions that’s already out there.
Attacking things that you don’t understand is obnoxious. If you haven’t even looked up what the words mean, you have no business arguing that the professionals are wrong. You need to take some time off, learn to recognize when you are confused, and start over with some humility this time.
But wikipedia says it means exactly what I thought it meant:
Let me quote the important part again:
So everything I said previously stands.
BoI says there are no limits on human minds, other than those imposed by the laws of physics on all minds, and also the improvable-without-limit issue of ignorance.
My surprise was due to you describing the third clause alone. I didn’t think that was the whole meaning.
No, you’re still confused. Unbounded rationality is the theoretical study of minds without any resource limits, not even those imposed by the laws of physics. It’s a purely theoretical construct, since the laws of physics apply to everyone and everything. This conversation started with a quote where K&T used the phrase “bounded rationality” to clarify that they were talking about humans, not about this theoretical construct (which none of us really care about, except sometimes as an explanatory device).
Our “unbounded rationality” is not the type you are talking about. It’s about the possibility of humans making unlimited progress. We conceive of rationality differently than you do. We disagree with your dichotomy, approach to research, assumptions being made, etc...
So, again, there is a difference in world view here, and the “bounded rationality” quote is a good example of the anti-BoI worldview found in the papers.
The phrase “unbounded rationality” was first introduced into this argument by you, quoting K&T. The fact that Beginning of Infinity uses that phrase to mean something else, while talking about a different topic, is completely irrelevant.
please don’t misquote.
I’m just going to quickly say that I endorse what jimrandomh has written here. The concept of “bounded rationality” is a concept in economics/psychology going back to Herbert Simon. It is not, as you seem to think, an ideology hostile to human dignity.
I’m outta here! The depth and incorrigibility of your ignorance rivals even that of our mutual acquaintance Edser. Any idiot can make a fool of himself, but it takes a special kind of persistence to just keep digging as you are doing.
If the source of your delusion is Deutsch—well, it serves you right for assuming that any physicist has enough modesty to learn something about other fields before he tries to pontificate in them.
I find nothing in Deutsch that supports curi’s confusion.
Here is a passage from Herbert A. Simon: the bounds of reason in modern America:
Do you think this is accurate? If so, can you see how a Popperian would think it is wrong?
They don’t know anything. They just make up stuff about people they haven’t read. Half the stuff they’ve tried to cite, we’ve read more of than they have. Then they downvote instead of answering.
i checked out the Mathematical Statistics book today that one of them said had a proof. it didn’t have it. and there was that other guy repeatedly pressuring me to read some book he’d never read, based on his vague idea of the contents and the fact that the author was a professional.
they are in love with citing authority and judging ideas by sources (which they think make the value of the ideas probabilistically predictable with an error margin). not so much in love with the notion that if they can’t answer all the criticisms of their position they’re wrong. not so much in love with the notion that popper published criticisms no bayesian ever answered. not so much in love with the idea that, no, saying why your position is awesome doesn’t automatically mean everything else ever is worse and can safely be ignored.
While making false accusations like this does get you attention, it is also unethical behavior. We have done our very best to help you understand what we’re talking about, but you have a pattern where every time you don’t understand something, you skip the step where you should ask for clarification, you make up an incorrect interpretation, and then you start attacking that interpretation.
Almost every sentence you have above is wrong. Against what may be my better judgment I’m going to comment here because a) I get annoyed when my own positions are inaccurately portrayed and b) I hope that showing that you are wrong about one particular issue might convince you that you are interpreting things through a highly emotional lens that is causing you to misread and misremember what people have to say.
I presume you are referring to my remarks.
Let’s go back to the context.
You wrote:
and then wrote:
To which I replied
In another part of that conversation I wrote:
You then wrote:
Note that this comment of yours is the first example where the word “professional” shows up.
I observed
You replied:
My final comment on that issue was then:
And asking about your comment about professionals.
You never replied to that question. I’ve never mentioned Earman in the about 20 other comments to you after that exchange. This doesn’t seem like “repeatedly pressuring me to read some book he’d never read”. Note also that I never once discussed Earman being a “professional” until you brought up the term.
You appear to be reading conversations through an adversarial lens in which people with whom you disagree must be not just wrong but evil, stupid, or slavishly adhering to authority. This is a common failure mode of conversation. But we’re not evil mutants. Humans like to demonize the ones they disagree with and it isn’t productive. It is a sign that one is unlikely to be listening with the actual content. At that point, one needs to either radically change approach or just spend some time doing something else and wait for the emotions to cool down.
Now, you aren’t the only person here with that issue; there’s some reaction by some of the regulars here in the same way, but that’s to a large extent a reaction against what you’ve been doing. So, please leave for some time; take a break. Think carefully about what you intend to get out of Less Wrong and whether or not anyone here has any valid points. And ask yourself, does it really seem likely to you that not a single person on LW would ever have made a point to you that turned out to be correct? Is it really that likely that you are right about every single detail?
If it helps, imagine we weren’t talking about you but some other person. That person wanders over to a forum they’ve never been to, and has extensive discussions about technical issues where that person disagrees with the forum regulars on almost everything, including peripherally related issues. How likely would you be to say that that person had everything right, even if you took for granted that they were correct about the overarching primary issues? So, if the person never agrees that they were wrong about even the tiniest of issues, do you think it is more likely that the person actually got everything right or that they just aren’t listening?
(Now, everyone who thinks that this discussion is hurting our signal to noise ratio please go and downvote this comment. Hurting my karma score is probably the most efficient method of giving me some sort of feedback to prevent further engagement.)
You did not even state whether the Earman book mentions Popper. Because, I took it, you don’t know. so … wtf? that’s the best you can do?
note that, ironically, your comment here complaining that you didn’t repeatedly pressure is itself pressure, and not for the first time...
When they have policies like resorting to authoritarian, ad hominem arguments rather than substantive ones … and attacking merits such as confidence and lack of appeasement … then yes. You’re just asking me to concede because one person can’t be right so much against a crowd. It’s so deeply anti-Ayn-Rand.
Let me pose a counter question: when you contradict several geniuses such as Popper, Deutsch, Feynman (e.g. about caring what other people think), and Rand, does that worry you? I don’t particularly think you should say “yes” (maybe mild worry causing only an interest in investigating a bit more, not conceding). The point is your style of argument can easily be used against you here. And against most people for most stuff.
BTW did you know that improvements on existing ideas always start out as small minority opinions? Someone thinks of them first. And they don’t spread instantly.
Why shouldn’t I be right most of the time? I learned from the best. And changed my mind thousands of times in online debates to improve even more. You guys learned the wrong stuff. Why should the number of you be important? Lots of people learning the same material won’t make it correct.
Can’t you see that that’s irrational? Karma scores are not important, arguments are.
Let’s do a scientific test. Find someone impartial to take a look at these threads, and report how they think you did. To avoid biasing the results, don’t tell them which side you were on, and don’t use the same name, just say “There was recently a big argument on Less Wrong (link), what do you think of it?” and have them email back the results.
Will you do that?
I can do that, if you can agree on a comment to link to. It’ll double-blind it in a sense—I do have a vague opinion on the matter but haven’t been following the conversation at all over the last couple days, so it’d be hard for me to pick a friend who’d be inclined to, for example, favor a particular kind of argument that’s been used.
The judge should be someone who’s never participated on Less Wrong before, so there’s no chance that they’ve already picked up ideas from here.
That is true of one of the two people I have in mind. The other has participated on less than five occasions. I have definitely shown articles from here to the latter and possibly to the former, but neither of them have done any extended reading here that I’m aware of (and I expect that I would be if they had). Also, neither of them are the type of person who’s inclined to participate in LW-type places, and the former in particular is inclined toward doing critical analysis of even concepts that she likes. (Just not in a very LW-ish way.)
I’d suggest using the primary thread about the conjunction fallacy and then the subthread on the same topic with Brian. Together that makes a long conversation with a fair number of participants.
How am I supposed to find someone impartial? Most people are justificationists. Most Popperians I know will recognize my writing style (plus, having showed them plenty already, i already know they agree with me, and i don’t attribute that to bias about the source of each post). The ones who wouldn’t recognize my writing style are just the ones who won’t want to read all this and discuss—it’s by choosing not to read much online discussion that they stayed that way.
EDIT: what does it even mean for someone to be neutral? neither justificationist nor popperian? are there any such people? what do they believe? probably ridiculous magical thinking! or hardcore skepticism. or something...
How about someone who hasn’t read much online discussion, and hasn’t thought about the issue enough to take a side? There are lots of people like that. It doesn’t have to be someone you know personally; a friend-of-a-friend or a colleague you don’t interact with much will do fine (although you might have to bribe them to spend the time reading).
A person like that won’t follow details of online discussion well.
A person like that won’t be very philosophical.
Sounds heavily biased against me. Understanding advanced ideas takes skill.
And, again, a person like that can pretty much be assumed to be a justificationist.
Surely you can find someone who seems like they should be impartial and who’s smart enough to follow an online discussion. They don’t have to understand everything perfectly, they’re only judging what’s been written. Maybe a distant relative who doesn’t know your online alias?
If you pick a random person on the street, and you try to explain Popper to them for 20 minutes—and they are interested enough to let you have those 20 minutes—the chances they will understand it are small.
This doesn’t mean Popper was wrong. But it is one of many reasons your test won’t work well.
Right, it takes a while to understand these things. But they won’t understand Bayes, either; but they’ll still be able to give some impression of which side gave better arguments, and that sort of thing.
How about a philosopher who works on a topic totally unrelated to this one?
Have you ever heard what Popper thinks of the academic philosophy community?
No, an academic philosopher won’t do.
This entire exercise is silly. What will it prove? Nothing. What good is a sample size of 1 for this? Not much. Will you drop Bayesianism if the person sides against you? Of course not.
How about a physicist, then? They’re generally smart enough to figure out new topics quickly, but they don’t usually have cause to think about epistemology in the abstract.
MWI is a minority view in the physics community because bad philosophy is prevalent there.
The point of the exercise is not just to judge epistemology, it’s to judge whether you’ve gotten too emotional to think clearly. My hypothesis is that you have, and that this will be immediately obvious to any impartial observer. In fact, it should be obvious even to an observer who agrees with Popper.
How about asking David Deutsch what he thinks? If he does reply, I expect it will be an interesting reply indeed.
I know what he thinks: he agrees with me.
But I’ve already shown lots of this discussion to a bunch of observers outside Less Wrong (I run the best private Popperian email list). Feedback has been literally 100% positive, including some sincere thank yous for helping them see the Conjunction Fallacy mistake (another reaction I got was basically: “I saw the conjunction fallacy was crap within 20 seconds. Reading your stuff now, it’s similar to what I had in mind.”) I got a variety of other positive reactions about other stuff. They’re all at least partly Popperian. I know that many of them are not shy about disagreeing with me or criticizing me—they do so often enough—but when it comes to this Bayesian stuff none of them has any criticism of my stance.
Not a single one of them considers it plausible my position on this stuff is a matter of emotions.
I’m skeptical of the claim about Deutsch. Why not actually test it? A Popperian should try to poke holes in his ideas. Note also that you seem to be missing Jim’s point: Jim is making an observation about the level of emotionalism in your argument, not the correctness. These are distinct issues.
As another suggestion, we could take a bunch of people who have epistemologies that are radically different from either Popper or Bayes. From my friends who aren’t involved in these issues, I could easily get an Orthodox Jew, a religious Muslim, and a conspiracy theorist. That also handles your concern about a sample size of 1. Other options include other Orthodox Jews including one who supports secular monarchies as the primary form of government, and a math undergrad who is a philosophical skeptic. Or if you want to have some real fun, we could get some regulars from the Flat Earth Forum. A large number of them self-identify as “zetetic” which in this context means something like only accepting evidence that is from one’s own sensory observation.
Jim is not suggesting that your position is “a matter of emotions”- he’s suggesting that you are being emotional. Note that these aren’t the same thing. For example, one could hypothetically have a conversation between a biologist and a creationist about evolution and the biologist could get quite angry with the creationist remaining calm. In that case, the biologist believes what they do due to evidence, but they could still be unproductively emotional about how they present that evidence.
From sibling:
Shall I take this as a no?
You’ve already been told that the point of Jim’s remark was about the emotionalism in your remarks, not about the correctness of your arguments. In that context, your sibling comment is utterly irrelevant. The fact that you still haven’t gotten that point is of course further evidence of that problem. We are well past the point where there’s any substantial likelihood of productive conversation. Please go away. If you feel a need to come back, do so later, a few months from now, when you are confident you can do so without insulting people or deliberately trolling, and are willing to actually listen to what people here have to say.
So you say things that are false, and then you think the appropriate follow up is to rant about me?
And your skepticism stems from a combination of
A) your ignorance
B) your negative assumptions to fill in the gaps in knowledge. you don’t think I’m a serious person who has reasons for what he says, and you became skeptical before finding out, just by assumption.
And you’re wrong.
Maybe by pointing it out in this case, you will be able to learn something. Do you think so?
Here’s two mild hints:
1) I interviewed David Deutsch yesterday
2) I’m in the acknowledgements of BoI
Here you are lecturing me about how a Popperian should try to poke holes in his ideas, as if I hadn’t. I had. (The hints do not prove I had. They’re just hints.)
I still don’t understand what you think the opinions of several random people will show (and now you are suggesting people that A) we both think are wrong (so who cares what they think?) B) are still justificationists). It seems to me the opinions of, say, the flat earth forum, should be regarded as pretty much random (well, maybe not random if you know a lot about their psychology. which i don’t want to study) and not a fair judge of the quality of arguments!
So they are extremely anti-Popperian...
You don’t say...
Also I want to thank you for continuing to post. You are easily the most effective troll that LW has ever had, and it is interesting and informative to study your techniques.
This isn’t obvious to me, but this cuts both ways.
As to your other claims, do you think any of these matters will be a serious issue if we used the conjunction fallacy thread as our test case?
Yes. Someone with a worldview similar to mine will be more inclined to agree with me in all the threads. Someone with one I think is rather bad … won’t.
This confuses me. Why should whether someone is a justificationist impact how they read the thread about the conjunction fallacy? Note by the way that you seem to be misinterpreting Jim’s point. Jim isn’t saying that one should have an uninvolved individual decide who is correct. Jim’s suggestion (which granted may not have been explained well in his earlier comment) is to focus on seeing if anyone is behaving too emotionally rather than calmly discussing the issues. That shouldn’t be substantially impacted by philosophical issues.
Because justificationism is relevant to that, and most everything else.
Seriously? You don’t even know that there are philosophical issues about the nature of emotion, the proper ways to evaluate its presence, and so on?
I’m neither Justificationist nor Popperian. I’m Bayesian, which is neither of these things.
In our terminology, which you have not understood, Bayesians like yourself are justificationists.
No offense, but you haven’t demonstrated that you understand Bayesian epistemology well enough to classify it. I read the Wikipedia page on Theory of Justification, and it pretty much all looks wrong to me. How sure are you that your Justificationist/Popperian taxonomy is complete?
You can’t understand Popper by reading wikipedia. Try Realism and the Aim of Science starting on p18 for a ways.
BTW induction is a type of justificationism. Bayesians advocate induction. The end.
I have no confidence the word “induction” means the same thing from one sentence to the next. If you could directly say precisely what is wrong with Bayesian updating, with concrete examples of Bayesian updating in action and why they fail, that might be more persuasive, or at least more useful for diagnosis of the problem wherever it may lie.
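For concreteness, here is a minimal sketch of what “Bayesian updating” standardly refers to. The numbers are made up solely to show the mechanics; nothing here is from either party in the thread:

```python
# Toy Bayesian update via Bayes' theorem. All numbers are invented
# only to illustrate the mechanics of "updating" on evidence E.
prior_h = 0.5          # P(H): prior credence in hypothesis H
p_e_given_h = 0.8      # P(E | H): chance of seeing E if H is true
p_e_given_not_h = 0.2  # P(E | ~H): chance of seeing E if H is false

# Total probability of the evidence:
# P(E) = P(E|H) P(H) + P(E|~H) P(~H)
p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)

# Bayes' theorem: P(H | E) = P(E | H) P(H) / P(E)
posterior_h = p_e_given_h * prior_h / p_e

assert posterior_h > prior_h  # evidence favoring H raises its probability
print(posterior_h)
```

The Popperian objections that follow concern which observations get fed into such an update and how the candidate theories are selected in the first place, not the algebra itself.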
For example, you update based on selective observation (all observation is selective).
And you update theories getting attention selectively, ignoring the infinities (by using an arbitrary prior, but not actually using it in the sense of applying it infinitely many times, just estimating what you imagine might happen if you did).
Why don’t you—or someone else—read Popper? You can’t expect to fully understand it from discussion here. Why does this community have no one who knows what they are talking about on this subject? Who is familiar with the concepts instead of just having to ask me?
Since all observation is selective, then the selectivity of my observations can hardly be a flaw. Therefore the flaw is that I update. But I just asked you to explain why updating is flawed. Your explanation is circular.
I don’t know what that means. I’m not even sure that
is idiomatic English. And
needs to be clarified, I’m sure you must realize. Your parenthetical does not clear it up. You write:
Using it but not actually using it. Can you see why that is hard to interpret? And then:
My flaw is that I don’t do something infinitely many times? Can you see why this begs for clarification? And:
I’ve lost track: is my core flaw that I am estimating something? Why?
I’ve read Popper. He makes a lot more sense to me than you do. He is not cryptic. He is crystal clear. You are opaque.
You seem to be saying that for me to understand you, first I have to understand Popper from his own writing, and that nobody here seems to have done that. I’m not sure why you’re posting here if that’s what you believe.
If you’ve read Popper (well) then you should understand his solution to the problem of induction. How about you tell us what it is, rather than complaining my summary of what you already read needs clarifying.
You are changing the subject. I never asked you to summarize Popper’s solution to the problem of induction. If that’s what you were summarizing just now then you ignored my request. I asked you to critique Bayesian updating with concrete examples.
Now, if Popper wrote a critique of Bayesian updating, I want to read it. So tell me the title. But he must specifically talk about Bayesian updating. Or, if that is not available, anything written by Popper which mentions the word “Bayes” or “Bayesian” at all. Or that discusses Bayes’ equation.
If you check out the indexes of Popper’s books it’s easy to find the word Bayes. Try it some time...
There was nothing in Open Society and Its Enemies, Objective Knowledge, Popper Selections, or Conjectures and Refutations, nor in any of the books on Popper that I located. The one exception was The Logic of Scientific Discovery, where the mentions of Bayes are purely technical discussion of the theorem (with which Popper, reasonably enough, has no problem), with no criticism of Bayesians. In summary: going through the indices of Popper’s main works as you recommended, I found mentions of Bayes, but no critiques by Popper of Bayes or Bayesians.
Nor were Bayes or Bayesians mentioned anywhere in David Deutsch’s book The Beginning of Infinity.
So I return to my earlier request:
I have repeatedly requested this, and in reply been given either condescension, or a fantastically obscure and seemingly self-contradictory response, or a major misinterpretation of my request, or a recommendation that I look to Popper, which recommendation I have followed with no results.
I think your problem is that you don’t understand what the issues at stake are, so you don’t know what you’re trying to find.
You said:
But then when you found a well known book by Popper which does have those words, and which does discuss Bayes’ equation, you were not satisfied. You asked for something which wasn’t actually what you wanted. That is not my fault.
You also said:
But you don’t seem to understand that Popper’s solution to the problem of induction is the same topic. You don’t know what you’re looking for. It wasn’t a change of topic. (Hence I thought we should discuss this. But you refused. I’m not sure how you expect to make progress when you refuse to discuss the topic the other guy thinks is crucial to continuing.)
Bayesian updating, as a method of learning in general, is induction. It’s trying to derive knowledge from data. Popper’s criticisms of induction, in general, apply. And his solution solves the underlying problem rendering Bayesian updating unnecessary even if it wasn’t wrong. (Of course, as usual, it’s right when applied narrowly to certain mathematical problems. It’s wrong when extended out of that context to be used for other purposes, e.g. to try to solve the problem of induction.)
So, question: what do you think you’re looking for? There is tons of stuff about probability in various Popper books including chapter 8 of LScD titled “probability”. There is tons of explanation about the problem of induction, and why support doesn’t work, in various Popper books. Bayesian updating is a method of positively supporting theories; Popper criticized all such methods and his criticisms apply. In what way is that not what you wanted? What do you want?
So for example I opened to a random page in that chapter and found, p 183, start of section 66, the first sentence is:
This is a criticism of the Bayesian approach as unscientific. It’s not specifically about the Bayesian approach in that it applies to various non-Bayesian probabilistic approaches (whatever those may be. can you think of any other approaches besides Bayesian epistemology that you think this is targeted at? How would you do it without Bayes’ theorem?). In any case it is a criticism and it applies straightforwardly to Bayesian epistemology. It’s not the only criticism.
The point of this criticism is that to even begin the Bayesian updating process you need probability estimates which are created unscientifically by making them up (no, making up a “prior” which assigns all of them at once, in a way vague enough that you can’t even use it in real life without “estimating” arbitrarily, doesn’t mean you haven’t just made them up).
EDIT: read the first 2 footnotes in section 81 of LScD, plus section 81 itself. And note that the indexer did not miss this but included it...
Only in a sense so broad that Popper can rightly be accused of the very same thing. Bayesians use experience to decide between competing hypotheses. That is the sort of “derive” that Bayesians do. But if that is “deriving”, then Popper “derives”. David Deutsch, who you know, says the following:
I direct you specifically to this sentence:
This is what Bayesians do. Experience is what Bayesians use to choose between theories which have already been guessed. They do this using Bayes’ Theorem. But look back at the first sentence of the passage:
Clearly, then, Deutsch does not consider using the data to choose between theories to be “deriving”. But Bayesians use the data to choose between theories. Therefore, as Deutsch himself defines it, Bayesians are not “deriving”.
Yes, the Bayesians make them up, but notice that Bayesians therefore are not trying to derive them from data—which was your initial criticism above. Moreover, this is not importantly different from a Popperian scientist making up conjectures to test. The Popperian scientist comes up with some conjectures, and then, as Deutsch says, he uses experimental data to “choose between theories that have already been guessed”. How exactly does he do that? Typical data does not decisively falsify a hypothesis. There is, just for starters, the possibility of experimental error. So how does one really employ data to choose between competing hypotheses? Bayesians have an answer: they choose on the basis of how well the data fits each hypothesis, which they interpret to mean how probable the data is given the hypothesis. Whether he admits it or not, the Popperian scientist can’t help but do something fundamentally the same. He has no choice but to deal with probabilities, because probabilities are all he has.
The Popperian scientist, then, chooses between theories that he has guessed on the basis of the data. Since the data, being uncertain, does not decisively refute either theory but is merely more, or less, probable given the theory, then the Popperian scientist has no choice but to deal with probabilities. If the Popperian scientist chooses the theory that the data fits best, then he is in effect acting as a Bayesian who has assigned to his competing theories the same prior.
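The equal-priors claim above can be sketched in a few lines of Python. Everything here is invented for illustration (the two hypothetical theories and all the numbers); it is just a sketch of the argument, not anything from the thread:

```python
# Sketch: with equal priors, ranking theories by posterior probability
# reduces to ranking them by likelihood P(data | theory).
# The two hypothetical theories and all numbers are invented.

def posteriors(priors, likelihoods):
    # Bayes' theorem: P(H | D) = P(D | H) * P(H) / P(D)
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)  # P(D), the normalizing constant
    return [j / total for j in joint]

likelihoods = [0.4, 0.1]   # assumed P(data | T1), P(data | T2)
equal_priors = [0.5, 0.5]

post = posteriors(equal_priors, likelihoods)
# The theory the data "fits best" (T1) is also the one with the
# higher posterior, which is the equivalence being claimed.
assert post.index(max(post)) == likelihoods.index(max(likelihoods))
```

With equal priors the prior factors cancel, so the posterior ordering is just the likelihood ordering; unequal priors would break this equivalence.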
Where do you get the theories you consider?
Do you understand DD’s point that the majority of the time theories are rejected without testing which is in both his books? Testing is only useful when dealing with good explanations.
Do you understand that data alone cannot choose between the infinitely many theories consistent with it, which reach a wide variety of contradictory and opposite conclusions? So Bayesian Updating based on data does not solve the problem of choosing between theories. What does?
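The point that infinitely many theories are consistent with any finite data set can be made concrete with a toy example (the data points and curves below are invented for illustration):

```python
# Infinitely many theories fit the same data: take any curve through
# the observed points and add a term that vanishes at every observation.
# The data points and formulas are illustrative.
xs = [0.0, 1.0, 2.0]
ys = [1.0, 2.0, 5.0]          # fit exactly by y = x**2 + 1

def make_theory(k):
    # Each k scales a "bump" that is zero at every observed x, so each
    # k yields a distinct theory that agrees with all the data.
    return lambda x: x**2 + 1 + k * (x - 0.0) * (x - 1.0) * (x - 2.0)

theories = [make_theory(k) for k in (0, 10, -1000)]
for t in theories:
    # Every theory matches the observations perfectly...
    assert all(abs(t(x) - y) < 1e-9 for x, y in zip(xs, ys))

# ...yet at a new point x = 3 their predictions diverge arbitrarily.
predictions = [t(3.0) for t in theories]
```

Since all three theories assign the data the same (perfect) fit, updating on the data alone cannot distinguish them; something else has to do the choosing.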
Bayesians are also seriously concerned with the fact that an infinity of theories are consistent with the evidence. DD evidently doesn’t think so, given his comments on Occam’s Razor, which he appears to be familiar with only in an old, crude version, but I think that there is a lot in common between his “good explanation” criterion and parsimony considerations.
We aren’t “seriously concerned” because we have solved the problem, and it’s not particularly relevant to our approach.
We just bring it up as a criticism of epistemologies that fail to solve the problem… Because they have failed, they should be rejected.
You haven’t provided details about your fixed Occam’s razor, a specific criticism of any specific thing DD said, a solution to the problem of induction (all epistemologies need one of some sort), or a solution to the infinity of theories problem.
Probability estimates are essentially the bookkeeping which Bayesians use to keep track of which things they’ve falsified, and which things they’ve partially falsified. At the time Popper wrote that, scientists had not yet figured out the rules for using probability correctly; the stuff he was criticizing really was wrong, but it wasn’t the same stuff people use today.
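A minimal sketch of that bookkeeping, with invented numbers: evidence that is improbable (but not impossible) under a hypothesis lowers its probability without zeroing it, which is the sense in which the hypothesis is "partially falsified":

```python
# "Partial falsification" as Bayesian bookkeeping: improbable-but-possible
# evidence weakens a hypothesis H instead of refuting it outright.
# All numbers are illustrative assumptions.
prior = 0.5               # P(H) before seeing the data
p_data_given_h = 0.1      # data is unlikely, but possible, under H
p_data_given_not_h = 0.6  # data is likelier under the alternatives

posterior = (p_data_given_h * prior) / (
    p_data_given_h * prior + p_data_given_not_h * (1 - prior))

assert 0 < posterior < prior  # H is weakened, not eliminated
```

Only when `p_data_given_h` is exactly zero does the posterior hit zero, which recovers outright falsification as a special case.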
Is this true? Popper wrote LScD in 1934. Keynes and Ramsey wrote about using probability to handle uncertainty in the 1920s although I don’t think anyone paid attention to that work for a few years. I don’t know enough about their work in detail to comment on whether or not Popper is taking it into account although I certainly get the impression that he’s influenced by Keynes.
According to the wikipedia page, Cox’s theorem first appeared in R. T. Cox, “Probability, Frequency, and Reasonable Expectation,” Am. Jour. Phys., 14, 1–13, (1946). Prior to that, I don’t think probability had much in the way of philosophical foundations, although they may’ve gotten the technical side right. And correct use of probability for more complex things, like causal models, didn’t come until much later. (And Popper was dealing with the case of science-in-general, which requires those sorts of advanced tools.)
The English version of LScD came out in 1959. It wasn’t a straight translation; Popper worked on it. In my (somewhat vague) understanding he changed some stuff or at least added some footnotes (and appendices?).
Anyway Popper published plenty of stuff after 1946 including material from the LScD postscript that got split into several books, and also various books where he had the chance to say whatever he wanted. If he thought there was anything important to update he would have. And for example probability gets a lot of discussion in Popper’s replies to his critics, and Bayes’ theorem in particular comes up some; that’s from 1974.
So for example on page 1185 of the Schilpp volume 2, Popper says he never doubted Bayes’ theorem but that “it is not generally applicable to hypotheses which form an infinite set”.
How can something be partially falsified? It’s either consistent with the evidence or contradicted. This is a dichotomy. To allow partial falsification you have to judge in some other way which has a larger number of outcomes. What way?
You’re saying you started without them, and come up with some in the middle. But how does that work? How do you get started without having any?
Changing the math cannot answer any of his non-mathematical criticisms. So his challenge remains.
Here’s one way.
(It’s subject to limitations that do not constrain the Bayesian approach, and as near as I can tell, is mathematically equivalent to a non-informative Bayesian approach when it is applicable, but the author’s justification for his procedure is wholly non-Bayesian.)
I think you mixed up Bayes’ Theorem and Bayesian Epistemology. The abstract begins:
They have a problem with a prior distribution, and wish to do without it. That’s what I think the paper is about. The abstract does not say “we don’t like bayes’ theorem and figured out a way to avoid it.” Did you have something else in mind? What?
I had in mind a way of putting probability distributions on unknown constants that avoids prior distributions and Bayes’ theorem. I thought that this would answer the question you posed when you wrote:
Karma has little bearing on the abstract truth of an argument, but it works pretty well as a means of gauging whether or not an argument is productive in the surrounding community’s eyes. It should be interpreted accordingly: a higher karma score doesn’t magically lead to a more perfect set of opinions, but paying some attention to karma is absolutely rational, either as a sanity check, a method of discouraging an atmosphere of mutual condescension, or simply a means of making everyone’s time here marginally more pleasant. Despite their first-order irrelevance, pretending that these goals are insignificant to practical truth-seeking is… naive, at best.
Unfortunately, this also means that negative karma is an equally good gauge of a statement’s disruptiveness to the surrounding community, which can give downvotes some perverse consequences when applied to people who’re interested in being disruptive.
Would you mind responding to the actual substance of what JoshuaZ said, please? Particularly this paragraph:
Have done so before seeing your comment.
Is editing comments repeatedly to add stuff in the first few minutes against etiquette here?
My apologies. I would have thought it was clear that when one recommends a book, one does so because it has relevant material on what one is talking about. Yes, he discusses various ideas due to Popper.
I think there may be some definitional issues here. A handful of remarks suggesting one read a certain book would not be what most people would call pressure. Moreover, given the negative connotations of pressure, it worries me that you find that to be pressure. Conversations are not battles. And suggesting that one might want to read a book should not be pressure. If one feels that way, it indicates that one might be too emotionally invested in the claim.
And is that what you see here? Again, most people don’t seem to see that. Take an outside view: is it possible that you are simply too emotionally involved?
I don’t have reason to think that Rand is a genius. But let’s say I did, and let’s say the other three are geniuses. Should one, in that context, be worried that their opinions contradict my own? Let’s use a more extreme example: Jonathan Sarfati is an accomplished chemist, a highly ranked chess master, and a young earth creationist. Should I be worried by that? The answer, I think, is yes. But at the same time, even if I were only vaguely aware of Sarfati’s existence, would I need to read everything by him to decide that he’s wrong? No. In this case, having read some of their material, and having read your arguments for why I should read it, it is falling more and more into the Sarfati category. It wouldn’t surprise me at all if there’s something I’m wrong about that Popper or Deutsch gets right, and if I read everything Popper had to say and everything Deutsch had to say and didn’t come away with any changed opinions, that should cause me to doubt my rationality. Ironically, BoI was already on my intended reading list before interacting with you. It has now dropped farther down my priority list because of these conversations.
I don’t think you understood why I made that remark. I have an interest in cooperating with other humans. We have a karma system for among other reasons to decide whether or not people want to read more of something. Since some people have already expressed a disinterest in this discussion, I’m explicitly inviting them to show that disinterest so I know not to waste their time. If it helps, imagine this conversation occurring on a My Little Pony forum with a karma system. Do you see why someone would want to downvote in that context a Popper v. Bayes discussion? Do you see how listening in that context to such preferences could be rational?
That’s ambiguous. Does he discuss Popper directly? Or, say, some of Popper’s students? Does the book have, say, more than 20 quotes of Popper, with commentary? More than 10? Any detailed, serious discussion of Popper? If it does, how do you know that, having not read it? Is the discussion of Popper tainted by the common myths?
Some Less Wrongers recommend stuff that doesn’t have the specific stuff they claim it does have, e.g. the Mathematical Statistics book. IME it’s quite common that recommendations don’t have what they are supposed to even when people directly claim it’s there.
You seem to think I believe this stuff because I’m a beginner; that’s an insulting non-argument. I already know how to take outside views, I am not emotionally involved. I already know how to deal with such issues, and have done so.
None of your post here really has any substance. It’s all psychological and meta topics you’ve brought up. Maybe I shouldn’t have fed you any replies at all about them. But that’s why I’m not answering the rest.
Please don’t move goalposts and pretend they were there all along. You asked whether he mentioned Popper, to which I answered yes. I don’t know the answers to your above questions, and they really don’t have anything to do with the central points, the claims that I “pressured” you to read Earman based on his being a professional.
This confuses me.
Ok. This confuses me since earlier you had this exchange:
That discussion was about a week ago.
I’m also confused about where you are getting any reason to think that I think that you believe what you do because you are a “beginner”; it doesn’t help matters that I’m not sure what you mean by that term in this context.
But if you are very sure that you don’t have any emotional aspect coming into play then I will try to refrain from suggesting otherwise until we get more concrete data such as per Jim’s suggestion.
I have never been interested in people who mention Popper but people who address his ideas (well). I don’t know why you think I’m moving goalposts.
There’s a difference between “Outside View” and “outside view”. I certainly know what it means to look at things from another perspective which is what the non-caps one means (or at least that’s how I read it).
I am not “very sure” about anything; that’s not how it works; but I think pretty much everything I say here you can take as “very sure” in your worldview.
--
--
-Wikipedia
Stop being a jerk. He didn’t even state whether it mentions Popper, let alone whether it does more than mention. The first thing wasn’t a goal post. You’re being super pedantic but you’re also wrong. It’s not charming.
Links? I ask not to challenge the veracity of your claims but because I am interested in avoiding the kind of erroneous argument you’ve observed.
http://lesswrong.com/lw/3ox/bayesianism_versus_critical_rationalism/3xos
Thanks.
How does that prevent it from being a mistake? With consequences? With appeal to certain moral views?
Bounded rationality is not an ideology, it’s a definition. Calling it a mistake makes as much sense as calling the number “4” a mistake; it is not a type of thing which can be correct or mistaken.
I’m curious why this has been downvoted. Does it mean people don’t believe you? I think this is another illustration of why the voting system sucks. Your protestations of innocence are now hidden and no-one is fronting up with an argument. I know you have been upvoted, and I upvoted a bunch of posts myself once because I was annoyed that you were getting systematically downvoted to, I think, rate-limit your posts (which you were annoyed about too) and to hide them (which makes it a pain reading threads). Oh, the sort of silliness the karma system leads to!
You can turn off hiding of low-scoring comments in your preferences.
I presume it was a reaction to the argument that Perplexed was a troll. Had curi made the same protestation of innocence without adding the personal attack, it might not have been downvoted at all.
curi was responding to this:
Trollish is a good description of this, so I think curi’s comment was accurate. And, note, the comment in which the above insult appears was upvoted.
Whether Perplexed was trolling depends on Perplexed’s intent. Perplexed probably believed what he was saying, so he probably did not intend to accuse curi unjustly. Had he known curi was innocent and then said it anyway in order to upset him, then he would have been trolling.
The issue isn’t whether he honestly believed it but that he came up with a rather harsh claim without evidence.
He did have evidence. Perhaps what you mean is that there were alternative hypotheses explaining the same evidence. But the existence of alternative hypotheses does not stop the evidence being evidence. There are always alternative hypotheses, and yet there is evidence. Therefore the existence of alternatives does not prevent something from being evidence.
Consider the following three hypotheses:
A) Nobody was systematically voting up your posts.
B) You were systematically voting up your own posts.
C) Somebody else was systematically voting up your posts.
The evidence includes the fact that after having your karma voted down to oblivion, your karma shot up for no obvious reason, allowing you to post a main article to the front page. This evidence decreases the probability of hypothesis A, and simultaneously increases the probabilities of B and C, by Bayesian updating. I don’t know what the Popperian account is precisely, but since it is not completely insane, I believe it leads to roughly the same scientific result, though stated in Popperian terms, e.g. without talking about the probability of the hypothesis. The evidence is in any case inconsistent with A, and consistent with both B and C, or less consistent with A than with B or C, however you want to put it, and thus is evidence for each as contrasted with A.
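For concreteness, an update of that shape might be sketched as follows. The priors and likelihoods are invented purely to show the direction of the shift; nothing in the thread specifies numbers:

```python
# Three hypotheses about the karma spike; priors and likelihoods are
# invented solely to illustrate the direction of the Bayesian update.
hypotheses = ["A: nobody upvoted systematically",
              "B: you upvoted yourself",
              "C: someone else upvoted you"]
priors = [0.6, 0.2, 0.2]
# Assumed P(observed karma spike | hypothesis): low under A,
# high under B and C.
likelihoods = [0.05, 0.9, 0.9]

joint = [p * l for p, l in zip(priors, likelihoods)]
posteriors = [j / sum(joint) for j in joint]

# The spike is evidence against A and for each of B and C.
assert posteriors[0] < priors[0]
assert posteriors[1] > priors[1] and posteriors[2] > priors[2]
```

Note that since B and C assign the spike the same (assumed) likelihood, the evidence raises both by the same factor and cannot distinguish between them.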
As I recall, you yourself proposed hypothesis C. Do you claim that you proposed a hypothesis without evidence, and that this was a bad thing? For that would seem to follow from your own logic. An alternative hypothesis consistent with your own experience is that you did it yourself and just forgot; call this C1. C1 is highly improbable, but to say such a thing is nothing other than to say that the prior of C1 is extremely low, which is Bayesian thinking, which I understand you disapprove of.
In any case, your complaint that he offered a conjecture in advance of evidence is odd coming from a Popperian. Deutsch the Popperian writes that conjectures are not “derived” from anything. They are guesses—bold conjectures, so he writes. Experience is not the source from which conjectures are derived. Its main use is to choose between conjectures that have already been guessed. So he writes. But now you complain about a conjecture offered—so you claim—in advance of evidence!
And whence comes this new appreciation of yours for social niceties such as holding one’s tongue and not blurting out uncomfortable truths, or, since we are after all fallible (as Popper says), uncomfortable guesses at the truth? Did you not claim that we should all be beyond such social niceties, such sacrifices of the bold pursuit of truth to mere matters of “style”? Did you not yourself explain that you deliberately post in a style that does not pull any punches, and that anyone who complains merely proves that they are not worthy of your time? So why the apparent change of heart?
Of course, they presumably had additional evidence regarding the truth or falsehood of B (leaving aside highly unlikely scenarios like them performing acts of which they were unaware), which other people don’t have. So the situation isn’t quite symmetrical.
(Not disagreeing with your main point. Or, in fact, engaging with it at all. Just nitpicking.)
A very rude one. The problem wasn’t that it was a conjecture but that it was a personal attack.
Here let me demonstrate:
You agree that’s a bad way to approach discussion, right?
Maybe I used to agree, but maybe a week of posts like this have gradually persuaded me, through persuasive arguments, otherwise. Quoting from you:
But now you complain:
Oh great master of the breaking of etiquette. I am confused!
Then when swimmer963 wrote:
You responded:
Have you fallen into cultural bias?
Then when swimmer963 wrote:
You responded:
Is it time for you to re-read this book?
I am confused. If you want others to respect some very basic ground rules of etiquette, then why did you preach the opposite only days ago?
It’s not that I care what he thinks, I just think posting crazy factual libels is a bad idea.
I never said one should ignore all etiquette of all types. There’s objective limits on what’s a good or bad idea in this area.
I’m not sure what you hope to gain by arguing this point with me. Do you just want to make me concede something in a debate or are you hoping to learn something?
I already know that etiquette is important, and why. I am pointing out that you also know that it is important when you are the target of a breach. So, even the one who preaches that etiquette be set aside in the bold pursuit of truth, does not really believe it, not when it’s his own ox being gored.
You have all along been oblivious to the reactions of others—admittedly so, proudly so. You have argued that your social obliviousness is a virtue, an intellectual strength, because such things as etiquette are mere obstacles in the road to reality. But this is mistaken, and even you intuit that it is, once the shoe is on the other foot—that is, once you stand in the place of those that you have antagonized and exasperated for an entire week. Your philosophy of antagonism serves no purpose but to justify your own failure or refusal to treat others well.
I don’t need you to concede anything. What I’ve done here is put together the pieces and given you a chance to respond.
Uh huh. Yet you offer no guidelines as to what is allowed, and what is off limits, after a week of preaching and practicing. Or to be more precise, you do offer a guide of sorts: you are off limits, and everyone else is fair game. What you are inclined to break, is okay to break. What you, for whatever reason, don’t break, nobody else must break.
You’re not giving any reasons here why you think we are “tone-deaf” other than that you think we have not explained, for example, why LW people think support is possible. But that’s not tone-deafness. Tone-deafness is a complaint not about the substance of ideas but about how well you have expressed those ideas with respect to some norm: it is essentially a complaint about style, and it would seem you want us to pay attention to karma. We think rational people ought to be able to look beyond those things. Right? With regard to support, we explained and gave examples but, briefly, just to illustrate again, here is a quote from “Where Recursive Justification Hits Bottom”:
Perhaps it is you that has been deaf?
We know that Popperian philosophy is a hard sell and we know that in order to sell it we have to combat lies and myths that have been spread around. Popper himself spent a lot of time doing that. Some of those myths are right here at LW, in the sequences, and we gave examples. But, again, deafness—has anybody admitted that there are these mistakes? Is that deafness a reaction to our “tone-deafness” and is that rational?
I agree, but rather than just asserting I haven’t been, perhaps you should illustrate with some examples.
You seem to care about karma. If one thinks that karma is an authoritarian mistake, as I do, then how much respect do you think I should have?
That you think we are paying lip service is reacting negatively and rather hostilely, don’t you think? Also, I do want to hear some good arguments against Popper. Unfortunately most arguments, including those here, are to do with the myths, such as that Popper’s philosophy is falsificationism, and come from people who don’t know Popper well or got their information from second-hand sources.
Why do you think he did that? You’re just making things up here.
Tversky and Kahneman believe in such ideas as “evolved mental behaviour” and “bounded rationality”. These beliefs exist right enough. If you read The Beginning of Infinity by David Deutsch you will see arguments against these sort of things. You’re arguing by assertion again and haven’t carefully looked at the substance.
Please give up on us. We’re obviously not as careful and rational thinkers as you are. Utterly hopeless. We’re stuck in our realm of imagined cognitive biases. Go find something productive to do. Please go away.
Plus, we’re a cult. And we read Harry Potter fan fiction. And the groupthink is staggering.
First, in response to your first paragraph, my complaint about ‘tone-deafness’ was not intended as a complaint about style. It was a complaint about your failure to do well at the listening half of conversation. A failure to tailor your arguments to the responses you receive. A failure to understand the counterarguments. My complaint may be wrong and unjustified, but it is definitely not a complaint about style.
But, speaking of style, you suggest:
Well, I can see the attractiveness of that slogan, but we tend to think of it a bit differently here. Here, we think that rational people ought to be able to fix any rough edges in their ‘style’ that prevent them from communicating their ideas successfully. We don’t believe that it makes sense to place the entire onus of adjustment on the listener. And we especially don’t believe that only one side has the onus of listening.
Perhaps. But that’s enough about me. Let’s talk about you. :) As you may have noticed, responding to an attack with a counterattack usually doesn’t achieve very much here.
I guess that would depend on how interested you are in having me listen to your ideas.
You are probably right. And that presents you with a problem. How do you induce people to come to know Popper well? How do you tempt them to get their information from some non-second-hand source?
Now I’m sure you guys have given long and careful thought to this problem and have developed a plan. But if you should discover that things are not going well, I have an idea that might help: you might consider producing some discussion postings consisting mostly of long quotes from Popper and his most prominent disciples, with only short glosses from yourselves.
Hmmm. There is something here I just don’t understand. Why all this hostility to what seems to me to be the fairly uncontroversial realization that people are often less good at reasoning than we would like them to be? It is almost as if you had religious or political objections to some evil doctrine. Do you think it would be possible to enlighten me as to why it seems to you that the stakes are so high with this issue?
As for reading Deutsch, I intend to. I don’t think I have ever had a book recommended to me so many times before it is even published in this country.
Somehow I was able to buy it in the Amazon Kindle store for about $18, but the highlight feature is not working properly. My introduction to Deutsch was several years ago with The Fabric of Reality, in which he defends the Everett interpretation, among other things. At that point he became a must-read author (which means I find him worth reading, not that I agree fully with him), one of only a handful. (Daniel Dennett is another). If you want to read Deutsch now, The Fabric of Reality is immediately available. As I recall, it’s a mix of persuasive arguments and dubious arguments.
I’d be very curious to see where anything Tversky wrote contains the phrase “evolved mental behavior”; as I explained to you, T&K have classically been pretty agnostic about where these biases and heuristics are coming from. That other people in the field might think that they are evolved is a side issue. I can’t speak as strongly about Kahneman, but I’d be surprised to see that phrase used in any joint paper of the two.
But there’s a more serious issue here which I pointed out to you earlier and you are still missing: you cannot let philosophy override evidence. When evidence and your philosophy contradict, philosophy must lose. No matter how good my philosophical arguments are, they cannot withstand empirical data. If my philosophy says the Earth is flat, my philosophy is bad, since the evidence is overwhelming that the Earth is not flat. If my philosophy requires a geocentric universe, then my philosophy is bad. If my philosophy requires irreducible mental entities, then my philosophy is bad. And if my philosophy requires humans to be perfect reasoners, then my philosophy is bad.
As long as you keep insisting that your philosophical desires about what humans should be override the evidence of what humans are, you will not be doing a good job of understanding humans or the rest of the universe.
And to be blunt, as long as you keep making this sort of claim, people here are going to not take you seriously. So please go elsewhere. We don’t have much to say to each other.
No, as far as I know, they don’t use the phrase “evolved mental behaviour”, but I didn’t say they did, only that they believe in such things. That they do is evident here:
“From its earliest days, the research that Tversky and I conducted was guided by the idea that intuitive judgments occupy a position – perhaps corresponding to evolutionary history – between the automatic operations of perception and the deliberate operations of reasoning.”
Read the wording closely. To me it indicates they don’t have a good explanation for these heuristics, or, if they do have an explanation, it is vague enough to be consistent with both evolved and not-evolved. But they don’t have a problem with evolved. I also gave you other arguments in my comments to you in our other discussion.
Why are you continuing this when you’ve already sarcastically told me to go away?
Edit: This wikipedia page says “Cognitive biases are instances of evolved mental behavior”. Do you think that is an accurate description of what cognitive biases are supposed to be? Is there any controversy about whether they are evolved or not?
I’m sorry. Could you clarify exactly what it is you think that quote illustrates?
I intend to respond more completely to your posting, but that clarification would be helpful.
He is saying that one always has to make an argument to prove that an idea is true or more likely to be true. Ideas must be supported.
Yes, I understood that, but my question was about why you wrote:
So, apparently, what was illustrated was that Eliezer was not a good and faithful disciple of Popper when he wrote that. I’m a bit surprised you thought that needed illustration.
ETA: Or maybe you meant that your ability to dredge up that quote illustrates that you have been paying attention to whether and why LesWrongers believe support is possible. Yeah, that makes more sense, is more charitable, and is the interpretation I’ll go with.
Ok, with that out of the way, I will respond to your long great-grandfather comment (above) directly.
That sounds pretty awful. Bounded rationality is a standard concept; surely if you argue against it, you are confused or don’t understand it properly. I’m not sure what an “evolved mental behaviour” is, but that sounds pretty uncontroversial too. Watching Deutsch on video, at about 28:00 he uses the term “bounded rationality” to refer to something different—so this seems like a simple confusion based on different definitions of terms.
If you are going to assume that people are confused for arguing against “standard concepts” or because you think something is uncontroversial, then that is just argument from authority.
The supposed heuristics which Herbert Simon and others propose, and which give rise to our alleged cognitive biases, are held by them to have evolved via biological evolution, to be based on induction, and to be bounded. Hard-coded processes based on induction that can generate some knowledge but not all knowledge go against the ideas that Deutsch discusses in The Beginning of Infinity. For one thing, induction is impossible and doesn’t happen anywhere, including in human brains. For another, knowledge creation is all or nothing: a machine that can generate some knowledge can generate all knowledge (the jump to universality), so halfway houses like these heuristics would be very difficult to engineer; they would keep jumping to universality. And, for another, human knowledge and reasoning is memetic, not genetic, and there are no hard-coded reasoning rules.
This is just an argument over the definition of the phrase “bounded rationality”. Let’s call these two definitions BR1 and BR2. The definition that timtyler, Kahneman and Tversky, and I are using is BR1; the definition that you, curi, and David Deutsch use is BR2.
BR1 means “rationality that is performed using a finite amount of resources”. Think of this as bounded-resource rationality. All rationality done in this universe is BR1, by definition, because you only get a limited amount of time and memory to think about things. This definition does not contain any claims about what sort of knowledge BR1 can or can’t generate. A detailed theory of BR1 would say things like “solving this math problem requires at least 10^9 operations”. More commonly, people refer to BR1 to distinguish it from things like AIXI, which is a mathematical construct that can theoretically figure out anything given sufficient data, but which is impossible to construct because it contains several infinities. A mind with universal reasoning is BR1 if it only has a finite amount of time to do it in.
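To make BR1 concrete, here is a minimal sketch in Python (a hypothetical example of my own, not drawn from any of the authors discussed): a fully general reasoning procedure that nonetheless either finds its answer or gives up when its operation budget runs out.

```python
def bounded_factor_search(n, budget):
    """Search for the smallest nontrivial factor of n by trial division,
    using at most `budget` division operations. Returns a factor, or
    None if the budget runs out (or n has no nontrivial factor)."""
    ops = 0
    d = 2
    while d * d <= n:
        if ops >= budget:
            return None  # resources exhausted: bounded, not broken
        ops += 1
        if n % d == 0:
            return d
        d += 1
    return None  # n is 1 or prime: no nontrivial factor exists

# The same fully general procedure succeeds or fails depending only
# on the resources it is given:
print(bounded_factor_search(91, budget=100))            # prints 7
print(bounded_factor_search(10007 * 10009, budget=50))  # prints None
```

In these terms, an AIXI-style idealization corresponds to an infinite budget, while every reasoner that can actually be built corresponds to some finite one; nothing about the procedure itself limits what kinds of knowledge it can reach.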
BR2 means “rationality that can generate some types of knowledge, but not others”. Think of this as bounded-domain rationality. Whether this exists at all depends on what you mean by “knowledge”. For example, if you have a computer program that collects seismograph data and predicts earthquakes, you might say it “knows” where earthquakes will occur; this would make it a BR2. If you say that this sort of thing doesn’t count as knowledge until a human reads it from a screen or printout, then no BR2s exist.
BR1 is a standard concept, but as far as I know BR2 is unique to Deutsch’s book Beginning of Infinity. BR1 exists, tautologically from its definition. Whether BR2 exists or not depends on how you define some other things, but personally I don’t find BR2 illuminating so I see no reason to take a stance either way on it.
I’m pretty sure there’s a similar issue with the definition of “induction”. I know of at least two definitions relevant to epistemology, but neither of them seems to make sense in context so I suspect that Deutsch has come up with a third. Could you explain what Deutsch uses the word induction to mean? I think that would clear up a great deal of confusion.
All your points are wrong, though. Induction has been discussed to death already. Computation universality doesn’t mean intelligent systems evolve without cognitive biases, and the fact that human cultural knowledge is memetic doesn’t mean there are not common built-in biases either. The human brain is reprogrammable to some extent, but much of the basic pattern-recognition circuitry has a genetically specified architecture.
Many of the biases in question are in the basic psychology textbooks—this is surely not something that is up for debate.
It looks to me as though those biases are very much up for debate, and not just by curi and myself:
Why do you argue from authority saying things like something surely cannot be up for debate because it’s in all the textbooks? curi and I are fallibilists: nothing is beyond question.
You say you’re a fallibilist, but you’re actually falling into the failure mode described in this article. Suppose you’ve got a question with positions A and B, with a bunch of supporting arguments for A and a bunch of supporting arguments for B. Some of those arguments for each side will be wrong, or ambiguous, or inapplicable—that’s what fallibilism predicts, and I think we all agree with that.
Suppose there are 3 valid and 3 invalid arguments for A, and 3 valid and 3 invalid arguments for B. Now suppose someone decides to get rid of the invalid arguments, but they happen to think A is better. Most people will end up attacking all the arguments for B, but they won’t look as closely at the arguments for A. After they’re finished, they’ll have 3 valid and 3 invalid arguments for A, and only 3 valid arguments for B—which looks like a preponderance of evidence in favor of A, but it isn’t.
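The arithmetic of that failure mode can be sketched as a toy simulation (the argument pools and valid/invalid labels below are made up for illustration; nothing here comes from the linked article):

```python
def prune(arguments, scrutinized):
    """If a side's arguments are scrutinized, strip the invalid ones;
    otherwise leave the whole pool untouched."""
    if scrutinized:
        return [a for a in arguments if a == "valid"]
    return list(arguments)

# Both sides start symmetric: 3 valid and 3 invalid arguments each.
side_a = ["valid"] * 3 + ["invalid"] * 3
side_b = ["valid"] * 3 + ["invalid"] * 3

# A partisan of A scrutinizes only B's arguments:
remaining_a = prune(side_a, scrutinized=False)  # keeps all 6
remaining_b = prune(side_b, scrutinized=True)   # keeps the 3 valid ones

print(len(remaining_a), len(remaining_b))  # prints "6 3": A now looks stronger
```

The apparent 6-to-3 advantage for A is an artifact of where the scrutiny was applied, not of the underlying evidence, which was symmetric by construction.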
Now read the abstract of that paper you linked again. That paper disagrees with where K&T draw the boundary between questions that trigger the conjunction fallacy and questions that don’t, and describes the underlying mechanism that produces it differently. It does not claim that the conjunction fallacy doesn’t exist.
It seems as though they acknowledge the conjunction fallacy and are proposing different underlying mechanisms to explain how it is produced.
If you want to argue with psychology 101, fine; but if you do it in public, without experimental support, and with a dodgy theoretical framework derived from computation universality, things are not going to go well.
If citing textbooks is classed as “arguing from authority”, one should point out that such arguments are usually correct.
They have put “fallacious behaviour” in quotes to indicate that they don’t agree the fallacy exists. I could be wrong, however, as I am just going from the abstract, and maybe the authors do claim it exists. However, they seem to be saying it is just an artifact of hints. I’ll need to read the paper to understand better. Maybe I’ll end up disagreeing with the authors.
Textbook arguments are often wrong. Consider quantum physics and the Copenhagen Interpretation for example. And one way of arguing against CI is from a philosophical perspective (it’s instrumentalist and a bad explanation).
I looked through the whole paper and don’t think you’re wrong.
I don’t agree with the hints paper in various respects. But it disagrees with the conjunction fallacy and argues that conjunction isn’t the real issue and the biases explanation isn’t right either. So certainly there is disagreement on these issues.
Do you mean in the context of arguments in textbooks? This seems like a very weak claim, given how frequently some areas change. Indeed, psychology is an area where what an intro-level textbook would claim to be true, and even what it would discuss as relevant topics, has changed drastically in the last 60 years. For example, in a modern psychology textbook the primary discussion of Freud will be to note that most of his claims fell into two broad categories: untestable or demonstrably false. Similarly, even experimentally derived claims about some things (such as how children learn) have changed a lot in the last few years, as more clever experimental design has done a better job of separating issues of planning and physical coordination from babies’ models of reality. Psychology seems to be a bad area to make this sort of argument.
Yes.
It is weak, in that it makes no bold claims, and merely states what most would take for granted—that most of the things in textbooks are essentially correct.
Nice post.
Some did. At the same time that others didn’t.
The ones admitting it said all epistemologies have those flaws, and it’s impossible to do anything about it. When told that one already exists they just dismissed that as impossible instead of being interested in researching whether it succeeds. Or sometimes they took an attitude similar to EY: it’s a flaw and maybe we’ll fix it some day but we don’t know how to yet and the attempts in progress don’t look promising. (Why doesn’t the Popperian attempt in particular look promising? Why hasn’t it already succeeded? No comment given.)
And they dismissed it without even knowing of any scholarly work by anyone on their side which makes their point for them. As far as they know, no one from their side ever refuted Popper in depth, having carefully read his books. And they are OK with that.
@lip service—anyone who cares to can find criticisms of Popper on my blog on a variety of subjects. This is just accusing sources of ideas of bias as a way to dismiss them, without even doing basic research about whether these claims are true (let alone explaining why source should be used to determine quality of substance).
I had in mind myths like these:
Has anybody here said, yes, these are myths and should be retracted?
I think that’s the only one with a serious problem.
I do not trust that they are accurate. Consequently I discount them when I encounter them. I am currently reading The Beginning of Infinity (which is hard to obtain in the US, as it is not to be published until summer, though inexplicably I was able to buy it for the Kindle, and equally inexplicably my extensive highlights from the book are not showing up on my highlights page at Amazon), and trust Deutsch much more on the topic of Popper. I trust Popper still more on the topic of Popper, and I read the essay collection Objective Knowledge a few weeks ago.
I do not trust myself on the topic of Popper, which is why I will not declare these to be myths, as such a statement would presuppose that I am trustworthy.
Occasionally you make valid points and this is one of them. I agree that most of what you’ve quoted above is accurate. In general, Eliezer is somewhat sloppy when it comes to historical issues. Thus, I’ve pointed out here before problems with the use of phlogiston as an example of an unfalsifiable theory, as well as other essentially historical issues.
So we should now ask: should Eliezer read any Popper? Well, I’d say he should read LScD, and I’ve recommended Popper to people here before (along with Kuhn and Lakatos). But there’s something to note: I estimate that the chance that any regular LW reader will read any Popper has gone down drastically in the last 1.5 weeks. I will let you figure out why I think that, and leave it to you to figure out whether that’s a good thing or not.
LScD is not the correct book to read if you want to understand Popper’s philosophy. C&R and OK are better choices.
What do you mean “along with” Kuhn and Lakatos? They are dissimilar to Popper.
Popper’s positions aren’t important as historical issues but because there is an epistemology that matters today which he explained. It’s not historical sloppiness when Eliezer dismisses a rival theory using myths; it’s bad scholarship in the present about the ideas themselves (even if he didn’t know the right ideas, why did he attack a straw man instead of learning better ideas, improving the ideas himself, or refraining from speaking?)
BTW I emailed Eliezer years ago to let him know he had myths about Popper on his website and he chose not to fix it.
As in they are people worth reading.
You’ve asserted this before. So far no one here, including myself, has seen any reason from what you’ve said to think that. LScD has some interesting points but is overall wrong. I fail to see why, at this point, reading later books based on the same notions would be terribly helpful. Given what you’ve said here, my estimate that there’s useful material there has gone down.
LScD is Popper’s first major work. It is not representative. It is way more formalistic than Popper’s later work. He changed on purpose and said so.
He changed his mind about some stuff from LScD; he improved on it later. LScD is written before he understood the justificationism issue nearly as well as he did later.
LScD engages with the world views of his opponents a lot. It’s not oriented towards presenting Popper’s whole way of thinking (especially his later way of thinking, after he refined it).
The later books are not “based on the same notions”. They often take a different approach: less logic and technical debate, more philosophical argument and explanation.
Since you haven’t read them, you really ought to listen to experts about which Popper books are best instead of just assuming, bizarrely, that the one you read which the Popper experts don’t favor is his best material. We’re telling you it’s not his best material; don’t judge him by it. It’s ridiculous to dismiss our worldview based on the books we’re telling you aren’t representative, while refusing to read the books we say explain what we’re actually about.
I’m not dismissing your worldview based on books that aren’t representative. Indeed, earlier I told you that what you were saying especially in regards to morality seemed less reasonable than what Popper said in LScD.
So you are saying that he does less of a job making his notions precise and using careful logic? Using more words and less formalism is not making more philosophical argument; it is going back to the worst parts of philosophy. I don’t know what you think my views are, but whatever your model of me is, you might want to update or replace it if you think the above would make me more inclined to read a text. Popper is clearly quite smart and clever, and there’s no question that there’s a lot of bad or misleading formalism in philosophy, but the general trend is pretty clear: philosophers who are willing to use formalism are more likely to have clear ideas.
He changed his mind to the same kind of view I have, FYI.
He changed his mind about what types of precision matter (in what fields). He is precise in different ways. Better explanations which get issues more precisely right; less formalness, less attempts to use math to address philosophical issues. It’s not that he pays less attention to what he writes later, it’s just that he uses the attention for somewhat different purposes.
I’m just explaining truths; I’m not designing my statements to have an effect on you.
I’m not sure about this trend; no particular opinion either way. Regardless, Popper isn’t a trend, he’s a category of his own.
Yes, the truth can be rude, and this is why mere rudeness is sometimes mistaken for truth. But most rudeness is not truth, because rudeness is easy and truth is hard.
They don’t know this stuff.
One of the differences is: when they screw up because they don’t know this stuff, we’ll explain some.
When they think we’re screwing up, they downvote to invisibility, remove normal site links to your post, say “read the sequences” (which don’t specifically address the points of disagreement), or just plain stop engaging (e.g. the guy who said I was in invisible dragon territory, and therefore safe to ignore without the possibility of missing any important ideas, instead of discussing my point. Basically, since my ideas fail some of his criteria—mostly due to his misunderstanding—he ignores them. And he thinks that is safe and not closed-minded!)