Well, since you don’t offer a plausible hypothesis, I have no reason to discard my plausible one. But I will repeat the question, since you failed to answer it. Do you know why your karma jumped sufficiently for you to make that posting? That is, for example, do you know that someone (whether or not they wear socks) was systematically upvoting you?
I think I was systematically upvoted at least once, maybe a few times. I also think I was systematically downvoted at least once. I often saw my karma drop 10 or 20 points in a couple of minutes (not due to a front-page post), sometimes more. shrug. Wasn’t paying much attention.
You’re really going to assume as a default—without evidence—that I’m a lying cheater rather than show a little respect? Maybe some people liked my posts.
To the extent I’ve shown them to non-Bayesians I’ve received nothing but praise and agreement, and in particular the conjunction fallacy post was deeply appreciated by some people.
I think I was systematically upvoted at least once, maybe a few times.
Upvoted (systematically?) for honesty. ETA: And the sock-puppet hypothesis is withdrawn.
You’re really going to assume as a default—without evidence—that I’m a lying cheater rather than show a little respect? Maybe some people liked my posts.
I liked some of your posts. As for accusing you of being a “lying cheater”, I explicitly did not accuse you of lying. I suggested that you were being devious and disingenuous in the way you responded to the question.
the conjunction fallacy post was deeply appreciated by some people.
Really???? By people who understand the conjunction fallacy the way it is generally understood here? Because, as I’m sure you are aware by now, the idea is not that people always attach a higher probability to (A & B) than they do to (B) alone. It is that they do it (reproducibly) in a particular kind of context C which makes (A | C) plausible and especially when (B | A, C) is more plausible than (B | C).
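To make that concrete in symbols (my formalization, not a quote from any paper): the conjunction rule says that P(A & B | C) ≤ P(B | C) always holds, since every outcome satisfying (A & B) also satisfies B. The fallacy is the reproducible intuitive judgment that P(A & B | C) > P(B | C), elicited in contexts C engineered so that (A | C) is plausible and (B | A, C) exceeds (B | C).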
So what you were doing in that posting was calling people idiots for believing something they don’t believe—that the conjunction fallacy applies to all conjunctions without exception. You were exhibiting ignorance in an obnoxious fashion. So, of course you got downvoted. And now we have you and Scurfield believing that we here are just unwilling to listen to the Popperian truth. Are you really certain that you yourselves are trying hard enough to listen?
The “good and interesting arguments” that I made reference to were written by the anti-Popper ‘militia’ rather than the Popperian ‘invaders’. I meant to credit the invaders only with “good links”.
Both. I particularly liked curi’s post linking to the Deutsch lecture. But, in the reference in question, here, I was being cute and saying that half the speeches in the conversation (the ones originating from this side of the aisle) contained good and interesting arguments. This is not to say that you guys have never produced good arguments, though the recent stuff—particularly from curi—hasn’t been very good. But I was in agreement with you (about a week ago) when you complained that curi’s thoughtful comments weren’t garnering the upvotes they deserved. I always appreciate it when you guys provide links to the Critical Rationalism blog and/or writings by Popper, Miller, or Deutsch. You should do that more often. Don’t just provide a bare link; (assertion, argument, plus link to further argument) is a winning combination.
I have some sympathy with you guys for three reasons:
By people who understand the conjunction fallacy the way it is generally understood here?
Well, no, they have a different way of thinking about it than you. They disagree with you and agree with me.
Your equivocations about what the papers do and don’t say do not, in my mind, make anything better. You’re first of all defanging the paper as a way to defend it—when you basically deny it made any substantive claims you’re making it easy to defend against anything—and that’s not something, I think, that its authors would appreciate.
So what you were doing in that posting was calling people idiots for believing something they don’t believe—that the conjunction fallacy applies to all conjunctions without exception.
You’ve misunderstood the word “applies”. It applies in the sense of being relevant, not happening every time.
The idea of the conjunction fallacy is that there is a mistake people sometimes make and it has something to do with conjunctions (in general, not a special category of them). It is supposed to apply to all conjunctions in the sense that whenever there is a conjunction it could happen.
What I think is: you can trick people into making various mistakes. The ones from these studies have nothing to do with conjunctions. The authors simply designed it so all the mistakes would look like they had something to do with conjunctions, but actually they had other causes such as miscommunication.
Can you see how, if you discovered a way to miscommunicate with people to cause them to make mistakes about many topics (for example), you could use it selectively, e.g. by making all your examples that you publish have to do with conjunctions, and can you see how, in that case, I would be right that the conjunction fallacy is a myth?
Well, no, they have a different way of thinking about it than you. They disagree with you and agree with me.
Your equivocations about what the papers do and don’t say do not, in my mind, make anything better. You’re first of all defanging the paper as a way to defend it—when you basically deny it made any substantive claims you’re making it easy to defend against anything—and that’s not something, I think, that its authors would appreciate.
I’m pretty sure you are wrong here, but I’m willing to look into it. Which paper, exactly, are you referring to?
Incidentally, your reference to my “equivocations about what the papers do and don’t say” suggests that I said one thing at one point, and a different thing at a different point.
Did I really do that? Could you point out how/where?
Can you see how, if you discovered a way to miscommunicate with people to cause them to make mistakes about many topics (for example), you could use it selectively, e.g. by making all your examples that you publish have to do with conjunctions, and can you see how, in that case, I would be right that the conjunction fallacy is a myth?
I can see that if I discovered a way to cause people to make mistakes, I would want to publish about it. If the clearest and most direct demonstration (that the thinking I had caused actually was a mistake) were to exhibit the mistaken thinking in the form of a conjunction, then I might well choose to call my discovery the conjunction fallacy. I probably would completely fail to anticipate that some guy who had just had a very bad day (in failing to convince a Bayesian forum of his Popperian brilliance) would totally misinterpret my claims.
They said people have bounded rationality in the Nobel Prize paper. In the 1983 paper they were more ambiguous about their conclusions.
Besides their papers, there is the issue of what their readership thinks. We can learn about this from websites like Less Wrong and Wikipedia and what they say it means. I found they largely read stronger claims into the papers than were actually there. I don’t blame them: the papers hinted at the stronger claims on purpose, but avoided saying too much for fear of being refuted. But they made it clear enough what they thought and meant, and that’s how Less Wrong people understand the papers.
The research does not even claim to demonstrate the conjunction fallacy is common—e.g. they state in the 1983 paper that their results have no bearing (well, only a biased bearing, they say) on the prevalence of the mistakes. Yet people here took it as a common, important thing.
It’s a little funny though, b/c the researchers realize that in some senses their conclusions don’t actually follow from their research, so they have to be careful what they say.
By equivocate I meant making ambiguous statements, not changing your story. You probably don’t regard them as ambiguous. Different perspective.
Everything is ambiguous in regards to some issues, and which issues we regard as the important ones determines which statements we regard as ambiguous in any important way.
I think if your discovery has nothing to do with conjunctions, it’s bad to call it the conjunction fallacy, and then have people make websites saying it has to do with the difference in probability between A&B and A, and that it shows people estimate probability wrong.
Your comments about me having a bad day are silly and should be withdrawn. Among other things, I predicted in advance what would happen and wasn’t disappointed.
Your comments about me having a bad day are silly and should be withdrawn. Among other things, I predicted in advance what would happen and wasn’t disappointed.
“Not disappointed”, huh? You are probably not surprised that people call you a troll, either. :)
As for your claims that people (particularly LessWrongers) who invoke the Conjunction Fallacy are making stronger claims than did the original authors—I’ll look into them. Could you provide two actual (URL) links of LessWrong people actually doing that to the extent that you suggested in your notorious posting?
The moral? Adding more detail or extra assumptions can make an event seem more plausible, even though the event necessarily becomes less probable.
This moral is not what the researchers said; they were careful not to say much in the way of a conclusion, only hint at it by sticking it in the title and some general remarks. It is an interpretation added on by Less Wrong people (as the researchers intended it to be).
This moral contains equivocation: it says it “can” happen. The Less Wrong people obviously think it’s more than “can”: it’s pretty common and worth worrying about, not a one in a million.
The moral, if taken literally, is pretty vacuous. Removing detail or assumptions can also make an event seem more plausible, if you do it in particular ways. Changing ideas can change people’s judgment of them. Duh.
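To illustrate the trivially true part with made-up numbers (mine, not anyone’s data): if P(A) = 0.3 and P(B | A) = 0.5, then P(A & B) = 0.3 × 0.5 = 0.15, less than P(A) = 0.3. Adding the detail B can only lower the probability (or leave it unchanged when P(B | A) = 1), whatever it does to how plausible the story feels.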
It’s not meant to be taken that literally. It’s meant to say: this is a serious problem, it’s meaningful, it really has something to do with adding conjunctions!
The research is compatible with this moral being false (if we interpret it to have any substance at all), and with it only being a one in a million event.
You may think one in a million is an exaggeration. But it’s not. The size of the set of possible questions the researchers selected from was … I have no idea, but surely far more than trillions. How many would have worked? The research does not and cannot say. The only things that could tell us that are philosophical arguments or perhaps further research.
Our problems, of course, were constructed to elicit conjunction errors, and they do not provide an unbiased estimate of the prevalence of these errors.
In this paper, the researchers clearly state that their research does not tell us anything about the prevalence of the conjunction fallacy. But Less Wrongers have a different view of the matter.
They also state that it’s not arbitrary conjunctions which trigger the “conjunction” fallacy, but ones specifically designed to elicit it. But Less Wrong people are under the impression that people are in danger of committing a conjunction fallacy even when no one is designing situations to elicit it. That may or may not be true; the research certainly doesn’t demonstrate it is true.
Note: flawed results can tell you something about prevalence, but only if you estimate the amount of error introduced by the flaws. They did not do that (and it is very hard to do in this case). Error estimates of that type are difficult and would need to be subjected to peer review, not made up by readers of the paper.
There is no plausible way that the students could have misinterpreted this question because of ambiguous understandings of phrases involving “probability”.
The research does not claim the students could not have misinterpreted. It suggests they wouldn’t have—the Less Wronger has gotten the idea the researchers wanted him to—but they won’t go so far as to actually say miscommunication is ruled out, because it’s not. Given they were designing their interactions with their subjects on purpose in a way to get people to make errors, miscommunication is actually a very plausible interpretation, even in cases where the not-very-detailed writeup of what they actually did fails to explicitly record any blatant miscommunication.
The issue is that when a person intuitively tries to make a judgement on probabilities, their intuition gives them the wrong result, because it seems to use a heuristic based on representativeness rather than actual probability.
This also goes beyond what the research says.
Also note that he forgot to equivocate about how often this happens. He didn’t even put in a “sometimes” or a “more often than never”. And the paper doesn’t support the unqualified claim.
In short how is the experimental setting so different that we should completely ignore experimental results? If you have a detailed argument for that, then you’d actually be making a point.
He is apparently unaware that it differs from normal life in that in normal life people’s careers don’t depend on tricking you into making mistakes which they can call conjunction fallacies.
When this was pointed out to him, he did not rethink things but pressed forward:
So, your position is that it is completely unrealistic to try to trick people, because that never happens in real life?
But when you read about the fallacy in the Less Wrong articles about it, they do not state “only happens in the cases where people are trying to trick you”. If it only applies in those situations, well, say so when telling it to people so they know when it is and isn’t relevant. But this constraint on applicability, from the paper, is simply ignored most of the time to reach a conclusion the paper does not support.
I understand this criticism when applied to the Linda experiment, but not when it is applied to the color experiment. There was no “trickery” here.
Here’s someone from Less Wrong denying there was any trickery in one of the experiments, even though the paper says there was.
It addresses the question of whether that kind of mistake is one that people do sometimes make; it does not address the question of how often people make that kind of mistake in practice.
Note the heavy equivocation. He retreats from “always” which is a straw man, to “sometimes” which is heavily ambiguous about how often. He has in mind: often enough to be important; pretty often. But the paper does not say that.
BTW the paper is very misleading in that it says things like, in the words of a Less Wronger:
65% still chose sequence 2 despite it being a conjunction of sequence 1 with another event
The paper is full of statements that sound like they have something to do with the prevalence of the conjunction fallacy. And then one sentence admitting all those numbers should be disregarded. If there is any way to rescue those numbers as legitimate at all, it was too hard for the researchers and they didn’t attempt it. (I don’t mean to criticize their skill here. I couldn’t do it either. Too hard.)
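To spell out the arithmetic (a sketch; the exact sequences don’t matter for the point): if sequence 2 is just sequence 1 preceded by one extra roll outcome G, then P(sequence 2) = P(G) × P(sequence 1), which is strictly less than P(sequence 1) whenever P(G) < 1. So the 65% reads like prevalence data about how often people get this wrong, and then the disclaimer says to disregard exactly that reading.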
It is difficult for me to decide how to respond to this. You are obviously sincere—not trolling. The “conjunction fallacy” is objectionable to you—an ideological assault on human dignity which must be opposed. You see evidence of this malevolent influence everywhere. Yet I see it nowhere. We are interpreting plain language quite differently. Or rather, I would say that you are reading in subtexts that I don’t think are there.
To me, the conjunction fallacy is something like one of those optical illusions you commonly find in psychology books. “Which line is longer?” Our minds deceive us. No, a bit more than that. One can set up trick situations in which our minds predictably deceive us. Interesting. And definitely something that psychologists should look at. Maybe even something that we should watch for in ourselves, if we want to take our rationality to the next level. But not something which proves some kind of inferiority of human reason, or which justifies some political ideology.
I suggested this ‘deflationary’ take on the CF, and your initial response was something like “Oh, no. People here agree with me. The CF means much more than you suggest.” But then you quote Kahneman and Tversky:
Our problems, of course, were constructed to elicit conjunction errors, and they do not provide an unbiased estimate of the prevalence of these errors.
Yes, you admit, K&T were deflationary, if you take their words at face value, but you persist that they intended their idea to be taken in an inflated sense. And you claim that it is taken in that sense here. So you quote LessWrong:
Adding more detail or extra assumptions can make an event seem more plausible.
I emphasized the can because you call attention to it. Yes, you admit, LessWrong was deflationary too, if you take the words at face value. But again you insist that the words should not be taken at face value—that there is some kind of nefarious political subtext there. And as proof, you suggest that we look at what the people who wrote comments in response to your postings wrote.
All I can say is that here too, you are looking for subtexts, while admitting that the words themselves don’t really support the reading you are suggesting:
Here’s someone from Less Wrong denying there was any trickery in one of the experiments, even though the paper says there was.
It addresses the question of whether that kind of mistake is one that people do sometimes make; it does not address the question of how often people make that kind of mistake in practice.
Note the heavy equivocation. He retreats from “always” which is a straw man, to “sometimes” which is heavily ambiguous about how often. He has in mind: often enough to be important; pretty often.
You are making a simple mistake in interpretation here. The vision of the conjunction fallacy that you attacked is not the vision of the conjunction fallacy that people here were defending. It is not all that uncommon for simple confusions like this one to generate inordinate amounts of sound and fury. But still, this one was a doozy.
Bounded rationality is what they believe they were studying.
Our unbounded rationality is the topic of The Beginning of Infinity.
I don’t think the phrase “bounded rationality” means what you think it means. “Bounded rationality” is a philosophical term that means rationality as performed by agents that can only think for a finite amount of time, as opposed to unbounded rationality which is what an agent with truly infinite computational resources would perform instead. Since humans do not have infinite time to think, “our unbounded rationality” does not exist.
Where do they define it to have this technical meaning?
I’m not sure if your distinction makes sense (can you say what you mean by “rationality”?). One doesn’t need infinite time to be fully rational, in the BoI worldview. I think they conceive of rationality itself—and also which bounds are worth attention—differently. btw surely infinite time has no relevance to their studies in which people spent 20 minutes or whatever. or if you’re comparing with agents that do infinite thinking in 20 minutes, well, that’s rather impossible.
Where do they define it to have this technical meaning?
They don’t, because it’s a pre-existing standard term. Its Wikipedia article should point you in the right direction. I’m not going to write you an introduction to the topic when you haven’t made an effort to understand one of the introductions that’s already out there.
Attacking things that you don’t understand is obnoxious. If you haven’t even looked up what the words mean, you have no business arguing that the professionals are wrong. You need to take some time off, learn to recognize when you are confused, and start over with some humility this time.
But Wikipedia says it means exactly what I thought it meant:
Bounded rationality is the idea that in decision making, rationality of individuals is limited by the information they have, the cognitive limitations of their minds, and the finite amount of time they have to make decisions.
Let me quote the important part again:
the cognitive limitations of their minds
So everything I said previously stands.
BoI says there are no limits on human minds, other than those imposed by the laws of physics on all minds, and also the improvable-without-limit issue of ignorance.
My surprise was due to you describing the third clause alone. I didn’t think that was the whole meaning.
No, you’re still confused. Unbounded rationality is the theoretical study of minds without any resource limits, not even those imposed by the laws of physics. It’s a purely theoretical construct, since the laws of physics apply to everyone and everything. This conversation started with a quote where K&T used the phrase “bounded rationality” to clarify that they were talking about humans, not about this theoretical construct (which none of us really care about, except sometimes as an explanatory device).
Our “unbounded rationality” is not the type you are talking about. It’s about the possibility of humans making unlimited progress. We conceive of rationality differently than you do. We disagree with your dichotomy, approach to research, assumptions being made, etc...
So, again, there is a difference in world view here, and the “bounded rationality” quote is a good example of the anti-BoI worldview found in the papers.
The phrase “unbounded rationality” was first introduced into this argument by you, quoting K&T. The fact that Beginning of Infinity uses that phrase to mean something else, while talking about a different topic, is completely irrelevant.
I’m just going to quickly say that I endorse what jimrandomh has written here. The concept of “bounded rationality” is a concept in economics/psychology going back to Herbert Simon. It is not, as you seem to think, an ideology hostile to human dignity.
I’m outta here! The depth and incorrigibility of your ignorance rivals even that of our mutual acquaintance Edser. Any idiot can make a fool of himself, but it takes a special kind of persistence to just keep digging as you are doing.
If the source of your delusion is Deutsch—well, it serves you right for assuming that any physicist has enough modesty to learn something about other fields before he tries to pontificate in them.
If the source of your delusion is Deutsch—well, it serves you right for assuming that any physicist has enough modesty to learn something about other fields before he tries to pontificate in them.
I find nothing in Deutsch that supports curi’s confusion.
Here is a passage from Herbert A. Simon: The Bounds of Reason in Modern America:
Simon’s exploration of how people acquire and employ heuristics led him to posit that there are two kinds of heuristics: “weak” universal ones and “strong” domain-specific ones, the latter depending heavily on the development of domain-specific memory structures. In addition, Simon increasingly emphasized that both heuristics and memory structures evolved through interaction with the environment. The mind was an induction machine. This process of induction produced evolutionary growth patterns, giving both heuristics and memories a common treelike organization. Thus, for Simon, Darwin’s evolutionary tree of life became not only a decision tree but also the tree of knowledge.
Do you think this is accurate? If so, can you see how a Popperian would think it is wrong?
They don’t know anything. They just make up stuff about people they haven’t read. We have read half the stuff they’ve tried to cite more than them. Then they downvote instead of answering.
i checked out the Mathematical Statistics book today that one of them said had a proof. it didn’t have it. and there was that other guy repeatedly pressuring me to read some book he’d never read, based on his vague idea of the contents and the fact that the author was a professional.
they are in love with citing authority and judging ideas by sources (which they think make the value of the ideas probabilistically predictable with an error margin). not so much in love with the notion that if they can’t answer all the criticisms of their position they’re wrong. not so much in love with the notion that popper published criticisms no bayesian ever answered. not so much in love with the idea that, no, saying why your position is awesome doesn’t automatically mean everything else ever is worse and can safely be ignored.
They don’t know anything. They just make up stuff about people they haven’t read. We have read half the stuff they’ve tried to cite more than them. Then they downvote instead of answering.
While making false accusations like this does get you attention, it is also unethical behavior. We have done our very best to help you understand what we’re talking about, but you have a pattern where every time you don’t understand something, you skip the step where you should ask for clarification, you make up an incorrect interpretation, and then you start attacking that interpretation.
Almost every sentence you have above is wrong. Against what may be my better judgment I’m going to comment here because a) I get annoyed when my own positions are inaccurately portrayed and b) I hope that showing that you are wrong about one particular issue might convince you that you are interpreting things through a highly emotional lens that is causing you to misread and misremember what people have to say.
and there was that other guy repeatedly pressuring me to read some book he’d never read, based on his vague idea of the contents and the fact that the author was a professional.
I presume you are referring to my remarks.
Let’s go back to the context.
You wrote:
It’s fine if most people haven’t read Popper. But they should be able to point to some Bayesian somewhere who did, or they should know at least one good argument against a major Popperian idea
and then wrote:
One particular thing I would be interested in is a Bayesian criticism of Popper. Are there any? By contrast (maybe), Popper did criticize Bayesian epistemology in LScD and elsewhere.
I haven’t read it myself but I’ve been told that Earman’s “Bayes or Bust” deals with a lot of the philosophical criticisms of Bayesianism as well as giving a lot of useful references. It should do a decent job in regards to the scholarly concerns.
There are many different ideas out there and, practically speaking, there are too many ideas out there for people to have to deal with every single one. Sure, making incorrect arguments is bad. And making arguments against strawmen is very bad. But people don’t have time to actually research every single idea out there or even know which ones to look at. Now, I think that Popper is important enough and has relevant enough points that he should be on the short list of philosophers that people can grapple with at least to some limited extent.
You then wrote:
The number of major ideas in epistemology is not very large. After Aristotle, there wasn’t very much innovation for a long time. It’s a small enough field you can actually trace ideas all the way back to the start of written history. Any professional can look at everything important. Some Bayesian should have. Maybe some did, but I haven’t seen anything of decent quality.
Note that this comment of yours is the first example where the word “professional” shows up.
As to a professional, I already referred you to Earman. Incidentally, you seem to be narrowing the claim somewhat. Note that I didn’t say that the set of major ideas in epistemology isn’t small; I referred to the much larger class of philosophical ideas (although I can see how that might not be clear from my wording). And the set is indeed very large. However, I think that your claim about “after Aristotle” is both wrong and misleading. There’s a lot of thought about epistemological issues in both the Islamic and Christian worlds during the Middle Ages. Now, you might argue that that’s not helpful or relevant since it gets tangled up in theology and involves bad assumptions. But that’s not to say that material doesn’t exist. And that’s before we get to non-Western stuff (which admittedly I don’t know much about at all).
(I agree when you restrict to professionals, and have already recommended Earman to you.)
Which you stated you had not read. I have rather low standards for recommendations of things to read, but “I never read it myself” isn’t good enough.
I don’t agree with “restrict to professionals”. How is it to be determined who is a professional? I don’t want to set up arbitrary, authoritative criteria for dismissing ideas based on their source.
My final comment on that issue was then:
Earman is a philosopher and the book has gotten positive reviews from other philosophers. I don’t know what else to say in that regard.
And I asked about your comment about professionals.
Hrrm? You mentioned professionals first. I’m not sure why you are now objecting to the use of professionals as a relevant category.
You never replied to that question. I’ve never mentioned Earman in the roughly 20 other comments to you after that exchange. This doesn’t seem like “repeatedly pressuring me to read some book he’d never read”. Note also that I never once discussed Earman being a “professional” until you brought up the term.
You appear to be reading conversations through an adversarial lens in which people with whom you disagree must be not just wrong but evil, stupid, or slavishly adhering to authority. This is a common failure mode of conversation. But we’re not evil mutants. Humans like to demonize the ones they disagree with and it isn’t productive. It is a sign that one is unlikely to be listening with the actual content. At that point, one needs to either radically change approach or just spend some time doing something else and wait for the emotions to cool down.
Now, you aren’t the only person here with that issue; there’s some reaction by some of the regulars here in the same way, but that’s to a large extent a reaction against what you’ve been doing. So, please leave for some time; take a break. Think carefully about what you intend to get out of Less Wrong and whether or not anyone here has any valid points. And ask yourself, does it really seem likely to you that not a single person on LW would ever have made a point to you that turned out to be correct? Is it really that likely that you are right about every single detail?
If it helps, imagine we weren’t talking about you but some other person. That person wanders over to a forum they’ve never been to, and has extensive discussions about technical issues where that person disagrees with the forum regulars on almost everything, including peripherally related issues. How likely would you be to say that that person had everything right, even if you took for granted that they were correct about the overarching primary issues? So, if the person never agrees that they were wrong about even the tiniest of issues, do you think it is more likely that the person actually got everything right or that they just aren’t listening?
(Now, everyone who thinks that this discussion is hurting our signal to noise ratio please go and downvote this comment. Hurting my karma score is probably the most efficient method of giving me some sort of feedback to prevent further engagement.)
You did not even state whether the Earman book mentions Popper. Because, I took it, you don’t know. so … wtf? that’s the best you can do?
note that, ironically, your comment here complaining that you didn’t repeatedly pressure is itself pressure, and not for the first time...
And ask yourself, does it really seem likely to you that not a single person on LW would ever have made a point to you that turned out to be correct?
When they have policies like resorting to authoritarian, ad hominem arguments rather than substantive ones … and attacking merits such as confidence and lack of appeasement … then yes. You’re just asking me to concede because one person can’t be right so much against a crowd. It’s so deeply anti-Ayn-Rand.
Let me pose a counter question: when you contradict several geniuses such as Popper, Deutsch, Feynman (e.g. about caring what other people think), and Rand, does that worry you? I don’t particularly think you should say “yes” (maybe mild worry causing only an interest in investigating a bit more, not conceding). The point is your style of argument can easily be used against you here. And against most people for most stuff.
BTW did you know that improvements on existing ideas always start out as small minority opinions? Someone thinks of them first. And they don’t spread instantly.
Why shouldn’t I be right most of the time? I learned from the best. And changed my mind thousands of times in online debates to improve even more. You guys learned the wrong stuff. Why should the number of you be important? Lots of people learning the same material won’t make it correct.
Hurting my karma score is probably the most efficient method of giving me some sort of feedback to prevent further engagement.
Can’t you see that that’s irrational? Karma scores are not important, arguments are.
Let’s do a scientific test. Find someone impartial to take a look at these threads, and report how they think you did. To avoid biasing the results, don’t tell them which side you were on, and don’t use the same name, just say “There was recently a big argument on Less Wrong (link), what do you think of it?” and have them email back the results.
I can do that, if you can agree on a comment to link to. It’ll double-blind it in a sense—I do have a vague opinion on the matter but haven’t been following the conversation at all over the last couple days, so it’d be hard for me to pick a friend who’d be inclined to, for example, favor a particular kind of argument that’s been used.
That is true of one of the two people I have in mind. The other has participated on less than five occasions. I have definitely shown articles from here to the latter and possibly to the former, but neither of them have done any extended reading here that I’m aware of (and I expect that I would be if they had). Also, neither of them are the type of person who’s inclined to participate in LW-type places, and the former in particular is inclined toward doing critical analysis of even concepts that she likes. (Just not in a very LW-ish way.)
I’d suggest using the primary thread about the conjunction fallacy and then the subthread on the same topic with Brian. Together that makes a long conversation with a fair number of participants.
How am I supposed to find someone impartial? Most people are justificationists. Most Popperians I know will recognize my writing style (plus, having showed them plenty already, i already know they agree with me, and i don’t attribute that to bias about the source of each post). The ones who wouldn’t recognize my writing style are just the ones who won’t want to read all this and discuss—it’s by choosing not to read much online discussion that they stayed that way.
EDIT: what does it even mean for someone to be neutral? neither justificationist nor popperian? are there any such people? what do they believe? probably ridiculous magical thinking! or hardcore skepticism. or something...
How am I supposed to find someone impartial? Most people are justificationists.
How about someone who hasn’t read much online discussion, and hasn’t thought about the issue enough to take a side? There are lots of people like that. It doesn’t have to be someone you know personally; a friend-of-a-friend or a colleague you don’t interact with much will do fine (although you might have to bribe them to spend the time reading).
Surely you can find someone who seems like they should be impartial and who’s smart enough to follow an online discussion. They don’t have to understand everything perfectly, they’re only judging what’s been written. Maybe a distant relative who doesn’t know your online alias?
If you pick a random person on the street, and you try to explain Popper to them for 20 minutes—and they are interested enough to let you have those 20 minutes—the chances they will understand it are small.
This doesn’t mean Popper was wrong. But it is one of many reasons your test won’t work well.
Right, it takes a while to understand these things. But they won’t understand Bayes either; they’ll still be able to give some impression of which side gave better arguments, and that sort of thing.
How about a philosopher who works on a topic totally unrelated to this one?
Have you ever heard what Popper thinks of the academic philosophy community?
No, an academic philosopher won’t do.
This entire exercise is silly. What will it prove? Nothing. What good is a sample size of 1 for this? Not much. Will you drop Bayesianism if the person sides against you? Of course not.
How about a physicist, then? They’re generally smart enough to figure out new topics quickly, but they don’t usually have cause to think about epistemology in the abstract.
The point of the exercise is not just to judge epistemology, it’s to judge whether you’ve gotten too emotional to think clearly. My hypothesis is that you have, and that this will be immediately obvious to any impartial observer. In fact, it should be obvious even to an observer who agrees with Popper.
How about asking David Deutsch what he thinks? If he does reply, I expect it will be an interesting reply indeed.
In fact, it should be obvious even to an observer who agrees with Popper.
But I’ve already shown lots of this discussion to a bunch of observers outside Less Wrong (I run the best private Popperian email list). Feedback has been literally 100% positive, including some sincere thank yous for helping them see the Conjunction Fallacy mistake (another reaction I got was basically: “I saw the conjunction fallacy was crap within 20 seconds. Reading your stuff now, it’s similar to what I had in mind.”) I got a variety of other positive reactions about other stuff. They’re all at least partly Popperian. I know that many of them are not shy about disagreeing with me or criticizing me—they do so often enough—but when it comes to this Bayesian stuff none of them has any criticism of my stance.
Not a single one of them considers it plausible my position on this stuff is a matter of emotions.
I’m skeptical of the claim about Deutsch. Why not actually test it? A Popperian should try to poke holes in his ideas. Note also that you seem to be missing Jim’s point: Jim is making an observation about the level of emotionalism in your argument, not the correctness. These are distinct issues.
As another suggestion, we could take a bunch of people who have epistemologies that are radically different from either Popper or Bayes. From my friends who aren’t involved in these issues, I could easily get an Orthodox Jew, a religious Muslim, and a conspiracy theorist. That also handles your concern about a sample size of 1. Other options include other Orthodox Jews including one who supports secular monarchies as the primary form of government, and a math undergrad who is a philosophical skeptic. Or if you want to have some real fun, we could get some regulars from the Flat Earth Forum. A large number of them self-identify as “zetetic” which in this context means something like only accepting evidence that is from one’s own sensory observation.
Not a single one of them considers it plausible my position on this stuff is a matter of emotions.
Jim is not suggesting that your position is “a matter of emotions”- he’s suggesting that you are being emotional. Note that these aren’t the same thing. For example, one could hypothetically have a conversation between a biologist and a creationist about evolution and the biologist could get quite angry with the creationist remaining calm. In that case, the biologist believes what they do due to evidence, but they could still be unproductively emotional about how they present that evidence.
You’ve already been told that the point of Jim’s remark was about the emotionalism in your remarks, not about the correctness of your arguments. In that context, your sibling comment is utterly irrelevant. The fact that you still haven’t gotten that point is of course further evidence of that problem. We are well past the point where there’s any substantial likelihood of productive conversation. Please go away. If you feel a need to come back, do so later, a few months from now, when you are confident you can do so without insulting people or deliberately trolling, and are willing to actually listen to what people here have to say.
A) your ignorance
B) your negative assumptions to fill in the gaps in knowledge. you don’t think I’m a serious person who has reasons for what he says, and you became skeptical before finding out, just by assumption.
And you’re wrong.
Maybe by pointing it out in this case, you will be able to learn something. Do you think so?
Here’s two mild hints:
1) I interviewed David Deutsch yesterday
2) I’m in the acknowledgements of BoI
Here you are lecturing me about how a Popperian should try to poke holes in his ideas, as if I hadn’t. I had. (The hints do not prove I had. They’re just hints.)
I still don’t understand what you think the opinions of several random people will show (and now you are suggesting people that A) we both think are wrong (so who cares what they think?) B) are still justificationists). It seems to me the opinions of, say, the Flat Earth Forum should be regarded as pretty much random (well, maybe not random if you know a lot about their psychology. which i don’t want to study) and not a fair judge of the quality of arguments!
A large number of them self-identify as “zetetic” which in this context means something like only accepting evidence that is from one’s own sensory observation.
Also I want to thank you for continuing to post. You are easily the most effective troll that LW has ever had, and it is interesting and informative to study your techniques.
Yes. Someone with a worldview similar to mine will be more inclined to agree with me in all the threads. Someone with one I think is rather bad … won’t.
This confuses me. Why should whether someone is a justificationist impact how they read the thread about the conjunction fallacy? Note by the way that you seem to be misinterpreting Jim’s point. Jim isn’t saying that one should have an uninvolved individual decide who is correct. Jim’s suggestion (which granted may not have been explained well in his earlier comment) is to focus on seeing if anyone is behaving too emotionally rather than calmly discussing the issues. That shouldn’t be substantially impacted by philosophical issues.
No offense, but you haven’t demonstrated that you understand Bayesian epistemology well enough to classify it. I read the Wikipedia page on Theory of Justification, and it pretty much all looks wrong to me. How sure are you that your Justificationist/Popperian taxonomy is complete?
BTW induction is a type of justificationism. Bayesians advocate induction. The end.
I have no confidence the word “induction” means the same thing from one sentence to the next. If you could directly say precisely what is wrong with Bayesian updating, with concrete examples of Bayesian updating in action and why they fail, that might be more persuasive, or at least more useful for diagnosis of the problem wherever it may lie.
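To fix ideas, here is the sort of thing I mean by “Bayesian updating in action”: a minimal sketch in Python with made-up numbers, not anyone’s canonical implementation.

# Minimal sketch of Bayesian updating (made-up numbers, for illustration).
# Two hypotheses about a coin: it is fair, or it is biased toward heads.
priors = {"fair": 0.5, "biased": 0.5}
p_heads = {"fair": 0.5, "biased": 0.8}  # likelihood of heads under each hypothesis

def update(beliefs, saw_heads):
    # One application of Bayes' theorem: multiply each current belief by
    # the likelihood of the observation, then renormalize.
    posterior = {}
    for h, prior in beliefs.items():
        likelihood = p_heads[h] if saw_heads else 1 - p_heads[h]
        posterior[h] = prior * likelihood
    total = sum(posterior.values())  # this is P(data), the normalizer
    return {h: p / total for h, p in posterior.items()}

beliefs = priors
for saw_heads in [True, True, True, False]:  # observed flips: H H H T
    beliefs = update(beliefs, saw_heads)
print(beliefs)  # probability mass has shifted toward "biased"

If you can point to the step in a procedure like that which is flawed, and show an example of it failing, that would be exactly the kind of critique I am asking for.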
For example, you update based on selective observation (all observation is selective).
And you update theories getting attention selectively, ignoring the infinities (by using an arbitrary prior, but not actually using it in the sense of applying it infinitely many times, just estimating what you imagine might happen if you did).
Why don’t you—or someone else—read Popper? You can’t expect to fully understand it from discussion here. Why does this community have no one who knows what they are talking about on this subject? Who is familiar with the concepts instead of just having to ask me?
For example, you update based on selective observation (all observation is selective).
Since all observation is selective, then the selectivity of my observations can hardly be a flaw. Therefore the flaw is that I update. But I just asked you to explain why updating is flawed. Your explanation is circular.
And you update theories getting attention selectively, ignoring the infinities (by using an arbitrary prior, but not actually using it in the sense of applying it infinitely many times, just estimating what you imagine might happen if you did).
I don’t know what that means. I’m not even sure that
you update theories getting attention selectively
is idiomatic English. And
ignoring the infinities
needs to be clarified, I’m sure you must realize. Your parenthetical does not clear it up. You write:
by using an arbitrary prior, but not actually using it
Using it but not actually using it. Can you see why that is hard to interpret? And then:
in the sense of applying it infinitely many times
My flaw is that I don’t do something infinitely many times? Can you see why this begs for clarification? And:
just estimating what you imagine might happen if you did
I’ve lost track: is my core flaw that I am estimating something? Why?
Why don’t you—or someone else—read Popper?
I’ve read Popper. He makes a lot more sense to me than you do. He is not cryptic. He is crystal clear. You are opaque.
You can’t expect to fully understand it from discussion here. Why does this community have no one who knows what they are talking about on this subject? Who is familiar with the concepts instead of just having to ask me?
You seem to be saying that for me to understand you, first I have to understand Popper from his own writing, and that nobody here seems to have done that. I’m not sure why you’re posting here if that’s what you believe.
If you’ve read Popper (well) then you should understand his solution to the problem of induction. How about you tell us what it is, rather than complaining my summary of what you already read needs clarifying.
If you’ve read Popper (well) then you should understand his solution to the problem of induction. How about you tell us what it is, rather than complaining my summary of what you already read needs clarifying.
You are changing the subject. I never asked you to summarize Popper’s solution to the problem of induction. If that’s what you were summarizing just now then you ignored my request. I asked you to critique Bayesian updating with concrete examples.
Now, if Popper wrote a critique of Bayesian updating, I want to read it. So tell me the title. But he must specifically talk about Bayesian updating. Or, if that is not available, anything written by Popper which mentions the word “Bayes” or “Bayesian” at all. Or that discusses Bayes’ equation.
There was nothing in The Open Society and Its Enemies, Objective Knowledge, Popper Selections, or Conjectures and Refutations, nor in any of the books on Popper that I located. The one exception I found was in The Logic of Scientific Discovery, and the mentions of Bayes there are purely technical discussion of the theorem (with which Popper, reasonably enough, has no problem), with no criticism of Bayesians. In summary, I found no critiques by Popper of Bayesians after going through the indices of Popper’s main works as you recommended. I did find mention of Bayes, but it was a mention in which Popper did not criticize Bayes or Bayesians.
Nor were Bayes or Bayesians mentioned anywhere in David Deutsch’s book The Beginning of Infinity.
So I return to my earlier request:
I asked you to critique Bayesian updating with concrete examples.
I have repeatedly requested this, and in reply been given either condescension, or a fantastically obscure and seemingly self-contradictory response, or a major misinterpretation of my request, or a recommendation that I look to Popper, which recommendation I have followed with no results.
I think your problem is you don’t understand what the issues at stake are, so you don’t know what you’re trying to find.
You said:
anything written by Popper which mentions the word “Bayes” or “Bayesian” at all. Or that discusses Bayes’ equation.
But then when you found a well known book by Popper which does have those words, and which does discuss Bayes’ equation, you were not satisfied. You asked for something which wasn’t actually what you wanted. That is not my fault.
You also said:
You are changing the subject. I never asked you to summarize Popper’s solution to the problem of induction.
But you don’t seem to understand that Popper’s solution to the problem of induction is the same topic. You don’t know what you’re looking for. It wasn’t a change of topic. (Hence I thought we should discuss this. But you refused. I’m not sure how you expect to make progress when you refuse to discuss the topic the other guy thinks is crucial to continuing.)
Bayesian updating, as a method of learning in general, is induction. It’s trying to derive knowledge from data. Popper’s criticisms of induction, in general, apply. And his solution solves the underlying problem rendering Bayesian updating unnecessary even if it wasn’t wrong. (Of course, as usual, it’s right when applied narrowly to certain mathematical problems. It’s wrong when extended out of that context to be used for other purposes, e.g. to try to solve the problem of induction.)
So, question: what do you think you’re looking for? There is tons of stuff about probability in various Popper books, including chapter 8 of LScD, titled “Probability”. There is tons of explanation about the problem of induction, and why support doesn’t work, in various Popper books. Bayesian updating is a method of positively supporting theories; Popper criticized all such methods and his criticisms apply. In what way is that not what you wanted? What do you want?
So for example I opened to a random page in that chapter and found, on p. 183, at the start of section 66, that the first sentence is:
Probability estimates are not falsifiable.
This is a criticism of the Bayesian approach as unscientific. It’s not specifically about the Bayesian approach in that it applies to various non-Bayesian probabilistic approaches (whatever those may be. can you think of any other approaches besides Bayesian epistemology that you think this is targeted at? How would you do it without Bayes’ theorem?). In any case it is a criticism and it applies straightforwardly to Bayesian epistemology. It’s not the only criticism.
The point of this criticism is that to even begin the Bayesian updating process you need probability estimates which are created unscientifically by making them up (no, making up a “prior” which assigns all of them at once, in a way vague enough that you can’t even use it in real life without “estimating” arbitrarily, doesn’t mean you haven’t just made them up).
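To be explicit about where the made-up numbers enter (standard notation, nothing controversial in the math itself): Bayes’ theorem computes P(H | D) = P(D | H) × P(H) / P(D). The updating cannot begin until you supply P(H) for every hypothesis H under consideration (the prior), and nothing in the theorem tells you where those numbers come from. That first step is the unscientific part.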
EDIT: read the first 2 footnotes in section 81 of LScD, plus section 81 itself. And note that the indexer did not miss this but included it...
Bayesian updating, as a method of learning in general, is induction. It’s trying to derive knowledge from data.
Only in a sense so broad that Popper can rightly be accused of the very same thing. Bayesians use experience to decide between competing hypotheses. That is the sort of “derive” that Bayesians do. But if that is “deriving”, then Popper “derives”. David Deutsch, who you know, says the following:
But, in reality, scientific theories are not ‘derived’ from anything. We do not read them in nature, nor does nature write them into us. They are guesses – bold conjectures. Human minds create them by rearranging, combining, altering and adding to existing ideas with the intention of improving upon them. We do not begin with ‘white paper’ at birth, but with inborn expectations and intentions and an innate ability to improve upon them using thought and experience. Experience is indeed essential to science, but its role is different from that supposed by empiricism. It is not the source from which theories are derived. Its main use is to choose between theories that have already been guessed. That is what ‘learning from experience’ is.
I direct you specifically to this sentence:
Experience is indeed essential to science, but its role is different from that supposed by empiricism. It is not the source from which theories are derived. Its main use is to choose between theories that have already been guessed.
This is what Bayesians do. Experience is what Bayesians use to choose between theories which have already been guessed. They do this using Bayes’ Theorem. But look back at the first sentence of the passage:
But, in reality, scientific theories are not ‘derived’ from anything.
Clearly, then, Deutsch does not consider using the data to choose between theories to be “deriving”. But Bayesians use the data to choose between theories. Therefore, as Deutsch himself defines it, Bayesians are not “deriving”.
The point of this criticism is that to even begin the Bayesian updating process you need probability estimates which are created unscientifically by making them up
Yes, the Bayesians make them up, but notice that Bayesians therefore are not trying to derive them from data—which was your initial criticism above. Moreover, this is not importantly different from a Popperian scientist making up conjectures to test. The Popperian scientist comes up with some conjectures, and then, as Deutsch says, he uses experimental data to “choose between theories that have already been guessed”. How exactly does he do that? Typical data does not decisively falsify a hypothesis. There is, just for starters, the possibility of experimental error. So how does one really employ data to choose between competing hypotheses? Bayesians have an answer: they choose on the basis of how well the data fits each hypothesis, which they interpret to mean how probable the data is given the hypothesis. Whether he admits it or not, the Popperian scientist can’t help but do something fundamentally the same. He has no choice but to deal with probabilities, because probabilities are all he has.
The Popperian scientist, then, chooses between theories that he has guessed on the basis of the data. Since the data, being uncertain, does not decisively refute either theory but is merely more, or less, probable given the theory, then the Popperian scientist has no choice but to deal with probabilities. If the Popperian scientist chooses the theory that the data fits best, then he is in effect acting as a Bayesian who has assigned to his competing theories the same prior.
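In symbols, this is just the odds form of Bayes’ theorem (a standard identity, nothing exotic): P(H1 | D) / P(H2 | D) = [P(D | H1) / P(D | H2)] × [P(H1) / P(H2)]. With equal priors the second factor is 1, so ranking theories by how well the data fits them is precisely the equal-prior Bayesian ranking.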
Do you understand DD’s point that the majority of the time theories are rejected without testing which is in both his books? Testing is only useful when dealing with good explanations.
Do you understand that data alone cannot choose between the infinitely many theories consistent with it, which reach a wide variety of contradictory and opposite conclusions? So Bayesian Updating based on data does not solve the problem of choosing between theories. What does?
Do you understand DD’s point that the majority of the time theories are rejected without testing which is in both his books? Testing is only useful when dealing with good explanations.
Bayesians are also seriously concerned with the fact that an infinity of theories are consistent with the evidence. DD evidently doesn’t think so, given his comments on Occam’s Razor, which he appears to be familiar with only in an old, crude version, but I think that there is a lot in common between his “good explanation” criterion and parsimony considerations.
We aren’t “seriously concerned” because we have solved the problem, and it’s not particularly relevant to our approach.
We just bring it up as a criticism of epistemologies that fail to solve the problem… Because they have failed, they should be rejected.
You haven’t provided details about your fixed Occam’s razor, a specific criticism of any specific thing DD said, a solution to the problem of induction (all epistemologies need one of some sort), or a solution to the infinity of theories problem.
Probability estimates are essentially the bookkeeping which Bayesians use to keep track of which things they’ve falsified, and which things they’ve partially falsified. At the time Popper wrote that, scientists had not yet figured out the rules for using probability correctly; the stuff he was criticizing really was wrong, but it wasn’t the same stuff people use today.
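To make “partially falsified” concrete (a simplified picture): if evidence D is impossible under hypothesis H, then P(D | H) = 0 and updating drives P(H | D) to zero; falsification is the special case. If D is merely improbable under H, then 0 < P(D | H) < 1 and the posterior drops without reaching zero. That is the partial case the bookkeeping tracks.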
At the time Popper wrote that, scientists had not yet figured out the rules for using probability correctly; the stuff he was criticizing really was wrong, but it wasn’t the same stuff people use today.
Is this true? Popper wrote LScD in 1934. Keynes and Ramsey wrote about using probability to handle uncertainty in the 1920s although I don’t think anyone paid attention to that work for a few years. I don’t know enough about their work in detail to comment on whether or not Popper is taking it into account although I certainly get the impression that he’s influenced by Keynes.
According to the wikipedia page, Cox’s theorem first appeared in R. T. Cox, “Probability, Frequency, and Reasonable Expectation,” Am. Jour. Phys., 14, 1–13, (1946). Prior to that, I don’t think probability had much in the way of philosophical foundations, although they may’ve gotten the technical side right. And correct use of probability for more complex things, like causal models, didn’t come until much later. (And Popper was dealing with the case of science-in-general, which requires those sorts of advanced tools.)
The English version of LScD came out in 1959. It wasn’t a straight translation; Popper worked on it. In my (somewhat vague) understanding he changed some stuff or at least added some footnotes (and appendices?).
Anyway Popper published plenty of stuff after 1946 including material from the LScD postscript that got split into several books, and also various books where he had the chance to say whatever he wanted. If he thought there was anything important to update he would have. And for example probability gets a lot of discussion in Popper’s replies to his critics, and Bayes’ theorem in particular comes up some; that’s from 1974.
So for example on page 1185 of the Schilpp volume 2, Popper says he never doubted Bayes’ theorem but that “it is not generally applicable to hypotheses which form an infinite set”.
How can something be partially falsified? It’s either consistent with the evidence or contradicted. This is a dichotomy. To allow partial falsification you have to judge in some other way which has a larger number of outcomes. What way?
Probability estimates are essentially the bookkeeping which Bayesians use to keep track of which things they’ve falsified, and which things they’ve partially falsified.
You’re saying you start without them, and come up with some in the middle. But how does that work? How do you get started without having any?
the stuff he was criticizing really was wrong, but it wasn’t the same stuff people use today.
Changing the math cannot answer any of his non-mathematical criticisms. So his challenge remains.
(It’s subject to limitations that do not constrain the Bayesian approach, and as near as I can tell, is mathematically equivalent to a non-informative Bayesian approach when it is applicable, but the author’s justification for his procedure is wholly non-Bayesian.)
I think you mixed up Bayes’ Theorem and Bayesian Epistemology. The abstract begins:
By representing the range of fair betting odds according to a pair of confidence set estimators, dual probability measures on parameter space called frequentist posteriors secure the coherence of subjective inference without any prior distribution.
They have a problem with a prior distribution, and wish to do without it. That’s what I think the paper is about. The abstract does not say “we don’t like Bayes’ theorem and figured out a way to avoid it.” Did you have something else in mind? What?
I had in mind a way of putting probability distributions on unknown constants that avoids prior distributions and Bayes’ theorem. I thought that this would answer the question you posed when you wrote:
It’s not specifically about the Bayesian approach, in that it applies to various non-Bayesian probabilistic approaches (whatever those may be. Can you think of any other approaches besides Bayesian epistemology that you think this is targeted at?)
Can’t you see that that’s irrational? Karma scores are not important, arguments are.
Karma has little bearing on the abstract truth of an argument, but it works pretty well as a means of gauging whether or not an argument is productive in the surrounding community’s eyes. It should be interpreted accordingly: a higher karma score doesn’t magically lead to a more perfect set of opinions, but paying some attention to karma is absolutely rational, either as a sanity check, a method of discouraging an atmosphere of mutual condescension, or simply a means of making everyone’s time here marginally more pleasant. Despite their first-order irrelevance, pretending that these goals are insignificant to practical truth-seeking is… naive, at best.
Unfortunately, this also means that negative karma is an equally good gauge of a statement’s disruptiveness to the surrounding community, which can give downvotes some perverse consequences when applied to people who’re interested in being disruptive.
Would you mind responding to the actual substance of what JoshuaZ said, please? Particularly this paragraph:
If it helps, imagine we weren’t talking about you but some other person. That person wanders over to a forum they’ve never been to, and has extensive discussions about technical issues where that person disagrees with the forum regulars on almost everything, including peripherally related issues. How likely would you be to say that that person had everything right, even if you took for granted that they were correct about the overarching primary issues? So, if the person never agrees that they were wrong about even the tiniest of issues, do you think it is more likely that the person actually got everything right or that they just aren’t listening?
You did not even state whether the Earman book mentions Popper. Because, I took it, you don’t know. so … wtf? that’s the best you can do?
My apologies. I would have thought it was clear that when one recommends a book one is doing so because it has material that is relevant to what one is talking about. Yes, he discusses various ideas due to Popper.
note that, ironically, your comment here complaining that you didn’t repeatedly pressure me is itself pressure, and not for the first time...
I think there may be some definitional issues here. A handful of remarks suggesting one read a certain book would not be what most people would call pressure. Moreover, given the negative connotations of pressure, it worries me that you find that to be pressure. Conversations are not battles. And suggesting that one might want to read a book should not be pressure. If one feels that way, it indicates that one might be too emotionally invested in the claim.
And ask yourself, does it really seem likely to you that not a single person on LW would ever have made a point to you that turned out to be correct?
When they have policies like resorting to authoritarian, ad hominem arguments rather than substance ones …
And is that what you see here? Again, most people don’t seem to see that. Take an outside view; is it possible that you are simply too emotionally involved?
Let me pose a counter question: when you contradict several geniuses such as Popper, Deutsch, Feynman (e.g. about caring what other people think), and Rand, does that worry you? I don’t particularly think you should say “yes” (maybe mild worry causing only an interest in investigating a bit more, not conceding). The point is your style of argument can easily be used against you here. And against most people for most stuff.
I don’t have reason to think that Rand is a genius. But, let’s say I did, and let’s say the other three are geniuses. Should one, in that context, be worried that their opinions contradict my own? Let’s use a more extreme example: Jonathan Sarfati is an accomplished chemist, a highly ranked chess master, and a young earth creationist. Should I be worried by that? The answer I think is yes. But, at the same time, even if I were only vaguely aware of Sarfati’s existence, would I need to read everything by him to decide that he’s wrong? No. In this case, having read some of their material, and having read your arguments for why I should read it, it is falling more and more into the Sarfati category. It wouldn’t surprise me at all if there’s something I’m wrong about that Popper or Deutsch gets right, and if I read everything Popper had to say and everything Deutsch had to say and didn’t come away with any changed opinions, that should cause me to doubt my rationality. Ironically, BoI was already on my intended reading list before interacting with you. It has now dropped farther down on my priority list because of these conversations.
Can’t you see that that’s irrational? Karma scores are not important, arguments are.
I don’t think you understood why I made that remark. I have an interest in cooperating with other humans. We have a karma system, among other reasons, to help decide whether or not people want to read more of something. Since some people have already expressed a disinterest in this discussion, I’m explicitly inviting them to show that disinterest so I know not to waste their time. If it helps, imagine this conversation occurring on a My Little Pony forum with a karma system. Do you see why someone would want to downvote a Popper v. Bayes discussion in that context? Do you see how listening to such preferences in that context could be rational?
My apologies. I would have thought it was clear that when one recommends a book one is doing so because it has material that is relevant to what one is talking about. Yes, he discusses various ideas due to Popper.
That’s ambiguous. Does he discuss Popper directly? Or, say, some of Popper’s students? Does the book have, say, more than 20 quotes of Popper, with commentary? More than 10? Any detailed, serious discussion of Popper? If it does, how do you know that, having not read it? Is the discussion of Popper tainted by the common myths?
Some Less Wrongers recommend stuff that doesn’t have the specific material they claim it has, e.g. the Mathematical Statistics book. IME it’s quite common that recommendations don’t contain what they’re supposed to, even when people directly claim it’s there.
Take an outside view; is it possible that you are simply too emotionally involved?
You seem to think I believe this stuff because I’m a beginner; that’s an insulting non-argument. I already know how to take outside views, I am not emotionally involved. I already know how to deal with such issues, and have done so.
None of your post here really has any substance. You’ve also brought up psychological and meta topics. Maybe I shouldn’t have fed you any replies at all about them. But that’s why I’m not answering the rest.
That’s ambiguous. Does he discuss Popper directly? Or, say, some of Popper’s students? Does the book have, say, more than 20 quotes of Popper, with commentary?
Please don’t move goalposts and pretend they were there all along. You asked whether he mentioned Popper, to which I answered yes. I don’t know the answers to your above questions, and they really don’t have anything to do with the central points, the claims that I “pressured” you to read Earman based on his being a professional.
Take an outside view; is it possible that you are simply too emotionally involved?
You seem to think I believe this stuff because I’m a beginner; that’s an insulting non-argument. I already know how to take outside views, I am not emotionally involved. I already know how to deal with such issues, and have done so.
I’m also confused about where you are getting any reason to think that I think that you believe what you do because you are a “beginner”- it doesn’t help matters that I’m not sure what you mean by that term in this context.
But if you are very sure that you don’t have any emotional aspect coming into play then I will try to refrain from suggesting otherwise until we get more concrete data such as per Jim’s suggestion.
I have never been interested in people who merely mention Popper, but in people who address his ideas (well). I don’t know why you think I’m moving goalposts.
There’s a difference between “Outside View” and “outside view”. I certainly know what it means to look at things from another perspective which is what the non-caps one means (or at least that’s how I read it).
I am not “very sure” about anything; that’s not how it works; but I think pretty much everything I say here you can take as “very sure” in your worldview.
You did not even state whether the Earman book mentions Popper. Because, I took it, you don’t know. so … wtf? that’s the best you can do?
My apologies. I would have thought it was clear that when one recommends a book one is doing so because it has material that is relevant to what one is talking about. Yes, he discusses various ideas due to Popper.
That’s ambiguous. Does he discuss Popper directly? Or, say, some of Popper’s students? Does the book have, say, more than 20 quotes of Popper, with commentary? More than 10? Any detailed, serious discussion of Popper? If it does, how do you know that, having not read it? Is the discussion of Popper tainted by the common myths?
--
Moving the goalposts, also known as raising the bar, is an informal logically fallacious argument in which evidence presented in response to a specific claim is dismissed and some other (often greater) evidence is demanded. In other words, after an attempt has been made to score a goal, the goalposts are moved to exclude the attempt. This attempts to leave the impression that an argument had a fair hearing while actually reaching a preordained conclusion.
Stop being a jerk. He didn’t even state whether it mentions Popper, let alone whether it does more than mention. The first thing wasn’t a goal post. You’re being super pedantic but you’re also wrong. It’s not charming.
i checked out the Mathematical Statistics book today that one of them said had a proof. it didn’t have it. and there was that other guy repeatedly pressuring me to read some book he’d never read, based on his vague idea of the contents and the fact that the author was a professional.
Links? I ask not to challenge the veracity of your claims but because I am interested in avoiding the kind of erroneous argument you’ve observed.
The concept of “bounded rationality” is a concept in economics/psychology going back to Herbert Simon. It is not, as you seem to think, an ideology hostile to human dignity.
How does that prevent it from being a mistake? With consequences? With appeal to certain moral views?
Bounded rationality is not an ideology, it’s a definition. Calling it a mistake makes as much sense as calling the number “4” a mistake; it is not a type of thing which can be correct or mistaken.
Well, since you don’t offer a plausible hypothesis, I have no reason to discard my plausible one. But I will repeat the question, since you failed to answer it. Do you know why your karma jumped sufficiently for you to make that posting? That is, for example, do you know that someone (whether or not they wear socks) was systematically upvoting you?
How should I know?
I think I was systematically upvoted at least once, maybe a few times. I also think I was systematically downvoted at least once. I often saw my karma drop 10 or 20 points in a couple minutes (not due to a front page post), sometimes more. shrug. Wasn’t paying much attention.
You’re really going to assume as a default—without evidence—that I’m a lying cheater rather than show a little respect? Maybe some people liked my posts.
To the extent I’ve shown them to non-Bayesians I’ve received nothing but praise and agreement, and in particular the conjunction fallacy post was deeply appreciated by some people.
Upvoted (systematically?) for honesty. ETA: And the sock-puppet hypothesis is withdrawn.
I liked some of your posts. As for accusing you of being a “lying cheater”, I explicitly did not accuse you of lying. I suggested that you were being devious and disingenuous in the way you responded to the question.
Really???? By people who understand the conjunction fallacy the way it is generally understood here? Because, as I’m sure you are aware by now, the idea is not that people always attach a higher probability to (A & B) than they do to (B) alone. It is that they do it (reproducibly) in a particular kind of context C which makes (A | C) plausible and especially when (B | A, C) is more plausible than (B | C).
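To spell the point out numerically (the probabilities below are made up; any consistent assignment behaves the same way):

    p_A = 0.80              # P(A | C): context C makes A plausible
    p_B_given_A = 0.50      # P(B | A, C)
    p_B_given_notA = 0.05   # P(B | not-A, C), much lower

    # total probability: P(B|C) = P(B|A,C)P(A|C) + P(B|not-A,C)P(not-A|C)
    p_B = p_B_given_A * p_A + p_B_given_notA * (1 - p_A)
    p_A_and_B = p_A * p_B_given_A

    print(p_A_and_B, p_B)       # 0.40 vs 0.41
    assert p_A_and_B <= p_B     # holds for every consistent assignment

In exactly the contexts described, P(A & B | C) can come within a hair of P(B | C) without ever exceeding it, which is why judging the conjunction more probable is a genuine (and tempting) error.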
So what you were doing in that posting was calling people idiots for believing something they don’t believe—that the conjunction fallacy applies to all conjunctions without exception. You were exhibiting ignorance in an obnoxious fashion. So, of course you got downvoted. And now we have you and Scurfield believing that we here are just unwilling to listen to the Popperian truth. Are you really certain that you yourselves are trying hard enough to listen?
Which is it?
Both. I particularly liked curi’s post linking to the Deutsch lecture. But, in the reference in question, here, I was being cute and saying that half the speeches in the conversation (the ones originating from this side of the aisle) contained good and interesting arguments. This is not to say that you guys have never produced good arguments, though the recent stuff—particularly from curi—hasn’t been very good. But I was in agreement with you (about a week ago) when you complained that curi’s thoughtful comments weren’t garnering the upvotes they deserved. I always appreciate it when you guys provide links to the Critical Rationalism blog and/or writings by Popper, Miller, or Deutsch. You should do that more often. Don’t just provide a bare link, but (assertion, argument, plus link to further argument) is a winning combination.
I have some sympathy with you guys for three reasons:
I’m a big fan of Gunter Wachtershauser, and he was a friend and protege of Popper.
You guys have apparently taken over the task of keeping John Edser entertained. :)
I think you guys are pretty much right about “support”. In fact, I said something very similar recently in my blog.
I haven’t spoken to John Edser since Matt killed the CR list ><
He could have just given it away, but no, he wanted to kill it...
Well, no, they have a different way of thinking about it than you. They disagree with you and agree with me.
Your equivocations about what the papers do and don’t say do not, in my mind, make anything better. You’re first of all defanging the paper as a way to defend it—when you basically deny it made any substantive claims, you make it easy to defend against anything—and that’s not something, I think, that its authors would appreciate.
You’ve misunderstood the word “applies”. It applies in the sense of being relevant, not happening every time.
The idea of the conjunction fallacy is that there is a mistake people sometimes make, and it has something to do with conjunctions (in general, not a special category of them). It is supposed to apply to all conjunctions in the sense that whenever there is a conjunction it could happen.
What I think is: you can trick people into making various mistakes. The ones from these studies have nothing to do with conjunctions. The authors simply designed it so all the mistakes would look like they had something to do with conjunctions, but actually they had other causes such as miscommunication.
Can you see how, if you discovered a way to miscommunicate with people to cause them to make mistakes about many topics (for example), you could use it selectively, e.g. by making all the examples you publish have to do with conjunctions, and can you see how, in that case, I would be right that the conjunction fallacy is a myth?
I’m pretty sure you are wrong here, but I’m willing to look into it. Which paper, exactly, are you referring to?
Incidentally, your reference to my “equivocations about what the papers do and don’t say” suggests that I said one thing at one point, and a different thing at a different point. Did I really do that? Could you point out how/where?
I can see that if I discovered a way to cause people to make mistakes, I would want to publish about it. If the clearest and most direct demonstration (that the thinking I had caused actually was a mistake) were to exhibit the mistaken thinking in the form of a conjunction, then I might well choose to call my discovery the conjunction fallacy. I probably would completely fail to anticipate that some guy who had just had a very bad day (in failing to convince a Bayesian forum of his Popperian brilliance) would totally misinterpret my claims.
They said people have bounded rationality in the Nobel Prize paper. In the 1983 paper they were more ambiguous about their conclusions.
Besides their papers, there is the issue of what their readership thinks. We can learn about this from websites like Less Wrong and Wikipedia and what they say it means. I found they largely read stronger claims into the papers than were actually there. I don’t blame them: the papers hinted at the stronger claims on purpose, but avoided saying too much for fear of being refuted. But they made it clear enough what they thought and meant, and that’s how Less Wrong people understand the papers.
The research does not even claim to demonstrate the conjunction fallacy is common—e.g. they state in the 1983 paper that their results have no bearing (well, only a biased bearing, they say) on the prevalence of the mistakes. Yet people here took it as a common, important thing.
It’s a little funny though, because the researchers realize that in some senses their conclusions don’t actually follow from their research, so they have to be careful what they say.
By equivocate I meant making ambiguous statements, not changing your story. You probably don’t regard them as ambiguous. Different perspective.
Everything is ambiguous in regards to some issues, and which issues we regard as the important ones determines which statements we regard as ambiguous in any important way.
I think if your discovery has nothing to do with conjunctions, it’s bad to call it the conjunction fallacy, and then have people make websites saying it has to do with the difference in probability between A&B and A, and that it shows people estimate probability wrong.
Your comments about me having a bad day are silly and should be withdrawn. Among other things, I predicted in advance what would happen and wasn’t disappointed.
“Not disappointed”, huh? You are probably not surprised that people call you a troll, either. :)
As for your claims that people (particularly LessWrongers) who invoke the Conjunction Fallacy are making stronger claims than did the original authors—I’ll look into them. Could you provide two actual (URL) links of LessWrong people actually doing that to the extent that you suggested in your notorious posting?
http://lesswrong.com/lw/ji/conjunction_fallacy/
This moral is not what the researchers said; they were careful not to say much in the way of a conclusion, only hint at it by sticking it in the title and some general remarks. It is an interpretation added on by Less Wrong people (as the researchers intended it to be).
This moral contains equivocation: it says it “can” happen. The Less Wrong people obviously think it’s more than “can”: it’s pretty common and worth worrying about, not a one in a million.
The moral, if taken literally, is pretty vacuous. Removing detail or assumptions can also make an event seem more plausible, if you do it in particular ways. Changing ideas can change people’s judgment of them. Duh.
It’s not meant to be taken that literally. It’s meant to say: this is a serious problem, it’s meaningful, it really has something to do with adding conjunctions!
The research is compatible with this moral being false (if we interpret it to have any substance at all), and with it only being a one in a million event.
You may think one in a million is an exaggeration. But it’s not. The size of the set of possible questions the researchers selected from was … I have no idea, but surely far more than trillions. How many would have worked? The research does not and cannot say. The only things that could tell us that are philosophical arguments or perhaps further research.
The research says this stuff clearly enough:
http://www.econ.ucdavis.edu/faculty/nehring/teaching/econ106/readings/Extensional%20Versus%20Intuitive.pdf
In this paper, the researchers clearly state that their research does not tell us anything about the prevalence of the conjunction fallacy. But Less Wrongers have a different view of the matter.
They also state that it’s not arbitrary conjunctions which trigger the “conjunction” fallacy, but ones specifically designed to elicit it. But Less Wrong people are under the impression that people are in danger of committing the conjunction fallacy even when no one is designing situations to elicit it. That may or may not be true; the research certainly doesn’t demonstrate it is true.
Note: flawed results can tell you something about prevalence, but only if you estimate the amount of error introduced by the flaws. They did not do that (and it is very hard to do in this case). Error estimates of that type are difficult and would need to be subjected to peer review, not made up by readers of the paper.
Here’s various statements by Less Wrongers:
http://lesswrong.com/lw/56m/the_conjunction_fallacy_does_not_exist/
The research does not claim the students could not have misinterpreted. It suggests they wouldn’t have—the Less Wronger has gotten the idea the researchers wanted him to—but they won’t go so far as to actually say miscommunication is ruled out, because it’s not. Given they were designing their interactions with their subjects on purpose in a way to get people to make errors, miscommunication is actually a very plausible interpretation, even in cases where the not-very-detailed writeup of what they actually did fails to explicitly record any blatant miscommunication.
This also goes beyond what the research says.
Also note that he forgot to equivocate about how often this happens. He didn’t even put a “sometimes” or a “more often than never”. But the paper doesn’t support that.
He is apparently unaware that it differs from normal life in that in normal life people’s careers don’t depend on tricking you into making mistakes which they can call conjunction fallacies.
When this was pointed out to him, he did not rethink things but pressed forward:
But when you read about the fallacy in the Less Wrong articles about it, they do not state “only happens in the cases where people are trying to trick you”. If it only applies in those situations, well, say so when telling it to people so they know when it is and isn’t relevant. But this constraint on applicability, from the paper, is simply ignored most of the time to reach a conclusion the paper does not support.
Here’s someone from Less Wrong denying there was any trickery in one of the experiments, even though the paper says there was.
Note the heavy equivocation. He retreats from “always” which is a straw man, to “sometimes” which is heavily ambiguous about how often. He has in mind: often enough to be important; pretty often. But the paper does not say that.
BTW the paper is very misleading in that it says things like, in the words of a Less Wronger:
The paper is full of statements that sound like they have something to do with the prevalence of the conjunction fallacy. And then one sentence admitting all those numbers should be disregarded. If there is any way to rescue those numbers as legitimate at all, it was too hard for the researchers and they didn’t attempt it. (I don’t mean to criticize their skill here. I couldn’t do it either. Too hard.)
It is difficult for me to decide how to respond to this. You are obviously sincere—not trolling. The “conjunction fallacy” is objectionable to you—an ideological assault on human dignity which must be opposed. You see evidence of this malevolent influence everywhere. Yet I see it nowhere. We are interpreting plain language quite differently. Or rather, I would say that you are reading in subtexts that I don’t think are there.
To me, the conjunction fallacy is something like one of those optical illusions you commonly find in psychology books. “Which line is longer?” Our minds deceive us. No, a bit more than that. One can set up trick situations in which our minds predictably deceive us. Interesting. And definitely something that psychologists should look at. Maybe even something that we should watch for in ourselves, if we want to take our rationality to the next level. But not something which proves some kind of inferiority of human reason; something that justifies some political ideology.
I suggested this ‘deflationary’ take on the CF, and your initial response was something like “Oh, no. People here agree with me. The CF means much more than you suggest.” But then you quote Kahneman and Tversky:
Yes, you admit, K&T were deflationary, if you take their words at face value, but you persist that they intended their idea to be taken in an inflated sense. And you claim that it is taken in that sense here. So you quote LessWrong:
I emphasized the can because you call attention to it. Yes, you admit, LessWrong was deflationary too, if you take the words at face value. But again you insist that the words should not be taken at face value—that there is some kind of nefarious political subtext there. And as proof, you suggest that we look at what the people who wrote comments in response to your postings wrote.
All I can say is that here too, you are looking for subtexts, while admitting that the words themselves don’t really support the reading you are suggesting:
You are making a simple mistake in interpretation here. The vision of the conjunction fallacy that you attacked is not the vision of the conjunction fallacy that people here were defending. It is not all that uncommon for simple confusions like this one to generate inordinate amounts of sound and fury. But still, this one was a doozy.
Optical illusions are neat, but that’s just not what these studies mean to, e.g., their authors:
http://nobelprize.org/nobel_prizes/economics/laureates/2002/kahnemann-lecture.pdf
Bounded rationality is what they believe they were studying.
Our unbounded rationality is the topic of The Beginning of Infinity. Different worldview.
I don’t think the phrase “bounded rationality” means what you think it means. “Bounded rationality” is a philosophical term that means rationality as performed by agents that can only think for a finite amount of time, as opposed to unbounded rationality which is what an agent with truly infinite computational resources would perform instead. Since humans do not have infinite time to think, “our unbounded rationality” does not exist.
Where do they define it to have this technical meaning?
I’m not sure if your distinction makes sense (can you say what you mean by “rationality”?). One doesn’t need infinite time to be fully rational, in the BoI worldview. I think they conceive of rationality itself—and also which bounds are worth attention—differently. BTW, surely infinite time has no relevance to their studies, in which people spent 20 minutes or whatever. Or if you’re comparing with agents that do infinite thinking in 20 minutes, well, that’s rather impossible.
They don’t, because it’s a pre-existing standard term. Its Wikipedia article should point you in the right direction. I’m not going to write you an introduction to the topic when you haven’t made an effort to understand one of the introductions that’s already out there.
Attacking things that you don’t understand is obnoxious. If you haven’t even looked up what the words mean, you have no business arguing that the professionals are wrong. You need to take some time off, learn to recognize when you are confused, and start over with some humility this time.
But wikipedia says it means exactly what I thought it meant:
Let me quote the important part again:
So everything I said previously stands.
BoI says there are no limits on human minds, other than those imposed by the laws of physics on all minds, and also the improvable-without-limit issue of ignorance.
My surprise was due to you describing the third clause alone. I didn’t think that was the whole meaning.
No, you’re still confused. Unbounded rationality is the theoretical study of minds without any resource limits, not even those imposed by the laws of physics. It’s a purely theoretical construct, since the laws of physics apply to everyone and everything. This conversation started with a quote where K&T used the phrase “bounded rationality” to clarify that they were talking about humans, not about this theoretical construct (which none of us really care about, except sometimes as an explanatory device).
Our “unbounded rationality” is not the type you are talking about. It’s about the possibility of humans making unlimited progress. We conceive of rationality differently than you do. We disagree with your dichotomy, approach to research, assumptions being made, etc...
So, again, there is a difference in world view here, and the “bounded rationality” quote is a good example of the anti-BoI worldview found in the papers.
The phrase “unbounded rationality” was first introduced into this argument by you, quoting K&T. The fact that Beginning of Infinity uses that phrase to mean something else, while talking about a different topic, is completely irrelevant.
please don’t misquote.
I’m just going to quickly say that I endorse what jimrandomh has written here. The concept of “bounded rationality” is a concept in economics/psychology going back to Herbert Simon. It is not, as you seem to think, an ideology hostile to human dignity.
I’m outta here! The depth and incorrigibility of your ignorance rivals even that of our mutual acquaintance Edser. Any idiot can make a fool of himself, but it takes a special kind of persistence to just keep digging as you are doing.
If the source of your delusion is Deutsch—well, it serves you right for assuming that any physicist has enough modesty to learn something about other fields before he tries to pontificate in them.
I find nothing in Deutsch that supports curi’s confusion.
Here is a passage from Herbert A. Simon: the bounds of reason in modern America:
Do you think this is accurate? If so, can you see how a Popperian would think it is wrong?
They don’t know anything. They just make up stuff about people they haven’t read. For half the stuff they’ve tried to cite, we’ve read it more than they have. Then they downvote instead of answering.
i checked out the Mathematical Statistics book today that one of them said had a proof. it didn’t have it. and there was that other guy repeatedly pressuring me to read some book he’d never read, based on his vague idea of the contents and the fact that the author was a professional.
they are in love with citing authority and judging ideas by sources (which they think make the value of the ideas probabilistically predictable with an error margin). not so much in love with the notion that if they can’t answer all the criticisms of their position they’re wrong. not so much in love with the notion that popper published criticisms no bayesian ever answered. not so much in love with the idea that, no, saying why your position is awesome doesn’t automatically mean everything else ever is worse and can safely be ignored.
While making false accusations like this does get you attention, it is also unethical behavior. We have done our very best to help you understand what we’re talking about, but you have a pattern where every time you don’t understand something, you skip the step where you should ask for clarification, you make up an incorrect interpretation, and then you start attacking that interpretation.
Almost every sentence you have above is wrong. Against what may be my better judgment I’m going to comment here because a) I get annoyed when my own positions are inaccurately portrayed and b) I hope that showing that you are wrong about one particular issue might convince you that you are interpreting things through a highly emotional lens that is causing you to misread and misremember what people have to say.
I presume you are referring to my remarks.
Let’s go back to the context.
You wrote:
and then wrote:
To which I replied
In another part of that conversation I wrote:
You then wrote:
Note that this comment of yours is the first example where the word “professional” shows up.
I observed
You replied:
My final comment on that issue was then:
And asking about your comment about professionals.
You never replied to that question. I’ve never mentioned Earman in the roughly 20 other comments to you after that exchange. This doesn’t seem like “repeatedly pressuring me to read some book he’d never read”. Note also that I never once discussed Earman being a “professional” until you brought up the term.
You appear to be reading conversations through an adversarial lens in which people with whom you disagree must be not just wrong but evil, stupid, or slavishly adhering to authority. This is a common failure mode of conversation. But we’re not evil mutants. Humans like to demonize the ones they disagree with, and it isn’t productive. It is a sign that one is unlikely to be listening to the actual content. At that point, one needs to either radically change approach or just spend some time doing something else and wait for the emotions to cool down.
Now, you aren’t the only person here with that issue; some of the regulars here have reacted in the same way, but that’s to a large extent a reaction against what you’ve been doing. So, please leave for some time; take a break. Think carefully about what you intend to get out of Less Wrong and whether or not anyone here has any valid points. And ask yourself, does it really seem likely to you that not a single person on LW would ever have made a point to you that turned out to be correct? Is it really that likely that you are right about every single detail?
If it helps, imagine we weren’t talking about you but some other person. That person wanders over to a forum they’ve never been to, and has extensive discussions about technical issues where that person disagrees with the forum regulars on almost everything, including peripherally related issues. How likely would you be to say that that person had everything right, even if you took for granted that they were correct about the overarching primary issues? So, if the person never agrees that they were wrong about even the tiniest of issues, do you think it is more likely that the person actually got everything right or that they just aren’t listening?
(Now, everyone who thinks that this discussion is hurting our signal to noise ratio please go and downvote this comment. Hurting my karma score is probably the most efficient method of giving me some sort of feedback to prevent further engagement.)
You did not even state whether the Earman book mentions Popper. Because, I took it, you don’t know. so … wtf? that’s the best you can do?
note that, ironically, your comment here complaining that you didn’t repeatedly pressure me is itself pressure, and not for the first time...
When they have policies like resorting to authoritarian, ad hominem arguments rather than substance ones … and attacking merits such as confidence and lack of appeasement … then yes. You’re just asking me to concede because one person can’t be right so much against a crowd. It’s so deeply anti-Ayn-Rand.
Let me pose a counter question: when you contradict several geniuses such as Popper, Deutsch, Feynman (e.g. about caring what other people think), and Rand, does that worry you? I don’t particularly think you should say “yes” (maybe mild worry causing only an interest in investigating a bit more, not conceding). The point is your style of argument can easily be used against you here. And against most people for most stuff.
BTW did you know that improvements on existing ideas always start out as small minority opinions? Someone thinks of them first. And they don’t spread instantly.
Why shouldn’t I be right most of the time? I learned from the best. And changed my mind thousands of times in online debates to improve even more. You guys learned the wrong stuff. Why should the number of you be important? Lots of people learning the same material won’t make it correct.
Can’t you see that that’s irrational? Karma scores are not important, arguments are.
Let’s do a scientific test. Find someone impartial to take a look at these threads, and report how they think you did. To avoid biasing the results, don’t tell them which side you were on, and don’t use the same name, just say “There was recently a big argument on Less Wrong (link), what do you think of it?” and have them email back the results.
Will you do that?
I can do that, if you can agree on a comment to link to. It’ll double-blind it in a sense—I do have a vague opinion on the matter but haven’t been following the conversation at all over the last couple days, so it’d be hard for me to pick a friend who’d be inclined to, for example, favor a particular kind of argument that’s been used.
The judge should be someone who’s never participated on Less Wrong before, so there’s no chance that they’ve already picked up ideas from here.
That is true of one of the two people I have in mind. The other has participated on less than five occasions. I have definitely shown articles from here to the latter and possibly to the former, but neither of them have done any extended reading here that I’m aware of (and I expect that I would be if they had). Also, neither of them are the type of person who’s inclined to participate in LW-type places, and the former in particular is inclined toward doing critical analysis of even concepts that she likes. (Just not in a very LW-ish way.)
I’d suggest using the primary thread about the conjunction fallacy and then the subthread on the same topic with Brian. Together that makes a long conversation with a fair number of participants.
How am I supposed to find someone impartial? Most people are justificationists. Most Popperians I know will recognize my writing style (plus, having shown them plenty already, I already know they agree with me, and I don’t attribute that to bias about the source of each post). The ones who wouldn’t recognize my writing style are just the ones who won’t want to read all this and discuss—it’s by choosing not to read much online discussion that they stayed that way.
EDIT: what does it even mean for someone to be neutral? neither justificationist nor popperian? are there any such people? what do they believe? probably ridiculous magical thinking! or hardcore skepticism. or something...
How about someone who hasn’t read much online discussion, and hasn’t thought about the issue enough to take a side? There are lots of people like that. It doesn’t have to be someone you know personally; a friend-of-a-friend or a colleague you don’t interact with much will do fine (although you might have to bribe them to spend the time reading).
A person like that won’t follow details of online discussion well.
A person like that won’t be very philosophical.
Sounds heavily biased against me. Understanding advanced ideas takes skill.
And, again, a person like that can pretty much be assumed to be a justificationist.
Surely you can find someone who seems like they should be impartial and who’s smart enough to follow an online discussion. They don’t have to understand everything perfectly, they’re only judging what’s been written. Maybe a distant relative who doesn’t know your online alias?
If you pick a random person on the street, and you try to explain Popper to them for 20 minutes—and they are interested enough to let you have those 20 minutes—the chances they will understand it are small.
This doesn’t mean Popper was wrong. But it is one of many reasons your test won’t work well.
Right, it takes a while to understand these things. But they won’t understand Bayes either; they’ll still be able to give some impression of which side gave better arguments, and that sort of thing.
How about a philosopher who works on a topic totally unrelated to this one?
Have you ever heard what Popper thinks of the academic philosophy community?
No, an academic philosopher won’t do.
This entire exercise is silly. What will it prove? Nothing. What good is a sample size of 1 for this? Not much. Will you drop Bayesianism if the person sides against you? Of course not.
How about a physicist, then? They’re generally smart enough to figure out new topics quickly, but they don’t usually have cause to think about epistemology in the abstract.
MWI is a minority view in the physics community because bad philosophy is prevalent there.
The point of the exercise is not just to judge epistemology, it’s to judge whether you’ve gotten too emotional to think clearly. My hypothesis is that you have, and that this will be immediately obvious to any impartial observer. In fact, it should be obvious even to an observer who agrees with Popper.
How about asking David Deutsch what he thinks? If he does reply, I expect it will be an interesting reply indeed.
I know what he thinks: he agrees with me.
But I’ve already shown lots of this discussion to a bunch of observers outside Less Wrong (I run the best private Popperian email list). Feedback has been literally 100% positive, including some sincere thank yous for helping them see the Conjunction Fallacy mistake (another reaction I got was basically: “I saw the conjunction fallacy was crap within 20 seconds. Reading your stuff now, it’s similar to what I had in mind.”) I got a variety of other positive reactions about other stuff. They’re all at least partly Popperian. I know that many of them are not shy about disagreeing with me or criticizing me—they do so often enough—but when it comes to this Bayesian stuff none of them has any criticism of my stance.
Not a single one of them considers it plausible my position on this stuff is a matter of emotions.
I’m skeptical of the claim about Deutsch. Why not actually test it? A Popperian should try to poke holes in his ideas. Note also that you seem to be missing Jim’s point: Jim is making an observation about the level of emotionalism in your argument, not the correctness. These are distinct issues.
As another suggestion, we could take a bunch of people who have epistemologies that are radically different from either Popper or Bayes. From my friends who aren’t involved in these issues, I could easily get an Orthodox Jew, a religious Muslim, and a conspiracy theorist. That also handles your concern about a sample size of 1. Other options include other Orthodox Jews including one who supports secular monarchies as the primary form of government, and a math undergrad who is a philosophical skeptic. Or if you want to have some real fun, we could get some regulars from the Flat Earth Forum. A large number of them self-identify as “zetetic” which in this context means something like only accepting evidence that is from one’s own sensory observation.
Jim is not suggesting that your position is “a matter of emotions”; he’s suggesting that you are being emotional. Note that these aren’t the same thing. For example, one could hypothetically have a conversation between a biologist and a creationist about evolution where the biologist gets quite angry while the creationist remains calm. In that case, the biologist believes what they do due to evidence, but they could still be unproductively emotional about how they present that evidence.
From sibling:
Shall I take this as a no?
You’ve already been told that the point of Jim’s remark was about the emotionalism in your remarks, not about the correctness of your arguments. In that context, your sibling comment is utterly irrelevant. The fact that you still haven’t gotten that point is of course further evidence of that problem. We are well past the point where there’s any substantial likelihood of productive conversation. Please go away. If you feel a need to come back, do so later, a few months from now, when you are confident you can do so without insulting people or deliberately trolling, and are willing to actually listen to what people here have to say.
So you say things that are false, and then you think the appropriate follow up is to rant about me?
And your skepticism stems from a combination of
A) your ignorance, and B) your negative assumptions to fill in the gaps in your knowledge. You don’t think I’m a serious person who has reasons for what he says, and you became skeptical before finding out, just by assumption.
And you’re wrong.
Maybe by pointing it out in this case, you will be able to learn something. Do you think so?
Here’s two mild hints:
1) I interviewed David Deutsch yesterday
2) I’m in the acknowledgements of BoI
Here you are lecturing me about how a Popperian should try to poke holes in his ideas, as if I hadn’t. I had. (The hints do not prove I had. They’re just hints.)
I still don’t understand what you think the opinions of several random people will show (and now you are suggesting people that A) we both think are wrong (so who cares what they think?) and B) are still justificationists). It seems to me the opinions of, say, the Flat Earth Forum should be regarded as pretty much random (well, maybe not random if you know a lot about their psychology, which I don’t want to study) and not a fair judge of the quality of arguments!
So they are extremely anti-Popperian...
You don’t say...
Also I want to thank you for continuing to post. You are easily the most effective troll that LW has ever had, and it is interesting and informative to study your techniques.
This isn’t obvious to me, but this cuts both ways.
As to your other claims, do you think any of these matters will be a serious issue if we used the conjunction fallacy thread as our test case?
Yes. Someone with a worldview similar to mine will be more inclined to agree with me in all the threads. Someone with one I think is rather bad … won’t.
This confuses me. Why should whether someone is a justificationist impact how they read the thread about the conjunction fallacy? Note by the way that you seem to be misinterpreting Jim’s point. Jim isn’t saying that one should have an uninvolved individual decide who is correct. Jim’s suggestion (which granted may not have been explained well in his earlier comment) is to focus on seeing if anyone is behaving too emotionally rather than calmly discussing the issues. That shouldn’t be substantially impacted by philosophical issues.
Because justificationism is relevant to that, and most everything else.
Seriously? You don’t even know that there are philosophical issues about the nature of emotion, the proper ways to evaluate its presence, and so on?
I’m neither Justificationist nor Popperian. I’m Bayesian, which is neither of these things.
In our terminology, which you have not understood, Bayesians like yourself are justificationists.
No offense, but you haven’t demonstrated that you understand Bayesian epistemology well enough to classify it. I read the Wikipedia page on Theory of Justification, and it pretty much all looks wrong to me. How sure are you that your Justificationist/Popperian taxonomy is complete?
You can’t understand Popper by reading wikipedia. Try Realism and the Aim of Science starting on p18 for a ways.
BTW induction is a type of justificationism. Bayesians advocate induction. The end.
I have no confidence the word “induction” means the same thing from one sentence to the next. If you could directly say precisely what is wrong with Bayesian updating, with concrete examples of Bayesian updating in action and why they fail, that might be more persuasive, or at least more useful for diagnosis of the problem wherever it may lie.
For example, you update based on selective observation (all observation is selective).
And you update selectively over the theories that happen to get attention, ignoring the infinities (you use an arbitrary prior, but not in the sense of actually applying it to infinitely many theories; you just estimate what you imagine might happen if you did).
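To make the pattern being criticized concrete, here is a sketch of updating over a hand-picked finite list of theories (both the prior and the likelihoods are stipulated for illustration):

    candidates = {"T1": 0.6, "T2": 0.4}    # an arbitrary prior over two guesses
    likelihood = {"T1": 0.3, "T2": 0.7}    # P(data | T), also stipulated

    unnorm = {t: candidates[t] * likelihood[t] for t in candidates}
    total = sum(unnorm.values())
    posterior = {t: u / total for t, u in unnorm.items()}
    print(posterior)

The posterior sums to 1 over {T1, T2} by construction: every theory outside the enumerated list has in effect been assigned probability zero without ever being considered, which is the selectivity being objected to.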
Why don’t you—or someone else—read Popper? You can’t expect to fully understand it from discussion here. Why does this community have no one who knows what they are talking about on this subject, who is familiar with the concepts instead of just having to ask me?
Since all observation is selective, then the selectivity of my observations can hardly be a flaw. Therefore the flaw is that I update. But I just asked you to explain why updating is flawed. Your explanation is circular.
I don’t know what that means. I’m not even sure that
is idiomatic English. And
needs to be clarified, I’m sure you must realize. Your parenthetical does not clear it up. You write:
Using it but not actually using it. Can you see why that is hard to interpret? And then:
My flaw is that I don’t do something infinitely many times? Can you see why this begs for clarification? And:
I’ve lost track: is my core flaw that I am estimating something? Why?
I’ve read Popper. He makes a lot more sense to me than you do. He is not cryptic. He is crystal clear. You are opaque.
You seem to be saying that for me to understand you, first I have to understand Popper from his own writing, and that nobody here seems to have done that. I’m not sure why you’re posting here if that’s what you believe.
If you’ve read Popper (well) then you should understand his solution to the problem of induction. How about you tell us what it is, rather than complaining my summary of what you already read needs clarifying.
You are changing the subject. I never asked you to summarize Popper’s solution to the problem of induction. If that’s what you were summarizing just now then you ignored my request. I asked you to critique Bayesian updating with concrete examples.
Now, if Popper wrote a critique of Bayesian updating, I want to read it. So tell me the title. But he must specifically talk about Bayesian updating. Or, if that is not available, anything written by Popper which mentions the word “Bayes” or “Bayesian” at all. Or that discusses Bayes’ equation.
If you check out the indexes of Popper’s books it’s easy to find the word Bayes. Try it some time...
There was nothing in Open Society and its Enemies, Objective Knowledge, Popper Selections, or Conjectures and Refutations, nor in any of the books on Popper that I located. The one exception I found was in The Logic of Scientific Discovery, and the mentions of Bayes are purely technical discussion of the theorem (with which Popper, reasonably enough, has no problem), with no criticism of Bayesians. In summary, I found no critiques by Popper of Bayesians after going through the indices of Popper’s main works as you recommended. I did find mention of Bayes, but it was a mention in which Popper did not criticize Bayes or Bayesians.
Nor were Bayes or Bayesians mentioned anywhere in David Deutsch’s book The Beginning of Infinity.
So I return to my earlier request:
I have repeatedly requested this, and in reply been given either condescension, or a fantastically obscure and seemingly self-contradictory response, or a major misinterpretation of my request, or a recommendation that I look to Popper, which recommendation I have followed with no results.
I think your problem is you don’t understand what the issues at stake are, so you don’t know what you’re trying to find.
You said:
But then when you found a well known book by Popper which does have those words, and which does discuss Bayes’ equation, you were not satisfied. You asked for something which wasn’t actually what you wanted. That is not my fault.
You also said:
But you don’t seem to understand that Popper’s solution to the problem of induction is the same topic. You don’t know what you’re looking for. It wasn’t a change of topic. (Hence I thought we should discuss this. But you refused. I’m not sure how you expect to make progress when you refuse to discuss the topic the other guy thinks is crucial to continuing.)
Bayesian updating, as a method of learning in general, is induction. It’s trying to derive knowledge from data. Popper’s criticisms of induction, in general, apply. And his solution solves the underlying problem rendering Bayesian updating unnecessary even if it wasn’t wrong. (Of course, as usual, it’s right when applied narrowly to certain mathematical problems. It’s wrong when extended out of that context to be used for other purposes, e.g. to try to solve the problem of induction.)
So, question: what do you think you’re looking for? There is tons of stuff about probability in various Popper books including chapter 8 of LScD titled “probability”. There is tons of explanation about the problem of induction, and why support doesn’t work, in various Popper books. Bayesian updating is a method of positively supporting theories; Popper criticized all such methods and his criticisms apply. In what way is that not what you wanted? What do you want?
So, for example, I opened to a random page in that chapter and found, on p. 183 at the start of section 66, this first sentence:
This is a criticism of the Bayesian approach as unscientific. It’s not specifically about the Bayesian approach, in that it also applies to various non-Bayesian probabilistic approaches (whatever those may be. Can you think of any other approaches besides Bayesian epistemology that you think this is targeted at? How would you do it without Bayes’ theorem?). In any case it is a criticism, and it applies straightforwardly to Bayesian epistemology. It’s not the only criticism.
The point of this criticism is that to even begin the Bayesian updating process you need probability estimates which are created unscientifically by making them up (no, making up a “prior” which assigns all of them at once, in a way vague enough that you can’t even use it in real life without “estimating” arbitrarily, doesn’t mean you haven’t just made them up).
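To illustrate that dependence with hypothetical numbers: the same data, pushed through Bayes’ theorem, yields very different posteriors depending on which made-up prior you began with.

```python
# Same data (likelihood 0.8 under H, 0.2 under not-H), two made-up priors:
# the posterior depends on the prior, which had to come from somewhere.
for prior in (0.5, 0.01):
    posterior = 0.8 * prior / (0.8 * prior + 0.2 * (1 - prior))
    print(f"prior {prior} -> posterior {posterior:.3f}")
# prior 0.5  -> posterior 0.800
# prior 0.01 -> posterior 0.039
```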
EDIT: read the first 2 footnotes in section 81 of LScD, plus section 81 itself. And note that the indexer did not miss this but included it...
Only in a sense so broad that Popper can rightly be accused of the very same thing. Bayesians use experience to decide between competing hypotheses. That is the sort of “derive” that Bayesians do. But if that is “deriving”, then Popper “derives”. David Deutsch, who you know, says the following:
I direct you specifically to this sentence:
This is what Bayesians do. Experience is what Bayesians use to choose between theories which have already been guessed. They do this using Bayes’ Theorem. But look back at the first sentence of the passage:
Clearly, then, Deutsch does not consider using the data to choose between theories to be “deriving”. But Bayesians use the data to choose between theories. Therefore, as Deutsch himself defines it, Bayesians are not “deriving”.
Yes, the Bayesians make them up, but notice that Bayesians therefore are not trying to derive them from data—which was your initial criticism above. Moreover, this is not importantly different from a Popperian scientist making up conjectures to test. The Popperian scientist comes up with some conjectures, and then, as Deutsch says, he uses experimental data to “choose between theories that have already been guessed”. How exactly does he do that? Typical data does not decisively falsify a hypothesis. There is, just for starters, the possibility of experimental error. So how does one really employ data to choose between competing hypotheses? Bayesians have an answer: they choose on the basis of how well the data fits each hypothesis, which they interpret to mean how probable the data is given the hypothesis. Whether he admits it or not, the Popperian scientist can’t help but do something fundamentally the same. He has no choice but to deal with probabilities, because probabilities are all he has.
The Popperian scientist, then, chooses between theories that he has guessed on the basis of the data. Since the data, being uncertain, does not decisively refute either theory but is merely more, or less, probable given the theory, then the Popperian scientist has no choice but to deal with probabilities. If the Popperian scientist chooses the theory that the data fits best, then he is in effect acting as a Bayesian who has assigned to his competing theories the same prior.
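A sketch of that last claim, with hypothetical likelihoods: given equal priors, the Bayesian posterior ratio between two theories reduces to the likelihood ratio, so “pick the theory the data fits best” and “update from equal priors” give the same verdict.

```python
# Hypothetical likelihoods for two competing theories on the same data.
p_data_given_a = 0.5    # P(data | theory A)
p_data_given_b = 0.125  # P(data | theory B)

prior = 0.5  # the same prior for both theories, as described above
posterior_ratio = (p_data_given_a * prior) / (p_data_given_b * prior)
likelihood_ratio = p_data_given_a / p_data_given_b

assert posterior_ratio == likelihood_ratio  # 4.0 either way: same choice
```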
Where do you get the theories you consider?
Do you understand DD’s point, made in both his books, that the majority of the time theories are rejected without testing? Testing is only useful when dealing with good explanations.
Do you understand that data alone cannot choose between the infinitely many theories consistent with it, which reach a wide variety of contradictory and opposite conclusions? So Bayesian Updating based on data does not solve the problem of choosing between theories. What does?
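(A minimal illustration of the point in that question, with made-up data: any finite data set is fit exactly by infinitely many theories that disagree everywhere else.)

```python
# Three data points, and a family of curves indexed by c: every member
# passes exactly through all three points, yet they disagree elsewhere.
data = [(1, 1), (2, 2), (3, 3)]

def f(x, c):
    return x + c * (x - 1) * (x - 2) * (x - 3)

for c in (0, 1, -5, 100):
    assert all(f(x, c) == y for (x, y) in data)  # every c fits perfectly
    print(c, f(4, c))  # but at x = 4 they predict 4, 10, -26, 604, ...
```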
Bayesians are also seriously concerned with the fact that an infinity of theories are consistent with the evidence. DD evidently doesn’t think so, given his comments on Occam’s Razor, which he appears to be familiar with only in an old, crude version, but I think that there is a lot in common between his “good explanation” criterion and parsimony considerations.
We aren’t “seriously concerned” because we have solved the problem, and it’s not particularly relevant to our approach.
We just bring it up as a criticism of epistemologies that fail to solve the problem… Because they have failed, they should be rejected.
You haven’t provided details about your fixed Occam’s razor, a specific criticism of any specific thing DD said, a solution to the problem of induction (all epistemologies need one of some sort), or a solution to the infinity of theories problem.
Probability estimates are essentially the bookkeeping which Bayesians use to keep track of which things they’ve falsified, and which things they’ve partially falsified. At the time Popper wrote that, scientists had not yet figured out the rules for using probability correctly; the stuff he was criticizing really was wrong, but it wasn’t the same stuff people use today.
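On one reading of “partially falsified” (illustrative numbers only): data that is merely improbable under a hypothesis lowers its probability without driving it to zero, unlike a strict logical contradiction.

```python
# Data unlikely under H (0.1) but fairly likely otherwise (0.6):
# H's probability drops from 0.5 to about 0.14 -- lowered, not refuted.
prior = 0.5
p_data_given_h, p_data_given_not_h = 0.1, 0.6
posterior = (p_data_given_h * prior
             / (p_data_given_h * prior + p_data_given_not_h * (1 - prior)))
print(round(posterior, 3))  # 0.143
```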
Is this true? Popper wrote LScD in 1934. Keynes and Ramsey wrote about using probability to handle uncertainty in the 1920s although I don’t think anyone paid attention to that work for a few years. I don’t know enough about their work in detail to comment on whether or not Popper is taking it into account although I certainly get the impression that he’s influenced by Keynes.
According to the wikipedia page, Cox’s theorem first appeared in R. T. Cox, “Probability, Frequency, and Reasonable Expectation,” Am. Jour. Phys., 14, 1–13, (1946). Prior to that, I don’t think probability had much in the way of philosophical foundations, although they may’ve gotten the technical side right. And correct use of probability for more complex things, like causal models, didn’t come until much later. (And Popper was dealing with the case of science-in-general, which requires those sorts of advanced tools.)
The English version of LScD came out in 1959. It wasn’t a straight translation; Popper worked on it. In my (somewhat vague) understanding he changed some stuff or at least added some footnotes (and appendices?).
Anyway Popper published plenty of stuff after 1946 including material from the LScD postscript that got split into several books, and also various books where he had the chance to say whatever he wanted. If he thought there was anything important to update he would have. And for example probability gets a lot of discussion in Popper’s replies to his critics, and Bayes’ theorem in particular comes up some; that’s from 1974.
So for example on page 1185 of the Schilpp volume 2, Popper says he never doubted Bayes’ theorem but that “it is not generally applicable to hypotheses which form an infinite set”.
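For what it’s worth, one standard technical point in the vicinity of that remark (a sketch, not necessarily Popper’s own argument): there is no uniform prior over countably infinitely many hypotheses, since equal shares shrink to zero as the number of hypotheses grows.

```python
# Equal priors over N hypotheses assign 1/N each; the share vanishes as N
# grows, and any fixed p > 0 assigned to all of them would sum to infinity
# rather than 1. So there is no uniform prior to start updating from.
for n in (10, 1_000, 1_000_000):
    print(n, 1 / n)
```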
How can something be partially falsified? It’s either consistent with the evidence or contradicted. This is a dichotomy. To allow partial falsification you have to judge in some other way which has a larger number of outcomes. What way?
You’re saying you started without them and came up with some in the middle. But how does that work? How do you get started without having any?
Changing the math cannot answer any of his non-mathematical criticisms. So his challenge remains.
Here’s one way.
(It’s subject to limitations that do not constrain the Bayesian approach, and as near as I can tell, is mathematically equivalent to a non-informative Bayesian approach when it is applicable, but the author’s justification for his procedure is wholly non-Bayesian.)
I think you mixed up Bayes’ Theorem and Bayesian Epistemology. The abstract begins:
They have a problem with a prior distribution, and wish to do without it. That’s what I think the paper is about. The abstract does not say “we don’t like bayes’ theorem and figured out a way to avoid it.” Did you have something else in mind? What?
I had in mind a way of putting probability distributions on unknown constants that avoids prior distributions and Bayes’ theorem. I thought that this would answer the question you posed when you wrote:
Karma has little bearing on the abstract truth of an argument, but it works pretty well as a means of gauging whether or not an argument is productive in the surrounding community’s eyes. It should be interpreted accordingly: a higher karma score doesn’t magically lead to a more perfect set of opinions, but paying some attention to karma is absolutely rational, whether as a sanity check, a method of discouraging an atmosphere of mutual condescension, or simply a means of making everyone’s time here marginally more pleasant. These goals are irrelevant to an argument’s first-order truth, but pretending they are insignificant to practical truth-seeking is… naive, at best.
Unfortunately, this also means that negative karma is an equally good gauge of a statement’s disruptiveness to the surrounding community, which can give downvotes some perverse consequences when applied to people who’re interested in being disruptive.
Would you mind responding to the actual substance of what JoshuaZ said, please? Particularly this paragraph:
Have done so before seeing your comment.
Is editing comments repeatedly to add stuff in the first few minutes against etiquette here?
My apologies. I would have thought it was clear that when one recommends a book, one is doing so because it has material relevant to what one is talking about. Yes, he discusses various ideas due to Popper.
I think there may be some definitional issues here. A handful of remarks suggesting one read a certain book would not be what most people would call pressure. Moreover, given the negative connotations of pressure, it worries me that you find that to be pressure. Conversations are not battles. And suggesting that one might want to read a book should not be pressure. If one feels that way, it indicates that one might be too emotionally invested in the claim.
And is that what you see here? Again, most people don’t seem to see that. Take an outside view: is it possible that you are simply too emotionally involved?
I don’t have reason to think that Rand is a genius. But let’s say I did, and let’s say the other three are geniuses. Should one, in that context, be worried that their opinions contradict my own? Let’s use a more extreme example: Jonathan Sarfati is an accomplished chemist, a highly ranked chess master, and a young earth creationist. Should I be worried by that? The answer, I think, is yes. But, at the same time, even if I were only vaguely aware of Sarfati’s existence, would I need to read everything by him to decide that he’s wrong? No. In this case, having read some of their material, and having read your arguments for why I should read it, it is falling more and more into the Sarfati category. It wouldn’t surprise me at all if there’s something I’m wrong about that Popper or Deutsch gets right, and if I read everything Popper had to say and everything Deutsch had to say and didn’t come away with any changed opinions, that should cause me to doubt my rationality. Ironically, BoI was already on my intended reading list before interacting with you. It has now dropped farther down on my priority list because of these conversations.
I don’t think you understood why I made that remark. I have an interest in cooperating with other humans. We have a karma system, among other reasons, to decide whether or not people want to read more of something. Since some people have already expressed a disinterest in this discussion, I’m explicitly inviting them to show that disinterest so I know not to waste their time. If it helps, imagine this conversation occurring on a My Little Pony forum with a karma system. Do you see why someone would want to downvote a Popper v. Bayes discussion in that context? Do you see how listening to such preferences in that context could be rational?
That’s ambiguous. Does he discuss Popper directly? Or, say, some of Popper’s students? Does the book have, say, more than 20 quotes of Popper, with commentary? More than 10? Any detailed, serious discussion of Popper? If it does, how do you know that, having not read it? Is the discussion of Popper tainted by the common myths?
Some Less Wrongers recommend stuff that doesn’t have the specific material they claim it does have, e.g. the Mathematical Statistics book. IME it’s quite common that recommendations don’t contain what they’re supposed to, even when people directly claim it’s there.
You seem to think I believe this stuff because I’m a beginner; that’s an insulting non-argument. I already know how to take outside views, I am not emotionally involved. I already know how to deal with such issues, and have done so.
None of your post here really has any substance; it’s all psychological and meta topics you’ve brought up. Maybe I shouldn’t have fed you any replies at all about them. But that’s why I’m not answering the rest.
Please don’t move goalposts and pretend they were there all along. You asked whether he mentioned Popper, to which I answered yes. I don’t know the answers to your questions above, and they really don’t have anything to do with the central point: the claim that I “pressured” you to read Earman based on his being a professional.
This confuses me.
Ok. This confuses me since earlier you had this exchange:
That discussion was about a week ago.
I’m also confused about where you are getting any reason to think that I think that you believe what you do because you are a “beginner”- it doesn’t help matters that I’m not sure what you mean by that term in this context.
But if you are very sure that no emotional aspect is coming into play, then I will try to refrain from suggesting otherwise until we get more concrete data, such as per Jim’s suggestion.
I have never been interested in people who mention Popper but people who address his ideas (well). I don’t know why you think I’m moving goalposts.
There’s a difference between “Outside View” and “outside view”. I certainly know what it means to look at things from another perspective, which is what the non-caps one means (or at least that’s how I read it).
I am not “very sure” about anything; that’s not how it works; but I think pretty much everything I say here you can take as “very sure” in your worldview.
Stop being a jerk. He didn’t even state whether it mentions Popper, let alone whether it does more than mention. The first thing wasn’t a goal post. You’re being super pedantic but you’re also wrong. It’s not charming.
Links? I ask not to challenge the veracity of your claims but because I am interested in avoiding the kind of erroneous argument you’ve observed.
http://lesswrong.com/lw/3ox/bayesianism_versus_critical_rationalism/3xos
Thanks.
How does that prevent it from being a mistake? With consequences? With appeal to certain moral views?
Bounded rationality is not an ideology, it’s a definition. Calling it a mistake makes as much sense as calling the number “4” a mistake; it is not a type of thing which can be correct or mistaken.