On Debates with Trolls
One of the best achievements of the LessWrong community is our high standard of discussion. More than anywhere else, people here actively try to interpret others charitably, argue to the point, avoid provocative or rude language, apologise for inadvertent offenses while not being overly prone to take offense themselves, and seek out their own biases and fallacies instead of hunting for them in others—and, most importantly, they try to find the truth instead of winning the argument. Maybe the greatest attribute of this approach is its infectiousness—I have observed several newcomers change their discussion habits for the better within a few weeks. However, not everybody is susceptible to the LW standards, and our attitude produces somewhat bizarre results when confronted with genuine trolls.
Recent posts about epistemology1 have all generated a large number of replies; in fact, the discussions were among the largest of the last few months. People have commented there (yes, I too am guilty) even when it was clear that the author of the posts doesn’t actually react to our arguments. After he was rude and admitted to doing it on purpose. After he committed several fallacies, generated an unreasonable amount of text of mediocre to low quality, and said that he is neither trying to convince anyone, nor willing to learn anything, nor aiming for agreement. In short, perhaps all symptoms of trolling were present, and still, people were repeatedly and patiently explaining what’s wrong with the author’s position. This reaction is, I must admit, sort of amazing—but on the other hand, it is hard to deny that the whole discussion was detrimental to the quality of LW content and was mostly a waste of time.
So, here is the question: why didn’t we apply the “don’t feed the troll” meme, as would probably have happened much sooner on most forums? I have several hypotheses about that.
1. We are unable to recognise trolls for lack of training. The first hypothesis is quite improbable, given that the troll in question was downvoted to oblivion2, but it is still possible. There are not many trolls on LW, and perhaps it is difficult to believe that someone is actively seeking that sort of confrontation. I have never understood the psychology of trolls—I instinctively avoid combative arguments and find it hard to imagine why somebody would intentionally try to create one. Perhaps a manifestation of the typical mind fallacy combines with compartmentalisation here: although we consciously know that there are trolls out there (this is hard to ignore), when meeting one our instinct tells us that the person cannot be so different from us.
2. We are unwilling to deal with trolls. The second theory is that although we know the person isn’t sincere, we cherish our standards of discussion so strongly that we still try to respond kindly and maintain a civil debate, or at least one side of one. If that is the case, it is not automatically a bad policy. Our rationality is limited and we always operate under the threat of self-serving biases. A quasi-deontological rule of kindness in debates, even if it is overkill, may be useful in the same way the presumption of innocence is useful in justice.
3. Sunk costs. Once the debate has started, our initial investments feel binding. It is unsettling to quit an argument and admit that it was completely useless and that we have lost an hour of our life for nothing. The sunk cost fallacy is well known and widespread; there is no reason to expect we are immune.
4. Best rebuttal contest. An interesting fact is that not only was the number of replies fairly large, but a lot of the replies were also strongly upvoted. This leads me to suspect that those replies weren’t in fact aimed at the opponent in the discussion, but rather intended to impress fellow LessWrongers. Once the motivation is not “I want to convince my interlocutor” but rather “I can craft an extraordinarily elegant counter-argument which nobody has made yet”, the attitude of the opponent doesn’t matter. The debate becomes an exercise in arguing—a potentially useful practice, maybe, but one with many associated dangers.
5. Trollish arguments are fun. I include this possibility mainly for completeness, since I don’t much believe that a significant number of LW users enjoy pointless arguments. But still, there is something fascinating about fallacious arguments. They are frustrating to follow, for sure, especially for a rationalist, but I cannot entirely discount the appeal of seeing biases and fallacies in real life, as opposed to merely reading about them in a Kahneman and Tversky paper.
Whichever of the above hypotheses is correct—or even if none of them is—I don’t doubt that on reflection most of us would prefer to have fewer irrational discussions. The karma system works, somewhat, but slowly, and cannot prevent trollish discussions from gaining momentum if people continue their present voting patterns. One of the problems lies in upvoting the rebuttals, which gives people additional motivation to participate. There seem to be two main strategies of voting: “I want to see more/less of this” and “this deserves more/less karma than it presently has”. The first strategy seems marginally better for dealing with trolls, but both strategies work better when applied in context. Even a brilliant reply should not be upvoted when placed in an irrational debate: first, it is mostly a waste of resources, and moreover, we certainly want to see fewer irrational debates. I don’t endorse downvoting good replies, if only because the troll could interpret it as support for his cause. But leaving them at zero seems to be the correct policy.
1 I am not going to link to them because I don’t want to generate more traffic there; one of those posts already appears in fourth place when you Google lesswrong epistemology. Nor will I write down the precise topic or the name of the author explicitly, which I hope decreases the probability of his appearing here.
2 In fact, the downvoting, though massive, came relatively late, with the person in question still able to post on the main site after several days.
To your question, my own question: why would we apply the “don’t feed the troll” meme? There is no reason to believe it works. There’s no academic study of the least trollish places on the web, nor are there reams of data with which we can analyze the best responses. Interestingly, the most trollish places on the internet tend to rely entirely on the “don’t feed the troll” meme, while the least visibly trollish places on the web all use distributed moderation systems along with an active defense by all the posters there to combat trolling.
Using 4chan’s method of ignoring trolls when we could be following Stackoverflow’s method of obliterating trolls is insane. It’s like asking for advice on passing a driving test from the girl who failed the test 12 times. Sure, she’s got more experience than anyone, but that’s only because she’s so bad at it. We have no reason to believe that “don’t feed the trolls” works. Going further, I’d say we have good evidence to believe it doesn’t work. We should first determine what deters and eliminates trolls best, instead of aspiring to an untested ideal.
“Don’t feed the trolls” is just a meme that may or may not work. Nothing more and nothing less.
I agree that ‘don’t feed the trolls’ is an untested folk theory that we shouldn’t necessarily assume works, but your argument seems to assume that the expected result of not feeding trolls is no trolls at all, which I’m fairly sure is a strawman. The point of not feeding the trolls is, as far as I can tell, to minimize how disruptive they are when they appear. It’s more analogous to “if you cut yourself, apply first aid and consider seeing a doctor” than “if you want to avoid scurvy, make sure to get some vitamin C in your diet”.
For what it’s worth, I know several people who love to troll and feedback definitely seems to encourage them.
This is also a relevant discussion.
I have no experience with 4chan. But from what you say it follows that regular users of 4chan actually don’t feed (i.e. communicate with) the trolls, and the trolls are still there. What does such a discussion look like? Do the trolls interact only with each other?
By which I partly mean that the meme obviously doesn’t work when people fail to apply it.
There are more direct ways to obliterate trolls, but they come with some costs.
4Chan is probably not the best choice of a site to emulate in this context; it’s clearly extremely successful at community-building, but its values concerning discussion could hardly be more different from ours. Describing it as “trolls trolling trolls trolling...” is almost a cliche, and one more or less borne out by my limited experience with the site.
On the other hand, I’m certain that the “don’t feed the troll” meme predates it considerably.
Goes back to Usenet at least, I’m pretty sure.
This (a Wikipedia section on troll sites) might be enlightening. Also see this comment. Being a troll is sort of a prerequisite to joining 4chan.
I didn’t understand the OP to be suggesting that we not use the existing distributed moderation system. I understood him to be comparing (“don’t feed”+distributed moderation) to (“continue to encourage”+distributed moderation).
And, sure, I’m all for gathering data about what works. That said, I’m reasonably confident that “don’t feed” works better than “continue to encourage,” all else being equal.
Why do you believe what you believe? I too have been told not to feed trolls in the past. I have heard that meme so often that I have formed an automatic pattern. Yet there continue to be trolls.
At this point, I am convinced that “don’t feed the trolls” is pure superstition. Imagine, if you will, a universe where the “don’t feed” doctrine actually worked. “Don’t feed” is already a common meme, so trolling in all parts of the internet would be obliterated in a handful of hours. The “don’t feed” meme itself is only invoked when one spots a troll, which means that people would go without hearing it and gradually forget it. The meme would die out of memory as it killed off its own reproduction vector, the trolls. We’d only have small, isolated flare-ups of trolling as people re-invented trolling and others remembered or re-discovered the cure. Is that a version of reality that the world actually resembles?
Now imagine a universe in which “Don’t feed the troll” is useless or worse, it emboldens the trolls and causes them to act out more. How would that look? One of these possible universes is much, much more similar to reality than the other.
I think that says a lot about the efficacy of “Don’t feed the trolls” as a policy.
The problem on most forums is that people say “don’t feed” while continuing to feed (not necessarily the same people engaged in both parts). I believe that the not-feeding policy works because (1) I don’t remember seeing a really obnoxious trollish exchange that wasn’t fed by non-trolls (itself a weak argument, since I don’t frequent troll habitats often), (2) it corresponds to my proto-model of troll motivation, which is attention-seeking (also a weak argument, as I don’t really understand trolls), and (3) trolls need something to react to, and the responses to their debate contributions provide more material and thus opportunities (this is a somewhat stronger argument; it seems almost self-evident).
The continued simultaneous existence of trolls and the no-feeding policy doesn’t say much and is well compatible with the policy being effective. Note that:
The claim isn’t that non-feeding is capable of eliminating all trolls in any situation, but only that it reduces the negative effects of trolling.
The non-feeding policy, although well known, isn’t universally applied, and trolls can easily thrive in places where the local debaters lack discipline and engage them.
To show that the policy doesn’t work you should compare two forums which have (approximately) the same topic, the same moderation rules and comparable audience and differ only in the troll-feeding attitude.
To further support your claim it would be helpful if you provided an example of a troll-disrupted discussion where actually nobody was feeding the troll.
Trolling that isn’t being replied to is simply called spam. Unfed trolls aren’t trolls by definition. That leaves open the question of whether a policy of not feeding trolls reduces the total volume of trolling plus spam aiming to troll. The answer seems likely to be yes, but might depend on how rigorously the policy is followed (I can imagine unsuccessful admonitions not to feed them encouraging trolls).
I believe it because in all the epic trollings I’ve seen, there have been locals who have engaged with the trolls throughout. I can’t remember the last time I saw a troll simply monologuing into the silent ether at length.
I also believe it because I’ve experienced people in real life who seem motivated by the desire to get a response from others, but who don’t seem differentially motivated by different kinds of responses.
But, all of that said, I certainly agree that none of that is definitive. I could easily be wrong. And it doesn’t matter too much for my own behavior… I mostly don’t talk to trolls because I don’t enjoy it.
If you are getting good results from talking to trolls, that’s a fine reason to keep talking to them.
I confirm that your interpretation is correct.
The poster in question at least wouldn’t be immediately obvious as a troll to outsiders reading only a small part of the discussion, so more proactive ways of dealing with posters like that would seem to carry a serious risk of making Less Wrong appear even more cultish (group-think).
For whatever measures we take, we should first consider how much ammunition against Less Wrong they offer, how likely they are to cost us genuinely valuable contributions by making us seem closed to dissent, and whether the expected magnitude of the intended effects is worth that.
That includes using the word “troll”. Beyond the various facts about their behavior and motivations there is no additional fact as to whether they are a troll. Using the word might easily lead people who only took a quick look to come away with the impression that we generally dismiss non-Bayesians as trolls, whereas talking about how to prevent endless discussions not aimed at resolving disagreements seems less dangerous in that respect.
Indeed. I’m not even sure the user in question was a troll by intention (even if they were one functionally) - being persistent and dense beyond reason is a highly plausible trait of participants in Internet philosophy discussions, after all, particularly when the participant has their very own site all about what they’re talking about.
That is, the label “troll” assigns intent in a way that is not actually all that relevant to the problem, which is the behaviour, when you can accurately describe the problematic behaviour.
“I prefer trolls to cranks, because trolls sometimes rest.”—Alexandre Dumas (fils) (loose translation)
This is a valid point. Approximately for these reasons I have limited my suggestions to altering our individual voting policies, which seems reasonably safe—just don’t upvote a comment if it appears in a nowhere-leading lengthy debate, even if the comment itself is well written and sound. I agree that more proactive methods carry risks which we may not have enough reasons to take now.
Very, very good point. I apply a zero-tolerance approach on my own blog, banning people who even faintly smell of troll because they’re not worth my mental energy. For a site like this (where the deletion of one post caused such an uproar) it would be counterproductive at best.
Re the word troll: Yes, I could have written that without using the word. The reason I did use it was the need for a label, since trolling is not an easily describable phenomenon (it implies not only endless discussions not aimed at resolving disagreements, but also a kind of rude behaviour, the use of fallacies, personal attacks on opponents, and excessive amounts of generated text). Since there was a standard label for that kind of behaviour, I used it. I couldn’t think of another standard label with approximately the same meaning but without the negative connotations, or I would have used it instead.
I don’t think the “trolls” started that way—in fact claiming to be trolling (or that they don’t care, etc) is a common defensive response, especially on the internet, after which appearances and cognitive dissonance do lead to some actual trolling.
They were just people who disagreed with the majority, with the unfortunate need to frame everything in terms of their favorite topic. Sure, their arguments were well below normal LW standards, but we have high standards. They were normal, pre-rationality-training arguers, and not talking to normal people doesn’t appeal to me at all.
Nor does it to me. I also agree that at the core there was a genuine disagreement which later evolved into the entrenched defense we have seen. However, what do you suggest we do in such situations?
Tough question, let me think.
It’s possible that pushing particular LW posts at people when we see a problematic argument would help, e.g. resolving arguments about definitions, being charitable to the other person, arguments as soldiers, that sort of thing.
Someone asking the other person to write a longer, more general discussion post outlining their views actually helped me a lot—previously I didn’t understand just what was being argued for.
Once people are already committed to not listening to us, feeding them or not feeding them doesn’t seem to make a difference (unless the person goes a step further and starts being intentionally disruptive, but then they’d just get banned, hopefully), but maybe we could “not feed the trolls” as a proactive strategy against having bad discussions. If someone seems to be getting into a bad pattern, we could stop all confrontational discussions and try to have only more cooperative, fact-finding discussions. Discuss the problem before proposing solutions (and link to that too). If they are confrontational, turn the other cheek, unless they violate very basic ground rules for discussion (e.g. not lying). This plan should be carried out in a way that does not fail if they are right, so discussion should keep going as long as you can muster genuine curiosity, and not much longer. For ending discussions, even if you feel tired of the other person, maybe try to ramp up the politeness—“thank you for talking about this”, “I’m sorry we couldn’t find more common ground”, that sort of thing—and then link them to an interesting place in the Sequences if they want to read more.
Not sure if that would actually work, but I’ll try to give it a shot next time.
Your first suggestion, “we are unable to recognize trolls”, is a valid one. I personally didn’t think of the downvoted user as a troll, just as someone who supported an incorrect epistemology and had a couple of confused ideas. It was only after said user failed to learn from several clear, well-written replies that it became clear that ver presence on LW was a bad thing.
It wasn’t immediately clear to me either. But the discussions continued well after said user failed to learn from those replies. And I wrote at least two responses to his posts when I already knew it was going nowhere, and it required non-negligible willpower to give up after that.
A really good troll would be able to maintain a high karma level even while wasting Users’ time on trivial matters—and this forum has been able to fend off even that kind, so I would not regard it as worrisome that an occasional less-obvious troll like User:[withheld] succeeds.
I don’t know whether to upvote this or downvote it. I recognize that I am confused. I shall reward your assistance in that matter by buying an extra set of paperclips tomorrow. Given my general habit of misplacing small objects in both my apartment and my office it is likely that these paperclips will stay intact and unused for some time.
You’re a good human!
A hundred thousand upvotes would not be enough!
How about 6: arguing for the lurkers’ benefit? That argument was voluminous and repetitious, and random sampling of the low-voted comments turned out to be useful only for verifying that their scores weren’t capricious; but a few of the upvoted comments were worth my time to find and read. If a few dozen other people felt the same, then those comments might also have been worth the authors’ time to write.
It is true that some people can benefit from such arguments. My claim is that a voluminous, repetitious, heated debate is a rather inefficient way to convey the benefits. If I want to provide the lurkers with a valuable counterargument against a plausible position, I would do better to write it somewhere else, in a separate post with a proper explanation, where the readers would not be distracted by the noise invariably present in trollish debates.
There is a second related point. It seems to me that if LW is intended as a place to have arguments, the arguments ought to satisfy some minimal quality requirements. There are many places over the internet where you can encounter debunking of common non-sequiturs, red herrings, ad hominems and strawmen; I just think LW should be one level above that—we should take for granted that the debaters here don’t commit those things, at least not systematically.
You may possibly argue that if some people wish to engage in fallacy busting, let them go; that LW can have a low-quality debate now and then. The problem I have is that those debates are distracting. I feel an urge to react to fallacious arguments, to support people arguing against rude opponents, and often don’t resist and do that. But afterwards, I almost universally regret participating in these nowhere-leading discussions. I don’t think that I am so extraordinary in this respect and suppose thus that people can be attracted to such debates “against their will”.
And finally, such debates feel bad and damage the atmosphere of friendliness and mutual trust which LW has.
For what it’s worth—I had one exchange with the person in question pretty early on, decided based on their response that the conversation wasn’t going to go anywhere useful, and dropped it.
But I mostly refrained from downvoting them initially because there were people I respected who were continuing the discussion with every indication that it was being productive, and I value productive discussion even if I’m not getting anything out of it personally. (There are a lot of exchanges on this site that kinda sound like gibberish to me, and in at least some cases I’m fairly confident that this is because I don’t understand the issue well enough to participate usefully in the discussion even as an observer, not because the participants are in fact spouting gibberish.)
After a very short while I stopped reading any of the comments on that thread except to sample them every once in a while to see if they were going anywhere, and it seemed pretty clear that they weren’t. At that point I started downvoting everyone involved on the grounds that I want less high-volume discussion that makes no progress.
No general lesson here, just another data point.
In the cases where you believe people are talking gibberish, I’d suggest adding a comment saying that you’re confused.
It lets people know which topics people find confusing. And if you’re lucky you might get some links to background material which will help things make sense.
I do that sometimes, but in general only when I’m prepared to dedicate some effort to making sense of an explanation, should someone provide one. “I’m confused, and choose to stay that way for now” seems like an inane thing to say.
As someone who was probably doing more feeding than most, I’d like to apologise here.
In my case it was primarily the ‘didn’t recognise troll’ problem, I’m not very good at distinguishing the more eloquent and seemingly reasonable type of troll from honest commenters who disagree with me. I also have a strong aversion to just walking out on a debate without reaching any kind of agreement, mainly because it annoys me a lot when other people do it.
I will try not to fall into a similar pattern again, but since I’m not all that good at noticing it any ‘don’t feed the troll’ PMs are appreciated.
I’m in a similar situation—I can, technically, recognize that someone fits the profile of ‘troll’, but my brain doesn’t like to actually use that information for anything. (It’s not just an issue with trolls, it’s a general inability to use different scripts for people based on categorizing them.) What I’ve found works, in my case, is to be more aware of subtleties in peoples’ behavior, rather than trying to categorize them. I still wind up feeding trolls sometimes (and in fact tend to enjoy doing so in those cases), but if someone is being logically rude or otherwise offensive, that’s a thing worth noticing and reacting to whether they’re ‘a troll’ or not.
Another trick that’s useful sometimes is backing out of the argument a bit and looking at the bigger picture. Sometimes that shows patterns that aren’t otherwise obvious, and that can make it clear that the person isn’t worth continuing to deal with. For example, in the most recent case, the individual was defying the evidence in an irrational and not very obvious way. Once I noticed that, it seemed obvious to me that without a more thorough understanding of when that is and isn’t a reasonable thing to do, they weren’t going to stop doing it to any evidence that we gave them, and thus the argument at hand was not going to resolve anything, so it was pointless. It’s a lot easier to walk away in a situation like that, when you can see that it’s impossible to actually get the goal you were aiming for.
I’d like to apologize as well. I have a bad habit of not knowing when to stop talking to someone. I agree with you and several commenters about not being able to immediately recognize trolls, although I think it’s good in the long run that we are slow on the uptake rather than trigger-happy.
There is, of course, an important difference between leaving a debate unfinished and walking out on one that’s a lost cause. But as you pointed out, it’s hard to distinguish between the two from an inside perspective. What I’m going to try to do in the future is simply ignore people when they reuse previously refuted arguments, as this is generally a sign of dishonesty or denial.
It’s good to recognize how tempting these are. In fact, my impression was that the exact same mental bugs were motivating the “troll” in this case to continue with the conversation. As far as motivation for participating in that specific conversation was concerned, the only difference between the “troll” and his interlocutors was that the interlocutors knew for sure that they had a large approving audience.
Good observation.
There’s another issue here- a lot of my comments in reply (almost all of them) were upvoted. Some were upvoted quite high. I interpreted this as a sign that people were interested in the subject when it seems in retrospect I should have interpreted those upvotes more as “agree” or “well-argued” and not “want more”. Being more clear about what people mean when they upvote might help.
I’m not sure clarity is the issue.
Note that this comment explicitly endorsing such replies has itself been moderately upvoted, which might reflect a more general endorsement of such replies.
So it’s possible that the upvotes really were expressing “I want more of this” from an interested minority, as you initially proposed. (And as you convinced me was plausible initially.)
Upvoted for admirable restraint in not linking or naming.
Several times the troll mentioned he was forced to slow down posting by the site. Was this because of low karma? If so, can we just penalise people more for massively negative karma?
Apparently there is a 10 minute countdown between comments if you have negative karma.
That might not have helped in this particular situation.
OK we can’t hope for a fully attack-resistant trust metric, but I’d like to do a little better than this.
Preventing users with negative or zero karma from creating discussion posts was already proposed when we were experiencing attacks by spambots. I don’t know whether it has been implemented already, but I suppose it hasn’t.
It has been. It isn’t clear how they were able to post the top level post so late. Some have suggested that they made additional accounts to vote up their older posts but I don’t know of any evidence of that. Unfortunately, there’s very little in the system that makes detecting that sort of thing very easy.
(ETA: By “they” I mean the potential troll, not the spambots.)
Spam stopped immediately when having positive Karma became a requirement. Only a few spam messages appeared in comments after that, AFAIK.
Yes, sorry, by “they” I meant the troll in question, not the spambots. Bad wording on my part.
I’d be surprised if that helped very much in cases like the one under discussion, given that nothing stops people from creating new accounts. That’s enough to stop casual spammers, which is great, but I’d expect anyone willing to sink hours into writing comments to also be willing to create new accounts on demand when their karma got too low.
More generally, I’d be surprised if any change to the karma system itself rendered us significantly less vulnerable to that sort of dedicated resource-grab without introducing negative side effects.
My own feeling is that the best immune system is cultural, here. To the extent that LW members find participating in discussions like that one valuable (there’s a defense of that position here, for example), we will continue to periodically experience such discussions.
Replying to people who are wrong or systematically wrong is not a problem, so long as we keep the good Less Wrong tradition of addressing the statements with excruciating seriousness, like this.
The problem appears when people who systematically produce wrong or low-quality content don’t slow down in modes in which they get downvoted. Global Karma level and 10 minute delay don’t address this problem directly.
Perhaps downvotes should act as a cooling-off measure, as temporary ban points. For example (a sketch follows the list):
Count the total Karma K of all comments published within the last 2 days that have total negative Karma.
If K is less than −10, and the last comment with total negative Karma was made at time T, you’re not allowed to comment before time T+(|K|*30 min).
This would be too harsh sometimes, but it would automatically prevent escalation of negatively-judged discussions.
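To make the rule concrete, here is a minimal sketch in Python. The function name, the comment-record shape, and the example history are purely illustrative assumptions; the −10 threshold and 30-minute step are just the numbers proposed above, and none of this is an existing LW feature:

```python
from datetime import datetime, timedelta

def comment_allowed(user_comments, now, threshold=-10,
                    step=timedelta(minutes=30)):
    """user_comments: hypothetical (posted_at, total_karma) pairs for one user.

    Proposed cooldown: let K be the summed karma of the user's negatively
    scored comments from the last 2 days; if K < threshold, commenting is
    blocked until T + |K| * step, where T is the time of the most recent
    negatively scored comment.
    """
    # Comments from the last 2 days whose total karma is negative.
    recent_negative = [(t, k) for (t, k) in user_comments
                       if now - t <= timedelta(days=2) and k < 0]
    if not recent_negative:
        return True
    K = sum(k for _, k in recent_negative)   # total negative karma
    if K >= threshold:                       # not below -10: no cooldown
        return True
    T = max(t for t, _ in recent_negative)   # time of last negative comment
    return now >= T + abs(K) * step          # blocked until T + |K| * 30 min

# Example: K = -20, last downvoted comment at noon -> blocked for 10 hours.
history = [(datetime(2012, 4, 25, 12, 0), -12),
           (datetime(2012, 4, 25, 9, 0), -8)]
print(comment_allowed(history, now=datetime(2012, 4, 25, 15, 0)))  # False
print(comment_allowed(history, now=datetime(2012, 4, 25, 23, 0)))  # True
```

One property of this design is that the cooldown decays on its own: once the negative comments age past the 2-day window, the ban lifts even if the user’s global karma stays low.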
That system would also ban people for some time after they set up a poll. Perhaps it would be better to let K be the total Karma of all comments, not just negative-Karma comments, so good comments could offset bad ones.
This could just make the implementation of a polling system a dependency. Total Karma probably wouldn’t work: most of the edge cases where it’s not totally obvious that the user should be removed allow for a positive average balance. Also, one of the use cases applies to established contributors going into a wrong mode, in which case they’d be quite capable of offsetting the downvotes.
I agree, provided we really do add a poll feature first.
My $0.02 - I’d be OK with extending the current “impose delay on negative karma” policy into a tiered solution with longer delays for more negative scores (either using the algorithm you sketch or some other), if someone were highly motivated to code that, but I don’t think it would be a particularly valuable use of anyone’s time.
I worry that a measure like this would encourage trolls or people with trollish tendencies to start PMing their interlocutors, which would be a public good but a private nuisance with no outlet for moderation.
Throttle PMs too, then.
There is a “report” link on PM’s. What does it do? We could also add a feature to allow a user to block another user’s PM’s.
Goes straight to Santa Claus.
I’m not sure. When a comment is reported, I can see how many reports it’s accumulated when viewing the comment in some ordinary context, and then I can “ignore” the reports (making them evaporate), or ban the comment, or leave it alone for someone else to deal with. It doesn’t send me a message notifying me that a report has been lodged. I can’t see other people’s PMs so I’m not sure how I’d become aware of reports on them.
These links collect reported items on main section and in discussion (they work for moderators, not visible if not logged in, not sure about other users):
http://lesswrong.com/r/lesswrong/about/reports
http://lesswrong.com/r/discussion/about/reports
I get the message “The page you requested does not exist”.
That is because you are not Enlightened.
Oh, neat, thank you.
Well, there goes my apparently silly theory that moderators would have access to a list of pending reports, which would obviously let them see reported PM’s that they wouldn’t be able to see otherwise.
Amazing only if you assume that educating that trollish author is the purpose of the response. Frequently, though, one responds imagining an audience much larger than a single troll. Sometimes one writes experimentally, for oneself, seeking feedback from the community as to whether one’s own viewpoint finds resonance with other people.
Some people think it is hard to deny that participating in sex while using a condom is a waste of time. Yet I have heard people deny it.
Yeah. Right. My guess is that you are pretty close to the truth with your “best rebuttal contest” and “fun” hypotheses. You are participating in a forum with several hundred people. You can’t expect that all of them will share your own austere tastes in intellectual entertainment.
Yes, but at the same time, people who enjoy that should realize they might be damaging LW’s very good signal to noise ratio. I certainly was guilty of that by repeatedly replying.
At the risk of sounding glib, one man’s signal is another man’s noise. About half of those threads consisted of reasonably good and interesting arguments. And the other half included some links to good ideas expressed less shallowly.
Yes, there are good and interesting arguments there, as well as good links. I think many people on LW underestimate the difficulty of communicating, especially communicating new ideas. Learning is difficult, it takes time and effort, and there will be many misunderstandings. curi, myself, and other Popperians have a very different philosophy to the one here at LW. It can take a lot of discussion with someone, often where seemingly no progress is made, before they begin to understand something like why support is impossible.
Another point is that the traditions here are actually not conducive to truth-seeking discussions, especially where there is fundamental disagreement. There is this voting system, for one thing. It’s designed to get people into line and to score posts. It encourages you to write to get good karma, but writing to get good karma shouldn’t be a goal in any truth-seeking discussion. It places too much emphasis on standards and style, but dissent, when it is expressed, may not conform to the standards and style one expects, yet still be truthful. It is supposed to be some kind of troll detector, and a way of labelling trolls, so that people can commit the fallacy of judging ideas by their source. The whole thing, in short, is authoritarian.
I must apologize for my lack of clarity. You have apparently taken me as a “fellow traveler”. Sorry. The “good and interesting arguments” that I made reference to were written by the anti-Popper ‘militia’ rather than the Popperian ‘invaders’. I meant to credit the invaders only with “good links”.
What I have found discouraging in this regard is that you and Curi have been so hopelessly tone-deaf in coming to understand just why your doctrines have been so difficult to sell. Why (and in what sense) people here think that support is possible. That folks who pay lip service to Popper seem to treat their own positions as given by authority and measure ‘progress’ by how much the other guy changes his mind.
Probably the most frustrating thing about these episodes is how little you guys have learned from your experience here, and how much of what you think you have learned is incorrect. People can engage in persistent disagreement with the local orthodoxy without losing karma. People can occasionally speak too impolitely without losing excessive karma. I serve as a fairly good example of both. There are many other examples. It is actually fairly easy. All you have to do is to expend as much effort in reacting to what other people say as in trying to get your own points across. In trying to understand what they are saying, rather than reacting negatively.
Curi’s posting against the conjunction fallacy was a perfect example of how not to do things here—rising to the level of self-parody. Attacking non-existent beliefs in a doctrine he clearly didn’t understand. How could an intelligent person possibly have become so deluded about what his interlocutors understood by the “conjunction fallacy”? How could he have thought it worthwhile to use sock puppets to raise his karma enough to make that posting? Was he really trying to engage in instruction and consciousness raising? Or was he just trying to assert intellectual superiority?
In a sense, it is a shame that the posting can no longer be seen, because it was so perfect as a negative example.
Please don’t feed the trolls within a discussion about feeding trolls.
:) , but …
Occasional trollish behavior does not brand one as inherently and essentially trollish. IMHO, our Popperian friends are exhibiting sufficiently good behavior in this conversation as to deserve being treated with a modicum of respect.
I agree. Hate the sin, not the sinner.
That’s this post - which is still somewhat visible.
I did not use sock puppets. I do not have any additional accounts here.
Given you are insulting me with 100% imaginary factual claims which you have no evidence for, who is the troll, really?
I would be happy to discard the hypothesis should it be falsified. Or even if a more plausible hypothesis were to be brought to my attention. So let me ask. Do you know why your karma jumped sufficiently to make that conjunction fallacy posting? I would probably accept your explanation if you have a plausible one.
My karma was fluctuating a lot. It got up around 90 three times, and back to 0 each time. It also had many smaller upticks, getting to the 20-50 range.
The reason it stopped fluctuating is the conjunction fallacy post: the −330 from that one post keeps my karma solidly negative.
Well, since you don’t offer a plausible hypothesis, I have no reason to discard my plausible one. But I will repeat the question, since you failed to answer it. Do you know why your karma jumped sufficiently for you to make that posting? That is, for example, do you know that someone (whether or not they wear socks) was systematically upvoting you?
How should I know?
I think I was systematically upvoted at least once, maybe a few times. I also think I was systematically downvoted at least once. I often saw my karma drop 10 or 20 points in a couple of minutes (not due to a front-page post), sometimes more. shrug. Wasn’t paying much attention.
You’re really going to assume as a default—without evidence—that I’m a lying cheater rather than show a little respect? Maybe some people liked my posts.
To the extent I’ve shown them to non-Bayesians I’ve received nothing but praise and agreement, and in particular the conjunction fallacy post was deeply appreciated by some people.
Upvoted (systematically?) for honesty. ETA: And the sock-puppet hypothesis is withdrawn.
I liked some of your posts. As for accusing you of being a “lying cheater”, I explicitly did not accuse you of lying. I suggested that you were being devious and disingenuous in the way you responded to the question.
Really???? By people who understand the conjunction fallacy the way it is generally understood here? Because, as I’m sure you are aware by now, the idea is not that people always attach a higher probability to (A & B) than they do to (B) alone. It is that they do it (reproducibly) in a particular kind of context C, one which makes P(A | C) high, and especially when P(B | A, C) is higher than P(B | C).
So what you were doing in that posting was calling people idiots for believing something they don’t believe—that the conjunction fallacy applies to all conjunctions without exception. You were exhibiting ignorance in an obnoxious fashion. So, of course you got downvoted. And now we have you and Scurfield believing that we here are just unwilling to listen to the Popperian truth. Are you really certain that you yourselves are trying hard enough to listen?
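For readers following the notation, a short derivation (standard probability theory, not a claim taken from the papers themselves) of why ranking the conjunction above B alone is an error in every context:

```latex
\[
P(A \wedge B \mid C) \;=\; P(B \mid C)\,P(A \mid B, C) \;\le\; P(B \mid C),
\]
since $0 \le P(A \mid B, C) \le 1$. The inequality itself never fails;
the experimental claim is only that certain contexts $C$ reproducibly
lead people to judge \emph{as if} it did.
```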
Which is it?
Both. I particularly liked curi’s post linking to the Deutsch lecture. But, in the reference in question, here, I was being cute and saying that half the speeches in the conversation (the ones originating from this side of the aisle) contained good and interesting arguments. This is not to say that you guys have never produced good arguments, though the recent stuff—particularly from curi—hasn’t been very good. But I was in agreement with you (about a week ago) when you complained that curi’s thoughtful comments weren’t garnering the upvotes they deserved. I always appreciate it when you guys provide links to the Critical Rationalism blog and/or writings by Popper, Miller, or Deutsch. You should do that more often. Don’t just provide a bare link, but (assertion, argument, plus link to further argument) is a winning combination.
I have some sympathy with you guys for three reasons:
I’m a big fan of Gunter Wachtershauser, and he was a friend and protege of Popper.
You guys have apparently taken over the task of keeping John Edser entertained. :)
I think you guys are pretty much right about “support”. In fact, I said something very similar recently on my blog.
I haven’t spoken to John Edser since Matt killed the CR list ><
He could have just given it away, but no, he wanted to kill it...
Well, no, they have a different way of thinking about it than you. They disagree with you and agree with me.
Your equivocations about what the papers do and don’t say do not, in my mind, make anything better. You’re first of all defanging the paper as a way to defend it—when you basically deny it made any substantive claims, you make it easy to defend against anything—and that’s not something, I think, that its authors would appreciate.
You’ve misunderstood the word “applies”. It applies in the sense of being relevant, not happening every time.
The idea of the conjunction fallacy is there is a mistake people sometimes make and it has something to do with conjunctions (in general, not a special category of them). It is supposed to apply to all conjunctions in the sense that whenever there is a conjunction it could happen.
What I think is: you can trick people into making various mistakes. The ones from these studies have nothing to do with conjunctions. The authors simply designed it so all the mistakes would look like they had something to do with conjunctions, but actually they had other causes such as miscommunication.
Can you see how, if you discovered a way to miscommunicate with people that causes them to make mistakes about many topics (for example), you could use it selectively, e.g. by making all the examples you publish have to do with conjunctions, and can you see how, in that case, I would be right that the conjunction fallacy is a myth?
I’m pretty sure you are wrong here, but I’m willing to look into it. Which paper, exactly, are you referring to?
Incidentally, your reference to my “equivocations about what the papers do and don’t say” suggests that I said one thing at one point, and a different thing at a different point. Did I really do that? Could you point out how/where?
I can see that if I discovered a way to cause people to make mistakes, I would want to publish about it. If the clearest and most direct demonstration (that the thinking I had caused actually was a mistake) were to exhibit the mistaken thinking in the form of a conjunction, then I might well choose to call my discovery the conjunction fallacy. I probably would completely fail to anticipate that some guy who had just had a very bad day (in failing to convince a Bayesian forum of his Popperian brilliance) would totally misinterpret my claims.
They said people have bounded rationality in the Nobel Prize paper. In the 1983 paper they were more ambiguous about their conclusions.
Besides their papers, there is the issue of what their readership thinks. We can learn about this from websites like Less Wrong and wikipedia and what they say it means. I found they largely read stronger claims into the papers than were actually there. I don’t blame them: the papers hinted at the stronger claims on purpose, but avoided saying too much for fear of being refuted. But they made it clear enough what they thought and meant, and that’s how Less Wrong people understand the papers.
The research does not even claim to demonstrate the conjunction fallacy is common—e.g. they state in the 1983 paper that their results have no bearing (well, only a biased bearing, they say) on the prevalence of the mistakes. Yet people here took it as a common, important thing.
It’s a little funny though, b/c the researchers realize that in some senses their conclusions don’t actually follow from their research, so they have to be careful what they say.
By equivocate I meant making ambiguous statements, not changing your story. You probably don’t regard them as ambiguous. Different perspective.
Everything is ambiguous in regards to some issues, and which issues we regard as the important ones determines which statements we regard as ambiguous in any important way.
I think if your discovery has nothing to do with conjunctions, it’s bad to call it the conjunction fallacy, and then have people make websites saying it has to do with the difference in probability between A&B and A, and that it shows people estimate probability wrong.
Your comments about me having a bad day are silly and should be withdrawn. Among other things, I predicted in advance what would happen and wasn’t disappointed.
“Not disappointed”, huh? You are probably not surprised that people call you a troll, either. :)
As for your claims that people (particularly LessWrongers) who invoke the Conjunction Fallacy are making stronger claims than did the original authors—I’ll look into them. Could you provide two actual (URL) links of LessWrong people actually doing that to the extent that you suggested in your notorious posting?
http://lesswrong.com/lw/ji/conjunction_fallacy/
This moral is not what the researchers said; they were careful not to say much in the way of a conclusion, only hint at it by sticking it in the title and some general remarks. It is an interpretation added on by Less Wrong people (as the researchers intended it to be).
This moral contains equivocation: it says it “can” happen. The Less Wrong people obviously think it’s more than “can”: it’s pretty common and worth worrying about, not a one in a million.
The moral, if taken literally, is pretty vacuous. Removing detail or assumptions can also make an event seem more plausible, if you do it in particular ways. Changing ideas can change people’s judgment of them. Duh.
It’s not meant to be taken that literally. It’s meant to say: this is a serious problem, it’s meaningful, it really has something to do with adding conjunctions!
The research is compatible with this moral being false (if we interpret it to have any substance at all), and with it only being a one in a million event.
You may think one in a million is an exaggeration. But it’s not. The size of the set of possible questions the researchers selected from was … I have no idea, but surely far more than trillions. How many would have worked? The research does not and cannot say. The only things that could tell us that are philosophical arguments or perhaps further research.
The research says this stuff clearly enough:
http://www.econ.ucdavis.edu/faculty/nehring/teaching/econ106/readings/Extensional%20Versus%20Intuitive.pdf
In this paper, the researchers clearly state that their research does not tell us anything about the prevalence of the conjunction fallacy. But Less Wrongers have a different view of the matter.
They also state that it’s not arbitrary conjunctions which trigger the “conjunction” fallacy, but ones specifically designed to elicit it. But Less Wrong people are under the impression that people are in danger of doing a conjunction fallacy when there is no one designing situations to elicit it. That may or may not be true; the research certainly doesn’t demonstrate it is true.
Note: flawed results can tell you something about prevalence, but only if you estimate the amount of error introduced by the flaws. They did not do that (and it is very hard to do in this case). Error estimates of that type are difficult and would need to be subjected to peer review, not made up by readers of the paper.
Here’s various statements by Less Wrongers:
http://lesswrong.com/lw/56m/the_conjunction_fallacy_does_not_exist/
The research does not claim the students could not have misinterpreted. It suggests they wouldn’t have—the Less Wronger has gotten the idea the researchers wanted him to—but they won’t go so far as to actually say miscommunication is ruled out, because it’s not. Given that they were designing their interactions with their subjects on purpose to get people to make errors, miscommunication is actually a very plausible interpretation, even in cases where the not-very-detailed writeup of what they actually did fails to explicitly record any blatant miscommunication.
This also goes beyond what the research says.
Also note that he forgot to equivocate about how often this happens. He didn’t even put in a “sometimes” or a “more often than never”. But the paper doesn’t support that.
He is apparently unaware that it differs from normal life in that in normal life people’s careers don’t depend on tricking you into making mistakes which they can call conjunction fallacies.
When this was pointed out to him, he did not rethink things but pressed forward:
But when you read about the fallacy in the Less Wrong articles about it, they do not state “only happens in the cases where people are trying to trick you”. If it only applies in those situations, well, say so when telling it to people so they know when it is and isn’t relevant. But this constraint on applicability, from the paper, is simply ignored most of the time to reach a conclusion the paper does not support.
Here’s someone from Less Wrong denying there was any trickery in one of the experiments, even though the paper says there was.
Note the heavy equivocation. He retreats from “always” which is a straw man, to “sometimes” which is heavily ambiguous about how often. He has in mind: often enough to be important; pretty often. But the paper does not say that.
BTW the paper is very misleading in that it says things like, in the words of a Less Wronger:
The paper is full of statements that sound like they have something to do with the prevalence of the conjunction fallacy. And then one sentence admitting all those numbers should be disregarded. If there is any way to rescue those numbers as legitimate at all, it was too hard for the researchers and they didn’t attempt it. (I don’t mean to criticize their skill here. I couldn’t do it either. Too hard.)
It is difficult for me to decide how to respond to this. You are obviously sincere—not trolling. The “conjunction fallacy” is objectionable to you—an ideological assault on human dignity which must be opposed. You see evidence of this malevolent influence everywhere. Yet I see it nowhere. We are interpreting plain language quite differently. Or rather, I would say that you are reading in subtexts that I don’t think are there.
To me, the conjunction fallacy is something like one of those optical illusions you commonly find in psychology books. “Which line is longer?” Our minds deceive us. No, a bit more than that. One can set up trick situations in which our minds predictably deceive us. Interesting. And definitely something that psychologists should look at. Maybe even something that we should watch for in ourselves, if we want to take our rationality to the next level. But not something which proves some kind of inferiority of human reason; something that justifies some political ideology.
I suggested this ‘deflationary’ take on the CF, and your initial response was something like “Oh, no. People here agree with me. The CF means much more than you suggest.” But then you quote Kahneman and Tversky:
Yes, you admit, K&T were deflationary, if you take their words at face value, but you persist that they intended their idea to be taken in an inflated sense. And you claim that it is taken in that sense here. So you quote LessWrong:
I emphasized the can because you call attention to it. Yes, you admit, LessWrong was deflationary too, if you take the words at face value. But again you insist that the words should not be taken at face value—that there is some kind of nefarious political subtext there. And as proof, you suggest that we look at what the people who wrote comments in response to your postings wrote.
All I can say is that here too, you are looking for subtexts, while admitting that the words themselves don’t really support the reading you are suggesting:
You are making a simple mistake in interpretation here. The vision of the conjunction fallacy that you attacked is not the vision of the conjunction fallacy that people here were defending. It is not all that uncommon for simple confusions like this one to generate inordinate amounts of sound and fury. But still, this one was a doozy.
Optical illusions are neat, but that’s just not what these studies mean to, e.g., their authors:
http://nobelprize.org/nobel_prizes/economics/laureates/2002/kahnemann-lecture.pdf
Bounded rationality is what they believe they were studying.
Our unbounded rationality is the topic of The Beginning of Infinity. Different worldview.
I don’t think the phrase “bounded rationality” means what you think it means. “Bounded rationality” is a philosophical term that means rationality as performed by agents that can only think for a finite amount of time, as opposed to unbounded rationality which is what an agent with truly infinite computational resources would perform instead. Since humans do not have infinite time to think, “our unbounded rationality” does not exist.
Where do they define it to have this technical meaning?
I’m not sure your distinction makes sense (can you say what you mean by “rationality”?). One doesn’t need infinite time to be fully rational, in the BoI worldview. I think they conceive of rationality itself—and also of which bounds are worth attention—differently. By the way, surely infinite time has no relevance to their studies, in which people spent 20 minutes or whatever. Or if you’re comparing with agents that do infinite thinking in 20 minutes, well, that’s rather impossible.
They don’t, because it’s a pre-existing standard term. Its Wikipedia article should point you in the right direction. I’m not going to write you an introduction to the topic when you haven’t made an effort to understand one of the introductions that’s already out there.
Attacking things that you don’t understand is obnoxious. If you haven’t even looked up what the words mean, you have no business arguing that the professionals are wrong. You need to take some time off, learn to recognize when you are confused, and start over with some humility this time.
But wikipedia says it means exactly what I thought it meant:
Let me quote the important part again:
So everything I said previously stands.
BoI says there are no limits on human minds, other than those imposed by the laws of physics on all minds, and also the improvable-without-limit issue of ignorance.
My surprise was due to you describing the third clause alone. I didn’t think that was the whole meaning.
No, you’re still confused. Unbounded rationality is the theoretical study of minds without any resource limits, not even those imposed by the laws of physics. It’s a purely theoretical construct, since the laws of physics apply to everyone and everything. This conversation started with a quote where K&T used the phrase “bounded rationality” to clarify that they were talking about humans, not about this theoretical construct (which none of us really care about, except sometimes as an explanatory device).
Our “unbounded rationality” is not the type you are talking about. It’s about the possibility of humans making unlimited progress. We conceive of rationality differently than you do. We disagree with your dichotomy, approach to research, assumptions being made, etc...
So, again, there is a difference in world view here, and the “bounded rationality” quote is a good example of the anti-BoI worldview found in the papers.
The phrase “unbounded rationality” was first introduced into this argument by you, quoting K&T. The fact that Beginning of Infinity uses that phrase to mean something else, while talking about a different topic, is completely irrelevant.
please don’t misquote.
I’m just going to quickly say that I endorse what jimrandomh has written here. The concept of “bounded rationality” is a concept in economics/psychology going back to Herbert Simon. It is not, as you seem to think, an ideology hostile to human dignity.
I’m outta here! The depth and incorrigibility of your ignorance rivals even that of our mutual acquaintance Edser. Any idiot can make a fool of himself, but it takes a special kind of persistence to just keep digging as you are doing.
If the source of your delusion is Deutsch—well, it serves you right for assuming that any physicist has enough modesty to learn something about other fields before he tries to pontificate in them.
I find nothing in Deutsch that supports curi’s confusion.
Here is a passage from Herbert A. Simon: The Bounds of Reason in Modern America:
Do you think this is accurate? If so, can you see how a Popperian would think it is wrong?
They don’t know anything. They just make up stuff about people they haven’t read. We have read half the stuff they’ve tried to cite more than them. Then they down vote instead of answering.
I checked out the Mathematical Statistics book today that one of them said had a proof. It didn’t have it. And there was that other guy repeatedly pressuring me to read some book he’d never read, based on his vague idea of the contents and the fact that the author was a professional.
They are in love with citing authority and judging ideas by sources (which they think make the value of the ideas probabilistically predictable with an error margin). Not so much in love with the notion that if they can’t answer all the criticisms of their position they’re wrong. Not so much in love with the notion that Popper published criticisms no Bayesian ever answered. Not so much in love with the idea that, no, saying why your position is awesome doesn’t automatically mean everything else ever is worse and can safely be ignored.
While making false accusations like this does get you attention, it is also unethical behavior. We have done our very best to help you understand what we’re talking about, but you have a pattern where every time you don’t understand something, you skip the step where you should ask for clarification, you make up an incorrect interpretation, and then you start attacking that interpretation.
Almost every sentence you have above is wrong. Against what may be my better judgment I’m going to comment here because a) I get annoyed when my own positions are inaccurately portrayed and b) I hope that showing that you are wrong about one particular issue might convince you that you are interpreting things through a highly emotional lens that is causing you to misread and misremember what people have to say.
I presume you are referring to my remarks.
Let’s go back to the context.
You wrote:
and then wrote:
To which I replied
In another part of that conversation I wrote:
You then wrote:
Note that this comment of yours is the first example where the word “professional” shows up.
I observed
You replied:
My final comment on that issue was then:
And asking about your comment about professionals.
You never replied to that question. I’ve never mentioned Earman in the roughly 20 other comments to you after that exchange. This doesn’t seem like “repeatedly pressuring me to read some book he’d never read.” Note also that I never once discussed Earman being a “professional” until you brought up the term.
You appear to be reading conversations through an adversarial lens in which people with whom you disagree must be not just wrong but evil, stupid, or slavishly adhering to authority. This is a common failure mode of conversation. But we’re not evil mutants. Humans like to demonize the ones they disagree with, and it isn’t productive. It is a sign that one is unlikely to be listening to the actual content. At that point, one needs to either radically change approach or just spend some time doing something else and wait for the emotions to cool down.
Now, you aren’t the only person here with that issue; some of the regulars here have reacted in the same way, but that’s to a large extent a reaction against what you’ve been doing. So, please leave for some time; take a break. Think carefully about what you intend to get out of Less Wrong and whether or not anyone here has any valid points. And ask yourself, does it really seem likely to you that not a single person on LW would ever have made a point to you that turned out to be correct? Is it really that likely that you are right about every single detail?
If it helps, imagine we weren’t talking about you but some other person. That person wanders over to a forum they’ve never been to, and has extensive discussions about technical issues where that person disagrees with the forum regulars on almost everything, including peripherally related issues. How likely would you be to say that that person had everything right, even if you took for granted that they were correct about the overarching primary issues? So, if the person never agrees that they were wrong about even the tiniest of issues, do you think it is more likely that the person actually got everything right or that they just aren’t listening?
(Now, everyone who thinks that this discussion is hurting our signal to noise ratio please go and downvote this comment. Hurting my karma score is probably the most efficient method of giving me some sort of feedback to prevent further engagement.)
You did not even state whether the Earman book mentions Popper. Because, I took it, you don’t know. So … WTF? That’s the best you can do?
Note that, ironically, your comment here complaining that you didn’t repeatedly pressure me is itself pressure, and not for the first time...
When they have policies like resorting to authoritarian, ad hominem arguments rather than substantive ones … and attacking merits such as confidence and lack of appeasement … then yes. You’re just asking me to concede because one person can’t be right so much against a crowd. It’s so deeply anti-Ayn-Rand.
Let me pose a counter question: when you contradict several geniuses such as Popper, Deutsch, Feynman (e.g. about caring what other people think), and Rand, does that worry you? I don’t particularly think you should say “yes” (maybe mild worry causing only an interest in investigating a bit more, not conceding). The point is your style of argument can easily be used against you here. And against most people for most stuff.
BTW did you know that improvements on existing ideas always start out as small minority opinions? Someone thinks of them first. And they don’t spread instantly.
Why shouldn’t I be right most of the time? I learned from the best. And changed my mind thousands of times in online debates to improve even more. You guys learned the wrong stuff. Why should the number of you be important? Lots of people learning the same material won’t make it correct.
Can’t you see that that’s irrational? Karma scores are not important, arguments are.
Let’s do a scientific test. Find someone impartial to take a look at these threads, and report how they think you did. To avoid biasing the results, don’t tell them which side you were on, and don’t use the same name, just say “There was recently a big argument on Less Wrong (link), what do you think of it?” and have them email back the results.
Will you do that?
I can do that, if you can agree on a comment to link to. It’ll double-blind it in a sense—I do have a vague opinion on the matter but haven’t been following the conversation at all over the last couple days, so it’d be hard for me to pick a friend who’d be inclined to, for example, favor a particular kind of argument that’s been used.
The judge should be someone who’s never participated on Less Wrong before, so there’s no chance that they’ve already picked up ideas from here.
That is true of one of the two people I have in mind. The other has participated on fewer than five occasions. I have definitely shown articles from here to the latter and possibly to the former, but neither of them has done any extended reading here that I’m aware of (and I expect that I would be if they had). Also, neither of them is the type of person who’s inclined to participate in LW-type places, and the former in particular is inclined toward doing critical analysis of even concepts that she likes. (Just not in a very LW-ish way.)
I’d suggest using the primary thread about the conjunction fallacy and then the subthread on the same topic with Brian. Together that makes a long conversation with a fair number of participants.
How am I supposed to find someone impartial? Most people are justificationists. Most Popperians I know will recognize my writing style (plus, having shown them plenty already, I already know they agree with me, and I don’t attribute that to bias about the source of each post). The ones who wouldn’t recognize my writing style are just the ones who won’t want to read all this and discuss—it’s by choosing not to read much online discussion that they stayed that way.
EDIT: What does it even mean for someone to be neutral? Neither justificationist nor Popperian? Are there any such people? What do they believe? Probably ridiculous magical thinking! Or hardcore skepticism. Or something...
How about someone who hasn’t read much online discussion, and hasn’t thought about the issue enough to take a side? There are lots of people like that. It doesn’t have to be someone you know personally; a friend-of-a-friend or a colleague you don’t interact with much will do fine (although you might have to bribe them to spend the time reading).
A person like that won’t follow details of online discussion well.
A person like that won’t be very philosophical.
Sounds heavily biased against me. Understanding advanced ideas takes skill.
And, again, a person like that can pretty much be assumed to be a justificationist.
Surely you can find someone who seems like they should be impartial and who’s smart enough to follow an online discussion. They don’t have to understand everything perfectly, they’re only judging what’s been written. Maybe a distant relative who doesn’t know your online alias?
If you pick a random person on the street, and you try to explain Popper to them for 20 minutes—and they are interested enough to let you have those 20 minutes—the chances they will understand it are small.
This doesn’t mean Popper was wrong. But it is one of many reasons your test won’t work well.
Right, it takes a while to understand these things. They won’t understand Bayes either, but they’ll still be able to give some impression of which side gave better arguments, and that sort of thing.
How about a philosopher who works on a topic totally unrelated to this one?
Have you ever heard what Popper thinks of the academic philosophy community?
No, an academic philosopher won’t do.
This entire exercise is silly. What will it prove? Nothing. What good is a sample size of 1 for this? Not much. Will you drop Bayesianism if the person sides against you? Of course not.
How about a physicist, then? They’re generally smart enough to figure out new topics quickly, but they don’t usually have cause to think about epistemology in the abstract.
MWI is a minority view in the physics community because bad philosophy is prevalent there.
The point of the exercise is not just to judge epistemology, it’s to judge whether you’ve gotten too emotional to think clearly. My hypothesis is that you have, and that this will be immediately obvious to any impartial observer. In fact, it should be obvious even to an observer who agrees with Popper.
How about asking David Deutsch what he thinks? If he does reply, I expect it will be an interesting reply indeed.
I know what he thinks: he agrees with me.
But I’ve already shown lots of this discussion to a bunch of observers outside Less Wrong (I run the best private Popperian email list). Feedback has been literally 100% positive, including some sincere thank yous for helping them see the Conjunction Fallacy mistake (another reaction I got was basically: “I saw the conjunction fallacy was crap within 20 seconds. Reading your stuff now, it’s similar to what I had in mind.”) I got a variety of other positive reactions about other stuff. They’re all at least partly Popperian. I know that many of them are not shy about disagreeing with me or criticizing me—they do so often enough—but when it comes to this Bayesian stuff none of them has any criticism of my stance.
Not a single one of them considers it plausible my position on this stuff is a matter of emotions.
I’m skeptical of the claim about Deutsch. Why not actually test it? A Popperian should try to poke holes in his ideas. Note also that you seem to be missing Jim’s point: Jim is making an observation about the level of emotionalism in your argument, not the correctness. These are distinct issues.
As another suggestion, we could take a bunch of people who have epistemologies that are radically different from either Popper or Bayes. From my friends who aren’t involved in these issues, I could easily get an Orthodox Jew, a religious Muslim, and a conspiracy theorist. That also handles your concern about a sample size of 1. Other options include other Orthodox Jews including one who supports secular monarchies as the primary form of government, and a math undergrad who is a philosophical skeptic. Or if you want to have some real fun, we could get some regulars from the Flat Earth Forum. A large number of them self-identify as “zetetic” which in this context means something like only accepting evidence that is from one’s own sensory observation.
Jim is not suggesting that your position is “a matter of emotions”- he’s suggesting that you are being emotional. Note that these aren’t the same thing. For example, one could hypothetically have a conversation between a biologist and a creationist about evolution and the biologist could get quite angry with the creationist remaining calm. In that case, the biologist believes what they do due to evidence, but they could still be unproductively emotional about how they present that evidence.
From sibling:
Shall I take this as a no?
You’ve already been told that the point of Jim’s remark was about the emotionalism in your remarks, not about the correctness of your arguments. In that context, your sibling comment is utterly irrelevant. The fact that you still haven’t gotten that point is of course further evidence of that problem. We are well past the point where there’s any substantial likelihood of productive conversation. Please go away. If you feel a need to come back, do so later, a few months from now, when you are confident you can do so without insulting people or deliberately trolling, and when you are willing to actually listen to what people here have to say.
So you say things that are false, and then you think the appropriate follow up is to rant about me?
And your skepticism stems from a combination of
A) your ignorance, and
B) your negative assumptions to fill in the gaps in knowledge.
You don’t think I’m a serious person who has reasons for what he says, and you became skeptical before finding out, just by assumption.
And you’re wrong.
Maybe by pointing it out in this case, you will be able to learn something. Do you think so?
Here’s two mild hints:
1) I interviewed David Deutsch yesterday.
2) I’m in the acknowledgements of BoI.
Here you are lecturing me about how a Popperian should try to poke holes in his ideas, as if I hadn’t. I had. (The hints do not prove I had. They’re just hints.)
I still don’t understand what you think the opinions of several random people will show (and now you are suggesting people who A) we both think are wrong (so who cares what they think?), and B) are still justificationists). It seems to me the opinions of, say, the Flat Earth Forum should be regarded as pretty much random (well, maybe not random if you know a lot about their psychology, which I don’t want to study) and not a fair judge of the quality of arguments!
So they are extremely anti-Popperian...
You don’t say...
Also I want to thank you for continuing to post. You are easily the most effective troll that LW has ever had, and it is interesting and informative to study your techniques.
This isn’t obvious to me, but this cuts both ways.
As to your other claims, do you think any of these matters will be a serious issue if we used the conjunction fallacy thread as our test case?
Yes. Someone with a worldview similar to mine will be more inclined to agree with me in all the threads. Someone with one I think is rather bad … won’t.
This confuses me. Why should whether someone is a justificationist impact how they read the thread about the conjunction fallacy? Note by the way that you seem to be misinterpreting Jim’s point. Jim isn’t saying that one should have an uninvolved individual decide who is correct. Jim’s suggestion (which granted may not have been explained well in his earlier comment) is to focus on seeing if anyone is behaving too emotionally rather than calmly discussing the issues. That shouldn’t be substantially impacted by philosophical issues.
Because justificationism is relevant to that, and most everything else.
Seriously? You don’t even know that there are philosophical issues about the nature of emotion, the proper ways to evaluate its presence, and so on?
I’m neither Justificationist nor Popperian. I’m Bayesian, which is neither of these things.
In our terminology, which you have not understood, Bayesians like yourself are justificationists.
No offense, but you haven’t demonstrated that you understand Bayesian epistemology well enough to classify it. I read the Wikipedia page on Theory of Justification, and it pretty much all looks wrong to me. How sure are you that your Justificationist/Popperian taxonomy is complete?
You can’t understand Popper by reading wikipedia. Try Realism and the Aim of Science starting on p18 for a ways.
BTW induction is a type of justificationism. Bayesians advocate induction. The end.
I have no confidence the word “induction” means the same thing from one sentence to the next. If you could directly say precisely what is wrong with Bayesian updating, with concrete examples of Bayesian updating in action and why they fail, that might be more persuasive, or at least more useful for diagnosis of the problem wherever it may lie.
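The request above is for concrete examples of Bayesian updating in action, so here is a minimal sketch of the mechanics being disputed. The coin hypotheses, priors, and flips are all invented for illustration; this shows how updating works, not who is right about it.

```python
# Minimal sketch of Bayesian updating: two hypotheses about a coin.
# H_fair says P(heads) = 0.5; H_biased says P(heads) = 0.8.
# All numbers here are invented for illustration.

def update(priors, likelihoods):
    """Bayes' theorem: P(H | D) is proportional to P(D | H) * P(H)."""
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    return {h: v / total for h, v in unnormalized.items()}

beliefs = {"fair": 0.5, "biased": 0.5}  # assumed starting prior
for flip in ["H", "H", "T", "H", "H"]:
    likelihoods = {"fair": 0.5, "biased": 0.8 if flip == "H" else 0.2}
    beliefs = update(beliefs, likelihoods)  # posterior becomes the next prior
    print(flip, {h: round(p, 3) for h, p in beliefs.items()})
```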
For example, you update based on selective observation (all observation is selective).
And you update theories getting attention selectively, ignoring the infinities (by using an arbitrary prior, but not actually using it in the sense of applying it infinitely many times, just estimating what you imagine might happen if you did).
Why don’t you—or someone else—read Popper? You can’t expect to fully understand it from discussion here. Why does this community have no one who knows what they are talking about on this subject? Who is familiar with the concepts instead of just having to ask me?
Since all observation is selective, the selectivity of my observations can hardly be a flaw. Therefore the flaw is that I update. But I just asked you to explain why updating is flawed. Your explanation is circular.
I don’t know what that means. I’m not even sure that
is idiomatic English. And
needs to be clarified, I’m sure you must realize. Your parenthetical does not clear it up. You write:
Using it but not actually using it. Can you see why that is hard to interpret? And then:
My flaw is that I don’t do something infinitely many times? Can you see why this begs for clarification? And:
I’ve lost track: is my core flaw that I am estimating something? Why?
I’ve read Popper. He makes a lot more sense to me than you do. He is not cryptic. He is crystal clear. You are opaque.
You seem to be saying that for me to understand you, first I have to understand Popper from his own writing, and that nobody here seems to have done that. I’m not sure why you’re posting here if that’s what you believe.
If you’ve read Popper (well) then you should understand his solution to the problem of induction. How about you tell us what it is, rather than complaining my summary of what you already read needs clarifying.
You are changing the subject. I never asked you to summarize Popper’s solution to the problem of induction. If that’s what you were summarizing just now then you ignored my request. I asked you to critique Bayesian updating with concrete examples.
Now, if Popper wrote a critique of Bayesian updating, I want to read it. So tell me the title. But he must specifically talk about Bayesian updating. Or, if that is not available, anything written by Popper which mentions the word “Bayes” or “Bayesian” at all. Or that discusses Bayes’ equation.
If you check out the indexes of Popper’s books it’s easy to find the word Bayes. Try it some time...
There was nothing in Open Society and its Enemies, Objective Knowledge, Popper Selections, or Conjectures and Refutations, nor in any of the books on Popper that I located. The one exception I found was in The Logic of Scientific Discovery, and the mentions of Bayes are purely technical discussion of the theorem (with which Popper, reasonably enough, has no problem), with no criticism of Bayesians. In summary, I found no critiques by Popper of Bayesians after going through the indices of Popper’s main works as you recommended. I did find mention of Bayes, but it was a mention in which Popper did not criticize Bayes or Bayesians.
Nor were Bayes or Bayesians mentioned anywhere in David Deutsch’s book The Beginning of Infinity.
So I return to my earlier request:
I have repeatedly requested this, and in reply been given either condescension, or a fantastically obscure and seemingly self-contradictory response, or a major misinterpretation of my request, or a recommendation that I look to Popper, which recommendation I have followed with no results.
I think your problem is you don’t understand what the issues at stake are, so you don’t know what you’re trying to find.
You said:
But then when you found a well known book by Popper which does have those words, and which does discuss Bayes’ equation, you were not satisfied. You asked for something which wasn’t actually what you wanted. That is not my fault.
You also said:
But you don’t seem to understand that Popper’s solution to the problem of induction is the same topic. You don’t know what you’re looking for. It wasn’t a change of topic. (Hence I thought we should discuss this. But you refused. I’m not sure how you expect to make progress when you refuse to discuss the topic the other guy thinks is crucial to continuing.)
Bayesian updating, as a method of learning in general, is induction. It’s trying to derive knowledge from data. Popper’s criticisms of induction, in general, apply. And his solution solves the underlying problem, rendering Bayesian updating unnecessary even if it weren’t wrong. (Of course, as usual, it’s right when applied narrowly to certain mathematical problems. It’s wrong when extended out of that context to be used for other purposes, e.g. to try to solve the problem of induction.)
So, question: what do you think you’re looking for? There is tons of stuff about probability in various Popper books including chapter 8 of LScD titled “probability”. There is tons of explanation about the problem of induction, and why support doesn’t work, in various Popper books. Bayesian updating is a method of positively supporting theories; Popper criticized all such methods and his criticisms apply. In what way is that not what you wanted? What do you want?
So, for example, I opened to a random page in that chapter and found, on p. 183 at the start of section 66, this first sentence:
This is a criticism of the Bayesian approach as unscientific. It’s not specifically about the Bayesian approach, in that it applies to various non-Bayesian probabilistic approaches (whatever those may be. Can you think of any other approaches besides Bayesian epistemology that you think this is targeted at? How would you do it without Bayes’ theorem?). In any case it is a criticism and it applies straightforwardly to Bayesian epistemology. It’s not the only criticism.
The point of this criticism is that to even begin the Bayesian updating process you need probability estimates which are created unscientifically by making them up (no, making up a “prior” which assigns all of them at once, in a way vague enough that you can’t even use it in real life without “estimating” arbitrarily, doesn’t mean you haven’t just made them up).
EDIT: Read the first 2 footnotes in section 81 of LScD, plus section 81 itself. And note that the indexer did not miss this but included it...
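To make the complaint about made-up priors concrete, here is a sketch with invented numbers showing that the same data yield different posteriors under different priors. Whether any particular prior counts as “scientific” is exactly what is in dispute here.

```python
# Same data, different made-up priors, different conclusions.
# Hypotheses: the coin's heads-probability is 0.5 ("fair") or 0.8 ("biased").

def p_fair_given_data(prior_fair, heads, tails):
    like_fair = 0.5 ** (heads + tails)
    like_biased = (0.8 ** heads) * (0.2 ** tails)
    numerator = prior_fair * like_fair
    return numerator / (numerator + (1 - prior_fair) * like_biased)

heads, tails = 8, 2  # the data are held fixed
for prior_fair in (0.5, 0.9, 0.99):
    print(prior_fair, round(p_fair_given_data(prior_fair, heads, tails), 3))
# The conclusion moves with the chosen prior; with more data the
# prior's influence shrinks, but it never starts from nowhere.
```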
Only in a sense so broad that Popper can rightly be accused of the very same thing. Bayesians use experience to decide between competing hypotheses. That is the sort of “derive” that Bayesians do. But if that is “deriving”, then Popper “derives”. David Deutsch, who you know, says the following:
I direct you specifically to this sentence:
This is what Bayesians do. Experience is what Bayesians use to choose between theories which have already been guessed. They do this using Bayes’ Theorem. But look back at the first sentence of the passage:
Clearly, then, Deutsch does not consider using the data to choose between theories to be “deriving”. But Bayesians use the data to choose between theories. Therefore, as Deutsch himself defines it, Bayesians are not “deriving”.
Yes, the Bayesians make them up, but notice that Bayesians therefore are not trying to derive them from data—which was your initial criticism above. Moreover, this is not importantly different from a Popperian scientist making up conjectures to test. The Popperian scientist comes up with some conjectures, and then, as Deutsch says, he uses experimental data to “choose between theories that have already been guessed”. How exactly does he do that? Typical data does not decisively falsify a hypothesis. There is, just for starters, the possibility of experimental error. So how does one really employ data to choose between competing hypotheses? Bayesians have an answer: they choose on the basis of how well the data fits each hypothesis, which they interpret to mean how probable the data is given the hypothesis. Whether he admits it or not, the Popperian scientist can’t help but do something fundamentally the same. He has no choice but to deal with probabilities, because probabilities are all he has.
The Popperian scientist, then, chooses between theories that he has guessed on the basis of the data. Since the data, being uncertain, does not decisively refute either theory but is merely more, or less, probable given the theory, then the Popperian scientist has no choice but to deal with probabilities. If the Popperian scientist chooses the theory that the data fits best, then he is in effect acting as a Bayesian who has assigned to his competing theories the same prior.
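The “Bayesian who has assigned the same prior” point has a compact form: posterior odds equal prior odds times the likelihood ratio, so with equal priors the ranking of theories reduces to how well each fits the data. A one-function sketch, with numbers invented for illustration:

```python
# Posterior odds = prior odds * likelihood ratio.
# With equal priors, ranking theories by posterior is ranking them by fit.

def posterior_odds(prior_a, prior_b, like_a, like_b):
    return (prior_a / prior_b) * (like_a / like_b)

# Two theories with equal priors; the data fit theory A three times better.
print(posterior_odds(0.5, 0.5, like_a=0.3, like_b=0.1))  # 3.0, i.e. 3:1 for A
```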
Where do you get the theories you consider?
Do you understand DD’s point that the majority of the time theories are rejected without testing which is in both his books? Testing is only useful when dealing with good explanations.
Do you understand that data alone cannot choose between the infinitely many theories consistent with it, which reach a wide variety of contradictory and opposite conclusions? So Bayesian Updating based on data does not solve the problem of choosing between theories. What does?
Bayesians are also seriously concerned with the fact that an infinity of theories are consistent with the evidence. DD evidently doesn’t think so, given his comments on Occam’s Razor, which he appears to be familiar with only in an old, crude version, but I think that there is a lot in common between his “good explanation” criterion and parsimony considerations.
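The worry about infinitely many theories fitting the evidence has a standard toy version, sketched below: every curve of the form y = x + k·x·(x − 1) passes through the data points (0, 0) and (1, 1) for any k, so those data alone cannot choose among theories that disagree everywhere else. The exp(−|k|) weighting at the end is one crude stand-in for a parsimony prior, assumed purely for illustration.

```python
import math

# Every theory of the form y = x + k*x*(x - 1) fits the data points
# (0, 0) and (1, 1) exactly, for any real k, so the data alone cannot
# choose among infinitely many theories that disagree elsewhere.
def theory(k):
    return lambda x: x + k * x * (x - 1)

for k in (0, 1, -5, 100):
    f = theory(k)
    print(k, f(0), f(1), f(2))  # all agree at x = 0 and 1; all differ at 2

# One crude stand-in for a parsimony prior (an assumption for illustration):
# weight each theory by exp(-|k|), so simpler curves start out more credible.
for k in (0, 1, -5, 100):
    print(k, math.exp(-abs(k)))
```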
We aren’t “seriously concerned” because we have solved the problem, and it’s not particularly relevant to our approach.
We just bring it up as a criticism of epistemologies that fail to solve the problem… Because they have failed, they should be rejected.
You haven’t provided details about your fixed Occam’s razor, a specific criticism of any specific thing DD said, a solution to the problem of induction (all epistemologies need one of some sort), or a solution to the infinity of theories problem.
Probability estimates are essentially the bookkeeping which Bayesians use to keep track of which things they’ve falsified, and which things they’ve partially falsified. At the time Popper wrote that, scientists had not yet figured out the rules for using probability correctly; the stuff he was criticizing really was wrong, but it wasn’t the same stuff people use today.
Is this true? Popper wrote LScD in 1934. Keynes and Ramsey wrote about using probability to handle uncertainty in the 1920s although I don’t think anyone paid attention to that work for a few years. I don’t know enough about their work in detail to comment on whether or not Popper is taking it into account although I certainly get the impression that he’s influenced by Keynes.
According to the wikipedia page, Cox’s theorem first appeared in R. T. Cox, “Probability, Frequency, and Reasonable Expectation,” Am. Jour. Phys., 14, 1–13, (1946). Prior to that, I don’t think probability had much in the way of philosophical foundations, although they may’ve gotten the technical side right. And correct use of probability for more complex things, like causal models, didn’t come until much later. (And Popper was dealing with the case of science-in-general, which requires those sorts of advanced tools.)
The English version of LScD came out in 1959. It wasn’t a straight translation; Popper worked on it. In my (somewhat vague) understanding he changed some stuff or at least added some footnotes (and appendices?).
Anyway Popper published plenty of stuff after 1946 including material from the LScD postscript that got split into several books, and also various books where he had the chance to say whatever he wanted. If he thought there was anything important to update he would have. And for example probability gets a lot of discussion in Popper’s replies to his critics, and Bayes’ theorem in particular comes up some; that’s from 1974.
So for example on page 1185 of the Schilpp volume 2, Popper says he never doubted Bayes’ theorem but that “it is not generally applicable to hypotheses which form an infinite set”.
How can something be partially falsified? It’s either consistent with the evidence or contradicted. This is a dichotomy. To allow partial falsification you have to judge in some other way which has a larger number of outcomes. What way?
You’re saying you started without them, and come up with some in the middle. But how does that work? How do you get started without having any?
Changing the math cannot answer any of his non-mathematical criticisms. So his challenge remains.
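For readers puzzled by “partially falsified”: in the Bayesian usage it means the evidence was improbable, but not impossible, under the hypothesis, so the hypothesis’s probability drops without reaching zero; classical falsification is the limiting case where the likelihood is exactly zero. A sketch with invented numbers:

```python
# "Partial falsification" in the Bayesian sense: evidence that is merely
# improbable under H lowers P(H) without driving it to zero. All numbers
# are invented for illustration.

def posterior(prior, like_h, like_alternative):
    numerator = prior * like_h
    return numerator / (numerator + (1 - prior) * like_alternative)

print(posterior(0.5, like_h=0.05, like_alternative=0.5))  # ~0.09: damaged
print(posterior(0.5, like_h=0.0,  like_alternative=0.5))  # 0.0: falsified
```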
Here’s one way.
(It’s subject to limitations that do not constrain the Bayesian approach, and as near as I can tell, is mathematically equivalent to a non-informative Bayesian approach when it is applicable, but the author’s justification for his procedure is wholly non-Bayesian.)
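The paper is not named in the thread, so the following is only a generic illustration of the kind of equivalence being described: for the mean of a normal distribution with known variance, the standard 95% confidence interval coincides numerically with the flat-prior Bayesian credible interval, even though its derivation never invokes a prior. The data below are invented.

```python
import math

# Invented data: n draws assumed from N(theta, sigma^2) with sigma known.
data = [9.8, 10.4, 10.1, 9.6, 10.3]
sigma = 0.5
n = len(data)
mean = sum(data) / n
half_width = 1.96 * sigma / math.sqrt(n)

# Frequentist 95% confidence interval: derived from sampling behavior,
# with no prior anywhere in the argument.
print("confidence:", (round(mean - half_width, 3), round(mean + half_width, 3)))

# Flat-prior Bayesian 95% credible interval: the posterior for theta is
# N(mean, sigma^2 / n), so the endpoints come out numerically identical.
print("credible:  ", (round(mean - half_width, 3), round(mean + half_width, 3)))
```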
I think you mixed up Bayes’ Theorem and Bayesian Epistemology. The abstract begins:
They have a problem with a prior distribution, and wish to do without it. That’s what I think the paper is about. The abstract does not say “we don’t like Bayes’ theorem and figured out a way to avoid it.” Did you have something else in mind? What?
I had in mind a way of putting probability distributions on unknown constants that avoids prior distributions and Bayes’ theorem. I thought that this would answer the question you posed when you wrote:
Karma has little bearing on the abstract truth of an argument, but it works pretty well as a means of gauging whether or not an argument is productive in the surrounding community’s eyes. It should be interpreted accordingly: a higher karma score doesn’t magically lead to a more perfect set of opinions, but paying some attention to karma is absolutely rational, either as a sanity check, a method of discouraging an atmosphere of mutual condescension, or simply a means of making everyone’s time here marginally more pleasant. Despite their first-order irrelevance, pretending that these goals are insignificant to practical truth-seeking is… naive, at best.
Unfortunately, this also means that negative karma is an equally good gauge of a statement’s disruptiveness to the surrounding community, which can give downvotes some perverse consequences when applied to people who’re interested in being disruptive.
Would you mind responding to the actual substance of what JoshuaZ said, please? Particularly this paragraph:
Have done so before seeing your comment.
Is editing comments repeatedly to add stuff in the first few minutes against etiquette here?
My apologies. I would have thought it was clear that when one recommends a book, one is doing so because it has material relevant to what one is talking about. Yes, he discusses various ideas due to Popper.
I think there may be some definitional issues here. A handful of remarks suggesting one read a certain book would not be what most people would call pressure. Moreover, given the negative connotations of pressure, it worries me that you find that to be pressure. Conversations are not battles. And suggesting that one might want to read a book should not be pressure. If one feels that way, it indicates that one might be too emotionally invested in the claim.
And is that what you see here? Again, most people don’t seem to see that. Take an outside view; is it possible that you are simply too emotionally involved?
I don’t have reason to think that Rand is a genius. But let’s say I did, and let’s say the other three are geniuses. Should one, in that context, be worried that their opinions contradict my own? Let’s use a more extreme example: Jonathan Sarfati is an accomplished chemist, a highly ranked chess master, and a young earth creationist. Should I be worried by that? The answer I think is yes. But, at the same time, even if I were only vaguely aware of Sarfati’s existence, would I need to read everything by him to decide that he’s wrong? No. In this case, having read some of their material, and having read your arguments for why I should read it, it is falling more and more into the Sarfati category. It wouldn’t surprise me at all if there’s something I’m wrong about that Popper or Deutsch gets right, and if I read everything Popper had to say and read everything Deutsch had to say and didn’t come away with any changed opinions, that should cause me to doubt my rationality. Ironically, BoI was already on my intended reading list before interacting with you. It has now dropped farther down my priority list because of these conversations.
I don’t think you understood why I made that remark. I have an interest in cooperating with other humans. We have a karma system, among other reasons, to decide whether or not people want to read more of something. Since some people have already expressed a disinterest in this discussion, I’m explicitly inviting them to show that disinterest so I know not to waste their time. If it helps, imagine this conversation occurring on a My Little Pony forum with a karma system. Do you see why someone would want to downvote a Popper v. Bayes discussion in that context? Do you see how listening to such preferences in that context could be rational?
That’s ambiguous. Does he discuss Popper directly? Or, say, some of Popper’s students? Does the book have, say, more than 20 quotes of Popper, with commentary? More than 10? Any detailed, serious discussion of Popper? If it does, how do you know that, having not read it? Is the discussion of Popper tainted by the common myths?
Some Less Wrongers recommend stuff that doesn’t have the specific material they claim it does, e.g. the Mathematical Statistics book. IME it’s quite common that recommendations don’t contain what they are supposed to, even when people directly claim it’s there.
You seem to think I believe this stuff because I’m a beginner; that’s an insulting non-argument. I already know how to take outside views, I am not emotionally involved. I already know how to deal with such issues, and have done so.
None of your post here really has any substance. It’s all psychological and meta topics you’ve brought up. Maybe I shouldn’t have fed you any replies at all about them. But that’s why I’m not answering the rest.
Please don’t move goalposts and pretend they were there all along. You asked whether he mentioned Popper, to which I answered yes. I don’t know the answers to your above questions, and they really don’t have anything to do with the central points, the claims that I “pressured” you to read Earman based on his being a professional.
This confuses me.
Ok. This confuses me since earlier you had this exchange:
That discussion was about a week ago.
I’m also confused about where you are getting any reason to think that I think that you believe what you do because you are a “beginner”; it doesn’t help matters that I’m not sure what you mean by that term in this context.
But if you are very sure that you don’t have any emotional aspect coming into play then I will try to refrain from suggesting otherwise until we get more concrete data such as per Jim’s suggestion.
I have never been interested in people who mention Popper but people who address his ideas (well). I don’t know why you think I’m moving goalposts.
There’s a difference between “Outside View” and “outside view”. I certainly know what it means to look at things from another perspective which is what the non-caps one means (or at least that’s how I read it).
I am not “very sure” about anything; that’s not how it works; but I think pretty much everything I say here you can take as “very sure” in your worldview.
Stop being a jerk. He didn’t even state whether it mentions Popper, let alone whether it does more than mention it. The first question wasn’t a goalpost. You’re being super pedantic, but you’re also wrong. It’s not charming.
Links? I ask not to challenge the veracity of your claims but because I am interested in avoiding the kind of erroneous argument you’ve observed.
http://lesswrong.com/lw/3ox/bayesianism_versus_critical_rationalism/3xos
Thanks.
How does that prevent it from being a mistake? With consequences? With appeal to certain moral views?
Bounded rationality is not an ideology, it’s a definition. Calling it a mistake makes as much sense as calling the number “4” a mistake; it is not a type of thing which can be correct or mistaken.
I’m curious why this has been downvoted. Does it mean people don’t believe you? I think this is another illustration of why the voting system sucks. Your protestations of innocence are now hidden and no one is fronting up with an argument. I know you have been upvoted, and I upvoted a bunch of your posts myself once because I was annoyed that you were getting systematically downvoted in order to, I think, rate-limit your posts (which you were annoyed about too) and to hide them (which makes it a pain to read threads). Oh, the sort of silliness the karma system leads to!
You can turn off hiding of low-scoring comments in your preferences.
I presume it was a reaction to the argument that Perplexed was a troll. Had curi made the same protestation of innocence without adding the personal attack, it might not have been downvoted at all.
curi was responding to this:
Trollish is a good description of this, so I think curi’s comment was accurate. And, note, the comment in which the above insult appears in was upvoted.
Whether Perplexed was trolling depends on Perplexed’s intent. Perplexed probably believed what he was saying, so he probably did not intend to accuse curi unjustly. Had he known curi was innocent and then said it anyway in order to upset him, then he would have been trolling.
The issue isn’t whether he honestly believed it but that he came up with a rather harsh claim without evidence.
He did have evidence. Perhaps what you mean is that there were alternative hypotheses explaining the same evidence. But the existence of alternative hypotheses does not stop the evidence being evidence. There are always alternative hypotheses, and yet there is evidence. Therefore the existence of alternatives does not prevent something from being evidence.
Consider the following three hypotheses:
A) Nobody was systematically voting up your posts.
B) You were systematically voting up your own posts.
C) Somebody else was systematically voting up your posts.
The evidence includes the fact that after having your karma voted down to oblivion, your karma shot up for no obvious reason, allowing you to post a main article to the front page. This evidence decreases the probability of hypothesis A, and simultaneously increases the probabilities of B and C, by Bayesian updating. I don’t know what the Popperian account is precisely, but since it is not completely insane then I believe it leads to roughly the same scientific result, though stated in Popperian terms, eg without talking about the probability of the hypothesis. The evidence is in any case inconsistent with A, and consistent with both B and C, or less consistent with A than with B or C, however you want to put it, and thus is evidence for each as contrasted with A.
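The updating described in this paragraph can be made numerical. Every number below is invented solely to show the mechanics; none is an estimate of what actually happened.

```python
# Hypotheses about the karma spike; every number below is invented
# purely to show the mechanics of the update being described.
priors = {"A: nobody upvoting": 0.6,
          "B: self-upvoting": 0.2,
          "C: someone else upvoting": 0.2}

# P(sudden karma spike | hypothesis):
likelihoods = {"A: nobody upvoting": 0.01,
               "B: self-upvoting": 0.7,
               "C: someone else upvoting": 0.7}

unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
for h, v in unnormalized.items():
    print(h, round(v / total, 3))
# A collapses; B and C both rise, exactly as the paragraph above says.
```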
As I recall, you yourself proposed hypothesis C. Do you claim that you proposed a hypothesis without evidence, and that this was a bad thing? For that would seem to follow from your own logic. An alternative hypothesis consistent with your own experience is that you did it yourself and just forgot; call this C1. C1 is highly improbable, but to say such a thing is nothing other than to say that the prior of C1 is extremely low, which is Bayesian thinking, which I understand you disapprove of.
In any case, your complaint that he offered a conjecture in advance of evidence is odd coming from a Popperian. Deutsch the Popperian writes that conjectures are not “derived” from anything. They are guesses—bold conjectures, so he writes. Experience is not the source from which conjectures are derived. Its main use is to choose between conjectures that have already been guessed. So he writes. But now you complain about a conjecture offered—so you claim—in advance of evidence!
And whence comes this new appreciation of yours for social niceties such as holding one’s tongue and not blurting out uncomfortable truths, or, since we are after all fallible (as Popper says), uncomfortable guesses at the truth? Did you not claim that we should all be beyond such social niceties, such sacrifices of the bold pursuit of truth to mere matters of “style”? Did you not yourself explain that you deliberately post in a style that does not pull any punches, and that anyone who complains merely proves that they are not worthy of your time? So why the apparent change of heart?
Of course, they presumably had additional evidence regarding the truth or falsehood of B (leaving aside highly unlikely scenarios like them performing acts of which they were unaware), which other people don’t have. So the situation isn’t quite symmetrical.
(Not disagreeing with your main point. Or, in fact, engaging with it at all. Just nitpicking.)
A very rude one. The problem wasn’t that it was a conjecture but that it was a personal attack.
Here let me demonstrate:
You agree that’s a bad way to approach discussion, right?
Maybe I used to agree, but maybe a week of posts like this has gradually persuaded me, through persuasive arguments, otherwise. Quoting from you:
But now you complain:
Oh great master of the breaking of etiquette. I am confused!
Then when swimmer963 wrote:
You responded:
Have you fallen into cultural bias?
Then when swimmer963 wrote:
You responded:
Is it time for you to re-read this book?
I am confused. If you want others to respect some very basic ground rules of etiquette, then why did you preach the opposite only days ago?
It’s not that I care what he thinks; I just think posting crazy factual libels is a bad idea.
I never said one should ignore all etiquette of all types. There are objective limits on what’s a good or bad idea in this area.
I’m not sure what you hope to gain by arguing this point with me. Do you just want to make me concede something in a debate or are you hoping to learn something?
I already know that etiquette is important, and why. I am pointing out that you also know that it is important when you are the target of a breach. So, even the one who preaches that etiquette be set aside in the bold pursuit of truth, does not really believe it, not when it’s his own ox being gored.
You have all along been oblivious to the reactions of others—admittedly so, proudly so. You have argued that your social obliviousness is a virtue, an intellectual strength, because such things as etiquette are mere obstacles in the road to reality. But this is mistaken, and even you intuit that it is, once the shoe is on the other foot—that is, once you stand in the place of those that you have antagonized and exasperated for an entire week. Your philosophy of antagonism serves no purpose but to justify your own failure or refusal to treat others well.
I don’t need you to concede anything. What I’ve done here is put together the pieces and given you a chance to respond.
Uh huh. Yet you offer no guidelines as to what is allowed and what is off limits, after a week of preaching and practicing. Or to be more precise, you do offer a guide of sorts: you are off limits, and everyone else is fair game. What you are inclined to break is okay to break. What you, for whatever reason, don’t break, nobody else must break.
You’re not giving any reasons here why you think we are “tone-deaf” other than that you think we have not explained, for example, why LW people think support is possible. But that’s not tone-deafness. Tone-deafness is a complaint not about the substance of ideas but about how well you have expressed those ideas with respect to some norm: it is essentially a complaint about style, and it would seem you want us to pay attention to karma. We think rational people ought to be able to look beyond those things. Right? With regard to support, we explained and gave examples but, briefly, just to illustrate again, here is a quote from “Where Recursive Justification Hits Bottom”:
Perhaps it is you that has been deaf?
We know that Popperian philosophy is a hard sell and we know that in order to sell it we have to combat lies and myths that have been spread around. Popper himself spent a lot of time doing that. Some of those myths are right here at LW, in the sequences, and we gave examples. But, again, deafness—has anybody admitted that there are these mistakes? Is that deafness a reaction to our “tone-deafness” and is that rational?
I agree, but rather than just asserting I haven’t been, perhaps you should illustrate with some examples.
You seem to care about karma. If one thinks that karma is an authoritarian mistake, as I do, then how much respect do you think I should have for it?
That you think we are paying lip service is itself a negative and rather hostile reaction, don’t you think? Also, I do want to hear some good arguments against Popper. Unfortunately most arguments, including those here, have to do with the myths, such as that Popper’s philosophy is mere falsificationism, and come from people who don’t know Popper well or got their information from second-hand sources.
Why do you think he did that? You’re just making things up here.
Tversky and Kahneman believe in such ideas as “evolved mental behaviour” and “bounded rationality”. These beliefs exist right enough. If you read The Beginning of Infinity by David Deutsch you will see arguments against these sorts of things. You’re arguing by assertion again and haven’t carefully looked at the substance.
Please give up on us. We’re obviously not as careful and rational thinkers as you are. Utterly hopeless. We’re stuck in our realm of imagined cognitive biases. Go find something productive to do. Please go away.
Plus, we’re a cult. And we read Harry Potter fan fiction. And the groupthink is staggering.
First, in response to your first paragraph, my complaint about ‘tone-deafness’ was not intended as a complaint about style. It was a complaint about your failure to do well at the listening half of conversation. A failure to tailor your arguments to the responses you receive. A failure to understand the counterarguments. My complaint may be wrong and unjustified, but it is definitely not a complaint about style.
But, speaking of style, you suggest:
Well, I can see the attractiveness of that slogan, but we tend to think of it a bit differently here. Here, we think that rational people ought to be able to fix any rough edges in their ‘style’ that prevent them from communicating their ideas successfully. We don’t believe that it makes sense to place the entire onus of adjustment on the listener. And we especially don’t believe that only one side has the onus of listening.
Perhaps. But that’s enough about me. Let’s talk about you. :) As you may have noticed, responding to an attack with a counterattack usually doesn’t achieve very much here.
I guess that would depend on how interested you are in having me listen to your ideas.
You are probably right. And that presents you with a problem. How do you induce people to come to know Popper well? How do you tempt them to get their information from some non-second-hand source?
Now I’m sure you guys have given long and careful thought to this problem and have developed a plan. But if you should discover that things are not going well, I have an idea that might help: you might consider producing some discussion postings consisting mostly of long quotes from Popper and his most prominent disciples, with only short glosses from yourselves.
Hmmm. There is something here I just don’t understand. Why all this hostility to what seems to me to be the fairly uncontroversial realization that people are often less good at reasoning than we would like them to be? It is almost as if you had religious or political objections to some evil doctrine. Do you think it would be possible to enlighten me as to why it seems to you that the stakes are so high with this issue?
As for reading Deutsch, I intend to. I don’t think I have ever had a book recommended to me so many times before it is even published in this country.
Somehow I was able to buy it in the Amazon Kindle store for about $18, but the highlight feature is not working properly. My introduction to Deutsch was several years ago with The Fabric of Reality, in which he defends the Everett interpretation, among other things. At that point he became a must-read author (which means I find him worth reading, not that I agree fully with him), one of only a handful. (Daniel Dennett is another). If you want to read Deutsch now, The Fabric of Reality is immediately available. As I recall, it’s a mix of persuasive arguments and dubious arguments.
I’d be very curious to see where anything Tversky wrote contains the phrase “evolved mental behavior”; as I explained to you, T&K have classically been pretty agnostic about where these biases and heuristics are coming from. That other people in the field might think that they are evolved is a side issue. I can’t speak as strongly about Kahneman, but I’d be surprised if any joint paper of the two used that phrase.
But there’s a more serious issue here which I pointed out to you earlier and you are still missing: You cannot let philosophy override evidence. When evidence and your philosophy contradict, philosophy must lose. No matter how good my philosophical arguments are, they cannot withstand empirical data. If my philosophy says the Earth is flat, my philosophy is bad, since the evidence is overwhelming that the Earth is not flat. If my philosophy requires a geocentric universe, then my philosophy is bad. If my philosophy requires irreducible mental entities then my philosophy is bad. And if my philosophy requires humans to be perfect reasoners then my philosophy is bad.
As long as you keep insisting that your philosophical desires about what humans should be override the evidence of what humans are you will not be doing a good job understanding humans or the rest of the universe.
And to be blunt, as long as you keep making this sort of claim, people here are going to not take you seriously. So please go elsewhere. We don’t have much to say to each other.
No, as far as I know, they don’t use the phrase “evolved mental behaviour”, but I didn’t say they did, only that they believe in such things. That they do is evident here:
“From its earliest days, the research that Tversky and I conducted was guided by the idea that intuitive judgments occupy a position – perhaps corresponding to evolutionary history – between the automatic operations of perception and the deliberate operations of reasoning.”
Read the wording closely. To me it indicates they don’t have a good explanation for these heuristics, or, if they do have an explanation, it is vague enough to be consistent with both evolved and not-evolved. But they don’t have a problem with evolved. I also gave you other arguments in my comments to you in our other discussion.
Why are you continuing this when you’ve already sarcastically told me to go away?
Edit: This Wikipedia page says “Cognitive biases are instances of evolved mental behavior.” Do you think that is an accurate description of what cognitive biases are supposed to be? Is there any controversy about whether they are evolved or not?
I’m sorry. Could you clarify exactly what it is you think that quote illustrates?
I intend to respond more completely to your posting, but that clarification would be helpful.
He is saying that one always has to make an argument to prove that an idea is true or more likely to be true. Ideas must be supported.
Yes, I understood that, but my question was about why you wrote:
So, apparently, what was illustrated was that Eliezer was not a good and faithful disciple of Popper when he wrote that. I’m a bit surprised you thought that needed illustration.
ETA: Or maybe you meant that your ability to dredge up that quote illustrates that you have been paying attention to whether and why LessWrongers believe support is possible. Yeah, that makes more sense, is more charitable, and is the interpretation I’ll go with.
Ok, with that out of the way, I will respond to your long great-grandfather comment (above) directly.
That sounds pretty awful. Bounded rationality is a standard concept. Surely if you argue against it, you are confused or don’t understand it properly. I’m not sure what an “evolved mental behaviour” is, but that sounds pretty uncontroversial too. Looking at Deutsch on video, at about 28:00 he is using the term “bounded rationality” to refer to something different, so this seems like a simple confusion based on different definitions of terms.
If you are going to assume that people are confused for arguing against “standard concepts” or because you think something is uncontroversial, then that is just argument from authority.
The supposed heuristics which Herbert Simon and others propose, which give rise to our alleged cognitive biases, are held by them to have evolved via biological evolution, to be based on induction, and to be bounded. Hard-coded processes based on induction that can generate some knowledge but not all knowledge go against the ideas that Deutsch discusses in The Beginning of Infinity. For one thing, induction is impossible and doesn’t happen anywhere, including in human brains. For another, knowledge creation is all or nothing; a machine that can generate some knowledge can generate all knowledge (the jump to universality), so halfway houses like these heuristics would be very difficult to engineer; they would keep jumping. And, for another, human knowledge and reasoning is memetic, not genetic, and there are no hard-coded reasoning rules.
This is just an argument over the definition of the phrase “bounded rationality”. Let’s call these two definitions BR1 and BR2. The definition that timtyler, Kahneman and Tversky, and I are using is BR1; the definition that you, curi, and David Deutsch use is BR2.
BR1 means “rationality that is performed using a finite amount of resources”. Think of this as bounded-resource rationality. All rationality done in this universe is BR1, by definition, because you only get a limited amount of time and memory to think about things. This definition does not contain any claims about what sort of knowledge BR1 can or can’t generate. A detailed theory of BR1 would say things like “solving this math problem requires at least 10^9 operations”. More commonly, people refer to BR1 to distinguish it from things like AIXI, which is a mathematical construct that can theoretically figure out anything given sufficient data, but which is impossible to construct because it contains several infinities. A mind with universal reasoning is BR1 if it only has a finite amount of time to do it in.
BR2 means “rationality that can generate some types of knowledge, but not others”. Think of this as bounded-domain rationality. Whether this exists at all depends on what you mean by “knowledge”. For example, if you have a computer program that collects seismograph data and predicts earthquakes, you might say it “knows” where earthquakes will occur; this would make it a BR2. If you say that this sort of thing doesn’t count as knowledge until a human reads it from a screen or printout, then no BR2s exist.
BR1 is a standard concept, but as far as I know BR2 is unique to Deutsch’s book The Beginning of Infinity. BR1 exists, tautologically from its definition. Whether BR2 exists depends on how you define some other things, but personally I don’t find BR2 illuminating, so I see no reason to take a stance either way on it.
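To make the BR1 reading concrete, here is a minimal sketch (my own toy example, not anything from K&T or from Deutsch): a brute-force search whose domain is universal, but whose resources are capped. When it fails, it fails for BR1 reasons (budget exhausted), not BR2 reasons (answer outside its domain).

```python
# Toy illustration of BR1 (bounded-resource rationality): brute-force
# search will eventually test every candidate, so nothing is outside its
# domain, but a finite step budget may run out first.

def budgeted_search(predicate, candidates, budget):
    """Return the first candidate satisfying predicate, or None if the
    operation budget is exhausted before one is found."""
    for steps, candidate in enumerate(candidates, start=1):
        if steps > budget:
            return None  # a failure of resources (BR1), not of domain (BR2)
        if predicate(candidate):
            return candidate
    return None

# Find a factor of n by trial division.
n = 10_403  # = 101 * 103
print(budgeted_search(lambda d: n % d == 0, range(2, n), budget=50))   # None
print(budgeted_search(lambda d: n % d == 0, range(2, n), budget=200))  # 101
```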
I’m pretty sure there’s a similar issue with the definition of “induction”. I know of at least two definitions relevant to epistemology, but neither of them seems to make sense in context so I suspect that Deutsch has come up with a third. Could you explain what Deutsch uses the word induction to mean? I think that would clear up a great deal of confusion.
All your points are wrong, though. Induction has been discussed to death already. Computation universality doesn’t mean intelligent systems evolve without cognitive biases, and the fact that human cultural knowledge is memetic doesn’t mean there are not common built-in biases either. The human brain is reprogrammable to some extent, but much of the basic pattern-recognition circuitry has a genetically specified architecture.
Many of the biases in question are in the basic psychology textbooks—this is surely not something that is up for debate.
It looks to me as though those biases are very much up for debate, and not just by curi and me:
Why do you argue from authority saying things like something surely cannot be up for debate because it’s in all the textbooks? curi and I are fallibilists: nothing is beyond question.
You say you’re a fallibilist, but you’re actually falling into the failure mode described in this article. Suppose you’ve got a question with positions A and B, with a bunch of supporting arguments for A and a bunch of supporting arguments for B. Some of those arguments for each side will be wrong, or ambiguous, or inapplicable; that’s what fallibilism predicts, and I think we all agree with that.
Suppose there are 3 valid and 3 invalid arguments for A, and 3 valid and 3 invalid arguments for B. Now suppose someone decides to get rid of any of the arguments that are invalid, but they happen to think A is better. Most people will end up attacking all the arguments for B, but they won’t look as closely at the arguments for A. After they’re finished, they’ll have 3 valid and 3 invalid arguments for A, and 3 valid arguments for B, which looks like a preponderance of evidence in favor of A, but it isn’t.
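The arithmetic is trivial, but a short sketch makes the asymmetry vivid (purely hypothetical counts, just restating the 3-and-3 scenario above):

```python
# Each side starts with 3 valid and 3 invalid arguments.
arguments_for_A = ["A-valid"] * 3 + ["A-invalid"] * 3
arguments_for_B = ["B-valid"] * 3 + ["B-invalid"] * 3

def scrutinize(arguments):
    """Strike out every argument that doesn't survive scrutiny."""
    return [a for a in arguments if "invalid" not in a]

# A partisan of A scrutinizes only the opposing side.
arguments_for_B = scrutinize(arguments_for_B)

# Surviving counts: 6 for A vs. 3 for B -- an illusory preponderance for A,
# produced entirely by where the scrutiny was aimed.
print(len(arguments_for_A), len(arguments_for_B))  # 6 3
```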
Now read the abstract of that paper you linked again. That paper disagrees with where K&T draw the boundary between questions that trigger the conjunction fallacy and questions that don’t, and describes the underlying mechanism that produces it differently. The authors do not claim that the conjunction fallacy doesn’t exist.
It seems as though they acknowledge the conjunction fallacy and are proposing different underlying mechanisms to explain how it is produced.
If you want to argue with psychology 101, fine; but if you do it in public, without experimental support, and with a dodgy theoretical framework derived from computation universality, things are not going to go well.
If citing textbooks is classed as “arguing from authority”, one should point out that such arguments are usually correct.
They have put “fallacious behaviour” in quotes to indicate that they don’t agree the fallacy exists. I could be wrong, however, as I am just going from the abstract, and maybe the authors do claim it exists. They do seem to be saying it is just an artifact of hints, though. I’ll need to read the paper to understand better. Maybe I’ll end up disagreeing with the authors.
Textbook arguments are often wrong. Consider quantum physics and the Copenhagen Interpretation for example. And one way of arguing against CI is from a philosophical perspective (it’s instrumentalist and a bad explanation).
I looked through the whole paper and don’t think you’re wrong.
I don’t agree with the hints paper in various respects. But it disputes the conjunction fallacy, arguing that conjunction isn’t the real issue and that the biases explanation isn’t right either. So there is certainly disagreement on these issues.
Do you mean in the context of arguments in textbooks? This seems like a very weak claim, given how frequently some areas change. Indeed, psychology is an area where what an intro-level textbook would claim to be true, and would even discuss as relevant topics, has changed drastically in the last 60 years. For example, in a modern psychology textbook the primary discussion of Freud will be to note that most of his claims fell into two broad categories: untestable or demonstrably false. Similarly, even experimentally derived claims about some things (such as how children learn) have changed a lot in the last few years, as more clever experimental design has done a better job of separating issues of planning and physical coordination from babies’ models of reality. Psychology seems to be a bad area to make this sort of argument.
Yes.
It is weak, in that it makes no bold claims, and merely states what most would take for granted—that most of the things in textbooks are essentially correct.
Nice post.
Some did. At the same time that others didn’t.
The ones admitting it said all epistemologies have those flaws, and it’s impossible to do anything about it. When told that one already exists they just dismissed that as impossible instead of being interested in researching whether it succeeds. Or sometimes they took an attitude similar to EY: it’s a flaw and maybe we’ll fix it some day but we don’t know how to yet and the attempts in progress don’t look promising. (Why doesn’t the Popperian attempt in particular look promising? Why hasn’t it already succeeded? No comment given.)
And they dismissed it without even knowing of any scholarly work by anyone on their side which makes their point for them. As far as they know, no one from their side ever refuted Popper in depth, having carefully read his books. And they are OK with that.
@lip service: anyone who cares to can find criticisms of Popper on my blog on a variety of subjects. This is just accusing sources of ideas of bias as a way to dismiss them, without even doing basic research about whether these claims are true (let alone explaining why the source should be used to determine the quality of the substance).
I had in mind myths like these:
Has anybody here said, yes, these are myths and should be retracted?
I think that’s the only one with a serious problem.
I do not trust that they are accurate. Consequently I discount them when I encounter them. I am currently reading The Beginning of Infinity (which is hard to obtain in the US, as it is not to be published until summer, though inexplicably I was able to buy it for the Kindle; equally inexplicably, my extensive highlights from the book are not showing up on my highlights page at Amazon), and trust Deutsch much more on the topic of Popper. I trust Popper still more on the topic of Popper, and I read the essay collection Objective Knowledge a few weeks ago.
I do not trust myself on the topic of Popper, which is why I will not declare these to be myths, as such a statement would presuppose that I am trustworthy.
Occasionally you make valid points and this is one of them. I agree that most of what you’ve quoted above is accurate. In general, Eliezer is somewhat sloppy when it comes to historical issues. Thus, I’ve pointed out here before problems with the use of phlogiston as an example of an unfalsifiable theory, as well as other essentially historical issues.
So we should now ask: should Eliezer read any Popper? Well, I’d say he should read LScD, and I’ve recommended Popper to people here before (along with Kuhn and Lakatos). But there’s something to note: I estimate that the chance that any regular LW reader is going to read any Popper has gone down drastically in the last 1.5 weeks. I will let you figure out why I think that, and leave it to you to figure out whether that’s a good thing or not.
LScD is not the correct book to read if you want to understand Popper’s philosophy. C&R and OK are better choices.
What do you mean “along with” Kuhn and Lakatos? They are dissimilar to Popper.
Popper’s positions aren’t important as historical issues but because there is an epistemology that matters today which he explained. It’s not historical sloppiness when Eliezer dismisses a rival theory using myths; it’s bad scholarship in the present about the ideas themselves. (Even if he didn’t know the right ideas, why did he attack a straw man instead of learning better ideas, improving the ideas himself, or refraining from speaking?)
BTW I emailed Eliezer years ago to let him know he had myths about Popper on his website and he chose not to fix it.
As in they are people worth reading.
You’ve asserted this before. So far no one here, including myself, has seen any reason from what you’ve said to think that. LScD has some interesting points but is overall wrong. I fail to see why, at this point, reading later books based on the same notions would be terribly helpful. Given what you’ve said here, my estimate that there’s useful material there has gone down.
LScD is Popper’s first major work. It is not representative. It is way more formalistic than Popper’s later work. He changed on purpose and said so.
He changed his mind about some stuff from LScD; he improved on it later. LScD was written before he understood the justificationism issue nearly as well as he did later.
LScD engages with the world views of his opponents a lot. It’s not oriented towards presenting Popper’s whole way of thinking (especially his later way of thinking, after he refined it).
The later books are not “based on the same notions”. They often take a different approach: less logic and technical debate, more philosophical argument and explanation.
Since you haven’t read them, you really ought to listen to experts about which Popper books are best instead of just assuming, bizarrely, that the one you read which the Popper experts don’t favor is his best material. We’re telling you it’s not his best material; don’t judge him by it. It’s ridiculous to dismiss our worldview based on the books we’re telling you aren’t representative, while refusing to read the books we say explain what we’re actually about.
I’m not dismissing your worldview based on books that aren’t representative. Indeed, earlier I told you that what you were saying especially in regards to morality seemed less reasonable than what Popper said in LScD.
So you are saying that he does a worse job of making his notions precise and using careful logic? Using more words and less formalism is not making more philosophical argument; it is going back to the worst parts of philosophy. I don’t know what you think my views are, but whatever your model of me is, you might want to update or replace it if you think the above was something that would make me more inclined to read a text. Popper is clearly quite smart and clever, and there’s no question that there’s a lot of bad or misleading formalism in philosophy, but the general trend is pretty clear: philosophers who are willing to use formalism are more likely to have clear ideas.
He changed his mind to the same kind of view I have, FYI.
He changed his mind about what types of precision matter (in which fields). He is precise in different ways: better explanations which get issues more precisely right; less formalism; fewer attempts to use math to address philosophical issues. It’s not that he pays less attention to what he writes later; it’s that he uses the attention for somewhat different purposes.
I’m just explaining truths; I’m not designing my statements to have an effect on you.
I’m not sure about this trend; no particular opinion either way. Regardless, Popper isn’t a trend, he’s a category of his own.
Yes, the truth can be rude, and this is why mere rudeness is sometimes mistaken for truth. But most rudeness is not truth, because rudeness is easy and truth is hard.
They don’t know this stuff.
One of the differences is: when they screw up because they don’t know this stuff, we’ll explain some.
When they think we’re screwing up, they downvote to invisibility, remove normal site links to your post, say “read the sequences” (which don’t specifically address the points of disagreement), or just plain stop engaging (e.g. the guy who said I was in invisible-dragon territory, and therefore safe to ignore without any possibility of missing important ideas, instead of discussing my point. Basically, since my ideas fail some of his criteria, mostly due to his misunderstanding, he ignores them. And he thinks that is safe and not closed-minded!)
I’m unfamiliar with this particular instance, but I’ve engaged trolls on Fark before, for these reasons:
1) Some trolls just give up when taken calmly and seriously at face value, as opposed to getting hit with indignation.
2) Bored at work.
3) They provide such wonderful strawmen against which to clarify my own thoughts and sharpen my arguments.
4) In some discussions (political ones especially), Poe’s Law applies: oftentimes real people hold real opinions that are as blunt as trolls’ opinions.
In the discussed case I have little doubt that the said troll indeed held most of the opinions he expressed here. To clarify, by “troll” I mean someone who argues for the fun of arguing, without caring a bit about the standards of a reasonable discussion; a troll needn’t pretend to hold opinions which he in fact doesn’t share. (My use of “not being sincere” in the OP might be misleading in this respect.)
Responding to curi was good training. I found myself discovering, for myself, what level various parts of my understanding were at: automatically seeing exactly where he went wrong, or being able to derive from my web of beliefs a principle that was violated… and I could go on.
Point is, I fed the trolls because it was good exercise.
I recommend numbering your hypotheses, for easy reference.
Done.
There are some more constructive reasons for arguing with the incorrigible. One is to persuade the audience, rather than your interlocutor—anything you say on a public stage is addressed as much to the general readership as to the individual being explicitly addressed. Another is to practise your skill at arguing the material.
Not that these justify going on and on indefinitely, but they are worth putting in the balance.
As something of a troll myself (depending on what is meant by that, at least), I find it often quite interesting to debate with people who do not hold my point of view and are capable of making decent arguments. Too often one runs into arguments where both sides don’t know what they are talking about in the slightest, which is terribly frustrating.
As for karma, I have had enough karma to make a top-level post for a while; getting it isn’t particularly hard. I would have much higher karma if I didn’t occasionally post controversial comments.
If I were determined to actually troll, I would create a group of accounts and post thoughtful comments for a day or so to build up some karma from outside sources. Then I would upvote one account using the dozen or so other accounts, to give it plenty of karma with which to make top-level posts even with a decent amount of downvotes. To slow the drop-off I could keep upvoting that account with the other ones, but this wouldn’t be worthwhile. Better would be to craft, on the other accounts, a series of posts attacking the top-level troll post, so that these other accounts get upvoted for tearing into the troll. This would let a string of accounts dominate the recent-posts threads over a decently long period of time. I am sure there are ways in which this process could be optimized from the troll’s point of view.
I have no intention of ever posting a top level post that is highly controversial. I am not that kind of troll.
One could also probably create a semiautomatic spammer that defeats the karma requirements without much difficulty.
Basically, create a core of accounts whose number equals the minimum karma needed to make a top-level post. One of those accounts then needs to get one karma point, which hopefully can’t be automated. Then one has to create some banal discussion that includes one post from each of the core accounts, which then get upvoted in a cycle.
Then start creating spam accounts; have each post something banal like “I agree”, and have the core accounts upvote that one post so the spam account can post its spam on the top page. Rinse and repeat.
To avoid detection, it would be important to script the discussion between the core accounts, and to have the spam accounts post in that discussion only according to the random algorithm being used.
There are probably ways to make this simpler to create and there may be easy ways to defeat such a program. This is just an idea.
We can implement a karma minimum for upvotes if that becomes a problem.
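For what it’s worth, a minimal sketch of such a gate (hypothetical names and threshold; nothing here is LessWrong’s actual code): fresh accounts start at zero karma, so a floor on voting breaks the bootstrap step of the voting ring described above.

```python
from dataclasses import dataclass

MIN_KARMA_TO_UPVOTE = 10  # hypothetical threshold, not the site's real value

@dataclass
class User:
    name: str
    karma: int

def can_upvote(user: User) -> bool:
    # Fresh sockpuppets start at zero karma; a floor like this forces every
    # account in a voting ring to earn real karma before its votes count.
    return user.karma >= MIN_KARMA_TO_UPVOTE

print(can_upvote(User("sockpuppet7", karma=0)))   # False
print(can_upvote(User("regular", karma=250)))     # True
```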
One interesting “feature” of the karma system that makes this a lot easier is the fact that upvotes/downvotes of deleted comments still contribute to your karma.
On the topic of karma, why are you downvoting every post I make regardless of content?
(Downvoted for making an accusation without presenting evidence.)
It’s a long story, starting with Eugine publicly declaring that he was downvoting the comments I made that he disagreed with, which has seemingly escalated to downvoting every comment I make even where I’m just conducting meta-housekeeping and the like.
I’m not commenting to shame or accuse, I’m trying to understand his motivations.
Good post, thanks.
I note you don’t list as a hypothesis “even though the interlocutor was a troll, their arguments had enough merit to be worth the effort of rebutting”. I didn’t read most of the threads; I take it if I had, I’d know that wasn’t a good hypothesis? Or does this come under the “Best rebuttal contest” heading?
It was rather an unstated assumption that the arguments didn’t have enough merit. Of course a troll can make a few good arguments, and perhaps there were a few in those threads, but most of the debate wasn’t centered around reasonable arguments.
The worth-rebutting property can itself possibly be further reduced with respect to the motivation of the debater. Arguments may be rebutted out of pure intellectual curiosity (“let’s see whether I can find a hole in that reasoning”), to ensure that other people don’t fall victim to fallacious deductions, or to show one’s intellectual superiority or for other social-signalling purposes. Those are different mechanisms, and an argument worth rebutting for one reason may not be worth rebutting for other reasons (in fact, the presence of signalling reasons depends more on the context than on the argument itself).
I don’t endorse rebutting for signalling, as it is in fact what trolls generally aim for. When other motivations are present, a better way is to create a separate post to discuss the problem, ideally after waiting a while until the troll disappears. Instant rebutting will likely be contaminated by trollish distractions and thus be suboptimally productive.
Yes, that makes sense. Thanks.
That hypothesis is actually the most reasonable one. The troll’s arguments didn’t have any merit—they were all perfect examples of every bad argument going—but they were the arguments one sees time and again from people who aren’t (consciously) trolling.
I think this is quite a large part of it. I have several times on Less Wrong followed discussions that seemed to be headed towards trollishness, and then all of a sudden someone changes their mind, updates, and everyone moves on. It is one of the things I love about this website, and I would be sad if an anti-trolling sentiment led to these sort of discussions being abandoned before they concluded. Sometimes persistence is a waste of time, but sometimes it makes a difference.
I’m glad I didn’t read it now—thanks!
Under the usual rules, trolls are to be treated like zombies: they emit messages, but their words don’t reflect what they actually think, but a sort of fake-thinking designed to deceive you. Or they are outlaws: responding in good faith towards them is considered bad behavior, under the “don’t feed the troll” principle.
“We are actors; we are the opposite of people. So? We need an audience.” — Rosencrantz and Guildenstern are Dead
If what you care about is arriving at accurate beliefs about the world under discussion rather than about the social intentions of the discussion participants, it might be best to declare that trolls don’t exist, but obnoxious people do. That it doesn’t matter whether a poster believes the words they’re posting, but it does matter whether they are a pain in the ass.
We could hide downvoted posts from “recent posts” (make them available only to those who already know the url) and hide downvoted comments from “recent comments”. And hide all children of downvoted posts/comments from “recent comments” too. That would discourage feeding.
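A sketch of the filter I have in mind (a toy data model with hypothetical names, not the actual site schema): an item is suppressed from the recent listings if it or any of its ancestors is net-downvoted, but it remains reachable by direct URL.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Item:
    title: str
    score: int
    parent: Optional["Item"] = None

def hidden(item: Item, threshold: int = 0) -> bool:
    """An item is hidden if it, or any ancestor, scores below threshold."""
    while item is not None:
        if item.score < threshold:
            return True
        item = item.parent
    return False

def recent_feed(items: List[Item]) -> List[Item]:
    # Downvoted posts/comments, and everything beneath them, stay reachable
    # by URL but never surface in the recent-items listings.
    return [i for i in items if not hidden(i)]

troll_post = Item("troll post", score=-8)
reply = Item("patient rebuttal", score=12, parent=troll_post)
normal = Item("ordinary post", score=3)
print([i.title for i in recent_feed([troll_post, reply, normal])])
# ['ordinary post'] -- the rebuttal is hidden along with its troll parent
```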
Undesired side effect.
Why? We already automatically collapse downvoted comments along with all their children, to discourage people from writing more comments in these threads. My proposal has similar intent.
Because there are many valuable discussions that have happened to occur below one downvoted comment. I do not desire any additional penalties to those comments. Deal with the problem of trolls by some other more direct mechanism.
Hiding downvoted posts is already available as an individual preference, which lots of people have turned off. Do not want.
I want to be able to see all recent posts without knowing the specific URL, if only to find a post easily when I want to look at its comments at some later moment. But I agree that it may be useful to have an option to switch the display of downvoted posts off, analogously to how it works for comments. It should even be the default setting.
There have been a few comments of that type, and I’ve sent private messages to some participants saying just that. Public “don’t feed the troll” can be an encouragement.
Part of this is probably due to the feeling that “we can’t let this outrageous statement stand”. We feel the need to respond to a bad argument, even if there is no particular benefit to doing so.
But then they’ll keep being wrong!
I know. Never said it was rational. Upvoted (that’s a particularly good one).
If you knew in advance that you were wrong and your comment was meaningless, why did you make it?
It’s pretty normal for philosophers to rubbish all (other) philosophers; think of it as in-group banter, where insiders are licensed to insult each other. So, for example, and just off the top of my head: you can find Cicero praising Socrates for applying philosophy to practical real-life concerns instead of the rubbishy pre-Socratics speculating about the origins of the universe and matter; you can find Sextus Empiricus rubbishing every non-skeptic as being sadly deluded and deluding everyone else with their talk of logic; you can find Nietzsche (well, I hardly need describe him) calling the previous two millennia of philosophy literally “sick” or diseased (gosh, so that “trolling” isn’t even original!) thanks to the Apollonian and then Christian influences; or Hume talking about casting almost all philosophy to the flames; or Wittgenstein thinking he solved all the problems of philosophy and showing most discussions to be literal “nonsense”...
These are assertions, which may or may not be correct, but you have not given any examples or arguments for them.
Does being amazed cause you to question some of your assertions?
How do you know it was mostly a waste of time? Do you speak for everyone here? Have you spoken to all the participants?