Holden, I think your assessment is accurate … but I would venture to say that it does not go far enough.
My own experience with SI, and my background, might be relevant here. I am a member of the Math/Physical Science faculty at Wells College, in Upstate NY. I also have had a parallel career as a cognitive scientist/AI researcher, with several publications in the AGI field, including the opening chapter (coauthored with Ben Goertzel) in a forthcoming Springer book about the Singularity.
I have long complained about SI’s narrow and obsessive focus on the “utility function” aspect of AI—simply put, SI assumes that future superintelligent systems will be driven by certain classes of mechanism that are still only theoretical, and which are very likely to be superseded by other kinds of mechanism that have very different properties. Even worse, the “utility function” mechanism favored by SI is quite likely to be so unstable that it will never allow an AI to achieve any kind of human-level intelligence, never mind the kind of superintelligence that would be threatening.
Perhaps most important of all, though, is the fact that the alternative motivation mechanism might (and notice that I am being cautious here: might) lead to systems that are extremely stable. Which means both friendly and safe.
Taken in isolation, these thoughts and arguments might amount to nothing more than a minor addition to the points that you make above. However, my experience with SI is that when I tried to raise these concerns back in 2005/2006 I was subjected to a series of attacks that culminated in a tirade of slanderous denunciations from the founder of SI, Eliezer Yudkowsky. After delivering this tirade, Yudkowsky then banned me from the discussion forum that he controlled, and instructed others on that forum that discussion about me was henceforth forbidden.
Since that time I have found that when I partake in discussions on AGI topics in a context where SI supporters are present, I am frequently subjected to abusive personal attacks in which reference is made to Yudkowsky’s earlier outburst. This activity is now so common that when I occasionally post comments here, my remarks are very quickly voted down below a threshold that makes them virtually invisible. (A fate that will probably apply immediately to this very comment).
I would say that, far from deserving support, SI should be considered a cult-like community in which dissent is ruthlessly suppressed in order to exaggerate the point of view of SI’s founders and controllers, regardless of the scientific merits of those views, or of the dissenting opinions.
Richard, if you have some solid, rigorous and technical criticism of SIAI’s AI work, I wish you would create a pseudonymous account on LW and state that criticism without giving the slightest hint that you are Richard Loosemore, or making any claim about your credentials, or talking about censorship and quashing of dissenting views.
Until you do something like that, I can’t help thinking that you care more about your reputation or punishing Eliezer than about improving everybody’s understanding of technical issues.
I would say that, far from deserving support, SI should be considered a cult-like community in which dissent is ruthlessly suppressed in order to exaggerate the point of view of SI’s founders and controllers, regardless of the scientific merits of those views, or of the dissenting opinions.
This is a very strong statement. Have you allowed for the possibility that your current judgement might be clouded by the events that transpired some six years ago?
I myself employ a very strong heuristic, from years of trolling the internet: when a user joins a forum and complains about an out-of-character and strongly personal persecution by the moderation staff in the past, there is virtually always more to the story when you look into it.
Indeed, Dolores, that is an empirically sound strategy, if used with caution.
My own experience, however, is that people who do that can usually be googled quickly, and are often found to be unqualified cranks of one persuasion or another. People with more anger than self-control.
But that is not always the case. Recently, for example, a woman friended me on Facebook and then posted numerous diatribes against a respected academic acquaintance of mine, accusing him of raping her and fathering her child. These posts were quite blood-curdling. And their target appeared to be the most innocent guy you could imagine. Very difficult to make a judgement. However, about a month ago the guy suddenly came out and made a full and embarrassingly frank admission of guilt. It was an astonishing episode. But it was an instance of one of those rare occasions when the person (the woman in this case) turned out to be perfectly justified.
I am helpless to convince you. All I can do is point to my own qualifications and standing. I am no lone crank crying in the wilderness. I teach Math, Physics and Cognitive Neuroscience at the undergraduate level, and I have coauthored a paper with one of the AGI field’s leading exponents (Ben Goertzel), in a book about the Singularity that was at one point (maybe not anymore!) slated to be a publishing landmark for the field. You have to make a judgement.
Regardless of who was how much at fault in the SL4 incident, surely you must admit that Yudkowsky’s interactions with you were unusually hostile relative to how he generally interacts with critics. I can see how you’d want to place emphasis on those interactions because they involved you personally, but that doesn’t make them representative for purposes of judging cultishness or making general claims that “dissent is ruthlessly suppressed”.
Steven. That does make it seem as though the only thing worth complaining about was the “unusually” hostile EY behavior on that occasion. As if it were exceptional, not repeated before or since.
But that is inaccurate. That episode was the culmination of a long sequence of derogatory remarks. So that is what came before.
What came after? I have made a number of attempts to open a dialog on the important issue at hand, which is not the personal conflict but the question of AGI motivation systems. My attempts have been rebuffed. And instead I have been subjected to repeated attacks by SI members.
That would be six years of repeated attacks.
So portraying it as an isolated incident is not factually correct. Which was my point, of course.
I’m interested in any compiled papers or articles you wrote about AGI motivation systems, aside from the forthcoming book chapter, which I will read. Do you have any links?
http://susaro.com/
shminux, it is of course possible that my current judgement might be clouded by past events … however, we have to assess at what point judgements become “clouded” by time (in other words, poor because of confusion or emotion), rather than remaining lessons learned that still apply.
In the time since those events I have found no diminution in the rate at which SI people intervene aggressively in discussions I am having, with the sole purpose of trying to tell everyone that I was banned from Yudkowsky’s forum back in 2006.
This most recently happened just a few weeks ago. On that occasion Luke Muehlhauser (no less) took the unusual step of asking me to friend him on Facebook, after which he joined a discussion I was having and made scathing ad hominem comments about me—which included trying to use the fact of the 2006 episode as a piece of evidence for my lack of credibility—and then disappeared again. He made no reply when his ad hominem assertions were challenged.
Now: would you consider it to be a matter of clouded judgment on my part when Luke Muehlhauser is still, in 2012, engaging in that kind of attack?
On balance, then, I think my comments come from privileged insight (I am one of the few to have made technical objections to SI’s cherished beliefs, and I was given valuable insight into their psychology when I experienced the violent reaction) rather than clouded judgement.
This most recently happened just a few weeks ago. On that occasion Luke Muehlhauser (no less) took the unusual step of asking me to friend him on Facebook, after which he joined a discussion I was having and made scathing ad hominem comments about me
Sounds serious… Feel free to post a relevant snippet of the discussion, here or elsewhere, so that those interested can judge this event on its merits, and not through your interpretation of it.
On April 7th, Richard posted to Facebook:
LessWrong has now shown its true mettle. After someone here on FB mentioned a LW discussion of consciousness, I went over there and explained that Eliezer Yudkowsky, in his essay, had completely misunderstood the Zombie Argument given by David Chalmers. I received a mix of critical, thoughtful and sometimes rude replies. But then, all of a sudden, Eliezer took an interest in this old thread again, and in less than 24 hours all of my contributions were relegated to the trash. Funnily enough, David Chalmers himself then appeared and explained that Eliezer had, in fact, completely misunderstood his argument. Chalmers’ comments, strangely enough, have NOT been censored. :-)
I replied:
I haven’t read the whole discussion, but just so everyone is clear...
Richard’s claim that “in less than 24 hours all of my contributions were relegated to the trash” is false.
What happened is that LWers disvalued Richard’s comments and downvoted them. Because most users have their preferences set to hide comments with a score of less than −3, these users saw Richard’s most-downvoted comments as collapsed by default, with a note reading “comment score below threshold”, and a plus symbol you can click to expand the comment and the ensuing thread. This happens regularly even for many LW regulars like Will Newsome.
What happened was not censorship. Richard’s comments were not “relegated to the trash.” They were downvoted by the community, and not merely because Eliezer “took an interest” in the thread again. I have strongly disagreed with Eliezer on LW before and had my comments massively UP-voted by the community. LessWrong is not a community of mindless Eliezer-drones. It’s a community of people who have learned the skills of thinking quantitatively for themselves, which is one reason it can be hard for the community to cooperate to get things done in general.
Chalmers’ comments weren’t “censored” because (1) nobody’s comments on that thread were actually censored, to my knowledge, and (2) the community thought Chalmers’ comments were valuable even when they disagreed with them.
Richard, I find your comment to be misleading to the point of being dishonest, similar to the level of dishonesty in the messages that got you banned from the SL4 mailing list: http://www.sl4.org/archive/0608/15895.html
I’ve appreciated several of the articles you’ve written for IEET and H+, and I wish you would be more careful with your communications.
As you can see, the point of my comment wasn’t to “abuse” Richard, but to explain what actually happened so that readers could compare it to what Richard said had happened.
At that point, Abram Demski commented:
I humbly suggest that the debate end here. (I do not predict anything useful coming out of a continued debate, and I’d prefer if we kept on the interesting track which the conversation turned to.)
...and Richard and I agreed.
Thus, I will say no more here. Indeed, given Richard’s reaction (which I might have predicted with a bit more research), I regret having raised the issue with him at all.
I fail to see anything that can be qualified as an ad hominem (“an attempt to negate the truth of a claim by pointing out a negative characteristic or belief of the person supporting it”) in what you quoted. If anything, the original comment by Richard comes much closer to this definition.
shminux.
I refer you to http://www.theskepticsguide.org/resources/logicalfallacies.aspx for a concise summary of argument fallacies, including ad hominem...
“Ad hominem: An ad hominem argument is any that attempts to counter another’s claims or conclusions by attacking the person, rather than addressing the argument itself.”
My original argument, that Luke took so much exception to, was one made by many people in the history of civilisation: is it censorship when a community of people collectively vote in such a way that a dissenting voice becomes inaudible? For example, if all members of Congress were to shout loudly when a particular member got up to speak, drowning out their words, would this be censorship, or just their exercise of a community vote against that person? The question is debatable, and many people would agree that it is a quite sinister form of censorship.
So my point about censorship shared a heritage with something that has been said by others, on countless occasions.
Now, did Luke accept that MANY people would agree that this kind of “shouting down” of a voice was tantamount to censorship?
Far from accepting that this is a commonplace, he called my comment “misleading to the point of being dishonest”. That was not a reference to the question of whether the point was or was not valid; it was a reference to my character. My level of honesty. Which is the standard definition of an ad hominem.
But of course, he went much further than this simple ad hominem. He said: “Richard, I find your comment to be misleading to the point of being dishonest, similar to the level of dishonesty in the messages that got you banned from the SL4 mailing list: http://www.sl4.org/archive/0608/15895.html”
This is also an example of a “Poisoning the Well” attack. Guilt by association.
Furthermore, he makes a slanderous claim of bad character. He refers to “… the level of dishonesty that got you banned from the SL4 mailing list”. In fact, there was no dishonesty in that episode at all. He alludes to the supposed dishonesty as if it were established fact, and uses it to smear my character rather than address my argument.
But, in the face of this clear example of an ad hominem attack (it is self-evident, hardly needing me to spell it out), you, shminux, see nothing. In fact, without explaining your reasoning, you go on to state that you find more evidence for ad hominem in my original remarks! I just looked again: I say nothing about anyone’s character (!), so how can there be evidence for me attacking someone by using an ad hominem?
Finally, Luke distorted the quote of the conversation, above. He OMITTED part of the conversation, in which I supplied evidence that there was no dishonesty on my part, and that there was massive evidence that the banning occurred because Yudkowsky needed to stop me when I suggested we get an outside expert opinion to adjudicate the dispute. Faced with this reply, Luke disappeared. He made only ONE comment (the one above) and then he ignored the reply.
He continues to ignore that evidence, and continues to slander my character, with remarks such as: “Indeed, given Richard’s reaction (which with a bit more research I might have predicted), I regret having raised the issue with him at all.”
My “reaction” was to supply evidence.
Apparently that is a mark against someone.
For example, if all members of Congress were to shout loudly when a particular member got up to speak, drowning out their words, would this be censorship, or just their exercise of a community vote against that person?
One thing to note is that your comment wasn’t removed; it was collapsed. It can still be viewed by anyone who clicks the expander or has their threshold set sufficiently low (with my settings, it’s expanded). There is a tension between the threat of censorship being a problem on the one hand, and the ability for a community to collectively decide what they want to talk about on the other.
The censorship issue is also diluted by the fact that 1) nothing here is binding on anyone (which is way different than your Congress example), and 2) there are plenty of other places people can discuss things, online and off. It is still somewhat relevant, of course, to the question of whether there’s an echo-chamber effect, but be careful not to pull in additional connotations with your choice of words and examples.
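To make the mechanics concrete, here is a minimal sketch of the collapse behavior described above, assuming a per-user hide threshold that defaults to −3; the names and structure are illustrative, not the actual LessWrong code.

```python
# Minimal sketch of "collapsed, not removed": comments below a per-user
# hide threshold (default -3) are folded, never deleted. Illustrative
# names only; this is not the actual LessWrong source.

DEFAULT_HIDE_THRESHOLD = -3

def render_comment(text: str, score: int,
                   hide_threshold: int = DEFAULT_HIDE_THRESHOLD) -> str:
    """Return the comment body, or a collapsed placeholder that any
    reader can still expand by clicking the plus symbol."""
    if score < hide_threshold:
        return f"[+] comment score below threshold ({score})"
    return text

# A -7 comment is collapsed under default settings...
print(render_comment("I think EY misread Chalmers.", -7))
# ...but fully visible to a reader who sets a lower threshold.
print(render_comment("I think EY misread Chalmers.", -7, hide_threshold=-10))
```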
“Guilt by association” with your past self?
This happens regularly even for many LW regulars like Will Newsome.
(Though to be fair I think this sort of depends on your definition of “regularly”—I think over 95% of my comments aren’t downvoted, many of them getting 5 or more upvotes, in contrast with other contributors who get about 25% of their comments downvoted and usually end up leaving as a result.)
Well, if someone’s comments are downvoted that regularly and still they stay LW regulars, there’s something wrong.
Why? This isn’t obvious to me. If the remaining comments are highly upvoted and of correspondingly high quality then it would make sense for them to stick around. Timtyler may be in a similar category.
If I counted right, only 9 of Timtyler’s last 100 comments have negative scores as of now.
25% would be a lot. It’d mean that you either don’t realize or don’t care that people don’t want to see some types of comments.
With them or with us?
Most likely with them.
I initially upvoted this post, because the criticism seemed reasonable. Then I read the discussion, and switched to downvoting it. In particular, this:
Taken in isolation, these thoughts and arguments might amount to nothing more than a minor addition to the points that you make above. However, my experience with SI is that when I tried to raise these concerns back in 2005/2006 I was subjected to a series of attacks that culminated in a tirade of slanderous denunciations from the founder of SI, Eliezer Yudkowsky. After delivering this tirade, Yudkowsky then banned me from the discussion forum that he controlled, and instructed others on that forum that discussion about me was henceforth forbidden.
Since that time I have found that when I partake in discussions on AGI topics in a context where SI supporters are present, I am frequently subjected to abusive personal attacks in which reference is made to Yudkowsky’s earlier outburst. This activity is now so common that when I occasionally post comments here, my remarks are very quickly voted down below a threshold that makes them virtually invisible. (A fate that will probably apply immediately to this very comment).
Serious accusations there, with no links that would allow someone to judge the truth of them. And after reading the discussion, I suspect the reason people keep bringing up your 2006 banning is that they see your current behavior as part of a pattern of bad behavior, and that the behavior that led to your 2006 banning was part of that same pattern.
I witnessed many of the emails in the 2006 banning. Richard disagreed with Eliezer often, and not very diplomatically. Rather than deal with Richard’s arguments, Eliezer decided to label Richard as a stupid troll, which he obviously was not, and dismiss him. I am disappointed that Eliezer has apparently never apologized. The email list, SL4, slacked off in volume for months afterwards, probably because most participants felt disgusted by the affair; and Ben Goertzel made a new list, which many people switched to.
The fact that many people quit the list / cut back their participation seems fairly strong evidence that Loosemore has a legitimate complaint here.
I’m not sure. People sometimes cut back participation in that sort of thing in response to drama in general. However, it is definitely evidence. Phil’s remark makes me strongly update in the direction of Loosemore having a legitimate point.
Can you provide some examples of these “abusive personal attacks”? I would also be interested in this ruthless suppression you mention. I have never seen this sort of behavior on LessWrong, and would be shocked to find it among those who support the Singularity Institute in general.
I’ve read a few of your previous comments, and while I felt that they were not strong arguments, I didn’t downvote them because they were intelligent and well-written, and competent constructive criticism is something we don’t get nearly enough of. Indeed, it is usually welcomed. The number of downvotes given to the comments, therefore, does seem odd to me. (Any LW regular who is familiar with the situation is also welcome to comment on this.)
I have seen something like this before, and it turned out the comments were being downvoted because the person making them had gone over, and over, and over the same issues, unable or unwilling to either competently defend them, or change his own mind. That’s no evidence that the same thing is happening here, of course, but I give the example because in my experience, this community is almost never vindictive or malicious, and is laudably willing to consider any cogent argument. I’ve never seen an actual insult levied here by any regular, for instance, and well-constructed dissenting opinions are actively encouraged.
So in summary, I am very curious about this situation; why would a community that has been—to me, almost shockingly—consistent in its dedication to rationality, and honestly evaluating arguments regardless of personal feelings, persecute someone simply for presenting a dissenting opinion?
One final thing I will note is that you do seem to be upset about past events, and it seems like it colors your view (and prose, a bit!). From checking both here and on SL4, for instance, your later claims regarding what’s going on (“dissent is ruthlessly suppressed”) seem exaggerated. But I don’t know the whole story, obviously—thus this question.
So in summary, I am very curious about this situation; why would a community that has been—to me, almost shockingly—consistent in its dedication to rationality, and honestly evaluating arguments regardless of personal feelings, persecute someone simply for presenting a dissenting opinion?
The answer is probably that you overestimate that community’s dedication to rationality because you share its biases. The main post demonstrates an enormous conceit among the SI vanguard. Now, how is that rational? How does it fail to get extensive scrutiny in a community of rationalists?
My take is that neither side in this argument distinguished itself. Loosemore called for an “outside adjudicator” to settle a scientific argument. What kind of obnoxious behavior is that, when one finds oneself losing an argument? Yudkowsky (rightfully pissed off), in turn, convicted Loosemore of a scientific error, tarred him with incompetence and dishonesty, and banned him. None of these “sins” deserved a ban (no wonder the raw feelings come back to haunt); no honorable person would accept a position where he has the authority to exercise such power (a party to a dispute is biased). Or at the very least, he wouldn’t use it the way Yudkowsky did, when he was the banned party’s main antagonist.
The answer is probably that you overestimate that community’s dedication to rationality because you share its biases.
That’s probably no small part of it. However, even if my opinion of the community is rose-tinted, note that I refer specifically to observation. That is, I’ve sampled a good amount of posts and comments here on LessWrong, and I see people behaving rationally in arguments—appreciation of polite and lucid dissension, no insults or ad hominem attacks, etc. It’s harder to tell what’s going on with karma, but again, I’ve not seen any one particular individual harassed with negative karma merely for disagreeing.
The main post demonstrates an enormous conceit among the SI vanguard. Now, how is that rational? How does it fail to get extensive scrutiny in a community of rationalists?
Can you elaborate, please? I’m not sure what enormous conceit you refer to.
My take is that neither side in this argument distinguished itself. Loosemore called for an “outside adjudicator” to settle a scientific argument. What kind of obnoxious behavior is that, when one finds oneself losing an argument? Yudkowsky (rightfully pissed off), in turn, convicted Loosemore of a scientific error, tarred him with incompetence and dishonesty, and banned him. None of these “sins” deserved a ban
I think that’s an excellent analysis. I certainly feel like Yudkowsky overreacted, and as you say, in the circumstances no wonder it still chafes; but as I say above, Richard’s arguments failed to impress, and calling for outside help (“adjudication” for an argument that should be based only on facts and logic?) is indeed beyond obnoxious.
Thanks. I read the whole debate, or as much of it as is there; I’ve prepared a short summary to post tomorrow if anyone is interested in knowing what really went on (“as according to Hul-Gil”, anyway) without having to hack their way through that thread-jungle themselves.
(Summary of summary: Loosemore really does know what he’s talking about—mostly—but he also appears somewhat dishonest, or at least extremely imprecise in his communication.)
Imagine you’re driving down the highway, and you see another car wobbling unpredictably across the lanes, lunging for any momentary opportunity to get ahead. Would you consider that driver particularly friendly or safe?
If you being downvoted is the result of LW ruthlessly suppressing dissent of all kind, how do you explain this post by Holden Karnofsky getting massively upvoted?
All possible. However, if you can explain anything, the explanation counts for nothing. The question is which explanation is the most likely, and “there is evidence for fair-mindedness (but it is mostly fake!)” is more contrived than “there is evidence for fair-mindedness”, as an explanation for the upvotes of OP.
I’m a regular, and I was impressed with it. Many other regulars have also said positive things about it, so possible explanation 1 is out. And unless I’m outright lying to you, 2, if true, would have to be entirely subconscious.
Alex, I did not say that ALL dissent is ruthlessly suppressed; I said that dissent is ruthlessly suppressed. You inserted the qualifier “ALL”.
You ask an irrelevant question, since it pertains to someone else not getting suppressed. However, I will answer it for you since you failed to make the trivial and rather obvious logical leap needed to answer it yourself.
Holden Karnofsky stands in a position of great power with respect to the SIAI community: you want his money. And since the easiest and quickest way to ensure that you NEVER get to see any of the money that he controls, would be to ruthlessly suppress his dissent, he is treated with the utmost deference.
(I am saying this in case anyone looks at this thread and thinks Loosemore is making a valid point, not because I approve of anyone’s responding to him.)
Alex, I did not say that ALL dissent is ruthlessly suppressed
This is an abuse of language since it is implicated by the original statement.
And since the easiest and quickest way to ensure that you NEVER get to see any of the money that he controls, would be to ruthlessly suppress his dissent, he is treated with the utmost deference.
There is absolutely no reason to believe that all, or half, or a quarter, or even ten percent of the upvotes on this post come from SIAI staff. There are plenty of people on LW who don’t support donating to SIAI.
Actually bare noun phrases in English carry both interpretations, ambiguously. The canonical example is “Policemen carry guns” versus “Policemen were arriving”—the former makes little sense when interpreted existentially, but the latter makes even less sense when interpreted universally.
Well, it was a hasty generalization on my part. Flawed descriptivism, not prescriptivism. But you’re losing sight of the issue, even as you refute an unsound argument. In the particular case—check it out—Grognor resolved the ambiguity in favor of the universal quantifier. This would be uncharitable in the general case, but in context it’s—as I said—a ridiculous argument. I stretched for an abstract argument to establish the ridiculousness, and I produced a specious argument. But the fact is that it was Grognor who had accused Loosemore of “abuse of language,” on the tacit ground that the universal quantifier is automatically implied. There was the original prescriptivism.
(This comment originally said only, “Don’t do that.” That was rude, so I’m replacing it with the following. I apologize if you already saw that.)
As a general rule, I’d prefer that people don’t make silly jokes on this website, as that’s a first step on the slippery slope toward making this site just another reddit.
Curious. I was just reading Jerome Tuccille’s book on the history of libertarianism through his eyes, and when he discusses how Objectivism turned into a cult, one of the issues apparently was a lack of acceptance of humor.
I disagree with your blanket policy on jokes. I don’t want to be a member of an organization that prohibits making fun of said organization (or its well-respected members); these types of organizations tend to have poor track records. I would, of course, fully support a ban on bad jokes, where “bad” is defined as “an unfunny joke that makes me want to downvote your comment, oh look, here’s me downvoting it”.
That said, I upvoted your comment for the honest clarification.
(I try to simply not vote on comments that actually make me laugh—there is a conflict between the part of me that wants LW to be Serious Business and the part of me that wants unexpected laughs, and such comments tend to get more karma than would be fair anyway.)
where “bad” is defined as “an unfunny joke that makes me want to downvote your comment, oh look, here’s me downvoting it”.
I usually operate using this definition, with one tweak: I’m more likely to upvote a useful comment if it’s also funny. I’m unlikely to upvote a comment if it’s only funny; and though the temptation to make those arises, I try hard to save it for reddit.
You know, I only visit LessWrong these days to entertain myself with the sight of self-styled “rational” people engaging in mind-twisting displays of irrationality.
Grognor: congratulations! You win the Idiot of the Week award.
Why? Let’s take my statement and really see if it was an “abuse of language” by setting exactly the same statement in a different context. For example: let’s suppose someone claims that the government of China “ruthlessly suppresses dissent”. But now, does that mean that the government of China ruthlessly suppresses the torrent of dissent that comes from the American Embassy? I don’t think so! Very bad idea to invade the embassy and disappear the entire diplomatic mission … seriously bad consequences! So they don’t. Oh, and what about people who are in some way close to the heart of the regime … could it be that they do indeed tolerate some dissent in the kind of way that, if it happened out there in the populace, would lead to instant death? You can probably imagine the circumstances easily enough: back in Mao Tse-tung’s day, would there have been senior officials who called him funny names behind his back? Happens all the time in those kinds of power structures: some people are in-crowd and are allowed to get away with it, while the same behavior elsewhere is “ruthlessly suppressed”.
So, if that person claims that the government of China “ruthlessly suppresses dissent”, according to you they MUST mean that all forms of dissent WITHOUT EXCEPTION are suppressed.
But if they argue that their statement obviously would not apply to the American Embassy staff … you would tell them that their original statement was “an abuse of language since it is implicated by the original statement.”
Amusing.
Keep up the good work. I find the reactions I get when I say things on Less Wrong to be almost infinitely varied in their irrationality.
Richard, this really isn’t productive. You’re clearly quite intelligent and clearly still have issues due to the dispute between you and Eliezer. It is likely that if you got over this, you could be an effective, efficient, and helpful critic of SI and their ideas. But right now, you are engaging in uncivil behavior that isn’t endearing you to anyone, while making emotionally heavy comparisons that make you sound strident.
Yes, but how much of that is due to the prior negative experience and fighting he’s had? It isn’t at all common for a troll to self-identify as such only after they’ve had bad experiences. Human motivations are highly malleable.
Er, yes. The fact that Loosemore is a professional AI researcher with a fair number of accomplishments, together with his general history, strongly suggests that at least in his case he didn’t start his interaction with the intent to troll. His early actions on LW were positive and some were voted up.
His second comment on LW, from January, is here and is at +8 (and I seem to recall was higher earlier). Two of his earlier comments from around the same time were at positive numbers but have since dipped below. It looks like at least one person went through and systematically downvoted his comments without regard to content.
I understand your point, but given that sentiment, the sentence “It isn’t at all common for a troll to self-identify as such only after they’ve had bad experiences” confuses me.
Right, as mentioned I meant uncommon. My point is that I don’t think Loosemore’s experience is that different from what often happens. At least in my experience, I’ve seen people who were more or less productive on one forum becomes effectively trolls elsewhere on the internet after having had bad experiences elsewhere. I think a lot of this is due to cognitive dissonance- people don’t like to think that they were being actively stupid or were effectively accidentally trolling, so they convince themselves that those were their goals all along.
It seems to me it would be more appropriate to ask Yudkowsky and LukeProg to retract the false accusations that Loosemore is a liar or dishonest, respectively.
Yes, that would probably be a step in the right direction also. I don’t know whether the accusation is false, but the evidence is at best extremely slim and altogether unhelpful. That someone didn’t remember a study a few years ago in the heat of the moment simply isn’t something worth getting worked up about.
I don’t think ruthlessly is the right word; I’d rather say relentlessly. In fact, your analogy to Stalinist practices brings out, by way of contrast, how not ruthless LW’s practices are. Yudkowsky is—if not in your case—subtle. Soft censorship is effected by elaborate rituals (the “sequences”; the “rational” turn of phrase) effectively limiting the group to a single personality profile: narrow-focusers, who can’t be led astray from their monomania. Then, instituting a downvoting system that allows control by the high-karma elite: the available downvotes (but not upvotes—the masses must be kept content) are distributed based on the amount of accumulated karma. Formula nonpublic, as far as I can tell.
Why don’t these rationalists even come close to intuiting the logic of the downvoting system? They evidently care not the least about its mechanics. They are far from even imagining it is consequential. Some rationalists.
Total available downvotes are a high number (4 times total karma, if I recall correctly), and in practice I think they prevent very few users from downvoting as much as they want.
From personal experience, I think you’re wrong about a high number. I currently need 413 more points to downvote at all. I have no idea how you would even suspect whether “few users” are precluded from downvoting.
But what a way to discuss this: “high number.” If this is supposed to be a community forum, why doesn’t the community even know the number—or even care?
(For the record, I ended up editing in the “(4 times total karma, if I recall correctly)” after posting the comment, and you probably replied before seeing that part.)
I currently need 413 more points to downvote at all.
So how many downvotes did you use when your karma was still highly positive? That’s likely a major part of that result.
But what a way to discuss this: “high number.” If this is supposed to be a community forum, why doesn’t the community even know the number—or even care.
The main points of the limit are 1) to prevent easy gaming of the system and 2) to prevent trolls and the like from going through and downvoting to a level that doesn’t actually reflect communal norms. In practice, 1 and 2 are pretty successful and most of the community doesn’t see much danger in the system. That you can’t downvote I think would be seen by many as a feature rather than a bug. So they don’t have much need to care because the system at least at a glance seems to be working, and we don’t like to waste that much time thinking about the karma system.
Then, instituting a downvoting system that allows control by the high-karma elite: the available downvotes (but not upvotes—the masses must be kept content) are distributed based on the amount of accumulated karma. Formula nonpublic, as far as I can tell.
The formula max is 4*total karma. I’m curious: if there were a limit on the total number of upvotes also, would you then say that this was further evidence of control by entrenched users? If one option leads to a claim about keeping the masses content, and the reverse would lead to a different set of accusations, then something is wrong. If any pattern is evidence of malicious intent, then something is wrong. Incidentally, it might help to realize that the system as it exists is a slightly modified version of the standard reddit system. The code and details for the karma system are based off of fairly widely used open source code. It is much more likely that this karma system was adopted simply because it was a basic part of the code base. Don’t assume malice when laziness will do.
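As a rough illustration of the cap under discussion, here is a minimal sketch, assuming the limit really is four times total karma and that spent downvotes are tracked per user; the class and method names are hypothetical, not taken from the actual reddit-derived code.

```python
# Minimal sketch of a downvote budget capped at 4 * total karma, with
# each cast downvote counted against it. Hypothetical names; not the
# actual reddit-derived LessWrong code.

class Voter:
    def __init__(self, karma: int, downvotes_cast: int = 0):
        self.karma = karma                    # total accumulated karma
        self.downvotes_cast = downvotes_cast  # budget already spent

    def remaining_downvotes(self) -> int:
        # Zero or negative karma leaves no downvote budget at all.
        return max(0, 4 * self.karma - self.downvotes_cast)

    def try_downvote(self) -> bool:
        if self.remaining_downvotes() > 0:
            self.downvotes_cast += 1
            return True
        return False  # quietly refused once the budget is exhausted

regular = Voter(karma=9075)  # effectively unlimited in practice
newcomer = Voter(karma=0)    # cannot downvote at all
print(regular.try_downvote(), newcomer.try_downvote())  # True False
```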
Why don’t these rationalists even come close to intuiting the logic of the downvoting system?
Disagreeing with what you think of the system is not the same as not intuiting it. Different humans have different intuition.
They evidently care not the least about its mechanics. They are far from even imagining it is consequential. Some rationalists.
But some people do think the karma system matters. And you are right in that it does matter in some respects more than many people realize it does. There’s no question that although I don’t really care much about my karma total at all, I can’t help but feel a tinge of happiness when I log in to see my karma go up from 9072 to 9076 as it just did, and then feel a slight negative feeling when I see it then go down to 9075. Attach a number to something and people will try to modify it. MMOs have known this for a while. (An amusing take.) And having partially randomized aspects certainly makes it more addicting since randomized reinforcement is more effective. And in this case, arguments that people like are more positively rewarded. That’s potentially quite subtle, and could have negative effects, but it isn’t censorship.
While that doesn’t amount to censorship, there are two other aspects of the karma system that most people don’t even notice much at all. The first of course is that downvoted comments get collapsed. The second is that one gets rate-limited in posting as one’s karma becomes more negative. Neither of these really constitutes censorship by most notions of the term, although I suppose the second could sort of fall into it under some plausible notions. Practically speaking, you don’t seem to be having any trouble getting your points heard here.
I don’t know what your evidence is that they are “far from even imagining it is consequential.” Listening to why one might think it is consequential and then deciding that the karma system doesn’t have that much impact is not the same thing as being unable to imagine the possibility. It is possible (and would seem not too unlikely to me) that people don’t appreciate the more negative side effects of the karma system, but once again, as often seems to be the case, your own worst enemy is yourself, by overstating your case in a way that overall makes people less likely to take it seriously.
Attach a number to something and people will try to modify it. MMOs have known this for a while. (An amusing take.)
The Kill Everyone Project was almost exactly this.
Progress Quest and Parameters are other takes on a similar concept (though Parameters is actually fairly interesting, if you think of it as an abstract puzzle).
SI should be considered a cult-like community in which dissent is ruthlessly suppressed in order to exaggerate the point of view of SI’s founders and controllers, regardless of the scientific merits of those views, or of the dissenting opinions.
Also, your impression might be different if you had witnessed the long, deep, and ongoing disagreements between Eliezer and me about several issues fundamental to SI — all while Eliezer suggested that I be made Executive Director and then continued to support me in that role.
Can you give an example of what you mean by “abusive personal attacks”?
However, my experience with SI is that when I tried to raise these concerns back in 2005/2006 I was subjected to a series of attacks that culminated in a tirade of slanderous denunciations from the founder of SI, Eliezer Yudkowsky.
I am frequently subjected to abusive personal attacks in which reference is made to Yudkowsky’s earlier outburst
As someone who was previously totally unaware of that flap, that doesn’t sound to me like a “slanderous tirade.” Maybe Loosemore would care to explain what he thought was slanderous about it?
Okay, make that: I strongly suspect the rationality of the rational internet would improve by many orders of magnitude if all arguments about arguments were quietly deleted.
Markus: Happy to link to the details, but where in the huge stream would you like to be linked to? The problem is that opinions can be sharply skewed by choosing to link to only selected items.
I cite as evidence Oscar’s choice, below, to link to a post by EY. In that post he makes a series of statements that are flagrant untruths. If you read that particular link, and take his word as trustworthy, you get one impression.
But if you knew that EY had to remove several quotes from their context and present them in a deceitful manner, in order to claim that I said things that I did not, you might get a very different impression.
You might also get a different impression if you knew this. The comment that Oscar cites came shortly after I offered to submit the dispute to outside arbitration by an expert in the field we were discussing. I offered that ANYONE could propose an outside expert, and I would abide by their opinion.
It was only at that point that EY suddenly wrote the post that Oscar just referenced, in which he declared me to be banished from the list and (a short time later) that all discussion about the topic should cease.
I’ll gladly start reading at any point you’ll link me to.
The fact that you don’t just provide a useful link, but instead give several paragraphs of excuses for why the stuff I’ll be reading is untrustworthy, I count as (small) evidence against you.
I’ve read SL4 around that time and saw the whole drama (although I couldn’t understand all the exact technical details, being 16). My prior on EY flagrantly lying like that is incredibly low. I’m virtually certain that you’re quite cranky in this regard.
I was on SL4 as well, and regarded Eliezer as basically correct, although I thought Loosemore’s ban was more than a little bit disproportionate. (If John Clark didn’t get banned for repeatedly and willfully misunderstanding Gödelian arguments, wasting the time of countless posters over many years, why should Loosemore be banned for backtracking on some heuristics & biases positions?)
You use this word in an unconventional way, i.e., you use it to mean something like ‘unfairly causing harm and wasting people’s time’, which is not the standard definition: the standard definition necessitates intention to provoke or at least something in that vein. (I assume you know what “trolling” means in the context of fishing?) Because it’s only ever used in sensitive contexts, you might want to put effort into finding a more accurate word or phrase. As User:Eugine_Nier noted, lately “troll” and “trolling” have taken on a common usage similar to “fascist” and “fascism”, which I think is an unfortunate turn of events.
The animus here must be really strong. What Yudkowsky did was infer that Loosemore was lying about being a cognitive scientist from his ignorance of a variant of the Wason experiment. First, people often forget obvious things in heated online discussions. Second, there are plenty of incompetent cognitive scientists: if Loosemore intended to deceive, he probably wouldn’t have expressly stated that he didn’t have teaching responsibilities for graduate students.
If what you say is true, then Eliezer is lying about Loosemore lying about his credentials, in which case Eliezer is “trolling”. But if what you say is false, then you are the “troll”.
(This comment is an attempt to convincingly demonstrate that Eliezer’s notion of trolling is, to put it bluntly, both harmful and dumb.)
If what you say is true, then Eliezer is lying about Loosemore lying about his credentials, in which case Eliezer is “trolling”. But if what you say is false, then you are the “troll”. (This comment is an attempt to convincingly demonstrate that Eliezer’s notion of trolling is, to put it bluntly, both harmful and dumb.)
I don’t know about you, but I’d prefer to be considered a troll than a liar; correspondingly, I think the expanded definition of liar is worse than the inaccurate definition of troll. Not every inaccuracy amounts to dishonesty and not all dishonesty to prevarication.
I have long complained about SI’s narrow and obsessive focus on the “utility function” aspect of AI—simply put, SI assumes that future superintelligent systems will be driven by certain classes of mechanism that are still only theoretical, and which are very likely to be superseded by other kinds of mechanism that have very different properties. Even worse, the “utility function” mechanism favored by SI is quite likely to be so unstable that it will never allow an AI to achieve any kind of human-level intelligence, never mind the kind of superintelligence that would be threatening.
I often observe very intelligent folks acting irrationally. I suspect superintelligent AIs might act superirrationally. Perhaps the focus should be on creating rational AIs first. Any superintelligent being would have to be first and foremost superrational, or we are in for a world of trouble. Actually, in my experience, rationality trumps intelligence every time.
Holden, I think your assessment is accurate … but I would venture to say that it does not go far enough.
My own experience with SI, and my background, might be relevant here. I am a member of the Math/Physical Science faculty at Wells College, in Upstate NY. I also have had a parallel career as a cognitive scientist/AI researcher, with several publications in the AGI field, including the opening chapter (coauthored with Ben Goertzel) in a forthcoming Springer book about the Singularity.
I have long complained about SI’s narrow and obsessive focus on the “utility function” aspect of AI—simply put, SI assumes that future superintelligent systems will be driven by certain classes of mechanism that are still only theoretical, and which are very likely to be superceded by other kinds of mechanism that have very different properties. Even worse, the “utility function” mechanism favored by SI is quite likely to be so unstable that it will never allow an AI to achieve any kind of human-level intelligence, never mind the kind of superintelligence that would be threatening.
Perhaps most important of all, though, is the fact that the alternative motivation mechanism might (and notice that I am being cautious here: might) lead to systems that are extremely stable. Which means both friendly and safe.
Taken in isolation, these thoughts and arguments might amount to nothing more than a minor addition to the points that you make above. However, my experience with SI is that when I tried to raise these concerns back in 2005/2006 I was subjected to a series of attacks that culminated in a tirade of slanderous denunciations from the founder of SI, Eliezer Yudkowsky. After delivering this tirade, Yudkowsky then banned me from the discussion forum that he controlled, and instructed others on that forum that discussion about me was henceforth forbidden.
Since that time I have found that when I partake in discussions on AGI topics in a context where SI supporters are present, I am frequently subjected to abusive personal attacks in which reference is made to Yudkowsky’s earlier outburst. This activity is now so common that when I occasionally post comments here, my remarks are very quickly voted down below a threshold that makes them virtually invisible. (A fate that will probably apply immediately to this very comment).
I would say that, far from deserving support, SI should be considered a cult-like community in which dissent is ruthlessly suppressed in order to exaggerate the point of view of SI’s founders and controllers, regardless of the scientific merits of those views, or of the dissenting opinions.
Richard,
If you have some solid, rigorous and technical criticism of SIAI’s AI work, I wish you would create a pseudonimous account on LW and state that critcism without giving the slightest hint that you are Richard Loosemore, or making any claim about your credentials, or talking about censorship and quashing of dissenting views.
Until you do something like that, I can’t help think that you care more about your reputation or punishing Eliezer than about improving everybody’s understanding of technical issues.
This is a very strong statement. Have you allowed for the possibility that your current judgement might be clouded by the events transpired some 6 years ago?
I myself employ a very strong heuristic, from years of trolling the internet: when a user joins a forum and complains about an out-of-character and strongly personal persecution by the moderation staff in the past, there is virtually always more to the story when you look into it.
Indeed, Dolores, that is an empirically sound strategy, if used with caution.
My own experience, however, is that people who do that can usually be googled quickly, and are often found to be unqualified cranks of one persuasion or another. People with more anger than self-control.
But that is not always the case. Recently, for example, a woman friended me on Facebook and then posted numerous diatribes against a respected academic acquaintance of mine, accusing him of raping her and fathering her child. These posts were quite blood-curdling. And their target appeared quite the most innocent guy you could imagine. Very difficult to make a judgement. However, about a month ago the guy suddenly came out and made a full and embarrassing frank admission of guilt. It was an astonishing episode. But it was an instance of one of those rare occasions when the person (the woman in this case) turned out to be perfectly justified.
I am helpless to convince you. All I can do is point to my own qualifications and standing. I am no lone crank crying in the wilderness. I teach Math, Physics and Cognitive Neuroscience at the undergraduate level, and I have coauthored a paper with one of the AGI field’s leading exponents (Ben Goertzel), in a book about the Singularity that was at one point (maybe not anymore!) slated to be a publishing landmark for the field. You have to make a judgement.
Regardless of who was how much at fault in the SL4 incident, surely you must admit that Yudkowsky’s interactions with you were unusually hostile relative to how he generally interacts with critics. I can see how you’d want to place emphasis on those interactions because they involved you personally, but that doesn’t make them representative for purposes of judging cultishness or making general claims that “dissent is ruthlessly suppressed”.
Steven. That does make it seem as though the only thing worth complaining about was the “unusually” hostile EY behavior on that occasion. As if it were exceptional, not repeated before or since.
But that is inaccurate. That episode was the culmination of a long sequence of derogatory remarks. So that is what came before.
What came after? I have made a number of attempts to open a dialog on the important issue at hand, which is not the personal conflict but the question of AGI motivation systems. My attempts have been rebuffed. And instead I have been subjected to repeated attacks by SI members.
That would be six years of repeated attacks.
So portraying it as an isolated incident is not factually correct. Which was my point, of course.
I’m interested in any compiled papers or articles you wrote about AGI motivation systems, aside from the forthcoming book chapter, which I will read. Do you have any links?
http://susaro.com/
shminux, It is of course possible that my current judgement might be clouded by past events … however, we have to assess the point at which judgements are “clouded” (in other words, poor because of confusion or emotion) by time, rather than being lessons learned that still apply.
In the time since those events I have found no diminution in the rate at which SI people intervene aggressively in discussions I am having, with the sole purposes of trying to tell everyone that I was banned from Yudkowsky’s forum back in 2006.
This most recently happened just a few weeks ago. On that occasion Luke Muehlhauser (no less) took the unusual step of asking me to friend him on Facebook, after which he joined a discussion I was having and made scathing ad hominem comments about me—which included trying to use the fact of the 2006 episode as a piece of evidence for my lack of credibility—and then disappeared again. He made no reply when his ad hominem assertions were challenged.
Now: would you consider it to be a matter of clouded judgment on my part when Luke Muehlhauser is still, in 2012, engaging in that kind of attack?
On balance, then, I think my comments come from privileged insight (I am one of the few to have made technical objections to SI’s cherished beliefs, and I was given valuable insight into their psychology when I experienced the violent reaction) rather than clouded judgement.
Sounds serious… Feel free to post a relevant snippet of the discussion, here or elsewhere, so that those interested can judge this event on its merits, and not through your interpretation of it.
On April 7th, Richard posted to Facebook:
I replied:
As you can see, the point of my comment wasn’t to “abuse” Richard, but to explain what actually happened so that readers could compare it to what Richard said had happened.
At that point, Abram Demski commented:
...and Richard and I agreed.
Thus, I will say no more here. Indeed, given Richard’s reaction (which I might have predicted with a bit more research), I regret having raised the issue with him at all.
I fail to see anything that can be qualified as an ad hominem (“an attempt to negate the truth of a claim by pointing out a negative characteristic or belief of the person supporting it”) in what you quoted. If anything, the original comment by Richard comes much closer to this definition.
shminux.
I refer you to http://www.theskepticsguide.org/resources/logicalfallacies.aspx for a concise summary of argument fallacies, including ad hominem...
“Ad hominem An ad hominem argument is any that attempts to counter another’s claims or conclusions by attacking the person, rather than addressing the argument itself.”
My original argument, that Luke took so much exception to, was one made by many people in the history of civilisation: is it censorship when a community of people collectively vote in such a way that a dissenting voice becomes inaudible? For example, if all members of Congress were to shout loudly when a particular member got up to speak, drowning out their words, would this be censorship, or just their exercise of a community vote against that person? The question is debatable, and many people would agree that it is a quite sinister form of censorship.
So my point about censorship shared a heritage with something that has been said my others, on countless occasions.
Now, did Luke accept that MANY people would agree that this kind of “shouting down” of a voice was tantamount to censorship?
Far from accepting that this is a commonplace, he called my comment “misleading to the point of being dishonest”. That is not a reference to the question of whether the point was or was not valid, it was a reference to my character. My level of honesty. Which is the standard definition of an ad hominem.
But of course, he went much further than this simple ad hominem. He said: “Richard, I find your comment to be misleading to the point of being dishonest, similar to the level of dishonesty in the messages that got you banned from the SL4 mailing list: http://www.sl4.org/archive/0608/15895.html″
This is also an example of a “Poisoning the Well” attack. Guilt by association.
Furthermore, he makes a slanderous claim of bad character. He refers to ”… the level of dishonesty that got you banned from the SL4 mailing list”. In fact, there was no dishonesty in that episode at all. He alludes to the supposed dishonesty as if it were established fact, and uses it to try to smear my character rather than my argument.
But, in the face of this clear example of an ad hominem attack (it is self-evident, hardly needing me to spell it out), you, shminux, see nothing. In fact, without explaining your reasoning, you go on to state that you find more evidence for ad hominem in my original remarks! I just looked again: I say nothing about anyone’s character (!), so how can there be evidence for me attacking someone by using an ad hominem?
Finally, Luke distorted the quote of the conversation, above. He OMITTED part of the conversation, in which I supplied evidence that there was no dishonesty on my part, and that there was massive evidence that the banning occurred because Yudkowsky needed to stop me when I suggested we get an outside expert opinion to adjudicate the dispute. Faced with this reply, Luke disappeared. He made only ONE comment (the one above) and then he ignored the reply.
He continues to ignore that evidence, and continues to slander my character, making references to “Indeed, given Richard’s reaction (which with a bit more research I might have predicted), I regret having raised the issue with him at all.”
My “reaction” was to supply evidence.
Apparently that is a mark against someone.
One thing to note is that your comment wasn’t removed; it was collapsed. It can still be viewed by anyone who clicks the expander or has their threshold set sufficiently low (with my settings, it’s expanded). There is a tension between the threat of censorship being a problem on the one hand, and the ability for a community to collectively decide what they want to talk about on the other.
The censorship issue is also diluted by the fact that 1) nothing here is binding on anyone (which is way different than your Congress example), and 2) there are plenty of other places people can discuss things, online and off. It is still somewhat relevant, of course, to the question of whether there’s an echo-chamber effect, but be careful not to pull in additional connotations with your choice of words and examples.
“Guilt by association” with your past self?
(Though to be fair I think this sort of depends on your definition of “regularly”—I think over 95% of my comments aren’t downvoted, many of them getting 5 or more upvotes, in contrast with other contributors who get about 25% of their comments downvoted and usually end up leaving as a result.)
Well, if someone’s comments are downvoted that regularly and they still stay LW regulars, there’s something wrong.
Why? This isn’t obvious to me. If the remaining comments are highly upvoted and of correspondingly high quality then it would make sense for them to stick around. Timtyler may be in a similar category.
If I counted right, only 9 of Timtyler’s last 100 comments have negative scores as of now.
25% would be a lot. It’d mean that you either don’t realize or don’t care that people don’t want to see some types of comments.
With them or with us?
Most likely with them.
I initially upvoted this post, because the criticism seemed reasonable. Then I read the discussion, and switched to downvoting it. In particular, this:
Serious accusations there, with no links that would allow someone to judge the truth of them. And after reading the discussion, I suspect the reason people keep bringing up your 2006 banning is because they see your current behavior is part of a pattern of bad behavior, and that the behavior that led to your 2006 banning was also part of that same pattern of bad behavior.
I witnessed many of the emails in the 2006 banning. Richard disagreed with Eliezer often, and not very diplomatically. Rather than deal with Richard’s arguments, Eliezer decided to label Richard as a stupid troll, which he obviously was not, and dismiss him. I am disappointed that Eliezer has apparently never apologized. The email list, SL4, slacked off in volume for months afterwards, probably because most participants felt disgusted by the affair; and Ben Goertzel made a new list, which many people switched to.
Hmmm...
The fact that many people quit the list / cut back their participation seems fairly strong evidence that Loosemore has a legitimate complaint here.
Though if so, he’s done a poor job conveying it in this thread.
I’m not sure. People sometimes cut back participation in that sort of thing in response to drama in general. However, it is definitely evidence. Phil’s remark makes me strongly update in the direction of Loosemore having a legitimate point.
Can you provide some examples of these “abusive personal attacks”? I would also be interested in this ruthless suppression you mention. I have never seen this sort of behavior on LessWrong, and would be shocked to find it among those who support the Singularity Institute in general.
I’ve read a few of your previous comments, and while I felt that they were not strong arguments, I didn’t downvote them because they were intelligent and well-written, and competent constructive criticism is something we don’t get nearly enough of. Indeed, it is usually welcomed. The amount of downvotes given to the comments, therefore, does seem odd to me. (Any LW regular who is familiar with the situation is also welcome to comment on this.)
I have seen something like this before, and it turned out the comments were being downvoted because the person making them had gone over, and over, and over the same issues, unable or unwilling to either competently defend them, or change his own mind. That’s no evidence that the same thing is happening here, of course, but I give the example because in my experience, this community is almost never vindictive or malicious, and is laudably willing to consider any cogent argument. I’ve never seen an actual insult levied here by any regular, for instance, and well-constructed dissenting opinions are actively encouraged.
So in summary, I am very curious about this situation; why would a community that has been—to me, almost shockingly—consistent in its dedication to rationality, and honestly evaluating arguments regardless of personal feelings, persecute someone simply for presenting a dissenting opinion?
One final thing I will note is that you do seem to be upset about past events, and it seems like it colors your view (and prose, a bit!). From checking both here and on SL4, for instance, your later claims regarding what’s going on (“dissent is ruthlessly suppressed”) seem exaggerated. But I don’t know the whole story, obviously—thus this question.
The answer is probably that you overestimate that community’s dedication to rationality because you share its biases. The main post demonstrates an enormous conceit among the SI vanguard. Now, how is that rational? How does it fail to get extensive scrutiny in a community of rationalists?
My take is that neither side in this argument distinguished itself. Loosemore called for an “outside adjudicator” to solve a scientific argument. What kind of obnoxious behavior is that, when one finds oneself losing an argument? Yudkowsky (rightfully pissed off), in turn, convicted Loosemore of a scientific error, tarred him with incompetence and dishonesty, and banned him. None of these “sins” deserved a ban (no wonder the raw feelings come back to haunt); no honorable person would accept a position where he has the authority to exercise such power (a party to a dispute is biased). Or at the very least, he wouldn’t use it the way Yudkowsky did, when he was the banned party’s main antagonist.
That’s probably no small part of it. However, even if my opinion of the community is tinted rose, note that I refer specifically to observation. That is, I’ve sampled a good amount of posts and comments here on LessWrong, and I see people behaving rationally in arguments—appreciation of polite and lucid dissension, no insults or ad hominem attacks, etc. It’s harder to tell what’s going on with karma, but again, I’ve not seen any one particular individual harassed with negative karma merely for disagreeing.
Can you elaborate, please? I’m not sure what enormous conceit you refer to.
I think that’s an excellent analysis. I certainly feel like Yudkowsky overreacted, and as you say, in the circumstances no wonder it still chafes; but as I say above, Richard’s arguments failed to impress, and calling for outside help (“adjudication” for an argument that should be based only on facts and logic?) is indeed beyond obnoxious.
It seems like everyone is talking about SL4; here is a link to what Richard was probably complaining about:
http://www.sl4.org/archive/0608/15895.html
Thanks. I read the whole debate, or as much of it as is there; I’ve prepared a short summary to post tomorrow if anyone is interested in knowing what really went on (“as according to Hul-Gil”, anyway) without having to hack their way through that thread-jungle themselves.
(Summary of summary: Loosemore really does know what he’s talking about—mostly—but he also appears somewhat dishonest, or at least extremely imprecise in his communication.)
Please do post it, I think it would help resolve the arguments in this thread.
I don’t see how friendly and safe follow from stable.
Imagine you’re driving down the highway, and you see another car wobbling unpredictably across the lanes, lunging for any momentary opportunity to get ahead. Would you consider that driver particularly friendly or safe?
Safe & friendly imply stable, but stable does not imply safe or friendly.
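In symbols, the claim is (reading the predicates informally):

    (\mathrm{Safe} \land \mathrm{Friendly}) \rightarrow \mathrm{Stable}
    \mathrm{Stable} \not\rightarrow (\mathrm{Safe} \lor \mathrm{Friendly})

So stability is at most a necessary condition for safety and friendliness, never a sufficient one.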
If you being downvoted is the result of LW ruthlessly suppressing dissent of all kind, how do you explain this post by Holden Karnofsky getting massively upvoted?
E.g.:
1. It’s not being upvoted by regulars/believers. It’s a magnet for dissidents, and transient visitors with negative perceptions of SI.
2. It’s high-profile, so it needs to be upvoted to put on a show of fair-mindedness.
All possible. However, if you can explain anything, the explanation counts for nothing. The question is which explanation is the most likely, and “there is evidence for fair-mindedness (but it is mostly fake!)” is more contrived than “there is evidence for fair-mindedness”, as an explanation for the upvotes of OP.
Yeah. But there’s also evidence of unfair-mindedness.
And some evidence for fair-mindedness.
I’m a regular, and I was impressed with it. Many other regulars have also said positive things about it, so possible explanation 1 is out. And unless I’m outright lying to you, 2, if true, would have to be entirely subconscious.
My current theory is that LW shares some properties with a successful cult, but is hilariously self-defeating in other ways.
Alex, I did not say that ALL dissent is ruthlessly suppressed; I said that dissent is ruthlessly suppressed. You inserted the qualifier “ALL”.
You ask an irrelevant question, since it pertains to someone else not getting suppressed. However, I will answer it for you since you failed to make the trivial and rather obvious logical leap needed to answer it yourself.
Holden Karnofsky stands in a position of great power with respect to the SIAI community: you want his money. And since the easiest and quickest way to ensure that you NEVER get to see any of the money that he controls, would be to ruthlessly suppress his dissent, he is treated with the utmost deference.
(I am saying this in case anyone looks at this thread and thinks Loosemore is making a valid point, not because I approve of anyone’s responding to him.)
This is an abuse of language since it is implicated by the original statement.
There is absolutely no reason to believe that all, or half, or a quarter, or even ten percent of the upvotes on this post come from SIAI staff. There are plenty of people on LW who don’t support donating to SIAI.
No. Normally the absence of any quantifier implies an existential quantifier, not a universal quantifier. That would seem clearly the case here.
Grognor, this is an error so ridiculous that you should conclude your emotional involvement is affecting your rationality.
Actually bare noun phrases in English carry both interpretations, ambiguously. The canonical example is “Policemen carry guns” versus “Policemen were arriving”—the former makes little sense when interpreted existentially, but the latter makes even less sense when interpreted universally.
In short, there is no preferred interpretation.
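For concreteness, here is a rough first-order gloss of the two readings (an approximation only; generic sentences like “Policemen carry guns” are not strictly universal quantifications, but it makes the contrast explicit):

    \forall x\,(\mathrm{Policeman}(x) \rightarrow \mathrm{CarriesGun}(x))   % universal/generic reading
    \exists x\,(\mathrm{Policeman}(x) \land \mathrm{Arriving}(x))           % existential reading

“Dissent is ruthlessly suppressed” contains the same kind of bare plural, so both readings are available to it as well.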
(Oh, and prescriptivists always lose.)
Well, it was a hasty generalization on my part. Flawed descriptivism, not prescriptivism. But you’re losing sight of the issue, even as you refute an unsound argument. In the particular case—check it out—Grognor resolved the ambiguity in favor of the universal quantifier. This would be uncharitable in the general case, but in context it’s—as I said—a ridiculous argument. I stretched for an abstract argument to establish the ridiculousness, and I produced a specious argument. But the fact is that it was Grognor who had accused Loosemore of “abuse of language,” on the tacit ground that the universal quantifier is automatically implied. There was the original prescriptivism.
What, always? By definition? That sounds dangerously like a prescriptivist statement to me! :-)
Problems with linguistic prescriptivism.
Your comment was a pretty cute tu quoque, but arguing against prescriptivism doesn’t mean giving up the ability to assert propositions.
I was making a joke :-(
(This comment originally said only, “Don’t do that.” That was rude, so I’m replacing it with the following. I apologize if you already saw that.)
As a general rule, I’d prefer that people don’t make silly jokes on this website, as that’s one first step in the slippery slope toward making this site just another reddit.
Paul Graham:
Curious. I was just reading Jerome Tuccille’s book on the history of libertarianism through his eyes, and when he discusses how Objectivism turned into a cult one of the issues apparently was a lack of acceptance of humor.
I disagree with your blanket policy on jokes. I don’t want to be a member of an organization that prohibits making fun of said organization (or its well-respected members); these types of organizations tend to have poor track records. I would, of course, fully support a ban on bad jokes, where “bad” is defined as “an unfunny joke that makes me want to downvote your comment, oh look, here’s me downvoting it”.
That said, I upvoted your comment for the honest clarification.
(I try to simply not vote on comments that actually make me laugh—there is a conflict between the part of me that wants LW to be Serious Business and the part of me that wants unexpected laughs, and such comments tend to get more karma than would be fair anyway.)
I usually operate using this definition, with one tweak: I’m more likely to upvote a useful comment if it’s also funny. I’m unlikely to upvote a comment if it’s only funny; and though the temptation to make those arises, I try hard to save it for reddit.
Does it count as a joke if I mention that every time I see your username I think of TROGDOR?
(This is only one of many similar mildly obsessive thought patterns that I have.)
There are in fact some policemen (e.g. in Japan) who do not carry firearms while on duty.
You know, I only visit LessWrong these days to entertain myself with the sight of self-styled “rational” people engaging in mind-twisting displays of irrationality.
Grognor: congratulations! You win the Idiot of the Week award.
Why? Let’s try taking my statement and really see if it was an “abuse of language” by setting exactly the same statement in a different context. For example: let’s suppose someone claims that the government of China “ruthlessly suppresses dissent”. But now, does that mean that the government of China ruthlessly suppresses the torrent of dissent that comes from the American Embassy? I don’t think so! Very bad idea to invade the embassy and disappear the entire diplomatic mission … seriously bad consequences! So they don’t. Oh, and what about people who are in some way close to the heart of the regime … could it be that they do indeed tolerate some dissent in the kind of way that, if it happened out there in the populace, would lead to instant death? You can probably imagine the circumstances easily enough: back in Mao Tse Tung’s day, would there have been senior officials who called him funny names behind his back? Happens all the time in those kinds of power structures: some people are in-crowd and are allowed to get away with it, while the same behavior elsewhere is “ruthlessly suppressed”.
So, if that person claims that the government of China “ruthlessly suppresses dissent”, according to you they MUST mean that all forms of dissent WITHOUT EXCEPTION are suppressed.
But if they argue that their statement obviously would not apply to the American Embassy staff … you would tell them that their original statement was “an abuse of language since it is implicated by the original statement.”
Amusing.
Keep up the good work. I find the reactions I get when I say things on Less Wrong to be almost infinitely varied in their irrationality.
Richard, this really isn’t productive. You’re clearly quite intelligent and clearly still have issues stemming from the dispute between you and Eliezer. It is likely that if you got over this, you could be an effective, efficient, and helpful critic of SI and their ideas. But right now, you are engaging in uncivil behavior that isn’t endearing you to anyone, while making emotionally heavy comparisons that make you sound strident.
He doesn’t want to be “an effective, efficient, or helpful critic”. He’s here “for the lulz”, as he said in his comment above.
Yes, but how much of that is due to the prior negative experience and fighting he’s had? It isn’t at all common for a troll to self-identify as such only after they’ve had bad experiences. Human motivations are highly malleable.
I suspect you meant “isn’t at all uncommon,” though I think what you said might actually be true.
Er, yes. The fact that Loosemore is a professional AI researcher with a fair number of accomplishments, together with his general history, strongly suggests that at least in his case he didn’t start his interaction with the intent to troll. His early actions on LW were positive and some were voted up.
His ‘early’ actions on LW were recent and largely negative, and one was voted up significantly (though I don’t see why—I voted that comment down).
At his best he’s been abrasive, confrontational, and rambling. Not someone worth engaging.
His second comment on LW is here; it is from January and is at +8 (and I seem to recall it was higher earlier). Two of his earlier comments from around the same time were at positive numbers but have since dipped below. It looks like at least one person went through and systematically downvoted his comments without regard to content.
Yes, that’s the one I was referring to.
I understand your point, but given that sentiment, the sentence “It isn’t at all common for a troll to self-identify as such only after they’ve had bad experiences” confuses me.
Right, as mentioned I meant uncommon. My point is that I don’t think Loosemore’s experience is that different from what often happens. At least in my experience, I’ve seen people who were more or less productive on one forum become effectively trolls elsewhere on the internet after having had bad experiences. I think a lot of this is due to cognitive dissonance: people don’t like to think that they were being actively stupid or were effectively accidentally trolling, so they convince themselves that those were their goals all along.
Ah, ok. Gotcha.
I agree that people often go from being productive participants to being unproductive, both for the reasons you describe and other reasons.
It seems to me it would be more appropriate to ask Yudkowsky and LukeProg to retract the false accusations that Loosemore is a liar or dishonest, respectively.
Yes, that would probably be a step in the right direction also. I don’t know whether the accusation is false, but the evidence is at best extremely slim and altogether unhelpful. That someone didn’t remember a study a few years ago in the heat of the moment simply isn’t something worth getting worked up about.
I don’t think ruthlessly is the right word; I’d rather say relentlessly. In fact, your analogy to Stalinist practices brings out, by way of contrast, how not ruthless LW’s practices are. Yudkowsky is—if not in your case—subtle. Soft censorship is effected by elaborate rituals (the “sequences”; the “rational” turn of phrase) effectively limiting the group to a single personality profile: narrow-focusers, who can’t be led astray from their monomania. Then, instituting a downvoting system that allows control by the high-karma elite: the available downvotes (but not upvotes—the masses must be kept content) are distributed based on the amount of accumulated karma. Formula nonpublic, as far as I can tell.
Why don’t these rationalists even come close to intuiting the logic of the downvoting system? They evidently care not the least about its mechanics. They are far from even imagining it is consequential. Some rationalists.
Total available downvotes are a high number (4 times total karma, if I recall correctly), and in practice I think they prevent very few users from downvoting as much as they want.
From personal experience, I think you’re wrong about a high number. I currently need 413 more points to downvote at all. I have no idea how you would even suspect whether “few users” are precluded from downvoting.
But what a way to discuss this: “high number.” If this is supposed to be a community forum, why doesn’t the community even know the number—or even care?
(For the record, I ended up editing in the “(4 times total karma, if I recall correctly)” after posting the comment, and you probably replied before seeing that part.)
So how many downvotes did you use when your karma was still highly positive? That’s likely a major part of that result.
The main points of the limit are 1) to prevent easy gaming of the system and 2) to prevent trolls and the like from going through and downvoting to a level that doesn’t actually reflect communal norms. In practice, 1 and 2 are pretty successful and most of the community doesn’t see much danger in the system. That you can’t downvote I think would be seen by many as a feature rather than a bug. So they don’t have much need to care because the system at least at a glance seems to be working, and we don’t like to waste that much time thinking about the karma system.
The formula max is 4*total karma. I’m curious: if there were a limit on the total number of upvotes also, would you then say that this was further evidence of control by entrenched users? If one option leads to a claim about keeping the masses content and the reverse would lead to a different set of accusations, then something is wrong; if any pattern is evidence of malicious intent, then something is wrong. Incidentally, it might help to realize that the system as it exists is a slightly modified version of the standard reddit system. The code and details for the karma system are based on fairly widely used open source code. It is much more likely that this karma system was adopted simply because it was a basic part of the code base. Don’t assume malice when laziness will do.
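To make the cap concrete, here is a minimal sketch in Python of the rule as I understand it from this thread (the 4x multiplier is the recollection above; the function and variable names are invented for illustration, not the actual LW/reddit code):

    def can_downvote(total_karma: int, downvotes_already_cast: int) -> bool:
        # Cap as described upthread: a user may have cast at most
        # 4 * (total karma) downvotes. With zero or negative karma the
        # check always fails, which would match needing to gain karma
        # before being able to downvote at all.
        return downvotes_already_cast < 4 * total_karma

Under this toy model a user with 100 karma who has cast 399 downvotes can still downvote once more, while a user with negative karma cannot downvote at all, consistent with the report a few comments up.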
Disagreeing with what you think of the system is not the same as not intuiting it. Different humans have different intuition.
But some people do think the karma system matters. And you are right in that it does matter in some respects more than many people realize it does. There’s no question that although I don’t really care much about my karma total at all, I can’t help but feel a tinge of happiness when I log in to see my karma go up from 9072 to 9076 as it just did, and then feel a slight negative feeling when I see it then go down to 9075. Attach a number to something and people will try to modify it. MMOs have known this for a while. (An amusing take.) And having partially randomized aspects certainly makes it more addicting since randomized reinforcement is more effective. And in this case, arguments that people like are more positively rewarded. That’s potentially quite subtle, and could have negative effects, but it isn’t censorship.
While that doesn’t amount to censorship, there are two other aspects of the karma system that most people don’t even notice much at all. The first of course is that downvoted comments get collapsed. The second is that one gets rate limited with posting as one’s karma becomes more negative. Neither of these really constitutes censorship by most notions of the term, although I suppose the second could sort of fall into it under some plausible notions. Practically speaking, you don’t seem to be having any trouble getting your points heard here.
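For concreteness, a toy Python sketch of those two mechanisms (the default threshold and the rate-limit schedule here are illustrative guesses, not the site’s actual numbers):

    DEFAULT_COLLAPSE_THRESHOLD = -3  # illustrative; each user can set their own

    def is_collapsed(comment_score: int,
                     threshold: int = DEFAULT_COLLAPSE_THRESHOLD) -> bool:
        # Downvoted comments are hidden behind an expander, not deleted.
        return comment_score <= threshold

    def posting_delay_seconds(user_karma: int) -> int:
        # Posting is rate limited as karma goes negative (made-up schedule).
        if user_karma >= 0:
            return 0
        return min(600, 60 * -user_karma)  # harsher the more negative it gets

Note that neither function removes anything, which is the sense in which this is throttling rather than deletion.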
I don’t know what your evidence is that they are “far from even imagining it is consequential.” Listening to why one might think it is consequential and then deciding that the karma system doesn’t have that much impact is not the same thing as being unable to imagine the possibility. It is possible (and would seem not too unlikely to me) that people don’t appreciate the more negative side effects of the karma system, but once again, as often seems to be the case, your own worst enemy is yourself, by overstating your case in a way that overall makes people less likely to take it seriously.
The Kill Everyone Project was almost exactly this. Progress Quest and Parameters are other takes on a similar concept (though Parameters is actually fairly interesting, if you think of it as an abstract puzzle).
That’s… sort of horrifying in a hilarious way.
Yeah, it’s like staring into the void.
Missing a ‘not’ I think.
Yep. Fixed. Thanks.
Obligatory link: You’re Calling Who a Cult Leader?
Also, your impression might be different if you had witnessed the long, deep, and ongoing disagreements between Eliezer and me about several issues fundamental to SI — all while Eliezer suggested that I be made Executive Director and then continued to support me in that role.
Can you give an example of what you mean by “abusive personal attacks”?
Link to the juicy details *cough* I mean evidence?
http://www.sl4.org/archive/0608/15895.html
As someone who was previously totally unaware of that flap, that doesn’t sound to me like a “slanderous tirade.” Maybe Loosemore would care to explain what he thought was slanderous about it?
I strongly suspect the rationality of the internet would improve many orders of magnitude if all arguments about arguments were quietly deleted.
Okay, make that: I strongly suspect the rationality of the rational internet would improve many orders of magnitude if all arguments about arguments were quietly deleted.
Every time I try to think about that, I end up thinking about logical paradoxes instead.
edit for less subtlety in response to unexplained downvote: That argument is self-refuting.
Markus: Happy to link to the details, but where in the huge stream would you like to be linked to? The problem is that opinions can be sharply skewed by choosing to link to only selected items.
I cite as evidence Oscar’s choice, below, to link to a post by EY. In that post he makes a series of statements that are flagrant untruths. If you read that particular link, and take his word as trustworthy, you get one impression.
But if you knew that EY had to remove several quotes from their context and present them in a deceitful manner, in order to claim that I said things that I did not, you might get a very different impression.
You might also get a different impression if you knew this. The comment that Oscar cites came shortly after I offered to submit the dispute to outside arbitration by an expert in the field we were discussing. I offered that ANYONE could propose an outside expert, and I would abide by their opinion.
It was only at that point that EY suddenly wrote the post that Oscar just referenced, in which he declared me to be banished from the list and (a short time later) that all discussion about the topic should cease.
That fact by itself speaks volumes.
I’ll gladly start reading at any point you’ll link me to.
The fact that you don’t just provide a useful link but instead several paragraphs of excuses why the stuff I’m reading is untrustworthy I count as (small) evidence against you.
I’ve read SL4 around that time and saw the whole drama (although I couldn’t understand all the exact technical details, being 16). My prior on EY flagrantly lying like that is incredibly low. I’m virtually certain that you’re quite cranky in this regard.
I was on SL4 as well, and regarded Eliezer as basically correct, although I thought Loosemore’s ban was more than a little bit disproportionate. (If John Clark didn’t get banned for repeatedly and willfully misunderstanding Godelian arguments, wasting the time of countless posters over many years, why should Loosemore be banned for backtracking on some heuristics & biases positions?)
(Because JKC never lied about his credentials, which is where it really crosses the line into trolling.)
You use this word in an unconventional way, i.e., you use it to mean something like ‘unfairly causing harm and wasting people’s time’, which is not the standard definition: the standard definition necessitates intention to provoke or at least something in that vein. (I assume you know what “trolling” means in the context of fishing?) Because it’s only ever used in sensitive contexts, you might want to put effort into finding a more accurate word or phrase. As User:Eugine_Nier noted, lately “troll” and “trolling” have taken on a common usage similar to “fascist” and “fascism”, which I think is an unfortunate turn of events.
The animus here must be really strong. What Yudkowsky did was infer that Loosemore was lying about being a cognitive scientist from his ignorance of a variant of the Wason experiment. First, people often forget obvious things in heated online discussions. Second, there are plenty of incompetent cognitive scientists: if Loosemore intended to deceive, he probably wouldn’t have expressly stated that he didn’t have teaching responsibilities for graduate students.
If what you say is true, then Eliezer is lying about Loosemore lying about his credentials, in which case Eliezer is “trolling”. But if what you say is false, then you are the “troll”.
(This comment is an attempt to convincingly demonstrate that Eliezer’s notion of trolling is, to put it bluntly, both harmful and dumb.)
I don’t know about you, but I’d prefer to be considered a troll than a liar; correspondingly, I think the expanded definition of liar is worse than the inaccurate definition of troll. Not every inaccuracy amounts to dishonesty and not all dishonesty to prevarication.
I often observe very intelligent folks acting irrationally. I suspect superintelligent AIs might act superirrationally. Perhaps the focus should be on creating rational AIs first. Any superintelligent being would have to be first and foremost superrational, or we are in for a world of trouble. Actually, in my experience, rationality trumps intelligence every time.