Alex, I did not say that ALL dissent is ruthlessly suppressed; I said that dissent is ruthlessly suppressed. You inserted the qualifier “ALL”.
You ask an irrelevant question, since it pertains to someone else not getting suppressed. However, I will answer it for you since you failed to make the trivial and rather obvious logical leap needed to answer it yourself.
Holden Karnofsky stands in a position of great power with respect to the SIAI community: you want his money. And since the easiest and quickest way to ensure that you NEVER get to see any of the money he controls would be to ruthlessly suppress his dissent, he is treated with the utmost deference.
(I am saying this in case anyone looks at this thread and thinks Loosemore is making a valid point, not because I approve of anyone’s responding to him.)
This is an abuse of language since it is implicated by the original statement.
There is absolutely no reason to believe that all, or half, or a quarter, or even ten percent of the upvotes on this post come from SIAI staff. There are plenty of people on LW who don’t support donating to SIAI.
No. Normally the absence of any quantifier implies an existential quantifier, not a universal quantifier. That would seem clearly the case here.
Grognor, this is an error so ridiculous that you should conclude your emotional involvement is affecting your rationality.
Actually bare noun phrases in English carry both interpretations, ambiguously. The canonical example is “Policemen carry guns” versus “Policemen were arriving”—the former makes little sense when interpreted existentially, but the latter makes even less sense when interpreted universally.
In short, there is no preferred interpretation.
(Oh, and prescriptivists always lose.)
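For concreteness, the two readings being contrasted here can be sketched in first-order notation. This is only an approximation, since bare plurals arguably get a generic reading rather than a strict universal quantifier, and the predicate names are illustrative:

$\exists x\,(\mathrm{Dissent}(x) \land \mathrm{RuthlesslySuppressed}(x))$ (existential: some dissent is ruthlessly suppressed)

$\forall x\,(\mathrm{Dissent}(x) \rightarrow \mathrm{RuthlesslySuppressed}(x))$ (universal: all dissent is ruthlessly suppressed)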
Well, it was a hasty generalization on my part. Flawed descriptivism, not prescriptivism. But you’re losing sight of the issue, even as you refute an unsound argument. In the particular case—check it out—Grognor resolved the ambiguity in favor of the universal quantifier. This would be uncharitable in the general case, but in context it’s—as I said—a ridiculous argument. I stretched for an abstract argument to establish the ridiculousness, and I produced a specious argument. But the fact is that it was Grognor who had accused Loosemore of “abuse of language,” on the tacit ground that the universal quantifier is automatically implied. There was the original prescriptivism.
What, always? By definition? That sounds dangerously like a prescriptivist statement to me! :-)
Problems with linguistic prescriptivism.
Your comment was a pretty cute tu quoque, but arguing against prescriptivism doesn’t mean giving up the ability to assert propositions.
I was making a joke :-(
(This comment originally said only, “Don’t do that.” That was rude, so I’m replacing it with the following. I apologize if you already saw that.)
As a general rule, I’d prefer that people don’t make silly jokes on this website, as that’s one first step in the slippery slope toward making this site just another reddit.
Paul Graham:
Curious. I was just reading Jerome Tuccille’s book on the history of libertarianism through his eyes, and when he discusses how Objectivism turned into a cult, one of the issues apparently was a lack of acceptance of humor.
I disagree with your blanket policy on jokes. I don’t want to be a member of an organization that prohibits making fun of said organization (or its well-respected members); these types of organizations tend to have poor track records. I would, of course, fully support a ban on bad jokes, where “bad” is defined as “an unfunny joke that makes me want to downvote your comment, oh look, here’s me downvoting it”.
That said, I upvoted your comment for the honest clarification.
(I try to simply not vote on comments that actually make me laugh—there is a conflict between the part of me that wants LW to be Serious Business and the part of me that wants unexpected laughs, and such comments tend to get more karma than would be fair anyway.)
I usually operate using this definition, with one tweak: I’m more likely to upvote a useful comment if it’s also funny. I’m unlikely to upvote a comment if it’s only funny; and though the temptation to make those arises, I try hard to save it for reddit.
Does it count as a joke if I mention that every time I see your username I think of TROGDOR?
(This is only one of many similar mildly obsessive thought patterns that I have.)
There are in fact some policemen (e.g. in Japan) who do not carry firearms while on duty.
You know, I only visit LessWrong these days to entertain myself with the sight of self-styled “rational” people engaging in mind-twisting displays of irrationality.
Grognor: congratulations! You win the Idiot of the Week award.
Why? Let’s try taking my statement and really see if it was an “abuse of language” by setting exactly the same statement in a different context. For example: let’s suppose someone claims that the government of China “Ruthlessly suppresses dissent”. But now, does that mean that the government of China ruthlessly suppresses the torrent of dissent that comes from the American Embassy? I don’t think so! Very bad idea to invade the embassy and disappear the entire diplomatic mission … seriously bad consequences! So they don’t. Oh, and what about people who are in some way close to the heart of the regime … could it be that they do indeed tolerate some dissent in the kind of way that, if it happened out there in the populace, would lead to instant death? You can probably imagine the circumstances easily enough: back in Mao Tse Tung’s day, would there have been senior officials who called him funny names behind his back? Happens all the time in those kinds of power structures: some people are in-crowd and are allowed to get away with it, while the same behavior elsewhere is “ruthlessly suppressed”.
So, if that person claims that the government of China “Ruthlessly suppresses dissent”, according to you they MUST mean that all forms of dissent WITHOUT EXCEPTION are suppressed.
But if they argue that their statement obviously would not apply to the American Embassy staff … you would tell them that their original statement was “an abuse of language since it is implicated by the original statement.”
Amusing.
Keep up the good work. I find the reactions I get when I say things on Less Wrong to be almost infinitely varied in their irrationality.
Richard, this really isn’t productive. You’re clearly quite intelligent and clearly still have issues due to the dispute between you and Eliezer. It is likely that if you got over this, you could be an effective, efficient, and helpful critic of SI and their ideas. But right now, you are engaging in uncivil behavior that isn’t endearing you to anyone while making emotionally heavy comparisons that make you sound strident.
He doesn’t want to be “an effective, efficient, or helpful critic”. He’s here “for the lulz”, as he said in his comment above.
Yes, but how much of that is due to the prior negative experience and fighting he’s had? It isn’t at all common for a troll to self-identify as such only after they’ve had bad experiences. Human motivations are highly malleable.
I suspect you meant “isn’t at all uncommon,” though I think what you said might actually be true.
Er, yes. The fact that Loosemore is a professional AI researcher with a fair number of accomplishments, together with his general history, strongly suggests that at least in his case he didn’t start his interaction with the intent to troll. His early actions on LW were positive and some were voted up.
His ‘early’ actions on LW were recent and largely negative, and one was voted up significantly (though I don’t see why—I voted that comment down).
At his best he’s been abrasive, confrontational, and rambling. Not someone worth engaging.
His second comment on LW is here, from January, and is at +8 (and I seem to recall it was higher earlier). Two of his earlier comments from around the same time were at positive numbers but have since dipped below. It looks like at least one person went through and systematically downvoted his comments without regard to content.
Yes, that’s the one I was referring to.
I understand your point, but given that sentiment, the sentence “It isn’t at all common for a troll to self-identify as such only after they’ve had bad experiences” confuses me.
Right, as mentioned I meant uncommon. My point is that I don’t think Loosemore’s experience is that different from what often happens. At least in my experience, I’ve seen people who were more or less productive on one forum become effectively trolls elsewhere on the internet after having had bad experiences elsewhere. I think a lot of this is due to cognitive dissonance: people don’t like to think that they were being actively stupid or were effectively accidentally trolling, so they convince themselves that those were their goals all along.
Ah, ok. Gotcha.
I agree that people often go from being productive participants to being unproductive, both for the reasons you describe and other reasons.
It seems to me it would be more appropriate to ask Yudkowsky and LukeProg to retract the false accusations that Loosemore is a liar or dishonest, respectively.
Yes, that would probably be a step in the right direction also. I don’t know whether the accusation is false, but the evidence is at best extremely slim and altogether unhelpful. That someone didn’t remember a study a few years ago in the heat of the moment simply isn’t something worth getting worked up about.
I don’t think ruthlessly is the right word; I’d rather say relentlessly. In fact, your analogy to Stalinist practices brings out, by way of contrast, how not ruthless LW’s practices are. Yudkowsky is—if not in your case—subtle. Soft censorship is effected by elaborate rituals (the “sequences”; the “rational” turn of phrase) effectively limiting the group to a single personality profile: narrow-focusers, who can’t be led astray from their monomania. Then, instituting a downvoting system that allows control by the high-karma elite: the available downvotes (but not upvotes—the masses must be kept content) are distributed based on the amount of accumulated karma. Formula nonpublic, as far as I can tell.
Why don’t these rationalists even come close to intuiting the logic of the downvoting system? They evidently care not the least about its mechanics. They are far from even imagining it is consequential. Some rationalists.
Total available downvotes are a high number (4 times total karma, if I recall correctly), and in practice I think they prevent very few users from downvoting as much as they want.
From personal experience, I think you’re wrong about a high number. I currently need 413 more points to downvote at all. I have no idea how you would even suspect whether “few users” are precluded from downvoting.
But what a way to discuss this: “high number.” If this is supposed to be a community forum, why doesn’t the community even know the number—or even care?
(For the record, I ended up editing in the “(4 times total karma, if I recall correctly)” after posting the comment, and you probably replied before seeing that part.)
So how many downvotes did you use when your karma was still highly positive? That’s likely a major part of that result.
The main points of the limit are 1) to prevent easy gaming of the system and 2) to prevent trolls and the like from going through and downvoting to a level that doesn’t actually reflect communal norms. In practice, 1 and 2 are pretty successful and most of the community doesn’t see much danger in the system. That you can’t downvote would, I think, be seen by many as a feature rather than a bug. So they don’t have much need to care, because the system at least at a glance seems to be working, and we don’t like to waste that much time thinking about the karma system.
The formula max is 4*total karma. I’m curious: if there were a limit on the total number of upvotes also, would you then say that this was further evidence of control by entrenched users? If one option leads to a claim about keeping the masses content and the reverse would lead to a different set of accusations, then something is wrong. If any pattern is evidence of malicious intent, then something is wrong. Incidentally, it might help to realize that the system as it exists is a slightly modified version of the standard reddit system. The code and details for the karma system are based on fairly widely used open source code. It is much more likely that this karma system was adopted simply because it was a basic part of the code base. Don’t assume malice when laziness will do.
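As a concrete illustration of the cap being described, here is a minimal sketch of a reddit-style downvote limit, assuming the 4 × total karma figure quoted above; the function and variable names are hypothetical and are not the actual LessWrong/reddit identifiers.

```python
# Hypothetical sketch of a reddit-style downvote cap, assuming the
# "4 * total karma" figure discussed above. Names are illustrative only.

DOWNVOTE_MULTIPLIER = 4  # assumed: total downvotes allowed = 4 * karma


def can_downvote(total_karma: int, downvotes_already_cast: int) -> bool:
    """Return True if the user still has downvotes available under the cap."""
    allowed = DOWNVOTE_MULTIPLIER * max(total_karma, 0)  # no downvotes at or below zero karma
    return downvotes_already_cast < allowed


# A user with 100 karma who has cast 350 downvotes may still downvote (cap 400);
# a user with negative karma may not.
print(can_downvote(100, 350))  # True
print(can_downvote(-5, 0))     # False
```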
Disagreeing with what you think of the system is not the same as not intuiting it. Different humans have different intuition.
But some people do think the karma system matters. And you are right that it does matter, in some respects, more than many people realize. There’s no question that although I don’t really care much about my karma total at all, I can’t help but feel a tinge of happiness when I log in to see my karma go up from 9072 to 9076 as it just did, and then feel a slight negative feeling when I see it then go down to 9075. Attach a number to something and people will try to modify it. MMOs have known this for a while. (An amusing take.) And having partially randomized aspects certainly makes it more addictive, since randomized reinforcement is more effective. And in this case, arguments that people like are more positively rewarded. That’s potentially quite subtle, and could have negative effects, but it isn’t censorship.
While that doesn’t amount to censorship, there are two other aspects of the karma system that most people don’t even notice much at all. The first of course is that downvoted comments get collapsed. The second is that one gets rate limited with posting as one’s karma becomes more negative. Neither of these really constitutes censorship by most notions of the term, although I suppose the second could sort of fall into it under some plausible notions. Practically speaking, you don’t seem to be having any trouble getting your points heard here.
I don’t know what your evidence is that they are “far from even imagining it is consequential.” Listening to why one might think it is consequential and then deciding that the karma system doesn’t have that much impact is not the same thing as being unable to imagine the possibility. It is possible (and would seem not too unlikely to me) that people don’t appreciate the more negative side effects of the karma system, but once again, as often seems to be the case, you are your own worst enemy, overstating your case in a way that overall makes people less likely to take it seriously.
The Kill Everyone Project was almost exactly this. Progress Quest and Parameters are other takes on a similar concept (though Parameters is actually fairly interesting, if you think of it as an abstract puzzle).
That’s… sort of horrifying in a hilarious way.
Yeah, it’s like staring into the void.
Missing a ‘not’ I think.
Yep. Fixed. Thanks.