I think your post is troubling in a couple of ways.
First, I think you draw too much of a dichotomy between “read sequences” and “not read sequences”. I have no idea what the true percentage of active LW members who have read them is, but I suspect a number of people, particularly new members, are in the process of reading the sequences, like I am. And that’s a pretty large task—especially if you’re in school, trying to work a demanding job, etc. I don’t wish to speak for you, since you’re not clear on the matter, but are people in the process of reading the sequences noise? I’m only up to QM, and certainly wasn’t there when I started posting, but I’ve gotten over 1000 karma (all of it on comments or discussion-level posts). I’d like to think I’ve added something to the community.
Secondly, I feel like entrance barriers are pretty damn high already. I touched on this in my other comment, but I didn’t want to make all of these points in that thread, since they were off topic to the original. When I was a lurker, the biggest barrier to me saying hi was a tremendous fear of being downvoted. (A re-reading of this thread seems prudent in light of this discussion.) I’d never been part of a forum with a karma system before, and I’d spent enough time on here to know that I really respected the opinions of most people here. The idea of my ideas being rejected by a community that I’d come to respect was very stressful.

I eventually got over it, and as I got more and more karma, it didn’t hurt so much when I lost a point. But being karmassassinated was enough to throw all of that into doubt again, since when I asked about it I was just downvoted and no one commented. (I’m sure it’s just that no one happened to see it in recent comments, since it was deep in a thread.) I thought it very likely that I would leave the site after that, because it seemed to me that people simply didn’t care what I had to say—my comments for about two days were met with wild downvoting and almost no replies, except from one person. But I don’t think I am the only person who felt this way when ey joined LessWrong.
Edit: Hyperlink messed up.
Edit 2: It just now occurred to me to add this, and I’ve commented enough in this thread for one person, so I’m putting it here: I think all of the meetup posts are much more harmful to the signal-to-noise ratio than anything else. Unless you’re going to them, there’s no reason to be interested in them.
I eventually got over it, and as I got more and more karma, it didn’t hurt so much when I lost a point. But being karmassassinated was enough to throw all of that into doubt again

Get a few more (thousand?) karma and you may find getting karmassassinated doesn’t hurt much any more either. I get karmassassinated about once a fortnight (frequency memory subject to all sorts of salience biases and utterly unreliable—it happens quite a lot though) and it doesn’t bother me all that much.
These days I find that getting the last 50 comments downvoted is a lot less emotionally burdensome than getting just one comment that I actually personally value downvoted in the absence of any other comments. The former just means someone (or several someones) don’t like me. Who cares? Chances are they are not people I respect, given that I am a lot less likely to offend people when I respect them. On the other hand, if most of my comments have been upvoted but one specific comment that I consider valuable gets multiple downvotes, it indicates something of a judgement from the community and is really damn annoying. On the plus side, it can be enough to make me lose interest in Less Wrong for a few weeks and so gives me a massive productivity boost!
When I was a lurker, the biggest barrier to me saying hi was a tremendous fear of being downvoted.

I believe you. That fear is a nuisance (to us if it keeps people silent and to those who are limited by it). If only we could give all lurkers rejection therapy to make them immune to this sort of thing!
I think if I were karmassassinated again I wouldn’t care nearly as much, because of how stupid I felt after the first time it happened. It was just so obvious that it was just some idiot, but I somehow convinced myself it wasn’t.
But that being said, one of the reasons it bothered me so much was that there were a number of posts I was proud of that were downvoted—the guy who did it had sockpuppets, and it was more like my last 15-20 posts had each lost 5-10 karma. (This was also one of the reasons I wasn’t so sure it was karmassassination.) Which put a number of posts I liked way below the visibility threshold. And it bothered me that if I linked to those comments later, people would just see a really low karma score and probably ignore it.
the guy who did it had sockpuppets, and it was more like my last 15-20 posts had each lost 5-10 karma.

I think you can’t give more downvotes than your karma, so that person would need 5-10 sockpuppets with at least 15-20 (EDIT: actually 4-5) karma each. If someone is going to the trouble of doing that, it seems unlikely that they would just pick on you and nobody else (given that your writings don’t seem to be particularly extreme in some way). Has anyone else experienced something similar?
Creating sockpuppets for downvoting is easy.
(kids, don’t try this at home).
Just find a Wikipedia article on a cognitive bias that we haven’t had a top-level post on yet. Then make a post to main with the content of the Wikipedia article (restated) and references to the relevant literature (you can probably safely make up half of the references). It will probably get in the neighborhood of 50 upvotes, giving you 500 karma (main posts earn 10 karma per upvote), which allows 2000 comment downvotes.
Even if those estimates are really high, that’s still a lot of power for little effort. Just repeat the process for 20 biases, and you’ve got 20 sockpuppets that can together put 20 downvotes on each of a large number of comments.
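The arithmetic above can be sketched as a toy calculation. The mechanics assumed here are the ones discussed in this thread (main-level posts earn 10 karma per net upvote, and the downvote budget is 4x total karma); the function name and parameters are made up for illustration:

```python
def downvote_budget(post_upvotes, posts_per_puppet=1, puppets=1,
                    post_karma_multiplier=10, budget_multiplier=4):
    """Estimate total downvote capacity for a set of sockpuppets.

    Assumes (per the thread) that main-level posts earn 10 karma
    per net upvote, and that an account may spend up to 4x its
    karma on downvotes.
    """
    karma_per_puppet = post_upvotes * posts_per_puppet * post_karma_multiplier
    return puppets * budget_multiplier * karma_per_puppet

# One puppet with a 50-upvote main post: 50 * 10 = 500 karma,
# and 4 * 500 = 2000 available downvotes.
print(downvote_budget(50))              # 2000
# Twenty such puppets can each hit a given comment once,
# for up to 20 downvotes per comment across many comments.
print(downvote_budget(50, puppets=20))  # 40000
```

The multipliers are parameters precisely because the thread itself is unsure of the exact site rules (see the “4x, last I checked” exchange below).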
Of course, in the bargain Less Wrong is getting genuinely high-quality articles. Not necessarily a bug.
If restating Wikipedia is enough to make for a genuinely high-quality article, maybe we should have a bot that copy-pastes a relevant Wikipedia article into a top-level post every few days. (Based on a few minutes of research, it looks like this is legal if you link to the original article each time, but tell me if I’m wrong.)
If restating Wikipedia is enough to make for a genuinely high-quality article, maybe we should have a bot that copy-pastes a relevant Wikipedia article into a top-level post every few days.

Really, I think the main problem with this is that most of the work is identifying which ones are the ‘relevant’ articles.
I was implying a non-copy-paste solution. Still, interesting idea.
Yes; I didn’t mean to say you were implying a copy-paste solution. But if we’re speaking in the context of causing good articles to be posted and not in the context of thinking up hypothetical sock-puppeting strategies, whether it’s copy-pasted or restated shouldn’t matter unless the restatement is better-written than the original.
agreed
Modulo the fake references, of course.
of course
There’s not much reason to do something like this, when you can arbitrarily upvote your own comments with your sockpuppets and give yourself karma.
But then those comments / posts will be correctively downvoted, unless they’re high-quality. And you get a bunch more karma from a few posts than a few comments, so do both!
You can delete them afterwards; you keep karma from deleted posts.
Let’s keep giving the disgruntled script kiddies instructions! That’s bound to produce eudaimonia for all!
We found one of the sockpuppets, and he had one comment, which added nothing, sitting at something like 13 karma. It wasn’t downvoted until I was karmassassinated.
It’s some multiple of your karma, isn’t it? At least four, I think; thomblake would know.
Yes, 4x, last I checked.
I should note that I have never actually been in your shoes. I haven’t had any cases where there was unambiguous use of bulk sockpuppets. I’ve only been downvoted via breadth (up to 50 different comments from my recent history) and usually by only one person at a time (occasionally two or three but probably not two or three that go as far as 50 comments at the same time).
(This was also one of the reasons I wasn’t so sure it was karmassassination)

That would really mess with your mind if you were in a situation where you could not yet reliably model community preferences (and be personally confident in your model despite immediate evidence).
Take it as a high compliment! Nobody has ever cared enough about me to make half a dozen new accounts. What did you do to deserve that?
It was this thread.
Basically it boiled down to this: I was suggesting that one reason some people might donate to more than one charity is that they’re risk averse and want to make sure they’re doing some good, instead of trying to help and unluckily choosing an unpredictably bad charity. It was admittedly a pretty pedantic point, but someone apparently didn’t like it.
That seems to be something I would agree with, with an explicit acknowledgement that it relies on a combination of risk aversion and non-consequentialist values.
It didn’t really help that I made my point very poorly.
The former just means someone (or several someones) don’t like me. Who cares? Chances are they are not people I respect, given that I am a lot less likely to offend people when I respect them.

Presumably also because people you respect are not very likely to express their annoyance through something as silly as karmassassination, right?
It’s great that you are reading the sequences. You are right that it’s not as simple as read them → not noise, not read them → noise. You say you are up to QM; in that case I would expect you not to make the sort of mistakes that come from not having read the core sequences. On the other hand, if you posted something about ethics or AI (I forget where the AI stuff falls chronologically), I would expect you to make some common mistakes and be basically noise.
The high barrier to entry is a problem for new people joining, but I also want a more strictly informed crowd to talk to sometimes. I think the best arrangement would be a lower barrier to entry overall, plus at least one place where having read the material is strictly expected, though there are problems with that.
Don’t leave, keep reading. When you are done you will know what I’m getting at.
I think it’s close to the end, right before/after the fun theory sequence? I’ve read some of the later posts just from being linked to them, but I’m not sure.
And I quite intentionally avoid talking about things like AI, because I know you’re right. I’m not sure that necessarily holds for ethics, since ethics is a much more approachable problem from a layperson’s standpoint. For fun, I spent a three-hour car ride trying to answer the question “How would I go about making an AI?” even though I know almost nothing about it. The best I could come up with was a program that created a sandbox, randomly generated pieces of code that would compile, and pitted them against each other in some kind of bracket contest meant to measure intelligence and/or friendliness. I thought about making a discussion post about it, but I figured it was too obvious not to have been thought of before.
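The bracket idea in the parent can be sketched as a toy tournament. Everything here is a stand-in: “programs” are just random linear functions, and the fitness test is an arbitrary target, since real program synthesis (let alone intelligence or friendliness testing) is vastly harder:

```python
import random

def random_program(rng):
    """Stand-in for 'randomly generated code that compiles':
    here, just a random linear function of its input."""
    a, b = rng.randint(-5, 5), rng.randint(-5, 5)
    return lambda x: a * x + b

def fitness(program):
    """Toy stand-in for an intelligence/friendliness test:
    how closely the program approximates x -> 2x + 1
    (0 is perfect; more negative is worse)."""
    return -sum(abs(program(x) - (2 * x + 1)) for x in range(10))

def bracket_contest(programs):
    """Single-elimination bracket: pair programs off, keep the
    fitter of each pair, repeat until one champion remains.
    Assumes the field size is a power of two."""
    while len(programs) > 1:
        programs = [max(pair, key=fitness)
                    for pair in zip(programs[::2], programs[1::2])]
    return programs[0]

rng = random.Random(0)
contestants = [random_program(rng) for _ in range(16)]
champion = bracket_contest(contestants)
print(fitness(champion))  # champion's score; closer to 0 is better
```

Note that with a deterministic, transitive comparison like this, the bracket is just an elaborate way of finding the maximum; the interesting (and unsolved) part of the parent’s idea is entirely inside the fitness test.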
Aside: That sockpuppetry now seems to be an accepted mode of social discourse on LessWrong strikes me as a far greater social problem than people not having read the Sequences. (“Not as bad as” is a fallacy, but that doesn’t mean both things aren’t bad.)
edit: and now I’m going to ask why this rated a downvote. What does the downvoter want less of?
edit 2: fair enough, “accepted” is wrong. I meant that it’s a thing that observably happens. I also specifically mean socking-up to mass-downvote someone, or to be a dick to people, not roleplay accounts like Clippy (though others find those problematic).
I think it was downvoted because sockpuppetry wasn’t really “accepted” by LW, it was just one guy.
Yeah, “accepted” is connotationally wrong—I mean it’s observed, and it’s hard to do much about it.
To what extent does anyone except EY have moderation control over LW?
There are several people capable of modifying or deleting posts and comments.
and now I’m going to ask why this rated a downvote.

Ahem, on my side it was a case of bad pattern-matching. When I realized it, I deleted the reply I was writing here, and also removed the downvote.
Perhaps you should have explained further why you think sockpuppetry is bad. My original guess was that you were talking about people having multiple votes from multiple accounts (I was primed by other comments in this thread), and I habitually downvote most comments about karma. But now it seems to me that you are concerned with other aspects, such as anonymity and role-playing. That is only a guess, though; I can’t tell from your comment.
Yeah, bad explanation on my part. I’m not so concerned with roleplay accounts (e.g. Clippy) as with socking up to mass-downvote. (Getting initial karma is very easy.) Socking up to be a dick to people also strikes me as problematic. I think I mean “observed” rather than “accepted”, which implies a social norm.