The EA Forum has this problem worse, but I’ve started to see it on LessWrong: it feels to me like we have a lot more newbies on the site who don’t really get what LW-style rationality is about, and they make LessWrong a less fun place to write because they are regressing discussion norms back towards the mean.
Earlier this year I gave up on EAF because it regressed so far towards the mean that it became useless to me. LW has still been passable, but it feels like it’s been ages since I really got into a good, long, deep thread with somebody on here. Partly that’s because I’m busy, but it’s also because I’ve been quicker to give up, since my expectations of having a productive conversation here are now lower. :-(
Do you have any thoughts on what the most common issues you see are, or is it more like a different issue every time?
My impression is that people are quicker to jump to cached thoughts rather than actually read and understand things. So I’ve spent more time than I used to dealing with what I’d consider bad-faith takes on posts, where it’s clear to me the person is reading into what I wrote whatever they wanted me to have said or implied.
I also have a standing complaint that people are hypocritical: too lenient towards things they like and too critical of things they don’t, for affiliative reasons rather than because they engaged with the reasoning and arguments.
I see a lot of this from both sides. I know how to farm karma on here; I just mostly choose not to. When I post something of the type I expect to be voted up, I can be pretty lazy and people will upvote it anyway, because I hit the applause light for something they already wanted to applaud. If I post something I know people will disagree with because it goes against standard takes, I’ve got to be way more detailed. I see this as a bad asymmetry that results from confirmation bias. I’d rather live in a world where lazy posts that say things people already agree with get downvoted for being low quality, or a world where posts people disagree with get upvoted anyway because voters respect the argumentation, but not the world we find ourselves in now.
One thing I’ve been thinking about in this regard is the microhabits around voting.
I only vote on a small minority of the stuff I read. I assume others are similar.
And voting is a bit of a cognitive chore: there are five options (strong down, weak down, nothing, weak up, strong up) on each of the two axes, so 5 × 5 = 25 possible ways to vote.
I wish I had a principled way of choosing between those 25 different ways to vote, but I don’t. I rarely feel satisfied with the choice I made. I’m definitely inconsistent in my behavior from comment to comment.
For example, if someone makes a point that I might have made myself, is it OK to upvote them overall, or should I just vote to agree? I appreciate them making the point, so I usually give them an overall upvote; after all, if I’d made the point myself, I’d automatically give myself an “overall” upvote too. But now that I explicitly consider it, maybe my threshold should be higher, e.g. only upvote “overall” if I think they made the point at least as well as I would’ve made it.
In any case, the “point I would’ve made myself” situation is one of a fairly small number of scenarios where I get enough activation energy to actually vote on something.
Sometimes I wonder what LW would be like if a user was only allowed to vote on a random 5% subset of the comments on any given page. (To make it deterministic, you could hand out vote privilege based on the hash of their user ID and the comment ID.) Then nudge users to actually vote on those 5%, or explicitly acknowledge a null vote. I wonder if this would create more of a “jury trial” sort of feel, compared to the current system which can have a “count the size of various tribes” feel.
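For concreteness, here’s a minimal sketch of how that deterministic eligibility check could work. This is just an illustration of the idea, not anything LessWrong actually implements; the 5% threshold, the function name, and the use of SHA-256 are all assumptions I’m making for the example:

```python
import hashlib

def can_vote(user_id: str, comment_id: str, eligible_percent: int = 5) -> bool:
    """Return True if this user is allowed to vote on this comment.

    Hashing user_id together with comment_id gives each (user, comment)
    pair a stable pseudo-random value, so eligibility is deterministic:
    reloading the page never changes who may vote on what.
    (Hypothetical sketch; names and the 5% threshold are illustrative.)
    """
    digest = hashlib.sha256(f"{user_id}:{comment_id}".encode()).digest()
    # Interpret the first 8 bytes as an integer and reduce mod 100
    # to get a stable bucket in [0, 100).
    bucket = int.from_bytes(digest[:8], "big") % 100
    return bucket < eligible_percent

# Example: each user gets a different, but fixed, ~5% slice of comments.
if __name__ == "__main__":
    comments = [f"comment-{i}" for i in range(100)]
    eligible = [c for c in comments if can_vote("user-42", c)]
    print(f"user-42 may vote on {len(eligible)} of {len(comments)} comments")
```

Because the hash is stable, each user’s eligible subset never shifts between page loads, which is what would make the “jury” fixed rather than self-selected.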