Certainly it could, and at times does. In our defense, however, we do not make our living this way. It's all too easy to push karma around in a circle divorced from reality, but plenty of people still feel free to criticize Less Wrong here, as you just neatly demonstrated. There's a much stronger incentive to follow the party line in academia, where dissent, however true or useful, can curtail promotion or even get one fired.
If we were making our living off of karma, your comparison would be entirely apt, and I’d expect to see the quality of discussion drop sharply.
Everything you say is true, and I agree. But let's not discount the pull toward social conformity that karma exerts, or the way evaporative cooling of social groups radicalizes community norms. You definitely get a lot further here by defending and promoting AI x-risk concerns than by dismissing or ignoring them.
That does tend to happen, yes, which is unfortunate. What would you suggest doing to reduce this tendency? (It's totally fine if you don't have a concrete solution, of course; these sorts of problems are notoriously hard.)
Karma should not be visible to anyone but mods, for whom it would serve as a distributed mechanism for flagging comments worth their attention and not much else. Large threads could use karma to decide which comments to display first, but in smaller threads comments should simply be chronological. (I sketch this rule below.)
People should be encouraged to post anonymously, as I am doing. Unfortunately, the LW forum devs are removing this capability, which is a step backwards.
Get rid of featured articles and sequences. Keep the posts themselves, but don't feature them prominently at the top of the site. Maybe have an infobar on the side as a jumping-off point for exploring curated content, but don't elevate it to the level of dogma the way the current site does.
Encourage rigorous experimentation to verify one's beliefs. A position arrived at through clever argumentation alone is quite possibly worthless. This is a particular vulnerability of this site, which is built around the exchange of words, not physical evidence. A culture needs to develop that demands empirical investigation of the form "I wondered if X is true, so I did A, B, and C, and this is what happened..."
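To make the first suggestion concrete, here's a minimal sketch of the display rule I have in mind. Everything in it (the `Comment` fields, the size threshold) is a hypothetical illustration, not the actual LW schema or forum code:

```python
# Hypothetical sketch of karma-blind comment display.
# `Comment`, its fields, and the threshold are illustrative assumptions,
# not LW's actual data model.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Comment:
    text: str
    posted_at: datetime
    karma: int  # stored for mods' eyes only; never rendered to readers

LARGE_THREAD_THRESHOLD = 50  # arbitrary cutoff for a "large" thread

def display_order(comments: list[Comment]) -> list[Comment]:
    """In large threads, karma decides what surfaces first (while staying
    hidden); in small threads, order is strictly chronological."""
    if len(comments) > LARGE_THREAD_THRESHOLD:
        return sorted(comments, key=lambda c: c.karma, reverse=True)
    return sorted(comments, key=lambda c: c.posted_at)
```

The point of the split is that sorting only matters when there's too much to read; in a thread small enough to read in full, karma ordering adds conformity pressure without saving anyone any time.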
That was five minutes of thinking on the subject; I'm sure I could come up with more.
Ignoring the concerns basically means not participating in any of the AI x-risk threads. I don’t think it would be held against anyone to simply stay out.
https://www.lesswrong.com/posts/X3p8mxE5dHYDZNxCm/a-concrete-bet-offer-to-those-with-short-ai-timelines is a post arguing against AI x-risk concerns, and it has more than three times the karma of any other post published the same day.
Well, we were getting paid for karma the other week, so… (This is mostly a joke; I get that it was an April Fools' thing 🙃)