I am not a moderator, just sharing my hunches here.
I was only rate-limited for a day because I got into this fight.
re: Akram Choudhary—the example you give of a post by them is an exemplar of what habryka was talking about, the “you have to be joking”. this site has very tight rules on what argumentation structure and tone is acceptable: generally low-emotional-intensity words, and arguments generally need to be made in a highly step-by-step way to be held as valid. I don’t know if that’s the full reason for the mute.
you got upvoted on april 1 because you were saying the things that, if you said the non-sarcastic version about ai, would be in line with general yudkowskian-transhumanist consensus. you continue to confuse me. it might be worth having the actual technical discussions you’d like to have about ai under the comments of those posts. what would you post on the april fools posts if you had thought they were not april fools at all? perhaps you can examine, step by step, the ways your reactions to those posts differ from your reactions to ai content, in order to extract cruxes?
Victor Ashioya was posting a high ratio of things that sounded like advertisements, which I and likely others would then downvote on the homepage, and which would then disappear. Presumably Victor would delete them when they got downvotes. Some still remain, which should give you a sense of why they were getting downvotes. Or not, if you’re so used to such things on twitter that they just seem normal.
I am surprised trevor, shminux, and noosphere are muted. I expect it is temporary, but if it is not, I would wonder why. I would require more evidence about the reasoning before I got pitchforky about it. (Incidentally, my willingness to get pitchforky fast may be a reason I get muted easily. Oh well.)
I don’t have an impression of the others in either direction on this topic.
But in general, my hunch is that since I was on this list and my muting was only for a day, the same may be true for others as well.
I appreciate you getting defensive about it rather than silently disappearing, even though I have had frustrating interactions with you before. I expect this post to be in the negatives. I have not voted yet, but if it goes below zero, I will strong upvote.
this site has very tight rules on what argumentation structure and tone is acceptable: generally low-emotional-intensity words, and arguments generally need to be made in a highly step-by-step way to be held as valid.
I actually love this norm. It prevents emotions from affecting judgement, and laying out arguments step by step makes them easier to understand.
I spent several years moderating r/changemyview on Reddit, which also has this rule. Having removed at least several hundred comments that break it, I think the worst thing about it is that it rewards aloofness and punishes sincerity. That’s an acceptable trade-off to prevent the rise of very sincere flame wars, but it elevates people pretending to be wise at the expense of those with more experience, who likely have more deeply held but also more informed opinions about the subject matter. This was easily the most common moderation frustration expressed by users.
Makes sense. Given that perspective, do you have any idea for a better approach?

I don’t really know; the best I can offer is vaguely gesturing at LessWrong’s moderation vector and pointing in a direction.
LW’s rules go for a very soft, very subjective approach to definitions and rule enforcement. In essence, anything the moderators feel is against the LW ethos is against the rules here. That’s the right approach to take in an environment where the biggest threat to good content is bad content. Hacker News also takes this approach and it works well—it keeps HN protected against non-hackers.
ChangeMyView is somewhat under threat of bad content—if too many people post on a soapbox, then productive commenters will lose hope and leave the subreddit. However, it’s also under threat of loss of buy-in—people with non-mainstream views, or those likely to attract backlash elsewhere, need to feel that the space is safe for them to explore.
When optimising for buy-in, strictness and clarity are desirable. We had roughly consistent standards for the number of violations it took to earn a ban, and consistently escalating bans (3 days, 30 days, permanent) in line with behavioural infractions. When there were issues, there seemed to be buy-in that we were at least consistent (even if the things we were consistent about weren’t optimal). That consistency provided a plausible alternative to the motive uncertainty created by subjective enforcement—for example, the admins told us we were fine to continue hosting discussions regarding gender and race that were being cracked down on elsewhere on Reddit.
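A minimal sketch of what that sort of mechanical, predictable ladder looks like (the threshold, durations, and function name here are my own illustration, not r/changemyview’s actual tooling):

```python
# Hypothetical sketch of a consistent, escalating ban ladder.
# The threshold and durations are illustrative, not CMV's real config.

ESCALATION = ["3 days", "30 days", "permanent"]

def next_sanction(prior_bans: int, violations_since_last_ban: int,
                  violations_per_ban: int = 3) -> str | None:
    """Return the next ban length, or None if no ban is due yet.

    The point is predictability: the same violation count always
    maps to the same outcome, regardless of who the user is.
    """
    if violations_since_last_ban < violations_per_ban:
        return None  # remove the comment and warn, but no ban yet
    step = min(prior_bans, len(ESCALATION) - 1)
    return ESCALATION[step]

# e.g. a third violation after one prior ban escalates to 30 days:
assert next_sanction(prior_bans=1, violations_since_last_ban=3) == "30 days"
```

The design choice this is meant to show: whether any individual decision is optimal matters less than that users can predict the mapping from behaviour to outcome.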
Right now, I think LW is doing a good job of defending against bad content. I think what would make LW stronger is a semi-constitutional backbone to fall back on in times of unrest. Kind of like how the 5th pillar of Wikipedia is to ignore all rules, yet policy is still the essential basis of editing discussions.
I would like to see, in the case of commenting guidelines, clearer definitions of what excess looks like. I think the subjective approach is fine for posts for now.
That makes sense to me, and I assume your experience is probably representative.
I do wonder about a dynamic that might be obscured by your insight above. While the near term might tend to elevate a lower-standard post(er), I would expect one of two paths going forward.
The bad path would be something along the lines of Gresham’s Law, where the more passionate but well-informed and intelligent get crowded out by the moderation and intellectual “posers”. I suspect that has happened. I am probably reading that in, but I might infer it is something like the longer-term outcome your comment suggests.
The good path would be that those more passionate, informed, and thoughtful learn to adjust their communication skills and keep a check on their emotional response. Emotion and passion are good but can cloud judgement, so learning to step back, remove that aspect from one’s thinking, and then evaluate one’s argument more objectively is very helpful. That holds both for any online forum and personally—and, I would suggest, it is even something of a public good, in that it becomes more of a habit/personality trait that reflects in all aspects of one’s life and social interactions.
Do you have any sense about which path we might expect from this type of moderation standard?
Note that the current setup is “ban, and do so potentially for content a year old”, with no feedback as to the specific user content that was the reason.
There are dozens of possible reasons. Also note how the site moderators choose not to give any feedback, but instead offer effectively useless vague statements that could apply to any content or none. See above the one from Habryka: it matches anything and nothing. See all the conditionals, including “I am actually very unsure this criticism is even true” and “I am unwilling to give any clarification or specifics”. And see the one on “I don’t think you are at the level of your interlocutor”, without specifying + or -.
Literally any piece of text matches the rules given and also fails to match.
As a result of this,
The good path would be that those more passionate, informed, and thoughtful learn to adjust their communication skills and keep a check on their emotional response.
Is outside the range of possible outcomes.
Big-picture-wise, almost all social media attempts die. Reddit/Facebook/Twitter et al. are the natural-monopoly winners, and there are thousands of losers. This is just an experiment, the 2.0 attempt, and I have to respect the site moderators and owner for trying something new.
Their true policy is not to give feedback but to ban essentially everyone except a small elite set of “niche” contributors who often happen to agree with each other already, so sending empty strings at each other would be the same as writing yet another huge post arguing over the same ground.
I think my concluding thought is that this creates a situation of essentially intellectual sandcastles: a large quantity of things that sound intelligent, are built on unproven assumptions that sound correct, occupy a lot of text to describe, and are ultimately useless.
A dumber-sounding, simpler idea based on strong empirical evidence, usually using the simplest possible theory to explain it, is almost always going to be the least wrong. (And the correct policy is to add complexity to a theory only when strong, replicable evidence can’t be explained without adding the complexity.)
I think the site name should be GreaterWrong, just as now MIRI is about stopping machine intelligence research, OpenAI is closed, and so on.
I would personally say that norms are things people expect others to do, but where the response to someone not doing them is simply to be surprised; this is a rule, something where the response to not doing it is other people taking some sort of action against a person for breaking it. When the people to whom a rule applies and the people who enforce it are different, the enforcers are called authorities.
That fight (when I scanned over it briefly yesterday) seemed to be you and one other user (Shankar Sivarajan) having a sort of comment tennis game where you were pinging back and forth, and (when I saw it) you both had downvotes that looked like you were each downvoting the other, and no one else was participating. I imagine that neither of you was learning or having fun from that conversation. Ending that kind of chain might be the place the rate-limit has a use case. Whether it is the right solution I don’t know.
To an extent this whole site is just a fun game to play. But it’s no fun in a game if the GM declares you have to skip a turn for “no reason” or “git gud”. And you cannot figure out if the GM is being unfair or you actually made a mistake 50 turns back. Thank you for your courage in publicly speaking out.
Since I may not get a chance later: the tiny difference between asteroid and ASI is that we have the craters for asteroids, even if almost all are old. Our evidence is empirical and there is a large quantity of it (see the Moon).
For ASI it’s speculative, and we lack data on basic key questions, like how much compute is needed for it to be a problem. We can speculate reasonably that something with more compute than human brains will be an ASI, but we don’t know how that translates to what the ASI can do against us, or how effective leaving out parts to cripple it will be.
For an asteroid it’s just simple kinetic energy, and we know, for example, how much energy it takes to cause a tsunami, an earthquake, or a blast wave that knocks down buildings.
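To make that concrete: impact energy really is just KE = ½mv², so a handful of measurable quantities give you the damage scale. A back-of-the-envelope sketch (the asteroid parameters are illustrative round numbers I picked, not figures from the thread):

```python
# Back-of-the-envelope asteroid impact energy: KE = 1/2 * m * v^2.
# All parameters are illustrative round numbers, not from the thread.
import math

diameter_m = 1_000        # 1 km rocky asteroid
density = 3_000           # kg/m^3, typical for rock
velocity = 20_000         # m/s, a typical impact speed

radius = diameter_m / 2
mass = density * (4 / 3) * math.pi * radius**3   # ~1.6e12 kg

energy_j = 0.5 * mass * velocity**2              # ~3.1e20 J
megatons = energy_j / 4.184e15                   # 1 Mt TNT = 4.184e15 J

print(f"{energy_j:.2e} J, about {megatons:,.0f} Mt of TNT")
```

A 1 km rock at a typical impact speed works out to roughly 75,000 megatons, the kind of figure you can sanity-check against real craters; there is no equivalent three-input formula for ASI risk.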