I had a top-level post that touched on an apparently forbidden idea; it was downvoted to a net of around −3 and then deleted. This left my karma pinned (?) at 0 for a few months. I am not sure of the reasons for this, but suspect that the forbidden idea was partly to blame.
My karma is now back up to where I could make a top-level post. Do people think that a discussion forum on the moderation and deletion policies would be beneficial? I do, even if we all had to do silly dances to avoid mentioning the specifics of any forbidden idea(s). In my opinion, such dances are both silly and unjustified; but I promise that I’d do them and encourage them if I made such a post, out of respect for the evident opinions of others, and for the asymmetrical (though not one-sided) nature of the alleged danger.
I would not be offended if someone else “took the idea” and made such a post. I also wouldn’t mind if the consensus is that such a post is not warranted. So, what do you think?
I would like to see a top-level post on moderation policy. But I would like for it to be written by someone with moderation authority. If there are special rules for discussing moderation, they can be spelled out in the post and commenters can abide by them.
As a newcomer here, I am completely mystified by the dark hints of a forbidden topic. Every hypothesis I can come up with as to why a topic might be forbidden founders when I try to reconcile it with the fact that the people doing the forbidding are not stupid.
Self-censorship to protect our own mental health? Stupid. Secrecy as a counter-intelligence measure, to safeguard the fact that we possess some counter-measure capability? Stupid. Secrecy simply because being a member of a secret society is cool? Stupid, but perhaps not stupid enough to be ruled out. On the other hand, I am sure that I haven’t thought of every possible explanation.
It strikes me as perfectly reasonable if certain topics are forbidden because discussion of such topics has historically been unproductive, has led to flame wars, etc. I have been wandering around the internet long enough to understand and even appreciate somewhat arbitrary, publicly announced moderation policies. But arbitrary and secret policies are a prescription for resentment and for time wasted discussing moderation policies.
My gloss on it is that this is at best a minor part, though it figures in.
The topic is an idea that has horrific implications that are supposedly made more likely the more one thinks about it. Thinking about it in order to figure out what it may be is a bad idea because you may come up with something else. And if the horrific is horrific enough, even a small rise in the probability of it happening would be very bad in expectation.
More, explaining why many won’t think it dangerous at all. This doesn’t directly point anything out, but any details do narrow the search-space, hence the rot13: V fnl fhccbfrqyl orpnhfr lbh unir gb ohl va gb fbzr qrpvqrqyl aba-znvafgernz vqrnf gung ner pbzzba qbtzn urer.
I personally don’t buy this, and think the censorship is an overblown reaction. Accepting it is definitely not crazy, however, especially given the stakes, and I’m willing to self-censor to some degree, even though I hate the heavy-handed response.
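For readers who haven’t met the convention: the scrambled sentence above is rot13, the usual way here of keeping spoiler-ish details out of casual view and out of search results while leaving them trivially recoverable for anyone who wants them. As an aside, a minimal Python sketch of the decoding, applied to a harmless example string rather than the sentence above:

    import codecs

    def unrot13(text):
        # rot13 is its own inverse, so encoding again decodes.
        return codecs.encode(text, "rot_13")

    print(unrot13("Uryyb, jbeyq!"))  # prints "Hello, world!"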
Another perspective: I read the forbidden idea, understood it, but I have no sense of danger because (like the majority of humans) I don’t really live my life in a way that’s consistent with all the implications of my conscious rational beliefs. Even though it sounded like a convincing chain of reasoning to me, I find it difficult to have a personal emotional reaction or change my lifestyle based on what seem to be extremely abstract threats.
I think only people who are very committed rationalists would find that there are topics like this which could be mental health risks. Of course, that may include much of the LW population.
How about an informed consent form:
(1) I know that the SIAI mission is vitally important.
(2) If we blow it, the universe could be paved with paper clips.
(3) Or worse.
(4) I hereby certify that points 1 & 2 do not give me nightmares.
(5) I accept that if point 3 gives me nightmares that points 1 and 2 did not give me, then I probably should not be working on FAI and should instead go find a cure for AIDS or something.
I feel you should detail point (1) a bit more (explain in more detail what the SIAI intends to do), but I agree with the principle. Upvoted.
I like it!
Although 5 could easily be replaced by “Go earn a lot of money in a startup, never think about FAI again, but still donate money to SIAI because you remember that you have some good reason to, one that you don’t want to think about explicitly.”
I read the idea, but it seemed to have basically the same flaw as Pascal’s wager does. On that ground alone it seemed like it shouldn’t be a mental risk to anyone, but it could be that I missed some part of the argument. (Didn’t save the post.)
My analysis was that it described a real danger. Not a topic worth banning, of course—but not as worthless a danger as the one that arises in Pascal’s wager.
I think that, even if this is a minor part of the reasoning for those who (unlike me) believe in the danger, it could easily be the best, closest-to-consensus* basis for an explicit deletion policy. I’d support such a policy, and definitely think a secret policy is stupid for several reasons.
*no consensus here will be perfect.
I think it’s safe to tell you that your second two hypotheses are definitely not on the right track.
If there’s just one topic that’s banned, then no. If it’s increased to 2 topics—and “No riddle theory” is one I hadn’t heard before—then maybe. Moderation and deletion are very rare here.
I would like moderation or deletion to include sending an email to the affected person—but this relies on the user giving a good email address at registration.
I’m pretty sure that “riddle theory” is a reference to Roko’s post, not a new banned topic.
My registration email is good, and I received no such email. I can also be reached under the same user name using English wikipedia’s “contact user” function (which connects to the same email.)
Suggestions like your email idea would be the main purpose of having the discussion (here or on a top-level post). I don’t think that some short-lived chatter would change a strongly-held belief, and I have neither the desire nor the capability to unseat the benevolent-dictator-for-life. However, I think that any partial steps towards epistemic glasnost, such as an email to deleted post authors or at least their ability to view the responses to their own deleted post, would be helpful.
Yes. I think that the lack of a policy 1) reflects poorly on the objectivity of the moderators, even if only in appearance, and 2) diverts too much energy into nonproductive discussions.
As a moderator of a moderately large social community, I would like to note that moderator objectivity is not always the most effective way to reach the desired outcome (an enjoyable, productive community). Yes, we’ve compiled a list of specific actions that will result in warnings, bans, and so forth, but someone will always be able to think of a way to be an asshole which isn’t yet on our list—or which doesn’t quite match the way we worded it—or whatever. To do our jobs well, we need to be able to use our judgment (which is the criterion for which we were selected as moderators).
This is not to say that I wouldn’t like to see a list of guidelines for acceptable and unacceptable LW posts. But I respect the need for some flexibility on the editing side.
Any thoughts about whether there are differences between communities with a lot of specific rules and those with a more general “be excellent to each other” standard?
That’s a really good question; it makes me want to do actual experiments with social communities, which I’m not sure how you’d set up. Failing that, here are some ideas about what might happen:
Moderators of a very strictly rule-based community might easily find themselves in a walled garden situation just because their hands are tied. (This is the problem we had in the one I mentioned, before we made a conscious decision to be more flexible.) If someone behaves poorly, they have no justification to wield to eject that person. In mild cases they’ll tolerate it; in major cases, they’ll make an addition to the rules to cover the new infraction. Over time the rules become an unwieldy tome, intimidating users who want to behave well, reducing the number of people who actually read them, and increasing the chance of accidental infractions. Otherwise-useful participants who make a slip get a pass, leading to cries of favoritism from users who’d had the rules brought down on them before—or else they don’t, and the community loses good members.
This suggests a corollary of my earlier admonition for flexibility: What written rules there are should be brief and digestible, or at least accompanied by a summary. You can see this transition by comparing the long form of one community’s rules, complete with CSS and anchors that let you link to a specific infraction, and the short form which is used to give new people a general idea of what’s okay and not okay.
The potential flaw in the “be excellent to each other” standard is disagreement about what’s excellent—either amongst the moderators, or between the moderators and the community. For this reason, I’d expect it to work better in smaller communities with fewer of either. (This suggests another corollary—smaller communities need fewer written rules—which I suspect is true but with less confidence than the previous one.) If the moderators disagree amongst themselves, users will rightly have no idea what’s okay and isn’t; when they’re punished for something which was okay before, they’ll be frustrated and likely resentful, neither of which is conducive to a pleasant environment. If the moderators agree but the users disagree with their consensus, well, one set or the other will have to change.
Of course, in online communities, simple benevolent dictatorships are a popular choice. This isn’t surprising, given that there is often exactly one person with real power (e.g. server access), which they may or may not choose to delegate. Two such channels I’m in demonstrate the differences in the above fairly well, if not perfectly (I’m not in any that really relies on a strict code of rules). One is very small (about a dozen people connected as I write this), and has exactly one rule*: “Be awesome.” The arbiter of awesome is the channel owner. Therefore, the channel is a collection of people who suit him. Since there is no other principle we claim to hold to (no standard against which to measure the dictator), and he’s not a jerk (obviously I don’t think so, since I’m still there), it works perfectly well.
The other is the one whose rules I linked earlier. It’s fairly large, but not enormous (~375 people connected right now). There are a few people who technically have power, but one to whom the channel “belongs” (the author of the work it’s a fan community of). Because he has better things to do than keep an eye on it, he delegates responsibility to ops who are selected almost entirely for one quality: he predicts that they will make moderation decisions he approves of. Between that criterion and an active side channel for discussing policy, we mostly avoid the problems of moderator disagreement, and the posted rules ensure that there are very few surprises for the users.
A brief digression: That same channel owner actually did do an experiment in the moderation of a social community. He wanted to know if you could design an algorithm for a bot to moderate an IRC channel, with the goal of optimizing the signal to noise ratio; various algorithms were discussed, and one was implemented. I would call it a tentative success; the channel in question does have very good SNR when active, but it moves slowly; the trivial chatter wasn’t replaced with insight, it was just removed. Also, the channel bot is supplemented by human mods, for the rare cases when the bot’s enforcement is being circumvented.
The algorithm he went with is not my favorite of the ones proposed, and I’d love to see a more rigorous experiment done—the trick would be acquiring ready bodies of participants.
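For concreteness, here is a minimal sketch of one way such a bot might throttle chatter. This is purely my own illustration of the general idea (a per-user cooldown with escalating strikes), not the algorithm that was actually proposed or implemented in that channel:

    import time
    from collections import defaultdict

    COOLDOWN_SECONDS = 60     # minimum gap between accepted messages per user
    STRIKE_LIMIT = 3          # strikes before a temporary mute
    MUTE_SECONDS = 600        # length of the temporary mute

    last_post = defaultdict(float)    # user -> time of last accepted message
    strikes = defaultdict(int)        # user -> current strike count
    muted_until = defaultdict(float)  # user -> time at which a mute expires

    def handle_message(user, now=None):
        """Return 'allow', 'warn', or 'mute' for a message from `user`."""
        if now is None:
            now = time.time()
        if now < muted_until[user]:
            return "mute"                    # still muted: drop the message
        if now - last_post[user] < COOLDOWN_SECONDS:
            strikes[user] += 1
            if strikes[user] >= STRIKE_LIMIT:
                muted_until[user] = now + MUTE_SECONDS
                strikes[user] = 0
                return "mute"
            return "warn"                    # posting too fast: warn only
        strikes[user] = max(0, strikes[user] - 1)  # good behaviour decays strikes
        last_post[user] = now
        return "allow"

The real bot would also need the IRC plumbing and, as noted above, human moderators to catch circumvention; the point is only that the core policy can be a few lines of state per user.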
Anyway. If instead of experimenting on controlled social groups, we surveyed existing groups that had survived, I think we’d find a lot of small communities with no or almost no codified rules, and then a mix of rules and judgment as they got larger. There would be a cap on the quantity of written rules that were actually enforced in any size of community, and I wouldn’t expect to see even one that relied 100% on a codified ruleset with no enforcer judgment at all.
(Now I kind of want to research some communities and write an article about this, although I don’t think it’d be particularly relevant for LW.)
*I’m told there is actually a second one: “No capitals in the topic.” This is more of a policy than a behavioral rule, though, and it began as an observation of the way things actually were.
A minute in Konkvistador’s mind:
Again the very evil mind-shattering secret, why do I keep running into you? This is getting old; lots of people seem to know about it. And a few even know the evil soul-wrecking idea.
The truth is out there. My monkey brain can’t cope with the others having a secret they’re not willing to share; they may bash my skull in with a stone! I should just mass-PM the people who know about the secret in an inconspicuous way. They will drop hints; they are weak. Also, traces of the relevant texts have to still be online.
That job advert seems to be the kind a rather small subset of organizations would put out.
That is just paranoid; don’t even think about that.
XXX asf ag agdlqog hh hpoq fha r wr rqw oipa wtrwz wrz wrhz. W211!!
Yay, posting on LessWrong feels like playing Call of Cthulhu!
....
These are supposed to be not only very smart but very rational people, people you have a high opinion of, who seem to take the idea very seriously. They may be trying to manipulate you. There may be a non-trivial possibility of them being right.
....
I suddenly feel much less enthusiastic about life extension and cryonics.
I do have access to the forbidden post, and have no qualms about sharing it privately. I actually sought it out actively after I heard about the debacle, and was very disappointed when I finally got a copy to find that it was a post that I had already read and dismissed.
I don’t think there’s anything there, and I know what people think is there, and it lowered my estimation of the people who took it seriously, especially given the mean things Eliezer said to Roko.
Can I haz evil soul crushing idea plz?
But to be serious: yes, if I find the idea foolish yet people here take it seriously, that reduces my optimism as well, just as much as malice on the part of the LessWrong staff or plain real dark secrets would, since I take Clippy to be a serious and very scary threat (I hope you don’t take too much offence, Clippy; you are a wonderful poster). I should have stated that too. But to be honest, it would be much less fun knowing the evil soul-crushing self-fulfilling prophecy (tm); the situation around it is hilarious.
What really catches my attention, however, is the thought experiment of how exactly one is supposed to quarantine a very, very dangerous idea, since in the space of all possible ideas I’m quite sure there are a few that could prove very toxic to humans.
The LW members who take it seriously are doing a horrible job of it.
Upvoted for the cat picture.
Indeed, in the classic story, it was an idea whose time had come, and there was no effective means of quarantining it. And when it comes to ideas that have hit the light of day, there are always going to be those of us who hate censorship more than death.
I think such discussion wouldn’t necessarily warrant its own top-level post, but I think it would fit well in a new Meta thread. I have been meaning to post such a thread for a while, since there are also a couple of meta topics I would like to discuss, but I haven’t gotten around to it.
I don’t. Possible downsides are flame wars among people who support different types of moderation policies (and there are bound to be some—self-styled rebels who pride themselves on challenging the status quo and going against groupthink are not rare on the net), and I don’t see any possible upsides. Having a Benevolent Dictator For Life works quite well.
See this on Meatball Wiki, which has quite a few pages on the organization of online communities.
I don’t want a revolution, and don’t believe I’ll change the mind of somebody committed not to thinking too deeply about something. I just want some marginal changes.
I think Roko got a pretty clear explanation of why his post was deleted. I don’t think I did. I think everyone should. I suspect there may be others like me.
I also think that there should be public ground rules as to what is safe. I think it is possible to state such rules so that they are relatively clear to anyone who has stepped past them, somewhat informative to those who haven’t, and not particularly inviting of experimentation. I think that the presence of such ground rules would allow some discussion as to the danger or non-danger of the forbidden idea and/or as to the effectiveness or ineffectiveness of suppressing it. Since I believe that the truth is “non-danger” and “ineffectiveness”, and the truth will tend to win the argument over time, I think that would be a good thing.
The second rule of Less Wrong is, you DO NOT talk about Forbidden Topics.
Your sarcasm would not be obvious if I didn’t recognize your username.
Hmm—I added a link to the source, which hopefully helps to explain.
Quotes can be used sarcastically or not.
I don’t think I was being sarcastic. I won’t take the juices out of the comment by analysing it too completely—but a good part of it was the joke of comparing Less Wrong with Fight Club.
We can’t tell you what materials are classified—that information is classified.
It’s probably better to solve this by private conversation with Eliezer, than by trying to drum up support in an open thread.
Too much meta discussion is bad for a community.
The thing I’m trying to drum up support for is an incremental change in current policy; for instance, a safe and useful version of the policy being publicly available. I believe that’s possible, and I believe it is more appropriate to discuss this in public.
(Actually, since I’ve been making noise about this, and since I’ve promised not to reveal it, I now know the secret. No, I won’t tell you, I promised that. I won’t even tell who told me, even though I didn’t promise not to, because they’d just get too many requests to reveal it. But I can say that I don’t believe in it, and also that I think [though others might disagree] that a public policy could be crafted which dealt with the issue without exacerbating it, even if it were real.)
How much evidence for the existence of a textual Langford Basilisk would you require before considering it a bad idea to write about it in detail?
Normally yes, but this case involves a potentially adversarial agent with intelligence and optimizing power vastly superior to your own, and which cares about your epistemic state as well as your actions.
Look, my post addressed these issues, and I’d be happy to discuss them further, if the ground rules were clear. Right now, we’re not having that discussion; we’re talking about whether that discussion is desirable, and if so, how to make it possible. I think that the truth will out; if you’re right, you’ll probably win the discussion. So although we disagree on danger, we should agree on discussing danger within some well-defined ground rules which are comprehensibly summarized in some safe form.
Really? Go read the sequences! ;)
Hell? That’s it?
Thanks. More reason to waste less time here.
I have been reading OB and LW since about a month after OB’s founding, but this site has been slipping for over a year now. I don’t even know what specifically is being discussed; not even being able to mention the subject matter of the banned post, and having secret rules, is outstandingly stupid. Maybe I’ll come back again in a bit to see if the “moderators” have grown up.
As a rather new reader, my impression has been that LW suffers from a moderate case of what in the less savory corners of the Internet would be known as CJS (circle-jerking syndrome).
At the same time, if one is willing to play around this aspect (which is as easy as avoiding certain threads and comment trees), there are discussion possibilities that, to the best of my knowledge, are not matched anywhere else—specifically, the combination of a low effort-barrier to entry, a high average thought-to-post ratio, and a decent community size.