Call For Agreement: Should LessWrong have better protection against cultural collapse?
As you are probably already aware, many internet forums experience a phenomenon known as “eternal September”. Named after the temporary effect in which an influx of college freshmen would throw off Usenet’s culture every September, eternal September is essentially what happens when standards of discourse and behavior degrade in a group to the point where the group loses its original culture. I began focusing on solving this problem and offered to volunteer my professional web services to get it done because:
- When I explained that LessWrong could grow a lot and volunteered to help with growth, various users expressed concerns about growth not always being good because having too many new users at once can degrade the culture.
- There has been concern from Eliezer about the site “going to hell” because of trolling.
- Eliezer has documented the rationalist community experiencing a phenomenon that subcultures know as infiltration by “poseurs”. He explains that rationalists are beginning to be inundated by “undiscriminating skeptics” and has stated that it’s bad enough that he needed to change his method of determining who is a rationalist. The appearance of poseurs doesn’t guarantee that a culture will be washed away by mainstreamers, but it may signal that a culture is headed in that direction, and it does confirm that a loss of culture is a possibility, especially if there got to be so many undiscriminating skeptics as to form their own culture and become the new majority at LessWrong.
My plan to prevent eternal September sparked a debate about whether eternal September protection is warranted. Lukeprog, being the decision maker whose decision is needed for me to be allowed to do this as a volunteer, requested that I debate this with him because he was not convinced but might change his mind.
Here are some theories about why eternal September happens:
1. New to old user ratio imbalance:
New users need time to adjust to a forum’s culture. Getting too many new users too fast will throw off the ratio of new to old users, meaning that most new users will interact with each other rather than with older users, changing the culture permanently.
2. Groups tend to trend toward the mainstream:
Imagine some people want to start a group. Why are they breaking away from the mainstream? Is it because their needs are served there? Probably not. They most likely have some kind of difference that makes them want to start their own group. Of course not everyone fits nicely into “different” and “mainstream”, no matter what type of difference you look at. So, as a forum grows, instead of attracting only people who fit nicely into the “different” category, you attract people who are merely similar to those in the different category. People way on the mainstream end of the spectrum generally are not attracted to things that are very different. But imagine how this progresses over time.

I’ll create a scale between green and purple. We’ll say the green people are different and the purple people are mainstream. So, some of the most green folks make a green forum. Now people who are green and similar, those with an extra tinge of red or blue or yellow, join. People in the mainstream still aren’t attracted. However, since there are still more in-between people than solid green or purple people, the most greenish in-between people begin to dominate. They and the original green people still enjoy conversation; they’re similar enough to share the culture and enjoy mutual activities.

But the greenish in-between people start to attract in-between people who are neither more purple nor more green. There are more in-between people than greenish in-between or green people, because purple people dominate in the larger culture, so in-between people quickly outnumber the green people. This may still be fine, because they may adjust to the culture and enjoy it, finding it a refreshing alternative to purple culture. But the in-between people attract people who are more purplish in-betweeners than greenish in-betweeners. There are more of those than of the in-between people, so the culture now shifts to be closer to mainstream purple than different green.

At this point, it begins to attract the attention of the solid purple mainstreamers. “Oh! Our culture, but with a twist!” they think. Now droves of purple mainstream people deluge the place looking for “something a little different”. Instead of valuing the culture and wanting to assimilate, they just want to enjoy novelty. So they demand changes to things they don’t like, to make it suit them better. They justify this by saying that they’re the majority. At that point, they are.
3. Too many trolls scare away good people and throw off the balance.
Which theory is right?
All of them likely play a role.
I’ve seen for myself that trolls can scare the best people out of a forum, ruining the culture.
I’ve heard time and time again that subculture movements have problems with being watered down by mainstream folks until their cultures die and no longer feel worth it to the original participants. A lot of you have probably heard the term “poseurs”. With poseurs in a subculture, it’s not that too many new people joined at once, but that the wrong sort of people joined. The view is that there are people who are different enough to “get” their movement, and people who are not. Those who aren’t similar decide to try to appear like the group even though they’re not like its members on the inside. Essentially, a large number of people much nearer to the mainstream get involved, so the group is no longer a haven for people with those differences.
And I think it’s a no-brainer that if a group gets enough newbies at once, old members can’t help them adjust to the culture, and the newbies will form a new culture and become a new majority.
Also, I think all of these causes can combine, create feedback loops, and amplify one another.
Theory about cause and effect interactions that lead to endless September:
1. A group of people who are very different break away from the mainstream and form a group.
2. People who are similarly different but not AS different join the group.
3. People who are similar to the similarly different people, but even less similar to the different people join the group.
4. It goes on this way for a while. Since there are necessarily more people who are mainstream than different, new generations of new users may be less and less like the core group.
5. The group of different people begins to feel alienated by the new people who are joining.
6. The group of different people begins to ignore the new people.
7. The new people form their own culture with one another, excluding old people, because the old people are ignoring them.
8. Old people begin to anticipate alienation and start to see new users through tinted lenses, expecting annoyance.
9. New people feel alienated by the insulting misinterpretations that are caused by the expectation that they’re going to be annoying.
10. The unwelcoming environment selects for thick-skinned people. A higher proportion of people like trolls, leaders, spammers, debate junkies, etc. are active.
11. Enough new people who were ignored and failed to acculturate accumulate, resulting in a new majority. If trolls are kept under control, the new culture will be a watered-down version of the original culture, possibly not much different from mainstream culture. If not, see the final possibility.
12. If a critical mass of trolls, spammers and other alienating thick-skinned types is reached, due to an imbalance or inadequate methods of dealing with them, they might ward off old users, exacerbating the imbalance that draws a disproportionate number of thick-skinned types in a feedback loop, and then take over the forum. (This is why 4chan’s /b/ isn’t known for having sweet little girls and old ladies.)
Is LessWrong at risk?
1. Eliezer has written about rationalists being infiltrated by main-streamers who don’t get it, aka “poseurs”.
Eliezer explains in Undiscriminating Skeptics that he can no longer determine who is a rationalist based on how they react to the prospect of religious debates, and now has to determine who is a rationalist based on who is thinking for themselves. This is the exact same problem other subcultures have: they say the new people aren’t thinking for themselves.

We might argue “but we want to spread the wonderful gift of rational thought to the mainstream!” and I would agree with that. However, if all they’re able to take away from joining is that there are certain things skeptics always believe, all they’ll be taking away from us is an appeal to skepticism. That’s the kind of thing that happens when subcultures are overrun by mainstream folks. They do not adopt the core values. Instead, they run roughshod over them.

If we want undiscriminating skeptics to get benefits from refining the art of rationality, we have to do something more than hang out in the same place. Telling them that they are poseurs doesn’t work for subcultures, and I don’t think Eliezer telling them that they’re undiscriminating skeptics will solve the problem. Getting people to think for themselves is a challenge that should not be undertaken lightly. To really get it, and actually base your life on rationality, you’ve either got to be the right type, a “natural” who “just gets it” (like Eliezer, who showed signs as a child when he found a tarnished silver amulet inscribed with Bayes’s Theorem), or you have to be really dedicated to self-improvement.
2. I have witnessed a fast-growing forum actually go exponential. Nothing special was being done to advertise the forum.
Obviously, this risks deluging old members in a sea of newbies that would be large enough to create a newbie culture and form a new majority.
3. LessWrong is growing fast and it’s much bigger than I think everyone realizes.
I made a LessWrong growth bar graph showing how LessWrong has gained over 13,000 members in under 3 years (Nov 2009 to Aug 2012). LessWrong had over 3 million visits in the last year. The most popular post has gotten over 200,000 views. Yes, I mean there are posts on here that are over 1/5 of the way to a million views; I did not mistype. This is not a tiny community website anymore, but I see signs that people are still acting as if it were, like when people post their email addresses on the forum. People don’t seem to realize how big LessWrong has gotten. Since this growth happened in a short time, we should be wondering how much further it will go, and planning for the contingency that LessWrong becomes huge.
4. LessWrong has experienced at least one wild spike in membership. Spikes can happen again.
We can’t control the ups and downs in visitors to the site. A spike could happen again. It could last for longer than a month. According to Vladimir, using wget, we’ve got something like 600 to 1,000 active users posting per month. We’ve got about 300 users joining per month from the registration statistics. What would happen if we got 900 new users each month for a few months in a row? A random spike could conceivably overwhelm the members.
5. Considering how many readers it has, LessWrong could get Slashdotted by somebody big.
If you’ve ever read about the Slashdot effect, you’ll know that all it might take to get a deluge bigger than we can handle is to be linked to by somebody big. What if Slashdot links to LessWrong? Or somebody even bigger? We have at least one article on LessWrong that got about half as many visits as a hall-of-fame-level Slashdot article. The article “Scientologists Force Comment Off Slashdot” got 383,692 visits on Slashdot, compared with LessWrong’s most popular article at 211,000 visits. (Cite: Slashdot hall of fame.) LessWrong is gaining popularity fast. It’s not a small site anymore. And there are a lot of places that could Slashdot us. It may be just a matter of time before somebody pays attention, does an article on LessWrong, and it gets flooded.
6. We all want to grow LessWrong, and people may cause rapid growth before thinking about the consequences.
What if people start growing LessWrong and wildly succeed? I would like to be helping LessWrong grow but I don’t want to do it until I feel the culture is well-protected.
7. Some combination of these things might happen and deluge old people with new people.
Does LessWrong need additional eternal September protection?
Lukeprog’s main argument is that we don’t have to worry about eternal September because we have vote downs. Here’s why vote downs are not going to protect LessWrong:
1. If the new to old user ratio becomes unbalanced, or the site is filled with mainstreamers who take over the culture, who is going to get voted down most? The new users, or the old ones? The old members will be outnumbered, so it will likely be the old members.
2. This doesn’t prevent new users from interacting primarily with new users. If enough people join, there may not be enough old users doing vote downs to discourage them anymore. That means if the new to old user ratio were to become unbalanced, new users may still interact primarily with new users and form their own, larger culture, a new majority.
3. Let’s say 4chan’s /b/ decides to visit. A hundred trolls descend upon LessWrong. The trolls, like everybody else, have the ability to vote down anything they want. The trolls of course will enjoy harassing us endlessly with vote downs. They will especially enjoy the fact that it only takes three of them to censor somebody. They will find it a really, really special treat that we’ve made it so that anybody who responds to a censored person ends up getting points deducted. From a security perspective, this is probably one of the worst things that you could do. I came up with an idea for a much improved vote down plan.
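To make the threat concrete, here is a minimal Python sketch contrasting the fixed hide threshold described above with one possible mitigation, weighting each vote by the voter’s karma. Everything in it is an illustrative assumption, not LessWrong’s actual moderation code and not the improved plan linked above.

```python
# Illustrative sketch only: a fixed hide threshold versus karma-weighted
# voting. The threshold, the weighting, and the rescaling factor are all
# assumptions for the sake of the example.

HIDE_THRESHOLD = -3  # "it only takes three of them to censor somebody"

def hidden_flat(votes):
    """Each vote counts as +/-1 regardless of who cast it."""
    return sum(votes) <= HIDE_THRESHOLD

def hidden_weighted(votes_with_karma, scale=100):
    """Scale each vote by the voter's karma, so three fresh troll
    accounts (karma ~0) cannot hide a comment by themselves."""
    score = sum(direction * max(karma, 0) for direction, karma in votes_with_karma)
    return score <= HIDE_THRESHOLD * scale

# Three zero-karma trolls downvote a comment:
assert hidden_flat([-1, -1, -1])                         # hidden under the flat rule
assert not hidden_weighted([(-1, 0), (-1, 0), (-1, 0)])  # survives under weighting
```

Under the weighted rule, a brigade of brand-new accounts has no hiding power at all, at the cost of concentrating moderation influence in established users.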
Possibly more important: What happens if we DO prevent an eternal September?
What we are deciding here is not simply “do we want to protect this specific website from cultural collapse?” but “How do we want to introduce the art of refining rationality to the mainstream public?”
Why do mainstreamers deluge new cultures, and what happens after that? What do they get out of it? How does it affect them in the long term? Might our being deluged by mainstreamers make it more likely for the mainstreamers to become better at rational thought, the way a first taste makes you want more?
If we kept them from doing that, what would happen, then?
Say we don’t have a plan. LessWrong is hit by more users than it can handle. Undiscriminating skeptics are voting down every worthwhile disagreement. So, as an emergency measure, registrations are shut off; the number of visits to the website grows and then falls. We succeed in keeping out people who don’t get it. After it has peaked, the fad is over. Worse, we’ve put them off and they’re offended. Or, we don’t shut off registrations, we’re deluged, and now everyone thinks that a “rationalist” is an “undiscriminating skeptic”. We’ve lost the opportunity to get through to them, possibly for good. Will they ever become more rational? LessWrong wants to make the world a more rational place. An opportunity to accomplish that goal could present itself. Eliezer figured out a way to make rationality popular. Millions of people have read his work. This could get even bigger.
This is why I suggested two discussion areas—then we get to keep this culture and also have an opportunity to experiment with ways for the people who are not naturals at it to learn faster. If we succeed in figuring out how to get through to them, we will know that the deluge will be constructive, if one happens. Then, we can even invite one on purpose. We can even advertise for that and I’d be happy to help. But if we don’t start with eternal September protection, we could lose all this progress, lose our chance to get through to the mainstream, and pass like a fad.
For that reason, even if eternal September doesn’t look likely to you after everything that I’ve explained above, I say it is still worthwhile to develop a tested technique to preserve LessWrong culture against a deluge and get through to those who are not naturals. Not doing so takes a risk with something important.
Please critique.
Your honest assessments of my ideas are welcome, always.
We have had way too many meta threads recently.
I’m starting to wonder whether it might be useful to have a ‘Meta’ section, which is separate from Discussion (and Main) for meta threads of all kinds.
Then when that gets filled with meta threads about the Meta section we can create the Meta Meta section.
If people would just start the title with “Meta:”, people would be warned up front. Some people here have already done this, and it is a common technique with a long pedigree on mailing lists.
I see the same thing being done with “Meetup:”.
Is this already suggested in the Wiki or FAQ?
Okay, I understand, because of this comment, why the thread was voted down. Thanks Konkvistador.
Yes. It’s starting to remind me of talk pages of Wikipedia’s policy and guideline pages, the scariest time sink I’ve ever seen where people spend megabytes of words in meta-, meta-meta-, and meta-meta-meta-discussions about things any sane person would hardly give a damn about, such as whether to write “3 September” or “September 3”. (One of the biggest regrets in my life is the huge amount of time I wasted in there.)
If a thread isn’t specific to a person’s interests, why does it need to be voted down? Do people feel obligated to read every single thread or something? Do you assume that all of you will have the exact same interests, and the same amount of interest in each? I do not understand the practice of thumbing down a thread because it’s a different topic from what you wanted.
If there are a lot of uninteresting threads posted, they will make the interesting threads drop down faster, reducing the amount of attention those threads get.
I consider this a design defect of the forum software that limits scalability. Or if not a design defect, then a mismatch between design and use.
Blog software just was not designed for sustained discussion or filtered discussion. Just catch some eyeballs and let them spout off. That’s what it enables. It’s the same annoyance everywhere.
I’ll repeat my old geezer lament—TRN and other usenet readers were vastly superior for discussion lists compared to web forum software and blog software in general use.
Strongly agree. It’s hard to believe that the online community could lose so much functionality by accident, but it really happened.
I wonder if evolution has lost very valuable traits because there was no use for them for a while. Probably so.
Tentative theory: TRN worked really well for making long discussions readable. However, a great many of the newsgroups that used it were unmoderated. The culture of usenet was such that many unmoderated newsgroups worked pretty well, especially if you used TRN or somesuch.
When browsers happened, there were advantages—like color, page design, audio, and video. Before browsers, there were telnet screens—small, monochrome screens which only displayed monospaced ASCII text. People used to do ASCII art, of which not much remains but smilies.
I’m not sure that TRN would have worked in a browser without JavaScript since the page needed to be refreshed for every comment, and information about each comment you downloaded needed to be stored.
So, there was an interregnum during which people had to get by on forums (a method of organizing discussion which doesn’t seem to have grown in popularity, though it’s possible I’m missing something), and usenet developed an ugh field because of the lack of moderation. (There was protection against spam and DOS trolling, but as with hobbits in the Shire, the typical user had no idea that a large amount of volunteer labor was protecting them.)
So, when it became possible for TRN to be transferred to the web, not only would it require a large amount of really boring coding (and possibly a redesigned user interface, since the original required learning a bunch of single-letter commands), but it has an ugh field because of the unmoderated groups.
I think the prime movers were economic and operational.
The economics of the web changed. Usenet was a cost center, not a profit center. Once you could monetize your content, many higher quality providers on usenet probably moved to blogs. Even where sites have web forums, the incentives are to catch eyeballs, not facilitate discussion, and they often will seek to control content on their site to maximize those eyeballs.
I used to post some at Sam Harris’ site. One day he pontificated on the self evident desirability of pumping more money into government schools. I was heartened when a number of libertarians lit into his self righteous certainties, and let him know that there were plenty of selves for whom his proposition was self evidently idiotic. Poof; the original blog post and all associated comments disappeared. Such are the wonders of moderation.
Also, with monetization, other high quality competition for eyeballs came online. You and I could pontificate back and forth, or we could listen to (or watch!) the best, brightest, and most educated in the world pontificate.
The operational changes came from a shift from central-server university environments to home environments. Universities came with central admins to handle usenet server setups. They also came with the bandwidth to download all hierarchies, making them available in pieces to a user, who was likely on a system with TRN already installed.
I started in UNIX environments, but I’ve primarily used windows for at least a decade and a half. I’m sure they must be out there, but I don’t think I’ve ever even seen a windows box with a usenet server or TRN installed.
I don’t remember any web site ever implementing a TRN-like interface for its forum, but it should be perfectly feasible today. You could even put little ads at the top.
What was that google discussion thing with federated servers? (Google Wave) Wasn’t that even real time? Real time collaboration with wiki like features for collaboration, but trn features as well for discussion, and maybe even filtering. That would be fun. Wish there was something like that out there.
I’m not sure that monetization has a huge amount to do with the story. Usenet was populated by hobby bloggers, and there are still a lot of hobby bloggers—many of them are the same people, but there are plenty of new hobby bloggers showing up all the time.
I’m not convinced that most of the professional bloggers are better than the best hobby bloggers.
It’s plausible that the unowned character of usenet couldn’t be duplicated these days, especially considering that rather few people would settle for an ascii-only medium. On the other hand, Moore’s Law might make a modern usenet feasible.
In any case, as you say, individual sites could have TRN, and it could be combined very nicely with an RSS feed to give that “page down through your favorite newsgroups” feeling.
You could have easily written the same content and got about as much feedback in one of the many recent threads. Perhaps this one.
I think this is an important risk, even though people are tired of meta discussion (and these posts are way too long for their useful content). This use case should be captured in some plan. Right now all we have is possibility of shutting down and possibly recovering from an archive, and that is not a good solution.
People with 0 Karma can still upvote, so there is a danger that new members will just upvote each other quickly. This problem could be solved if there is enough latency between a point where a user starts participating and where they can influence moderation by voting, i.e. only allow voting (both upvoting and downvoting) after a significant threshold, something like 500 Karma points, so that it would take at least a month at typical Karma gain rates of very active users to get there, sufficient for vetting of new members.
The current limitation of 4xKarma on downvoting amplifies initial votes, so it could run out of control if applied to upvoting. If upvoting is limited to, say, 0.5xKarma, then any vote granted to a new member releases a potential effect of 1+0.5+0.5^2+...=2 points; if it’s 0.9xKarma, it has a potential effect of 10 points; and 1xKarma is critical, with potentially unlimited effect. So below 500 points, where we may decide that a new user is known to be acculturated enough, no more than about 0.5xKarma of upvotes should be allowed.
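For the curious, the bound being described here is just a geometric series; a quick numeric check (illustrative only, not a claim about how the karma system is implemented):

```python
# Checks the geometric series above: an upvote multiplier of r means one
# karma point granted to a new member releases at most
# 1 + r + r^2 + ... = 1/(1 - r) points of total effect.

def total_effect(r, terms=1000):
    """Partial sum of the geometric series, for comparison."""
    return sum(r ** k for k in range(terms))

for r in (0.5, 0.9):
    print(f"multiplier {r}: series ~ {total_effect(r):.2f}, closed form {1 / (1 - r):.2f}")
# multiplier 0.5: series ~ 2.00, closed form 2.00
# multiplier 0.9: series ~ 10.00, closed form 10.00
# At r = 1 the series diverges, which is why 1xKarma is "critical".
```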
I made a very similar proposal recently, just set at 1000 karma. There is some discussion there on the ups and downs of such an approach behind the link.
I think it’s weird that this post was made by a new user who says things like “vote downs”.
You have just poked fun at a person with a learning disorder who has put excruciating effort into avoiding making mistakes like that one. I laughed when I saw your comment—my self-confidence level is very high. I am, however, a little annoyed.
My intent wasn’t mockery.
I had no way to distinguish between the learning disorder hypothesis and the “has not assimilated LW culture/vocabulary” hypothesis, and the latter seemed a lot more salient and likely. You’re new, you’re making other cultural errors that just don’t happen to be so readily quoted, and I don’t remember you mentioning this anywhere before.
Ah. It’s hard to tell a person’s intent in text, and I used to experience discrimination all the time in the past before I did so much work on my spelling. I expect myself to be able to remember which versions of new words are correct, because I know that I’m capable of it. Perfectionist moment.
Thanks for taking the moment to explain that to me.
Ironically, after reading your incessant calls for improvement, the suggestion I have is to limit the number of top-level posts per month by a new member to the number of months they have been active.
Has it occurred to you that the reason I am talking about making website changes is because I have volunteered to do some work for free? I could just stop discussing it in the discussions, but then you may get random changes. Or I could just not volunteer. But then you get no free work. Which do you prefer?
It’s unlikely that this would occur to someone, so this should be a statement and not a rhetorical question (since it’s a question, I’m still not certain if it’s a statement, and if so what it means more specifically). The choice you describe is a false trichotomy. For example, you could be discussing the changes, but in a less verbose manner (this takes care of shminux’s complaint), or you could only summarize after you’ve finished designing them and were preparing to move to implementation (this takes care of suddenness of changes, allows revision).
Ok, just because he commented about the threads doesn’t mean he read any of them to find out I’m volunteering. You’re right.
I don’t see being less verbose as a good way to convey all relevant information. I guess this is too complicated for people to want to be involved in it. Maybe they don’t care that much or just want someone to figure it out for them or something. I never seem to assume that, though. I always seem to assume they’ll want to know. Maybe that’s an odd habit.
Still, I have a hard time understanding why they’d seek to censor somebody talking about something that was relevant, and could be important, that other people do want to talk about.
If I’m right that this place is at risk, we’re talking about the end of LessWrong. I could wait until I wasn’t a new user to discuss it, but I don’t know how long that takes or how much time is left. It makes very little sense to wait when something is both important and could happen at any time. So, being discouraged by others on this feels like senseless chaos.
Especially the guy who was like “You wrote too many posts I don’t want!”—that is just ironic!
Thanks for talking to me about this, Vladimir. (:
It is helping. (:
Your writing does tend to be verbose—the amount of interesting surprise per amount of text is low.
Part of the problem seems to be repetition among articles as well as within them.
Judging the amount of redundancy needed and then supplying good quality redundancy is a hard problem, and I think most people just have a habitual redundancy level. (Mine may be too low.) I haven’t seen an overt discussion of how to do redundancy well anywhere.
Many parts of this comment pattern-match to the poorly informed ranter type we get here that I was talking about earlier. (You don’t match overall, but sometimes people don’t seem to, at first...)
Well thanks for not insta-judging me. I hate it when that happens. I don’t think of myself as poorly informed (though most don’t think of themselves that way) but I know I am intense and can get really verbose. It seems like the component causing the problem right now is being verbose. I’m not sure what you want me to learn from your comment, but that’s what I’m taking away from it.
Here’s a point-by-point on which patterns you’re matching, if that’ll help, but after that I want to end the thread, because I find you kind of frustrating to talk to.
Response to feedback about how to talk to us: denying that that’s how to talk to us.
Subtly derisive remarks about others’ virtue (tolerance for complexity, commitment to site, interest level, initiative, etc.)
Passive-aggressive assertion about self-exceptionalism and your rare, yet unfailing tendency to see the best in others even when constantly disappointed—budding martyr complex.
Ignorance claim that happens to oblige other people to justify wanting you to stop doing something. Trigger-happy use of the word “censor”. Declaration of relevance, unsupported. Declaration of support from unspecified others.
Doomsaying. Also, you’re conveniently the only one who has noticed this Terrible Danger.
It’s an emergency! So, exceptionalism!
Nope, no rhyme or reason to what others are up to, just people who you’ve mysteriously chosen to hang out with acting at random.
This seems to me like a misunderstanding or a hearing-past of the point you’re responding to.
And then you end with a cookie for the person who engaged with you.
I know it can be tiring to explain to frustrating people why they are frustrating. Thanks for taking the time to type this up. Hopefully I’m good enough at taking criticism that I won’t stay frustrating long.
This seems rather harsh, given what Eliezer has been saying. If the person with ultimate power over the forum has been talking about the site “going to hell” it has to be expected that the language of doom will rub off on new users. This isn’t to say Epiphany matches patterns any less but we could perhaps avoid conveying isolation.
Actually, this eternal September business was preceded by the concerns of other people in my “LessWrong could grow a lot but we’re doing it wrong” thread:
Vladimir_M
CronoDAS
gjm (Specifically concerned about intentional growth that I proposed causing eternal September.)
beoShaffer
Risto_Saarelma
More people expressed concern when I talked about preventing it:
cousin_it
Xachariah
People are still expressing concern in this very thread:
Konkvistador
Armok_GoB
If there’s a volunteer interested in working on growth, and it looks like lots of growth is possible, but a bunch of people are concerned about a decline in culture and it’s a known risk of growing internet forums, and Eliezer is talking about the proliferation of undiscriminating skeptics, and I saw a forum collapse from it myself, doesn’t it make sense to talk about whether growth would destroy LessWrong before speeding up growth?
People were quick to upvote my growth post like there’s no tomorrow. It was the most popular post in almost a month. Then I wrote a post about the downsides of growth, and it’s been downvoted to the point of being hidden. Might this be optimism bias, normalcy bias, or denial at work? I don’t think that the rejections stated are the true rejection.
I see my name being taken kinda-in-vain here. I wasn’t saying “LW is about to be consumed by an Eternal September” but something nearer to “If we take the course Epiphany is proposing, we may inflict an Eternal September upon ourselves”. I think the same may be true for some of the other people you mention, but I haven’t gone back to check exactly what they said.
Did my edit solve this, Gjm?
Yes, with two minor caveats—probably too minor to merit half the number of words I’m about to spend on one of them :-).
1. As I already mentioned, the same concerns may apply to some of the other people you listed; I haven’t checked.
2. I’m still there bulking up your list of people worried about “this eternal September business”, even though what I was expressing concern about was something more specific. Your edit means that you aren’t misrepresenting me any more, but it’s still a little odd. Imagine, to take a melodramatically exaggerated example, that a creationist website puts up a list of “people who think Darwin was wrong”, and one of the people on the list is, say, Richard Dawkins. Even with a note explaining “Specifically, thinks that science has moved on since the 1850s and we now know lots of details Darwin didn’t”, his name would be out of place on that list.
The reason why #2 is not a big deal is that, actually, I do think there is a real possibility that (even without deliberate attempts to grow) LW—or any other community—will suffer from “dilution” over time. But that isn’t what I said in the discussion you linked to :-). (And I certainly wouldn’t say that it’s likely to destroy LW, or anything like that.)
Okay, well it’s up to you, Gjm. I will remove you completely if you request.
Given the presence of this discussion, I don’t think that’s necessary.
Sometimes popularity does not correlate with good ideas; especially when unpopular things need to be done. Forum moderation triggers our hierarchy instincts. We don’t want trolling here, but any specific action against trolling feels dangerous; our instincts scream at us that the moderator is taking too much power and will certainly abuse it. We imagine a hypothetical scenario where the rules could be used against us, and we get a paranoid feeling that this is exactly what will happen.
For instance, now we have the new rule that replying to low-karma comments costs you some karma. Suddenly everyone imagines a situation where it would be reasonable to reply to a negative-karma post, and ignores that the prior probability of that is much lower than the prior probability of a negative-karma post not worth replying to (but receiving many replies anyway).
It’s the “better safe than sorry” bias talking, which means ignoring the costs of being “safe”. We want to be certain that no negative-karma comment worth replying to goes unnoticed, ever. The costs in our time and attention be damned.
Perhaps your article was not about this, but… it just came at the wrong time, when it’s popular to oppose website moderation.
I think it’s especially telling that the main objections to this post are “It’s long.” and “It’s a meta thread.”, while this popular post on growth definitely qualifies as a meta thread, and the most popular post I’ve ever seen here is over three times longer than this thread. If they didn’t like meta topics, they’d have voted my growth post into oblivion. If they didn’t like long posts, they’d never have been interested in the Sequences. If they didn’t like newbies posting meta threads, they would not have upvoted my popular growth post to the point where it was the most popular post in almost a month.
None of these are the true rejection. On an individual level, maybe. On a group level, no.
“It’s long.” and “It’s a meta thread.” are both simplified versions of the actual objections. The full versions are “It took too long to come to a point so I gave up reading” and “It’s the umpteenth meta thread in the last week and I’m tired of them”, respectively.
You’ll note that the three-times-longer post you link to goes to great lengths to summarize its key points in the first few paragraphs. The structure of the post is also clear, and there are even three separate objections that people can read and address individually. Also, part of the “length” argument might be that you have page-long paragraphs with no breaks in them, which is harder to read.
Likewise, the growth post is a different kind of meta thread. It starts a new discussion and has data to back it up; although I disagree with pretty much everything in it, I saw no reason to downvote it. On the other hand, the current post just rehashes the endless discussions we’ve had over the past few weeks and doesn’t seem to bring many new points to the table. When people say “we don’t want a new meta thread” they mean “we don’t want a new thread to discuss the same things that the last three meta threads were filled with.”
Thank you for this theory, Viliam. (:
I should have asked questions, but my attitude was wrong. Instead I fell back on thinking habits that work to explain the behaviors of non-intellectuals. Now that I know more about LW’s reasons for having those types of rejections (which were unexpected for me), I can see why this would be taken as insulting. I think I understand criticisms 1 and 2. I am trying to understand.
I noticed the Terrible Danger years ago, I’m just not as eloquent and don’t have the guts to back up my beliefs with volunteer labour.
I do! (Think of myself as poorly informed.)
(More seriously, the sense in which you use “poorly informed” is unclear; obviously on most topics one is poorly informed, but perhaps in the context where a person thinks themselves well-informed, they don’t simultaneously think themselves poorly informed. Belief-in-belief type situations might enable that, though, where you believe that you believe yourself to be well-informed, but you know that you aren’t.)
You are referring to beliefs, which should be judged by their truth, not their oddness or the time spent holding them (“habit”).
Most of the information you present doesn’t seem particularly relevant to me, and I expect to others as well. The topics you discuss are important, but it seems possible to summarize central points of your posts in something like 5 times less text, which might result in people actually reading them.
All the warning signs Alicorn described lower down are accurate, and I would also like to say that if you had made a post saying concisely and clearly that you were offering to make website changes yourself to help, I would’ve been upvoting you instead. I start reading one of your posts, and it pattern-matches in my head to something that’s not worth getting to the bottom of.
I’m not the only one who has noticed the danger. At least seven other people are concerned about this, and so is Eliezer.
Just to be clear, this free work you talk about implies submitting Python / Javascript code on Github, right? And not “telling the programmers what they should do”, something which seems much less in demand?
I am capable of doing the programming myself, correct. I haven’t offered LessWrong a blank check, but I feel strongly about eternal September protection, so I’m willing to code it myself. If they want something else, they can ask, although motivation level is a key factor.
Thank you.
If anything, LW is far more at risk of becoming an echo chamber than of an eternal September. Fora can also die just by becoming a closed group and not being open to new members, and given that there’s a fairly steep learning curve before someone is accepted here (“read the Sequences!”) it would, if anything, make more sense to be reducing barriers to entry rather than adding more.
Have you seen our contrarians?
The echo chamber concern, in my mind, is not that everyone says the same thing- sure, there are enough pedants to point out small errors to keep people chattering forever.
The echo chamber concern I have is that people will only be from one discipline- and so they will not drink from wells of knowledge that are two doors down the hall.
Both risks should be addressed. Eternal September is not an optimal measure against Echo Chamber and conversely.
This overlaps what I was thinking—that fora die from insipidity and boredom at least as often from low-quality new members.
Do you have any particular reason to believe that? (many people show opposite concerns, and more specifically the stats seem to show increasing numbers of new users)
That doesn’t happen that often, and when it does, it’s usually recognized that it’s a bit of a cliché. The case where it seems most likely to happen is if a new member starts proposing some great “new” ideas on AI or friendliness or morality, such as “it’s obvious, for Friendly AI all you have to do is tell the AI to be nice!”.
The quality of LW discourse is, in my opinion, dropping, with more and more guessing of the teacher’s password.
It’s not clear whether you’re agreeing that endless September protection is warranted, or if you are saying that I have committed “guessing of the teacher’s password”. If it’s the latter, I would appreciate it if you would tell me the specific thing I said that you’re interpreting that way, thanks.
My other comment was about there being too many meta threads. This one was about the subject you opened.
Okay, do you mean the problem is in the subject line, or somewhere in the post? Something more specific is needed. Thanks.
You seem to be misunderstanding me. This was a comment on the topic you opened not a criticism of your opening post. That was here.
Claim: “Eternal September” is impossible to avoid.
The way “memetic movements” deal with Eternal September is one of the following:
(a) Dilute and die in the original sense (some longer-lived meme complexes did this and are now “mainstream religions”). This might not be so bad. It may be that diluted/dead Christianity saved Europe from collapse during the dark ages. In general I can think of many ways in which a counterfactual non-Christian Europe would probably have been much, much worse off.
(b) Create an “inner school/outer school” division. This, again, is not so bad—but you must give up the notion that everyone is helpable. The idea with (b) is there is only so much room on an “ark” and it is not economical to save everyone.
(c) Go secret and stop new users from coming in. This also is not so bad—but you give up the goal of “raising the sanity waterline”/”worldwide baptism”/”nirvana for all sentient beings”/etc./etc.
LessWrong is the outer school. It exists as a magnet to attract those who may be capable of real growth in the Way. As is always the way in such things, it also attracts many others, people who imagine themselves to be capable of learning but who in reality desire only the outward trappings of rationalism, a new vocabulary to express the same wrong ways of thinking, a tribal sign used to flatter themselves at being above the masses outside. The more that LessWrong draws the former while repelling the latter, the more it can fulfil its real function, which is to find candidates who may be capable of membership in the inner school.
The inner school is jocularly alluded to as the Bayesian Conspiracy, in order to give the impression that it does not exist. Some have been members of the inner school for years before discovering that fact, while others falsely imagine themselves to be on the inside while the real inner circle knows that they will never enter within its invisible walls.
The inner school is run by an inner circle, the inner inner school, the Conspiracy beyond the Conspiracy, whose existence is known to none but its own members.
Well, it could be true.
(I’m just applying a standard template for the organisation of secret societies. I don’t actually have a hotline to the One.)
Richard, have you ever been in an “esoteric society” before?
Not that I’m aware of. Am I in one now?
Maybe this mathematical approach would work. (h/t matt)
Sorry I didn’t incorporate this into the solutions page sooner, Luke. I didn’t check this thread for solutions. I will add this now. (I made a cliff notes version of the suggestions if you’re interested.)

I question, though, whether changing the karma numbers on the comments and posts would have a significant influence on behavior, or on who joins and stays. Firstly, votes may reward and punish, but they don’t instruct very well: unless people are very similar, they won’t have accurate assumptions about what they did wrong. I also question whether having a significant influence on behavior would prevent a new majority from forming, because these are different problems. The current users, who are the right type, may be both motivated and able to change, but future users of the wrong type may not care or may be incapable of changing. They may set a new precedent where there are a lot of people doing unpopular things, so new people are more likely to ignore popularity.

The technique uses math, and the author claims that “the tweaks work”, but I didn’t see anything specific about what the author means by that, nor evidence that it’s true. So this looks good because it is mathematical, but it’s less direct than other options, so I’m questioning whether it would work.
Luke, I am not sure there are mathematical approaches to social problems. For the same reason you can’t solve Google’s problem with just the PageRank algorithm.
The “inner school / outer school” seems to me that correct solution, because:
(a) I don’t want to see the message diluted. The diluted forms are already out there—people behaving rationally only inside the laboratory; people trying to use arguments and evaluate evidence properly except when talking about religion or politics; etc.
(b) I completely agree that many people are beyond help; that is: they don’t even want to become rational, and trying to convince them otherwise is hopeless or at least not cost-effective. Let’s do the rational thing instead, whatever it is.
(c) I support the idea of “raising the sanity waterline”. We can’t make everyone sane, but we should create as much sanity as possible.
In some sense, (b) already exists. Many people are LW readers, but only some of them are CFAR members. And while the LW readers discuss some minor details, CFAR is organizing rationality minicamps. Some people talk, some people do; and those who do recognize each other. Rationalist communities materialize in meatspace. Even if hordes of trolls were to overrun LW tomorrow, it would be only a temporary setback, and a new discussion forum would probably appear soon if enough old members felt disappointed.
Divide into sub-sites. Most obviously, rationality vs AI. Each group will be better able to maintain the ideal size for self-policing.
Create new software tools to support regular features like meetups, rationality quotes, re-discussions of Eliezer’s old posts, and open thread. The tools would have to reflect all requirements, including directing the right degree of user attention to them.
In a similar vein:
Balkanization. First, estimate the geographic location of users. Second, display to each user (a) the comments from the geographically nearest cluster of active users and (b) the most upvoted comments worldwide. The disadvantages should be obvious from giving this practice the name “balkanization”. The advantages are keeping the number of visible active users below Dunbar’s number, allowing new users to mix with old users, preserving online contact for meetups and other physical communities, and having a clear mechanism to automatically communicate the best material nonlocally.
Edit: My suggestion is complicated to code. A simplification would be to mimic Craigslist’s locations.
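If it helps to see the proposal concretely, here is a rough sketch of the display rule. Everything in it (the names, the crude flat-earth distance, the cutoffs) is an illustrative assumption, and it is simplified to picking the nearest comments directly rather than clustering users first:

```python
# Hedged sketch of the "balkanization" rule above: show each user (a) the
# comments nearest to them geographically and (b) the top-voted comments
# site-wide. Distance measure and cutoffs are assumptions.
import math

def distance(a, b):
    """Crude stand-in for geographic distance between (lat, lon) pairs."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def visible_comments(user_loc, comments, local_k=20, global_k=5):
    """comments: list of dicts with 'loc' (lat, lon) and 'votes' keys."""
    local = sorted(comments, key=lambda c: distance(user_loc, c["loc"]))[:local_k]
    top = sorted(comments, key=lambda c: c["votes"], reverse=True)[:global_k]
    merged, seen = [], set()
    for c in local + top:  # local cluster first, then the global best
        if id(c) not in seen:
            seen.add(id(c))
            merged.append(c)
    return merged
```

Keeping local_k comfortably below Dunbar’s number is what would preserve the small-community feel the proposal is after.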
As a last-resort emergency measure, we might also want to literally shut down the site automatically if too many people try to join in a given day, until we’ve found better ways to handle this.
We could just shut down new account creation rather than the whole site.
That’s what I sa-
Huh, guess it wasn’t what I said.
I agree: anyone wishing to shut down this internet website should be required to register numerous accounts within a one-day span, and all requests meeting such criteria should be automatically anonymised and approved.
That’s a really good point I didn’t think of. Hopefully something like not allowing more than one registration per IP address per day would fix it...
I like your idea of adding a requirement that they spoof IPs.
Hey! It’s a trivial inconvenience! But yeah, it’s not really enough.
This has been done by many sites, and appears to be a useful tactic.
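For what it’s worth, the tactic is simple to sketch. All names and thresholds below are illustrative assumptions, not any site’s actual implementation, and, as noted above, an IP cap is only a trivial inconvenience for anyone willing to spoof addresses:

```python
# Illustrative per-IP registration throttle; the threshold and the
# in-memory storage are assumptions for the sake of the example.
import time
from collections import defaultdict

MAX_SIGNUPS_PER_IP_PER_DAY = 1
WINDOW_SECONDS = 24 * 60 * 60

_signups = defaultdict(list)  # ip -> timestamps of recent registrations

def allow_registration(ip, now=None):
    """Return True if this IP is still under its daily signup quota."""
    now = time.time() if now is None else now
    recent = [t for t in _signups[ip] if now - t < WINDOW_SECONDS]
    if len(recent) >= MAX_SIGNUPS_PER_IP_PER_DAY:
        _signups[ip] = recent
        return False
    recent.append(now)
    _signups[ip] = recent
    return True
```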
I don’t think downvotes should be a limited resource. They’re much more important than upvotes in filtering out bad comments and commenters, and it doesn’t take a brilliant commenter to recognize a bad comment. (Full disclosure: I’m out of downvotes.)
… downvotes are a limited resource?
Not particularly limited. At least not limited to anyone who is remotely active, not a lurker and isn’t always a troll. You have 4 times as many total downvotes you can give as you have karma.
I just noticed your implication. So, which am I: inactive, a lurker, or a troll? And why does it mean my votes shouldn’t count?
Perhaps you noticed an implication but not one of mine? At least that must be the case if you take it as an implication personal to you. I responded to MugaSofer’s question from the recent comments thread and so didn’t know any context that pertains to yourself.
As a literal answer to your question I assume that your weeding vs commenting ratio must be higher than average. That would place you somewhat towards the ‘lurker’ end of the spectrum. That needn’t be considered offensive—it indicates having other higher priorities and the ability to restrain oneself from impulses to waste time arguing on the internet.
I approve of the downvote limitation by karma as it stands (unless a new, smarter system is coded). It isn’t a perfect indicator of how much consideration should be granted to an HTTP request representing a click by a certain account, but it is adequate for the task. If I were personally allocating the right to vote on an individual basis, I would grant you an unlimited supply of votes, since from what I recall of your comments you meet the requisite standard of having-a-clue. But I don’t consider the automated and rather trivial system of raw karma totals to have the necessary information to make that judgement, so I am not especially outraged that you reached your quota.
The reason why accounts with low karma are limited in how much they can vote is because the anthropomorphised karma system doesn’t feel like it knows enough about the quality of your thinking to be comfortable granting you the amount of influence you are attempting to have by enacting judgement. Fortunately it isn’t exactly a complicated or strenuous task to establish trust with the karma system by writing a few comments or pasting in some inspirational quotes. In fact, anyone who did find it hard to work out how to gain karma by making comments is not likely to be the kind of person whose votes I would find personally useful.
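For reference, the cap described above is easy to state in code. A sketch, with the 4x multiplier taken from this thread and everything else an assumed simplification:

```python
# Sketch of the downvote cap: total downvotes available are limited to
# 4x a user's karma. The multiplier is from this thread; the rest is an
# assumption, not the site's actual code.

DOWNVOTE_MULTIPLIER = 4

def downvotes_remaining(karma, downvotes_cast):
    """Remaining downvotes under the 4x-karma cap (never negative)."""
    return max(0, DOWNVOTE_MULTIPLIER * karma - downvotes_cast)

assert downvotes_remaining(karma=100, downvotes_cast=350) == 50
assert downvotes_remaining(karma=0, downvotes_cast=0) == 0  # zero-karma accounts can't downvote
```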
I didn’t think my meaning was ambiguous; apologies if so.
Huh. You learn something new every day.
The trivial solution: meter new members of the community.
The difficult solution: Stop trying to control culture directly.
Stop making really long posts I don’t want to read. This is way more annoying than any imaginary september of infinite lameness
Well that is just ironic.
As an extra data point, I didn’t read this article nor the previous long one (didn’t downvote them either!) because they were just too long; I typically only read long articles if they are heavily upvoted or written by somebody who usually writes good stuff, or are on a topic I’m very curious about.
Ahh, well that’s some insight. Thank you Emile.
Long posts are normally stories about something, or technical explanations. In the trolling case: three meta threads, no agreement. “Meta” is a beloved word for this community, but it could be restricted more to people who are already close to the SI, or who work for them. Or extremely moderated.