Preventing discussion from being watered down by an “endless September” user influx.
In the thread “LessWrong could grow a lot, but we’re doing it wrong.”, I explained why I believe LessWrong has the potential to grow much faster, and volunteered to help it grow. Understandably, many people were concerned that a large influx of new members will not directly translate into higher-quality contributions or beneficial learning and social experiences in discussions, so I realized it would be better to help protect LessWrong first. I do not assume that fast growth has to lower standards. I think fast growth can be good if the right people are joining and all goes well (specifics herein). However, if LessWrong grows carelessly, we could be inviting an “Endless September”, a term for a never-ending deluge of newbies that “degraded standards of discourse and behavior on Usenet and the wider Internet” (named after a phenomenon caused by an influx of college freshmen).

My perspective on this is that it could happen at any time, regardless of whether any of us does anything. Why do I think that? LessWrong is growing very fast and could snowball on its own. I’ve seen that happen; I saw it ruin a forum. That site wasn’t doing anything special to advertise the forum, as far as I am aware. The forum was simply popular, and growth went exponential.

For this reason, I asked for a complete list of LessWrong registration dates in order to make a growth chart. I received it on 08-23-2012. The data shows that LessWrong has 13,727 total users, not including spammers and accounts that were deleted. From these, I have created a LessWrong growth bar graph:
Each bar represents one month’s total of registration dates (the last bar is a little short, since it only runs through the 23rd). The number of pixels in each bar equals the number of registrations that month. The first (leftmost) bar that hits the top of the picture (it actually goes waaaay off the page) mostly represents the transfer of over 2,000 accounts from Overcoming Bias. The bar on the right that goes off the page remains unexplained to me: 921 users joined in September 2011, more than three times the number in the months before and after it. If you happen to know what caused that, I would be very interested in finding out. (No, September 2010 does not stand out, if you were wondering.) If anyone wants to do different kinds of analysis, I can generate more numbers and graphs fairly easily.
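If anyone wants to reproduce the chart from a similar export, here is a rough sketch of the binning and plotting; the file name and the one-ISO-date-per-line format are my assumptions, not the actual export format:

```python
from collections import Counter
from datetime import datetime

import matplotlib.pyplot as plt

# Assumes "registration_dates.txt" holds one ISO date (YYYY-MM-DD) per line.
with open("registration_dates.txt") as f:
    dates = [datetime.strptime(line.strip(), "%Y-%m-%d")
             for line in f if line.strip()]

# Count registrations per (year, month) bucket.
monthly = Counter((d.year, d.month) for d in dates)
buckets = sorted(monthly)
counts = [monthly[b] for b in buckets]

plt.bar(range(len(buckets)), counts)
plt.xticks(range(0, len(buckets), 6),
           [f"{y}-{m:02d}" for y, m in buckets[::6]], rotation=45)
plt.ylabel("Registrations per month")
plt.title("LessWrong registrations by month")
plt.tight_layout()
plt.show()
```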
As you can see, LessWrong has experienced pretty rapid growth.
Growth is in a downward trend at the moment, but as you can see from the wild spikes everyplace, this could change any time. In addition to LessWrong growing on its own, other events that could trigger an “endless September” effect are:
LessWrong could be linked to by somebody really big (see: Slashdot effect on Wikipedia).
LessWrong could end up on the news after somebody does something newsworthy, or because a reporter discovers LessWrong culture and finds it interesting or weird.
(A more detailed explanation is located here.)
For these reasons, I feel it is a good idea to begin constructing endless September protection, so I have volunteered some of my professional web services to get it done. This has to be done carefully, because if it is not done right, various unwanted things may happen. I am asking for any ideas, or links to ideas you think were good, and I am laying out my proposed solutions and the pitfalls I have planned for below in order to seek your critiques and suggestions.
Cliff Notes Version:
I really thought this out quite a bit, because I think it’s going to be tricky and because it’s important. So I wrote a cliff notes version of the solution ideas below, with pros and cons for each, which is about a tenth the size.
The most difficult challenge and my solution:
People want the site to be enriching for those who want to learn better reasoning but haven’t gotten very far yet.
People also want an environment where they can get a good challenge, where they are encouraged to grow, where they can get exposed to new ideas and viewpoints, and where they can get useful, constructive criticism.
The problem is that a basic desire all humans seem to share is the desire to avoid boredom. There is possibly a survival reason for this: there is no way to know everything, but missing even one piece of information can spell disaster. This may be why the brain appears to have evolved built-in motivators that prod you to learn constantly. From the mild ecstasy of the flow state (see Flow: The Psychology of Optimal Experience) to tedium, we are constantly being punished and rewarded based on whether we’re receiving the optimal challenge for our level of ability.
This means that those who are here for a challenge aren’t going to spend their time being teachers for everybody who wants to learn. Not everyone has a teacher’s personality and skill set to begin with, and some people who teach do it as writers, explaining to many thousands, rather than by explaining it one-to-one. If everyone feels expected to teach by hand-holding, most will be punished by their brains for not learning more themselves, and will be forced to seek a new learning environment. If beginners are locked out, we’ll fail at spreading rationality. The ideal is to create an environment where everyone gets to experience flow, and no one has to sacrifice optimal challenge.
To make this challenge a bit more complicated, American culture (yes, a majority of the visits, 51.12%, come from the USA—I have access to the Google Analytics) can get pretty touchy about elitism and is prone to anti-intellectualism. Even though the spirit of LessWrong—wanting to promote rational thought—is not elitist but inherently the opposite (increasing good decision-making in the world “spreads the wealth” rather than hoarding it or demanding privileges for those capable of good decisions), there is a risk that people will see this place as elitist. And even though self-improvement is inherently non-pretentious (by choosing to do self-improvement, you’re admitting that you’ve got flaws), there will undoubtedly be a large number of people who might really benefit from learning here but instead insta-judge the place as “pretentious”. Interpreting everything intellectual as pretentious and elitist is an unfortunate habit in our culture. I think, with the right wording on the most prominent pages (about us, register, home page, etc.), LessWrong might be presented as a uniquely non-elitist, non-pretentious place.
For these reasons, I am suggesting multiple discussion areas that are separated by difficulty levels. Presenting them as “Easy and Hard” will do three things:
1. Serve as a reminder to those who attend that it’s a place of learning where the objective is to get an optimal challenge and improve as far as possible. This would help keep it from coming across as pretentious or elitist.
2. Create a learning environment that’s open to all levels, rather than a closed, elitist environment or one that’s too daunting. The LessWrong discussion area is a bit daunting to users, so it might be really desirable for people to have an “easy” discussion area where they can learn in an environment that is not intimidating.
3. Give us an opportunity to experiment with approaches that help willing people learn faster.
Endless September protection should be designed to avoid causing these side-effects:
Creating an imbalance in the proportion of thick-skinned individuals to normal individuals.
Anything that annoys, alienates or discourages users is going to deter a lot of people while retaining thick-skinned individuals. Some thick-skinned individuals are leaders, but many are trolls, and thick-skinned individuals may be more likely to resist acculturation or try to change the culture (though it could be argued the other way—that their thick skin allows them to take more honest feedback). For example: anonymous, unexplained down votes create a gauntlet for new users to endure which selects for a high tolerance to negative feedback. This may be the reason it has been reported that there are a lot of “annoying debater types”.
People that we do want fail to join because the method of protection puts them off.
There are two pitfalls that I think are going to be particularly attractive, but we should really avoid them:
1.) Filtering into hard/easy based on anything other than knowledge about rational thinking. There are various ways this could go very wrong.
- Filtering in any other way will keep out advanced folks who may have a lot to teach.
If a person has already learned good reasoning skills in some other way, do we want them at the site? There might be logic professors, Zen masters, debate-competition champs, geniuses, self-improvement professionals, hard-core bookworms, and other people who are already advanced and are interested in teaching others to improve their skills, or in finding a good challenge, or in contributing articles, but who have already learned much of the material the sequences cover. Imagine that a retired logic professor comes by hoping to get a challenge from similarly advanced minds and perhaps do a little volunteer work teaching logic as a pastime. Now imagine requiring them to read 2,000 pages of “how to think rationally” in order to gain access to all the discussion areas. That will almost guarantee that they go elsewhere.
- Filtering based on the sequences or other cultural similarities would promote conformity and repel the true thinkers.
If true rationalists think for themselves, some of them will think differently and some of them will disagree. Eliezer explained in “undiscriminating skeptics”: “I do propose that before you give anyone credit for being a smart, rational skeptic, that you ask them to defend some non-mainstream belief.” He defines such a belief this way: “It has to be something that most of their social circle doesn’t believe, or something that most of their social circle does believe which they think is wrong.” If we want people in the “hard” social group who are likely to hold and defend non-mainstream beliefs, we have to filter out people who are unable to defend their beliefs without scaring off those whose beliefs differ from the group’s.
2.) Discouraging people with unusually flawed English from participating at all levels. Doing that would stop two important sources of new perspectives from flowing in:
- People with cultural differences, who may bring in fresh perspectives.
If you’re from China, you may want to share perspectives that could be new and important to a Westerner, but may be less likely to meet the technical standards of a perfectionist when it comes to writing in English.
- People with learning differences, whose brains work differently and may offer unique insight.
A lot of gifted people have learning disorders, and even gifted people who don’t have them tend to have large gaps between skill levels. It is not uncommon to find a gifted person whose abilities with one skill are as much as 40% behind (or ahead of) their abilities in other areas. This phenomenon is called “asynchronous development”. We associate spelling and grammar with intelligence, but the truth is that those who have a high verbal IQ may not have equally intelligent things to say, and people who word things crudely due to asynchronous development (engineers, for instance, are not known for their communication skills but can be brilliant at engineering) may be ignored even though they could have important things to say. Dyslexics, who have all kinds of trouble, from spelling to vocabulary to arranging sentences oddly, may be ignored despite the fact that “children and adults who are dyslexic usually excel at problem solving, reasoning, seeing the big picture, and thinking out of the box” (Yale).
Everyone understands the importance of making sure all the serious articles get published with good English, but frequently in intellectual circles the attitude is that if you aren’t a perfectionist about spelling and grammar, you’re not worth listening to at all. The problem of getting articles polished when they are written by dyslexics or people for whom English is a second language should be pretty easy to solve—people with English problems can simply seek a volunteer editor. The ratio of articles being published to the number of users at the site encourages me to believe that these folks will be able to find someone to polish their work. Since it would be so easy to accommodate these disabilities, taking an attitude that puts form over function as a filter would not serve us well. If dyslexics and people from cultures different from the majority feel that we’re snobby about technicalities, they could be put off. This could already be happening, and we could be missing out on the most creative and most different perspectives this way.
People who qualify under the “letter” of the standards do not meet the spirit of the standards.
For instance: They claim to be rationalists because they agree with a list of things that rationalists agree with, but don’t think for themselves, as Eliezer cautions about in undiscriminating skeptics. Asking them questions like “Are you an atheist?” and “Do you think signing up for cryo makes sense?” would only draw large numbers of people who agree but do not think for themselves. Worse, that would send a strong message saying: “If you don’t agree with us about everything, you aren’t welcome here.”
The right people join, but acculturate slowly or for some reason do not acculturate.
- Large numbers of users, even desirable ones, will be frustrating if newbie materials are not prominently posted.
I was very confused and disoriented as a new user. I think that there’s a need for an orientation page. I wrote about my experiences as a new user here which I think might make a good starting point for such a new user orientation page. I think LessWrong also needs a written list of guidelines and rules that’s positioned to be “in your face” like the rest of the internet does (because if users don’t see it where they expect to find it, then they will assume there isn’t one). If new users adjust quickly, both old users and new users will be less annoyed if/when lots of new users join at once.
The filtering mechanism gives LessWrong a bad name.
For instance, if we were to use an IQ test to filter users, the world may feel that LessWrong is an elitist organization. Sparking an anti-intellectual backlash would do nothing to further the cause of promoting rationality, and it doesn’t truly reflect the spirit of bringing everyone up, which is what this is supposed to do. Similarly, asking questions that may trigger racial, political or religious feelings could be a bad idea—not because they aren’t sources of bias, but because they’ll scare away people who may have been open to questioning and growing but are not open to being forced to choose a different option immediately. The filters should be a test about reasoning, not a test about beliefs.
Proposed Filtering Mechanisms:
Principle One: A small number of questions can deter a lot of activity.
As a web pro, I have observed a 10-question registration form slash the number of files sent through a file upload input that used to be public. The ten questions were not that hard—just name, location, password, etc. Asking questions deters people from signing up. Period. That may be why a lot of big websites have begun asking for minimal registration info: email address and password only (perhaps you’ve observed this trend as well). Years ago, that was not common; it seemed that everyone wanted to give you ten or twenty questions. For this reason, I think it would be best if the registration form stays simple, but if we create extra hoops to jump through to use the hard discussion area, only those who are seriously interested will join in there. Specific examples of questions that meet the other criteria are located in the proposed acculturation methods section under: A test won’t deter ignorant cheaters, but it can force them to educate themselves.
Principle Two: A rigorous environment will deter those who are not serious about doing it right.
The ideal is to fill the hard discussion area with the sort of rationalists who want to keep improving, who are not afraid to disagree with each other, who think for themselves. How do you guarantee they’re interested in improving? Require them to sacrifice for improvement. Getting honest feedback is necessary to improve, but it’s not pleasant. That’s the perfect sacrifice requirement:
Add a check box that they have to click where it says “By entering the hard discussion area, I’m inviting everyone’s honest criticisms of my ideas. I agree to take responsibility for my own emotional reactions to feedback and to treat feedback as valuable. In return for their valuable feedback, which is a privilege and service to me, I will state my honest criticisms of their ideas as well, regardless of whether the truth could upset them.”
I think it’s common to assume that in order to give honest feedback one has to throw manners out the window. I disagree with that. I think there’s a difference between pointing out a brutal reality, and making the statement of reality itself brutal. Sticking to certain guidelines like attacking the idea, not the person and being objective instead of ridiculing makes a big difference.
There are other ways, also, for less bold people, like the one that I use in IRL environments: Hint first (sensitive people get it, and you spare their dignity) then be clear (most people get it) then be brutally honest (slightly dense people get it). If I have to resort to the 2x4, then I really have to decide whether enlightening this person is going to be one of those battles I choose or one of those battles I do not choose. (I usually choose against those battles.)
How do you guarantee they’re capable of disagreeing with others? Making it clear that they’re going to experience disagreements by requiring them to invite disagreements will not appeal to conformists. Those who are not yet thinking for themselves will find it impossible to defend their ideas if they do join, so most of them will become frustrated and go back to the easy discussion area. People who don’t want intellectual rigor will be put off and leave.
It’s important that the wording for the check box has some actual bite to it, and that the same message about the hard discussion area is echoed in any pages that advise on the rules, guidelines, etiquette, etc. To explain why, I’ll tell a little story about an anonymous friend:
I have a friend who worked at Microsoft. He said the culture there was not open to new ideas and that management was not open to hearing criticism. He interviewed with various companies and chose Amazon. According to this friend, Amazon actually does a good job of fulfilling values like inviting honest feedback and creating an environment conducive to innovation. He showed me the written values for each company. I didn’t think much of this at first, because most values pages are boring and read like empty marketing copy. But Amazon.com has the most incredible written values page I’ve ever seen—it does more than sit there like a static piece of text. It gives you permission. Instead of saying something fluffy like “We value integrity and honesty and our managers are happy to hear your criticisms,” it first creates expectations for management: “Leaders are sincerely open-minded, genuinely listen, and are willing to examine their strongest convictions with humility.” Then it gives employees permission to give honest feedback to decision-makers: “Leaders (all employees are referred to as ‘leaders’) are obligated to respectfully challenge decisions when they disagree, even when doing so is uncomfortable or exhausting. Leaders have conviction and are tenacious. They do not compromise for the sake of social cohesion.” The Amazon values page gives their employees permission to innovate as well: “As we do new things, we accept that we may be misunderstood for long periods of time.” If you look at Microsoft’s written values, there’s no bite to them. What do I mean by bite?
Imagine you’re an employee at Amazon. Your boss does something stupid. The cultural expectation is that you’re not supposed to say anything—offending the boss is bad news, right? So you’re inhibited. But the thing they’ve done is stupid. So you remember back to the values page and go bring it up on your computer. It says explicitly that your boss is expected to be humble and that you are expected to sacrifice social cohesion in this case and disagree. Now, if your boss gets irritated with you for disagreeing, you can point back to that page and say “Look, it’s in writing, I have permission to tell you.”
Similarly, there is what I consider to be a very unfortunate social rule that more or less says: if you don’t have something nice to say, don’t say anything at all. Many people feel obligated to keep constructive criticism to themselves. A lot of us are intentionally trained to be non-confrontational. If people are going to overcome a lifetime of training to squelch constructive criticism, they need an excuse to ignore that social training. Not just any excuse. It needs to be worded to require them to do that, and to require them to do it explicitly, despite the consequences.
Principle Three: If we want innovation, we have to make innovators feel welcome.
That brings me to another point. If you want innovation, you can’t deter the sort of person who will bring it to you: the people “who will be misunderstood for long periods of time”, as Amazon puts it. If you give specific constructive criticism to a misunderstood person, you help them figure out how to communicate—how else will they navigate the jungle of perception and context differences between themselves and others? If you simply vote them down, silently and anonymously, they have no opportunity to learn how to communicate with you, and what’s worse, they’ll be censored after three votes. This ability for three people to censor somebody, with no accountability and without even needing a reason, encourages posters to keep quiet instead of taking the sort of risk an innovator needs to take in presenting new ideas, and it robs misunderstood innovators of the feedback they need in order to explain their ideas. Here is an example of how feedback can transform an innovator’s description of a new idea from something that seems incomprehensible into something that shows obvious value:
On the “Let’s start an important start-up” thread, KrisC posts a description of an innovative phone app idea. I read it and cannot even figure out what it’s about. My instinct is to write it off as “gibberish” and go do something else. Instead, I provide feedback, constructive criticism, and questions. It turns out that the idea KrisC has is actually pretty awesome. All it took was for KrisC to be listened to and to get some feedback, and the next description that KrisC wrote made pretty good sense. It’s hard to explain new ideas, but with detailed feedback, innovation may start to show through. Link to KrisC and me discussing the phone app idea.
Proposed Acculturation Methods:
Send them to Center for Modern Rationality
Now that I have discovered the post on the Center for Modern Rationality and have seen that they’re targeting the general population and beginners, with material for local meetups, high schools, and colleges, plus plans for web apps to help with rationality training, I see that referring people over to them might be a great option. Saturn suggested sending them to appliedrationality.org before I found this, but I’m not sure that would be adequate, since I don’t see much for people to do on their website.
Highlight the culture.
A database of cultural glossary terms can be created and used to highlight those terms on the forum. The terms are already on the page, so what good would this do? First, they can be automatically linked to the relevant sequence or wiki page; if old users do not have to hunt for the link, this greatly speeds up pointing new users to it. Second, it would make the core cultural items stand out from all the other information, which will likely cause new users to prioritize them. Third, there will be a visual effect on the page: you’ll be able to see that this place has its own vocabulary, its own personality, its own memes. It’s one thing to tell a new user “LessWrong has been influenced by the sequences” when they haven’t seen all those references on all those pages (and even if they do see them, they won’t know where they’re from); it’s another to make the influence immediately obvious with a visual that illustrates the point.
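To make this concrete, here’s a minimal sketch of how the highlighting pass might work; the glossary entries and wiki URLs below are placeholders I made up, and a real version would also need to skip text that is already inside links or code:

```python
import re

# Placeholder glossary mapping cultural terms to their wiki pages.
GLOSSARY = {
    "steelman": "https://wiki.lesswrong.com/wiki/Steel_man",
    "motivated cognition": "https://wiki.lesswrong.com/wiki/Motivated_cognition",
}

def highlight_terms(html_text: str) -> str:
    """Wrap each known glossary term in a link to its wiki page."""
    for term, url in GLOSSARY.items():
        # \b avoids matching inside longer words; IGNORECASE catches
        # capitalized occurrences at the start of a sentence.
        pattern = re.compile(r"\b(%s)\b" % re.escape(term), re.IGNORECASE)
        html_text = pattern.sub(
            r'<a class="glossary" href="%s">\1</a>' % url, html_text)
    return html_text

print(highlight_terms("Try to steelman the opposing view."))
```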
Provide new users with real feedback instead of mysterious anonymous down votes:
We have karma vote buttons, but they do not provide useful feedback for new users. Without a stated reason, I have no way to tell whether I’m being voted down by trolls, and I may see ten different possible explanations for a downvote with no way to know which one is correct. This annoyance selects for thick-skinned individuals like trolls and fails to avoid the “imbalance in the proportion of thick-skinned individuals to normal individuals” side-effect.
If good new users are to be preserved, and the normal-people-to-troll ratio is to be maintained, we need to add a “vote to ban” button that’s used only for blatant misbehavior, and if an anonymous feedback system is to be used for voting down, it needs to prompt you for more detailed feedback—either letting you select from categories or asking for at least a word or two of explanation. Also, the comments should show both up votes and down votes. If you don’t know when you’ve said something controversial, and you are encouraged to view everything you say as black-and-white good-or-bad, that promotes conformity.
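As a sketch of what categorized downvote feedback could mean at the data level (the reason categories here are examples I invented, not a settled list):

```python
from dataclasses import dataclass
from typing import List, Optional

# Example reason categories a downvote prompt might offer.
DOWNVOTE_REASONS = {"unclear", "off-topic", "factually wrong", "uncivil", "other"}

@dataclass
class Vote:
    voter_id: int
    comment_id: int
    direction: int                 # +1 (up) or -1 (down)
    reason: Optional[str] = None   # required when direction == -1
    note: str = ""                 # optional one-or-two-word explanation

    def validate(self) -> None:
        if self.direction == -1 and self.reason not in DOWNVOTE_REASONS:
            raise ValueError("A downvote must carry a reason category.")

def tally(votes: List[Vote]) -> str:
    """Show up and down counts separately instead of a single net score."""
    ups = sum(1 for v in votes if v.direction == 1)
    downs = sum(1 for v in votes if v.direction == -1)
    return f"+{ups} / -{downs}"
```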
A test won’t deter ignorant cheaters, but it can force them to educate themselves.
Questions can be worded in such a way that they serve as a crash course in reasoning even if someone posts a cheat sheet or registrants look up all the answers on the internet. Assuming the answer options are randomly ordered so that you have to actually read them, the test should, at the very least, familiarize takers with the various biases, logical fallacies, etc. Examples:
--------------
Person A in a debate explains a belief but it’s not well-supported. Their opponent, person B, says they’re an idiot. What is this an example of?
A. Attacking the person, a great way to really nail a debate.
B. Attacking the person, a great way to totally fail in debate because you’re not even attacking their ideas.
--------------
You are with person X and person Y. Person Y says they have been considering some interesting new evidence of what might be an alien space craft and aren’t sure what to think yet. You both see person Y’s evidence, and neither of you has seen it before. Person X says to you that they don’t believe in UFOs and don’t care about person Y’s silly evidence. Who is the better skeptic?
A. Person X, because they have the correct belief about UFOs.
B. Person Y, because they are actually thinking about it, avoiding undiscriminating skepticism.
--------------
Note: These questions are intentionally knowledge-based. If the purpose is to avoid requiring an IQ test, and to create an obstacle that requires you to learn about reasoning before posting in “hard”, that’s the only way that these can be done.
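Here is a sketch of the random ordering mentioned in the note, so that a leaked answer key of letters (“the answer is B”) is useless and registrants actually have to read the options; the data layout is just an illustration:

```python
import random

# (question, [options], index of the correct option)
QUESTIONS = [
    ("Person B calls person A an idiot instead of addressing their "
     "poorly-supported belief. What is this an example of?",
     ["Attacking the person, a great way to really nail a debate.",
      "Attacking the person, a great way to totally fail in debate "
      "because you're not even attacking their ideas."],
     1),
]

def present(options, correct_index):
    """Shuffle the options; return the shuffled list and the new answer index."""
    order = list(range(len(options)))
    random.shuffle(order)
    return [options[i] for i in order], order.index(correct_index)

question, options, correct = QUESTIONS[0]
shuffled, answer = present(options, correct)
print(question)
for i, option in enumerate(shuffled):
    print(f"{chr(65 + i)}. {option}")  # labels the options A., B., ...
```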
Encouraging users to lurk more.
Vaniver contributed this: Another way to cut down on new-new interaction is to limit the number of comments someone can make in a time period—if people can only comment once a day until their karma hits 20, and then once an hour until their karma hits 100, and then they’re unrestricted, that will explicitly encourage lurking / paying close attention to karma among new members. (It would be gameable, unless you did something like preventing new members from upvoting the comments of other new members, or algorithmically keeping an eye out for people gaming the system and then cracking down on them.) [edit] Making the delay a near-continuous function of karma—say, 24 hours × exp(−b × karma)—might make the incentives better, and not require partitioning users explicitly. No idea if that would be more or less effort on the coding side.
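A rough sketch of Vaniver’s continuous variant; the decay constant b is my arbitrary choice, picked so the delay falls from 24 hours at zero karma to about one hour at 100 karma:

```python
import math

def comment_delay_hours(karma: int, b: float = math.log(24) / 100) -> float:
    """Minimum wait between comments: 24h at 0 karma, ~1h at 100 karma,
    approaching zero as karma grows (and lengthening for negative karma)."""
    return 24 * math.exp(-b * karma)

for k in (0, 20, 100, 500):
    print(f"karma {k}: wait {comment_delay_hours(k):.2f} hours")
```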
Cons: This would deter some new users from becoming active users by causing them to lose steam on their initial motivation to join. It might be something that would deter the right people. It might also filter users, selecting for the most persistent ones, or for some other trait that might change the personality of the user base. This would exacerbate the filtering effect that the current karma system is exerting, which, I theorize, is causing there to be a disproportionate number of thick-skinned individuals like trolls and debate-oriented newbies. My theory about how the karma system is having a bad influence
Give older users more voting power.
Luke suggested “Maybe this mathematical approach would work. (h/t matt)” on the “Call for Agreement” thread.
I question, though, whether changing the karma numbers on the comments and posts in any way would have a significant influence on behavior or a significant influence on who joins and stays. Firstly, votes may reward and punish but they don’t instruct very well—unless people are very similar, they won’t have accurate assumptions about what they did wrong. I also question whether having a significant influence on behavior would prevent a new majority from forming because these are different problems. The current users who are the right type may be both motivated and able to change, but future users of the wrong type may not care or may be incapable of changing. They may set a new precedent where there are a lot of people doing unpopular things so new people are more likely to ignore popularity. The technique uses math and the author claims that “the tweaks work” but I didn’t see anything specific about what the author means by that nor evidence that this is true. So this looks good because it is mathematical, but it’s less direct than other options so I’m questioning whether it would work.
Vladimir_Nesov posted a variation here.
Make a different discussion area for users with over 1000 karma.
Make a Multi Generation Culture.
Limit the number of new users that join the forum to a certain percentage per month, sending the rest to a new forum. If that forum grows too fast, create additional forums. This would be like having different generations. New people would be able to join an older generation if there is space. Nobody would be labeled a “beginner”.
Temporarily turn off registration or limit the number of users that can join.
(See the cliff notes version for more.)
Should easy discussion participants be able to post articles?
I think the answer to this is yes, because no filtering mechanism is perfect and the last thing you want to do is filter out people with a different and important point of view. Unless the site is currently having issues with trolls posting new articles, or with the quality of the articles going down, leaving that freedom intact is best. I definitely think, though, that written guidelines for posting an article need to be put in “in your face” expected places. If a lot of new users join at once, well-meaning but confused people will be posting the wrong sorts of things there—making sure they’ve got the guidelines right there is all that’s probably needed to deter them.
Testing / measuring results:
How do we tell if this worked? Tracking something subjective, like whether we’re feeling challenged or inundated with newbies, is not going to be a straightforward matter of looking at numbers. (Methods to help willing people learn faster deserve their own post.) But just because it’s subjective doesn’t mean tracking is impossible, or that working out whether it’s made a difference cannot be done. I suspect that a big difference will be noticed in the hard discussion area right away. Here are some relevant, trackable figures that may give us insight and ways to check our perceptions:
1. How many people are joining the hard forum versus the easy forum? If we’ve got a percentage, we know how *much* we’ve filtered, though we won’t know exactly *who* we’ve filtered.
2. Survey the users to ask whether the conversations they’re reading have increased in quality.
3. Survey the users to ask whether they’ve been learning more since the change.
4. See which area has the largest ratio of users with lots of vote downs.
(This could be tricky because people who frequently state disagreements might be doing a great service to the group, but might be unpopular because of it, and people who are innovative may be getting voted down due to being misunderstood. One would think, though, that people who are unpopular due to disagreeing, or being innovative, assuming they’re serious about good reasoning, would end up in the hard forum.)
Request for honest feedback:
Your honest criticisms of this idea and your suggestions will be appreciated, and I will update this idea or write a new one to reflect any good criticisms or ideas you contribute.
This is in the public domain:
This idea is hereby released into the public domain, with acknowledgement from Luke Muehlhauser that those were my terms prior to posting. My intent is to share this idea to make it impossible to patent and my hope is that it will be free for the whole world to use.
Preventing discussion from being watered down by an “endless September” user influx. by Epiphany is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.
Regarding elitism: LW is elitist, and would not be what it is without its elitism. What else differentiates LW from /r/skeptic or agi-list? The LW community recognizes that some writings are high quality and deserve to be promoted, and others are not. If anything, I wish LW would become more elitist.
The way you’re using elitism isn’t the way that I’m using it. Here’s the dictionary definition:
e·lit·ism
1. Practice of or belief in rule by an elite.
2. Consciousness of or pride in belonging to a select or favored group.
What I’m saying is more like “It won’t do anyone any good to act like we should rule other people, and it won’t do anyone any good to behave arrogantly.” There’s a difference between caring about the quality of work that your group is putting out and having standards for your work, versus behaving like a jerk. If there wasn’t, then work ethics and quality would be elitist.
Voting may be included at LessWrong, and voting is associated with choosing rulers, but we’re not voting on who rules. We’re just voting on writings.
People may be proud of joining LessWrong but it’s not exactly a select or favored group. Unless you mean that LessWrongers favor it, but if you’re going to say “LessWrongers are elitist because LessWrongers favor LessWrong” then you’d also have to say “Icanhascheezburger users are elitist because they favor hanging out with people and laughing at lolcats.”
Elitism is a word that is often used to describe a nasty prejudice. I think it’s a terrible idea to start applying it to things that aren’t elitist, especially important ones, like valuing work ethic and quality.
If you still want me to think that “elitism” is the right word for what you are talking about, I will need you to explain.
LW could be considered a select group by discussion board standards. For example, posters who haven’t studied the rather large amount of presumed background knowledge are, to a decreasing but still significant extent, only reluctantly tolerated. Some people accustomed to more typical discussion boards do seem somewhat miffed about the idea that LW has such prerequisites at all, and I assume this is because they perceive it as elitist.
Bringing this back to the main point, LW already does a reasonably good job at covering what you call the ‘hard’ material. It’s hard to overstate how fickle and delicate online communities can be. I’m wary of attempting to change the norms of the existing community in order to produce more ‘easy’ material. (This is what you are effectively proposing; since newbies can’t produce their own ‘easy’ material, it would be the blind leading the blind.) Therefore I think that job should be delegated to another website (maybe appliedrationality.org) rather than shoehorned into LW.
Seconded. I suggest adding appliedrationality.org to the LW Wiki under Getting Started, near the top.
I sort of took your suggestion. (See OP under Center for Modern Rationality).
Is LessWrong Elitist:
By that definition, restaurants are elitist because people with no knowledge of silverware and table manners are only reluctantly tolerated. Roads are elitist because drivers with no knowledge of traffic rules are only reluctantly tolerated. Grocery stores are elitist because people with no understanding of trade and shoplifting laws aren’t tolerated. Is there any place you can go in the civilized world and be accepted regardless of whether you have knowledge relevant to that place? Even in jail, inmates are expected to know better than to drink out of the toilet and that food goes in their mouth. The mental ward might be the only place—but that isn’t a place of acceptance.
Let’s look at the dictionary definition for the word elitist, now, as it’s more detailed:
1. (of a person or class of persons) considered superior by others or by themselves, as in intellect, talent, power, wealth, or position in society: elitist country clubbers who have theirs and don’t care about anybody else.
2. catering to or associated with an elite class, its ideologies, or its institutions: Even at such a small, private college, Latin and Greek are under attack as too elitist.
3. a person having, thought to have, or professing superior intellect or talent, power, wealth, or membership in the upper echelons of society: He lost a congressional race in Texas by being smeared as an Eastern elitist.
4. a person who believes in the superiority of an elite class.
Reasons LessWrong isn’t automatically elitist, as relates to the above:
Regardless of whether LessWrong members have more or less talent, intellect, power, wealth or position, if they do not have a superior attitude about it, that doesn’t qualify them as elitist by definition 1.
Depending on whether LessWrong wants to be a place where everybody can learn or a place where only people thought to have “superior intellect or talent, power, wealth, or membership in the upper echelons of society” can join, it might be non-elitist.
If LessWrong defines itself as “A place where people who want to refine their rationality gather” then it’s not a group defined by “talent, power, wealth, or membership in the upper echelons of society”, it’s a place defined by common interest.
Do you believe that LessWrong is an elite class, and that they are superior? I don’t.
On educating new rationalists:
Do they even have a forum? I don’t see how this is going to work. Explain this plan.
The trade-off between elitism and newcomers is something to think about. Apparently some LWers aspire to bring in more people, as rationalist teaching assumes. The point is what costs to pay for growing while keeping the same quality.
I am strongly opposed to requiring a version of Crocker’s rules to get into good discussions. Wanting to have people be civil to me does not particularly mean that I compromise intellectual rigor. A forum that required Crocker’s Rules to participate could be interesting, but it could also be 4chan.
And poorly-informed ranters hereabouts typically think they are the smartest people on the site and the sheep around them are too cowardly, emotional, or stupid to “rationally debate” them. Adding an explicit norm according to which poorly-informed ranters may identify themselves as magnanimously granting favors to everyone they harass, and people who object to such harassment as in violation of that norm (because they’re not “taking responsibility for their emotional reactions”) would not be an improvement. (Of course they’ll ignore the part about receiving feedback as a favor.)
You seem to be assuming that honest criticism also has to mean throwing manners out the window. I do not assume this, and it didn’t occur to me to predict that others would assume that when I was writing this. I’ll have to update that part.
Poorly informed ranters wanting to debate does sound annoying, I didn’t realize there was a problem with that. It seems to me the best way to deter them would be to paste a link that’s directly related to their points and ignore them. Do that enough times and they’ll probably wake up and realize they’ve got a problem with not knowing what they’re talking about. What have you guys tried? Maybe a better question would be “What would you suggest?”
Hm—rather, I’d say she is assuming that the words you posted, which she then quoted, would reduce civility for a net loss.
I’ll requote what I thought was the relevant (i.e. most disagreed-upon) part:
Making good posts requires that you take a large measure of responsibility for the audience’s response. And this is a skill that is difficult to learn/teach as a new user/culture. Having something like the quoted phrase in an authoritative place would send conflicting messages about what constitutes a “good post,” leading to fewer people learning the skill of writing for their audience.
I agree in that I think most people would interpret my wording to mean “Throw manners out the window in favor of honesty.” but I don’t think it has to be that way.
As far as taking responsibility for emotions goes, there’s a limit to what you can do. If you have to tell them something unpleasant and disappointing, if that’s the truth, you can’t control the fact that they’re going to be disappointed. If you sugar-coat, they may not realize the gravity of the situation, and what happens next could be worse. Reality is sometimes unpleasant, that’s all there is to it. I can want you to be happy all I feel like, but if the reality isn’t happy for you, there’s nothing I can do about that. If I know of a solution, I’ll usually say so. If not, I can say things like “I know you really care about this, so I hate to say this...” and reassure them that I don’t dislike them, but that doesn’t change the fact that the reality is unpleasant.
“Throw manners out the window” is not what I said you were proposing. I think you may be missing some of what I am saying, or maybe I was just being opaque. So I’ll try and give you one clear paragraph:
Thinking about other peoples’ emotional responses makes communicating with them much more effective, not less effective. If we want to have a “hard discussion” section, or even just a difficult discussion, I want people to be in the habit of thinking about other peoples’ emotional responses, not to consider it “not their responsibility.”
To be clear, when I say “thinking about other peoples’ emotions,” I don’t mean typical “manners stuff” like sweetening difficult truths, etc. I mean actual thinking, about other peoples’ emotions. And changing what you say so that the other person will understand what you’re trying to communicate. That part’s important! Or to put it another way, in order to communicate as best you can, you must take responsibility for your audience’s emotional responses insofar as they affect what happens to your message, which is often a lot.
Yeah, that’s worthwhile, and it’s an art. I’m not sure how that would even be communicated to people if it were, say, put into the rules or something. It would be nice if that level of quality could be expected but I don’t see any way to do that. Do you?
It might be sweet to find some existing experts in teaching people to speak so that they will be understood by people with complicated and relevant internal states.
(Relationship counselors? People who teach autistic people conversation skills? Psychologists who study conversation? Psychologists who study the difference between what the speaker thinks and what the listener thinks?)
Anyhow, maybe teaching people this is a near-solved problem, maybe not (and maaaaybe I’ll do some research on this before next time I talk about it :D ). And maybe it’s unsolvable. But I’d guess it’s solvable—lots of things that seem impossible are really us being bad at the skill that makes it possible.
I haven’t come across this either. Doesn’t the downvoting minimize this problem?
That said, I like civility to be one of the core principles of any discussion group—but without ever feeling we have to agree with what someone else is saying.
No, not really. They say things like “hahahaha, sure, downvote me more, that only proves me right, you’re unable to actually address my arguments!” And then people try to address their arguments and get nowhere. It’s a remarkably consistent type, actually. This problem is one of the things the controversial new trollfeeding tax is meant to handle.
Spot on. And bizarrely enough there even seems to be a remarkable correlation in the kind of positions this type supports. Something along the lines of an “Incorrect Metacontrarian Cluster”.
http://websites.psychology.uwa.edu.au/labs/cogscience/documents/LskyetalPsychScienceinPressClimateConspiracy.pdf seems relevant.
Votes don’t train newbies.
As a new user who gets voted down sometimes, I can tell you it seems completely random. I can’t tell whether it’s a troll, or someone with a vendetta, or what. And even if I brainstorm a bunch of guesses, the little number at the top of my comment doesn’t tell me which one is correct. The expectation that downvotes are going to help new users learn how to behave is even worse than that, though, in a whole bunch of ways at once. I wrote about that here:
Idea For Karma Improvements and Why We Need Them
Yes they do.
You have been given an abundance of explanations regarding people’s reactions which you could, if it is your desire, use to gain more support for your comments.
My model of the reception of your comments suggests that you do have several people with a ‘vendetta’, or at least several people who are highly predisposed to downvote you prior to reading your contributions. But that is to be expected. I get people targeting me all the time, and if I didn’t it would probably be a sign that I was neglecting my duty. Having a few individuals targeting you isn’t a problem. The problem comes when you cannot garner sufficient support from the other, neutral readers to counter the initial downvotes and leave most of your comments net positive. That is a sign that it is worth paying more attention to politics and perception—and again, you’ve got personal feedback you could use toward that end.
Are you really saying that, if motivated, you couldn’t work out how to change your behavior such that your comments were more likely to be well received? I mean, come on, the thought “Oh, I suppose I should convey less arrogance” is a good starting place for reducing social sanction in just about any social structure that you are a relatively new member of. (Note that I am talking specifically about conveyed arrogance, not actual arrogance. People can get away with being completely obstinate and incapable of learning from the words of others so long as they send the right signals of humility.)
You read that comment completely out of context and also you seem unaware that at first I was not getting constructive criticism. People only started criticizing me after I decided I was tired of unexplained downvotes and started to advertise in various places (at the ends of my discussion posts, and in my various comments expressing an interest in being challenged intellectually) that I genuinely want honest criticism. My experience is that LessWrong members needed to be convinced that it really was okay to criticize me before they started giving me the large amount of helpful feedback you’re seeing. You’re very bold, Wedrifid, so you probably figure other people are as comfortable criticizing others as you are. Maybe you think I must have been getting bold criticisms this whole time. I wasn’t.
The context in which I wrote that comment was this: I was explaining that OTHER new people don’t get feedback, in order to explain that the downvotes aren’t training them. If you think about it, you even said in your own post that it was the explanation that people use to improve themselves. The votes aren’t the same as verbal feedback. Are the other newbies getting the kind of feedback I am? I bet most of them aren’t. I was outgoing enough to guess that the reason I wasn’t getting feedback is because people didn’t feel comfortable criticizing me and chose to begin advertising that I want honest criticisms. I doubt most of your newbies are doing the same thing. Try an experiment. Make a new account. Post things people won’t like. See how many of them actually get verbal feedback. Then, advertise that you want constructive criticism. Post the same number of things people won’t like, and count how many of those got you verbal feedback.
I find your perspective on vendettas and duty refreshing, so thank you. Your comment makes me feel glad that you think I am worth saving. But since you intended to save me from my own stupidity, I feel a little annoyed that you thought I needed it. Do you observe from my behaviors that I do not apply constructive criticism? That I whine about problems rather than contributing to the solution? My intent was to notify you guys that without feedback, the downvotes don’t train newbies. People seem to think they do, but unless people tell you their reasons for pressing the button, it’s just a flurry of numbers. The power is not in vote buttons, it’s in clear communication.
I definitely want to know when I make a mistake, but if you find yourself typing something to me like “oh, come on” or “are you really” in the future, please consider that I may not actually be stupid enough to warrant it. Thanks.
I’m not sure to what extent I did that, but in any case I have a core disagreement with the claim that downvotes do not train newbies. My expectation is that the simple feedback mechanism increases the speed at which newbies absorb local norms, and all my observations thus far confirm this. It isn’t the only thing that teaches newbies and it isn’t a perfect mechanism, but it certainly helps. Most people don’t like getting downvotes and take action to avoid them.
My position is that even in the absence of any explicit verbal feedback, downvotes do train newbies (and non-newbies). Verbal explanations can also help (and sometimes hinder). I expect that there is plenty of scope for improving newbie learning through constructive feedback—this is something that complements and works alongside the karma system, not something made necessary because the karma system is completely ineffective for the purpose.
It is almost always a bad idea to use oneself as an example when making any kind of general criticism of the karma system. Disagreement will inevitably seem personal!
I disagree. This is a testable prediction, but not easily so. With a suitably designed experiment, I would predict a greater degree of learning in the voted-on-but-not-explained group than you would. To be clear, I think the power is in the vote buttons AND in clear communication.
I am of course willing to use different phrasing. I was intending to convey that it is well within your capability to avoid downvotes if that was a task you set for yourself. It is legitimate to have other, higher priorities than avoiding downvotes, but those who are not trying to avoid them may appear not to be learning from them. That is, I was questioning whether the “Epiphany” anecdote is an indication that newbies do not learn from downvotes because they don’t have enough information. I acknowledge from the parent that you are referring to earlier experience, prior to your changing the way you interact, and so the above is less applicable.
If you’re willing to make Crocker’s rules a codified and accepted norm of the “hard discussion area”, you might as well go the whole way and make it very clear to ranters how wrong they are, in the most obnoxious way you can come up with—including flames, status putdowns, etc.
Yes it sounds distasteful and it is, but it has some very compelling advantages: (1) it deters other users from naïvely expending effort on unproductive discussions, which is something Eliezer has been complaining about; (2) it will hopefully discourage the vast majority of ranters, thus allowing us to minimize the scope of controversial technical measures such as bans and posting restrictions and restrict them to the most intractable cases.
Just “pasting a link that’s relevant to their points” is not nearly enough to discourage anyone.
In my experience, being obnoxious doesn’t deter others from being obnoxious. Quite the opposite, in fact.
I’m not very worried about an endless September. LW is pretty good at downvoting people when they make rookie mistakes in reasoning and argument, or when they are mean or trollish. The new troll toll (or whatever solution we settle into after a few months) should go even further toward preventing endless September. Moreover, I think the content itself here does a fairly good job of filtering out many kinds of people we don’t want.
Finally, I think Xachariah’s point is important: “the eternal September effect is primarily caused by new-member with new-member interaction.” I would say that LW already does a good job of limiting this. For example, new members who don’t understand the culture are downvoted, which means their comments are hidden by default. Also, people are already incentivized to lurk for quite a while before commenting or posting, because the community is clearly intelligent and is constantly using community jargon they could easily be downvoted for misunderstanding.
I also don’t think we should make it harder for people to join (e.g. with a quiz). Instead, I think we should make it even easier for the kinds of people we want to find LW and engage. Here are some ideas for doing that:
Improve the wiki (already in progress).
Create a nicely-formatted ebook of The Sequences (already in progress).
Have an army of volunteers regularly comment on selected blogs and discussion forums (e.g. for computer scientists, cognitive scientists, mathematicians, and formal philosophers), linking back to the stickiest relevant LW posts. (This project is under development: if you want to be part of this once it’s ready, please notify malo@intelligence.org.)
Create an attractive “Welcome to Less Wrong” page that can quickly guide people to posts they’re most likely to be interested in. This would include a video of somebody explaining in 5-10 minutes what Less Wrong is about, the values of the community, etc. (Project under development.)
Create a “community values” page. I like this idea of yours very much. The values could be pretty simple, things like: (1) Don’t be a dick. “Do unto others 20% better than you expect them to do unto you, to correct for subjective error.” (2) When disagreeing, aim for the higher levels of the disagreement hierarchy. (3) Be clear. Be specific. Give examples. Hyperlink to longer explanations or relevant articles. (4) Before commenting, try to list some reasons you might be wrong about the thing you’re about to say. (5)...
Have the system deliver a welcome message to a user’s inbox when they first sign up, one linking to the “community values” page, the “welcome to LW” page, etc.
Create handy one-stop FAQs (on rationality, decision theory, etc.) that become standard resources for their subjects on the internet, and make sure they link heavily to LW. (This project is also in progress.)
I don’t think I like the idea of splitting the community into hard and easy forums, nor the idea of inducting something like Crocker’s rules into the code of conduct.
What’s the case for a video? Seems a little cheesy, IMO.
OK, but let’s make sure they really do have some domain expertise in the area that they’re leaving comments in, so they don’t make us look bad. Link.
I like this idea. One potential logistical glitch: If the user isn’t already familiar with reddit, they won’t know what an orange envelope means and they may just see it as orange forever and never click on it.
People like videos and it makes the community more human to newcomers.
People like videos? I hate videos to the point that I will go out of my way to avoid links with videos in them, and I’ve seen this sentiment expressed by other people here.
I hate video because it goes too slow. I can read at least twice as fast as a video goes; it always feels like such an excruciating waste of time. Also, I can’t use find-in-page. I am addicted to find-in-page. Ctrl-F and I are attached at the hip. Of all the pages I open, the proportion I read in their entirety is very small. Ctrl-F is like half my way of navigating the internet. I’m really glad to see someone else express this. I thought I was the only one.
I like videos. They are more passive than written text and feel less cognitively demanding per unit time. In fact, I will often prefer to watch/listen-to a video/audio recording more than once in order to achieve the same level of retention as reading text in a concentrated fashion, thereby exchanging time for concentration-willpower.
I suppose I have nothing to complain about as long as the transcript is present and easy to get to.
Some do and some don’t.
I seem to recall lots of complaints on lukeprog’s first Q&A about the fact that the answers were delivered in video format.
FYI he also provided a text transcript.
[comment deleted]
Transcripts are fairly expensive; patio11 pays for transcripts to be made for his podcasts (a big factor in why those submissions do well on Hacker News), but IIRC the quoted figure is north of $100. So you would pay… but would you pay enough?
[comment deleted]
College students would be flaky and unreliable, and you’d want at least 2 for error-checking. You get what you pay for.
Confirmation bias and selection effects?
Echoing komponisto, my job is incredibly non-demanding of my cognitive resources, so I constantly listen to audiobooks, YouTube channels, and TCC/TMS lectures at 2x speed. Over the course of an 8-hour work day I can finish about 200 pages at reasonable comprehension.
I’d be curious to hear your evidence for this. In any case, even if there is conclusive evidence that internet users prefer video presentations over corresponding text presentations, it’s not obvious that this trend extends to LWers or potential LWers.
Also, this seems to have been a flop. I suspect that if videos were a good fit for LW concept transmission, we’d have seen more success with that small experimental effort.
Those video experiments were very poorly produced. That’s not the kind of video I have in mind. And video would of course only be there in addition to text.
I would have enjoyed and recommended even poorly produced videos if you guys had bothered to extend them. I keep meaning to finish the last third or so of the sequences I haven’t read, but they’re all over the place, and it makes sense for me to start from the top. It’d be great if I could listen while doing other things. In my case, painting mostly; in other cases, probably cleaning, laundry, dishes, pet care, and other activities that take up very low or no verbal mental resources.
Guys. It’s not rocket science. You’re smart. You have good content. Present it well. Or better. If you can’t do that, hire someone who can. Get it out there. If you can’t do that, hire someone who can.
You are the decision maker here who will determine whether any changes go into effect. If this is your final decision on the matter of preventing endless September, let me know, and I will place a note at the top of this discussion to prevent people from continuing to waste time on it. If not, then I will focus on debating it with you, because I still disagree, but it would be a waste of time for me to move forward if you feel that LessWrong needs no more protection against endless September.
No, I want to be debated. I might change my mind.
Okay. I will start a new post specifically as a call to agreement for us to decide whether LessWrong should have better endless September protection. It will take me a little while to get it all organized. Give me a bit of time.
Or, we could debate the subject here.
Well, you were right. Thumbs up.
Well, I thought it out carefully and added in citations and whatnot, so now it’s kinda too long for a comment. Sorry you did not get the post where you wanted it, but it’s done now. Here it is.
Filtering is not the answer.
As noted, the eternal September effect is primarily caused by new-member with new-member interaction. Instead of taking cultural cues from established members, new members take cultural cues from other new members and learn incorrect cultural lessons. Mechanisms to prevent eternal September are to assimilate new members more rapidly and to dissuade new members from posting as much until they have been assimilated (and especially to dissuade them from influencing other new members). Filtering is only useful in that it slows the acquisition of new recruits enough for the old members to assimilate them.
Assuming we’re in danger of an eternal September, the correct question to ask is not, “How do we filter better?” but “How do we convince new members to lurk until they’re assimilated?”
An obvious solution: Make the site appear, to new members, as if only (some desired fraction) of members are new.
Distinguish between “new” and “experienced” members. Let new members turn into experienced members when they meet some criteria, possibly post count, karma, or even votes by experienced members. Systematically prevent new members from interacting with too many other new members by simply not showing them the posts made by these other new members.
I’m not actually sure if I think this is a good idea, but it might be worth mentioning anyway.
This seems absurdly hard to implement.
It seems not hard to implement naively.
Discussion threads would truncate, for new users, at new-user comments (experienced users’ replies to new-user comments would be invisible to new users).
Our caching gets more complicated.
Many candidate tests for “experienced” seem obvious, but some might be very easy to game (funny comments on HPMOR posts qualify you).
If this is done, posts upvoted past a threshold should also be visible to everyone.
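To make the naive version concrete, here is a minimal, self-contained sketch. All of the names (User, Comment, the threshold value) are hypothetical stand-ins, not the actual LessWrong schema; it exists only to show the visibility rule:

```python
# Hypothetical sketch of the "truncate threads for new users" rule above.
from dataclasses import dataclass, field
from typing import List, Optional

VISIBILITY_THRESHOLD = 10  # assumed cutoff for "upvoted past a threshold"

@dataclass
class User:
    name: str
    is_experienced: bool  # e.g. met some karma or account-age criterion

@dataclass
class Comment:
    author: User
    score: int = 0
    replies: List["Comment"] = field(default_factory=list)

def visible_to(viewer: User, c: Comment) -> bool:
    return (viewer.is_experienced               # experienced users see everything
            or c.author.is_experienced          # new users see experienced authors
            or c.score >= VISIBILITY_THRESHOLD  # well-received comments show to all
            or c.author is viewer)              # and users always see themselves

def prune(viewer: User, c: Comment) -> Optional[Comment]:
    """Return the comment subtree as `viewer` sees it. Hiding a comment hides
    everything beneath it, which is the "threads truncate" behavior: replies
    by experienced users to hidden new-user comments also disappear."""
    if not visible_to(viewer, c):
        return None
    kept = [p for p in (prune(viewer, r) for r in c.replies) if p is not None]
    return Comment(c.author, c.score, kept)
```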
This does nothing to increase the capacity of older members to tolerate newbies—and that’s important, too. You’d be giving all the older members … how many times as many messages? I’m new, and I can’t keep up with my messages. I can’t imagine what it would do if I was an old member, and all of these new people were responding to me. If I were an old member in that situation, I would try to ignore the new users, and also, I would become increasingly annoyed with them demanding so much of my attention. That would lower the value of using the forum, and it may cause old members to quit.
It would also frustrate old members when new members weren’t aware of each other’s comments. That would be confusing.
Do you see a way to resolve these issues?
There’s a limit to how fast this can be done. That’s, essentially, why something additional is needed.
Deterring them from posting will ward off good people because they’ll lose momentum or be annoyed, and will increase the proportion of thick-skinned and / or persistent types who can deal with the annoyance. Not all thick-skinned / persistent people are bad, some are leaders or are gifted with those abilities, but creating gauntlets of annoyance will increase the proportion of undesirable thick-skinned / persistent types like trolls, newbie debaters, etc.
Essentially, dissuasion IS filtering, so if you’re going to filter, you may as well be conscious of it and use a method that is likely to attract the type that you want. My questionnaire would filter for people who like learning or don’t mind looking things up. The karma system currently in place filters for trolls and debaters. Dissuading people from posting will exacerbate the effect of the karma system if it remains as-is. The combination of the two may result in a hideous unintentional filter.
If done well, it would also encourage a higher proportion of people that are the right type, discouraging mainstream people who aren’t genuinely interested in the culture from creating a new majority and taking over. That is why I suggested the questionnaire that I did: it would select for people genuinely interested in rationality, since most others won’t take the time to fill out such a questionnaire.
I disagree, but it might be “How do we convince members to lurk until they’re assimilated, without scaring any of them off?”
Public domain and creative commons are not the same thing. In particular, I don’t think you can make a share-alike requirement on a public domain item.
You missed a more important and fundamental misconception: namely, the OP is trying to apply copyright-related practices (releasing into the public domain, Creative Commons’s licenses) to ideas. In other words, he is confusing patents and copyrights.
Furthermore, although it is noble for the OP to try to keep a line of innovation free from patents, the OP’s written promise not to apply for a patent on something probably has no legal weight, because it was not made in exchange for any kind of consideration. (The requirement that a promise maker obtain some sort of consideration for the promise to be enforceable in a court of law is a basic principle of contract law.) Note that “I hereby place this post in the public domain” and “I hereby give everyone a license to this post under Creative Commons bla bla” are exceptions to the general rule that promises made “without consideration” are not legally enforceable, but, again, releasing into the public domain and Creative Commons’s licenses have nothing to do with ideas or patents.
The most important thing about patents is that the vast majority of actors who are sued for infringing a patent are selling at least tens of millions of dollars a year in infringing products or services. In other words, the vast expense of patent litigation means that most people using technology to improve the world can safely ignore patents (plans to improve the world that entail someone’s selling tens of millions of dollars a year worth of goods and services probably being the biggest exception).
The second most important thing to know about patents, by the way, is that sometimes venture capitalists will refuse to invest in a company either because the company lacks patents or has competitors who have patents, but this is really just a corollary of the first most important thing about patents, since almost all venture-capital investment is made with the hope that the investee will someday sell at least hundreds of millions of dollars a year in goods and services.
TL;DR: while I commend the OP’s generous spirit, his paragraph about patents is unnecessary.
(OP upvoted, BTW.)
Thank you. But wait. A copyright and patent are not the same thing. If you release the rights to a patent, might you still retain the copyright, because it is different?
Well, yeah, but if you decide to hoard the copyright on your post (i.e., the post above), that decision would not prevent anyone from creating or selling a product or service that incorporates inventions described in the post. The only thing your copyright on your post can make illegal is the making of copies of the exact same sequence or almost-exact same sequence of words in your post.
No. For instance, a movie based on the post might not involve any common sequences of words. Especially if it’s silent.
I accept the correction.
ADDED. Well, if I wanted to get technical, I would point out that the post is near the lower limit in size of works that can be copyrighted. That is, even in the best of circumstances, it would be difficult to prevail in a copyright infringement suit on the basis of such a small number of words, and the particular part of copyright law that deals with movies based on novels is probably far from the best of circumstances. So, I could make the technical argument that my statement was probably correct because I was referring to one particular rather-short Less-Wrong post, not copyrightable works in general including things like novels. But enough!
That’s correct.
“Public domain” is sometimes used in a much vaguer sense to mean the information is out there and being used and shared, but this vaguer sense is best avoided.
I suggest to Epiphany to either:
Strike out “public domain” and replace it with the idea of being “open licensed”, or
As Alicorn suggests, declare it public domain. Creative Commons has tools for this (the advantage being that you give a lot more clarity—so I know that you mean the same thing as I understand by public domain). See the nice summary and: Apply CC0 to your own work.
Hope that’s helpful.
Well that goes to show how much I know about law. You have successfully detected my “throw everything at it but the kitchen sink” strategy. I have no idea how to fix this. But thank you for trying to help.
Public domain is by far the more permissive option. If you want public domain, just go with that.
I would like to be able to take your advice but I don’t know enough about the law to tell who knows enough about the law that I should actually take their advice. This is a riddle.
‘Eternal September’ situations are caused by new users as a proportion of total users. 10,000 new users in a month could cause an eternal September here, but 10,000 new users would barely be a blip on, say, Pinterest’s culture. Growth at a steady percentage of users seems like the safest way to head off an eternal September situation; as the site gains more users it becomes naturally more resilient to culture shift by newbies.
Is LessWrong increasing in growth%, decreasing in growth%, or are growth% numbers roughly staying the same?
I agree that it is safer to keep growth at a manageable pace than it is to try and grow it faster while also trying to prevent endless September. However, I would disagree with the idea that managing the pace of growth will prevent endless September. So relying on pacing to solve the problem seems like a bad idea to me.
My perspective is that groups trend toward normal over time as they grow larger, regardless of the pace. I think this is most noticeable during a deluge, that a large deluge will definitely speed up the process, and that growth curves are such that this can seem to happen overnight. It may be a case of assuming correlation implies causation to blame the watering-down of a culture on sheer numbers; I think it happens because the group eventually attracts a snowball of people who are ever less like the original members. People who start a new group have some kind of difference; why would you break away from the herd if this were not so? Once you’ve started the group, it is harder to find people who are very much like you than people who are merely somewhat similar. So as the group grows, the people attracted to it are similar to the ones already there, but less and less so, because what counts as “similar” relaxes as the circle of people enlarges. Different people are necessarily overwhelmed by mainstream people eventually, simply because there are more mainstream people than different people. That’s the way I’ve observed it happening. If you know an example of a culture that actually stayed the same with a very large number of users, I’d be interested in hearing about that. You mentioned Pinterest, but do we know whether their beginning culture was retained? It sounds like you are just saying the current culture would be retained. For all I know, they’ve already been through an endless September, and the culture they have now is very different from the one they began with.
I’m glad you brought this up, worded in that way, because it revealed these different perspectives.
Because I think endless September can happen regardless of whether we pace growth, I still think it’s important to do something to prevent that.
As for whether it’s increasing or decreasing or staying the same… over the long-term, they’re definitely increasing. In the recent short-term, they’re decreasing. That looks to be due to the falling back you might see after a large spike (the spikes last a little and taper off), perhaps combined with a common dip in numbers (It looks pretty normal for this site for them to be low for a few months in a row).
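For anyone who wants to check this against the registration data, here is a rough sketch of the month-over-month growth-percentage computation. It assumes `registrations` is a list of datetime.date values like the registration-date list described in the post:

```python
# Rough sketch: each month's new users as a percentage of the user base
# that existed before the month began.
from collections import Counter

def monthly_growth_percent(registrations):
    """Yield ((year, month), growth %) pairs in chronological order."""
    per_month = Counter((d.year, d.month) for d in registrations)
    total = 0
    for month in sorted(per_month):
        new = per_month[month]
        if total > 0:
            yield month, 100.0 * new / total
        total += new
```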
Summary of Solution Ideas:
(In alphabetical order.)
A ban button for older active users that works if pressed by enough of them in a certain time period. (An illustrative sketch follows the cons below.)
Pros:
Increases the capacity of the site to deal with an influx of trolls.
Frees up vote buttons to do what they’re intended to do. For instance: people probably don’t downvote nearly as much as would reflect their opinion, since it triggers a troll tax and hides the comment.
Prevents “feeding trolls” by giving the trolls negative attention.
No need to rely on moderators being strong enough to tolerate the stress and harsh criticism that their tough decisions bring.
Cons:
It is possible that desirable contrarians would be banned (though that could happen with moderators just the same).
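Since the thread doesn’t pin down numbers, here is an illustrative sketch of the mechanic; the threshold, the window, and the voter attributes (is_experienced, name) are all assumptions:

```python
# Illustrative sketch only: a ban takes effect when enough distinct senior
# users press the button within a rolling time window.
import time
from collections import defaultdict, deque

BAN_THRESHOLD = 5            # assumed: distinct senior users required
BAN_WINDOW_SECONDS = 86400   # assumed: votes count within a 24-hour window

_ban_votes = defaultdict(deque)  # target user id -> deque of (voter name, timestamp)

def press_ban(voter, target_id, now=None):
    """Record a senior user's ban press; return True when the threshold of
    distinct senior users has been reached within the rolling window."""
    if not voter.is_experienced:     # assumed attribute; button is senior-only
        return False
    now = time.time() if now is None else now
    votes = _ban_votes[target_id]
    votes.append((voter.name, now))
    while votes and now - votes[0][1] > BAN_WINDOW_SECONDS:
        votes.popleft()              # expire votes that fell out of the window
    return len({name for name, _ in votes}) >= BAN_THRESHOLD
```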
Give older users more voting power.
A mathematical approach was suggested which would give older users more voting power. (A purely illustrative weighting sketch follows the cons below.)
Pros:
All karma totals will be more likely to reflect what the older members want.
Cons:
Votes reward and punish, but they don’t instruct well. This leaves users ignorant about what, specifically, to change, so the power of votes to acculturate them is limited; the more clueless the new user, the more limited that power.
A deluge of the wrong type of user may result in lots of people ignoring karma because so many other people are acting in ways that don’t get them good karma, or because they’re the wrong type and don’t care much about karma.
It isn’t known how much karma influences behavior in the first place. (I couldn’t find anything about this in my searches.) I think this might work well on the right type (both motivated by karma and similar enough to older users to pick up on voting patterns) but is not as likely to work on the members that would cause eternal September.
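For concreteness, one possible weighting. This is purely illustrative, since the comment above says only that “a mathematical approach was suggested”, not what it was:

```python
# One possible weighting: a vote's weight grows with the voter's karma but
# saturates at a cap, so senior votes count more without letting any single
# account dominate. The cap and scale values are assumptions.
import math

def vote_weight(voter_karma, cap=3.0, scale=1000.0):
    """1.0 for brand-new users, approaching `cap` as karma grows."""
    return 1.0 + (cap - 1.0) * (1.0 - math.exp(-max(voter_karma, 0) / scale))

def weighted_score(votes):
    """votes: iterable of (direction, voter_karma) with direction in {+1, -1}."""
    return sum(direction * vote_weight(karma) for direction, karma in votes)
```

The cap is a deliberate design choice: senior opinions count more, but no one account can swing a score arbitrarily far.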
Highlight the culture by turning the names of biases, logical fallacies, and terms from the Sequences into links. (A sketch of one way to do this follows the cons below.)
Pros:
Makes it fast for older users to link to useful rationality materials, encouraging them to tell new users about them more often, speeding up acculturation.
Making core cultural items stand out will cause new users to recognize them as relevant, when they might otherwise write them off as “some big word I don’t know”.
Highlighting rationality related terms sends a visual message that we’re prioritizing rationality. Visual messages can have more impact.
Cons:
(unknown)
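A hedged sketch of how the highlighting could work. The term list and URL scheme are invented for illustration, and a real version would also need to skip text already inside links or code:

```python
# Replace known bias / fallacy / Sequences terms in a comment with wiki links.
import re

TERM_URLS = {
    "confirmation bias": "http://wiki.lesswrong.com/wiki/Confirmation_bias",
    "sunk cost fallacy": "http://wiki.lesswrong.com/wiki/Sunk_cost_fallacy",
}

# Longest terms first, so multi-word terms win over any shorter overlap.
_pattern = re.compile(
    "|".join(re.escape(t) for t in sorted(TERM_URLS, key=len, reverse=True)),
    re.IGNORECASE,
)

def autolink_terms(text):
    """Wrap each known term in a link to its wiki page."""
    def link(match):
        term = match.group(0)
        return '<a href="%s">%s</a>' % (TERM_URLS[term.lower()], term)
    return _pattern.sub(link, text)
```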
Limit the comments new users can make, increase limit based on karma, later remove the restriction.
Pros:
Encourages new users to lurk more, acculturating before saying a lot.
Cuts down on the amount of newbie comments older users have to wade through.
Cons:
New users may lose momentum and may not stick around.
Limit the ratio of new users that can join in a given time period.
Pros:
Turning off registration has been rumored to work for other sites.
Sure-fire way to keep growth at a manageable pace.
Will not offend people who are offended by elitism.
Cons:
Will definitely prevent some number of good new users from joining.
Prompt users to provide two or more words of verbal feedback when voting (not mandatory).
Pros:
New users will know why they got downvoted, which will speed up the process of correcting it.
Cons:
(unknown)
Require an agreement to accept and give constructive criticism (with a requirement for good manners).
Pros:
People who can’t deal with being held to the standard of being rational, or can’t deal with updating, will be intimidated, and fewer of them will join.
Cons:
This will encourage more thick-skinned individuals to join than thin-skinned ones and may decrease the proportion of people who aren’t over-confident debate junkies and trolls.
Require an educational rationality knowledge quiz to use discussions (but not to register).
Pros:
Ensures that new users are familiar with important elements of rational discussion (even if only because of the questionnaire) that will reduce clueless behavior.
Increases the hassle that trolls and spammers need to go through to make endless new accounts, deterring them.
People who aren’t serious about refining rationality won’t go to the bother.
Reduces the speed at which the population grows.
Cons:
Some people may not fill out the form due to laziness, because it’s an obstacle to their inspiration to comment, or because they don’t have time right now and forget to.
Send people with poor rational thinking skills to the Center for Modern Rationality or similar.
Pros:
Some beginners will choose to get training and that will be a good thing.
Some beginners will wait to post until they’re further along.
Cons:
Tell a person to go somewhere else and they may just ignore you.
Ideas that were culled:
(Both of these were culled due to the fact that they’d result in duplicate posts, none of which would contain all the info.)
Separate new users and old users into different discussion areas to contain the endless September or protect the older culture, letting beginners move up after they accomplish a certain level of rationality.
Pros:
Including beginners somehow at the site is less likely to offend people who are offended by elitism.
Newbies would have a place to learn as a group.
If users were directed effectively (perhaps with the rationality knowledge quiz), it would contain the eternal September while still allowing some growth and serving as a way to acculturate new users.
Cons:
Labeling people as beginners might make it harder for them to learn or make them resent us (though shooing them away with downvotes or allowing them to frustrate older users with ignorance will have the same effect).
New users wouldn’t acculturate as fast and might not acculturate at all (though if the alternative is to lose the culture completely, this is justified.)
This would result in duplicate posts since the different forums would often want to talk about the same things.
None of the posts would contain all the information.
Sending people to the Center for Modern Rationality is a better option.
Multi-Generation Culture
Limit the number of new users that join the forum to a certain percentage per month, sending the rest to a new forum. If that forum grows too fast, create additional forums. This would be like having different generations. New people would be able to join an older generation if there is space.
Pros:
Nobody would be labeled a “beginner”.
Cons:
This would result in duplicate posts since the different generations would often want to talk about the same things.
None of the posts would contain all the information.
This is pretty thought-provoking; thanks for laying it all out. I think each devil is in its respective details. People have very different intuitions about, for example, how many people will be turned off by a quiz requirement, or how many useful contributions would be cut off by a karma restriction on comment quantity, and it’s hard to make progress toward quantifying that without running experiments, which may be temporarily harmful, have confounding factors, and take a lot of manpower.
In the end, we usually settle on “loudest intuition wins” but it would be nice to make some progress on that.
I’m not sure how technically feasible it is, but I’d be interested in having something like the WikiWords system from MediaWiki (the base for TV Tropes) for internal links and/or links to the wiki. I already try to link to them whenever relevant, but it’s a non-negligible inconvenience to find the right URLs and add the right markup.
Perhaps (down)voting could automatically open a reply box, thus encouraging more detailed feedback while still allowing user discretion. More feedback is usually good, but sometimes someone has already written a good critique that I can just upvote or something. So I don’t like making it mandatory. -edited to clarify that I meant MediaWiki rather than the TV Tropes specific variant.
Mm, good idea. I don’t know why I overlooked that (making it prompt the user when voting rather than requiring it). I will change the idea.
TV Tropes’ markup system is a godawful homegrown mess and I wouldn’t recommend using it; it’d be incompatible with wiki markup and unfamiliar to pretty much everyone that hasn’t done time on TV Tropes. Incorporating some subset of MediaWiki markup into the blog wouldn’t be a bad idea, though.
Sorry, I was thinking of MediaWiki, but I put TV Tropes because I had just finished explaining the parts of MediaWiki I like in the context of explaining TV Tropes, and I don’t use any other MediaWiki sites, so TV Tropes was much more mentally salient than MediaWiki.
TV Tropes is based on pmwiki, actually, although it’s got a great deal of homebrew code on top of that (including much of its markup). MediaWiki’s what Wikipedia uses, along with the Less Wrong wiki and many other post-Wikipedia wikis. The two are both written in PHP and accept SQL backends, but they don’t have much in common in terms of interface, and there are pretty substantial differences in markup as well.
I haven’t spent a lot of time in MediaWiki, but for example it doesn’t do pmwiki-style WikiWords; internal links are established via [[double square brackets]] instead.
Ok, so what I’m trying to say is that I want WikiWords, approximately like what’s offered on TV Tropes, and I was going along with what I thought you were saying because I don’t use any other wikis or know much about TV Tropes’ codebase.
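For reference, a minimal sketch of what pmwiki-style WikiWords amount to: any CamelCase token becomes an internal link automatically, with no bracket markup needed. The URL scheme here is an assumption for illustration:

```python
# Auto-link CamelCase WikiWords, roughly as TV Tropes / pmwiki do.
import re

WIKIWORD = re.compile(r"\b([A-Z][a-z0-9]+(?:[A-Z][a-z0-9]+)+)\b")

def link_wikiwords(text, base_url="/wiki/"):
    """Turn e.g. 'EternalSeptember' into a link to /wiki/EternalSeptember."""
    return WIKIWORD.sub(
        lambda m: '<a href="%s%s">%s</a>' % (base_url, m.group(1), m.group(1)),
        text,
    )
```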
(This used to be the draft of my endless September poll)
First: A LessWrong seed bank. If this forest grows diseased or burns to the ground, the means to replant. Already in the LessWrong seed bank: The Sequences, FAQ, User Guide and MediaWiki.
Second: Terms of surrender. When conditions X, Y and Z are met, LessWrong will fold or reboot.
That’s an excellent idea, but I can’t think of any clear metric of success or failure, short of really unlikely ones like ‘during the annual poll, LWers majority vote for astrology’.
This is for ideas to prevent disaster, not solve it after the fact. Also, if the suggestion is “Leave the wiki and sequences up”, you’re essentially saying “Do nothing”. This just doesn’t read like a plan.
My prediction is that it’s something HPMOR-related: either more links to LessWrong in the Author’s Notes, or HPMOR itself had a spike that month.
Another way to cut down on new-new interaction is to limit the number of comments someone can make in a time period: if people can only comment once a day until their karma hits 20, and then once an hour until their karma hits 100, and then they’re unrestricted, that will explicitly encourage lurking and paying close attention to karma among new members. (It would be gameable, unless you did something like prevent new members from upvoting the comments of other new members, or algorithmically kept an eye out for people gaming the system and then cracked down on them.)
[edit] The delay being a near-continuous function of the karma (say, 24 hours * exp(-b * karma)) might make the incentives better, and not require partitioning users explicitly. No idea if that would be more or less effort on the coding side.
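That near-continuous version is essentially a one-liner; b is a tuning constant, and the value below is only an assumption:

```python
# Required wait between comments decays exponentially with karma.
import math

def comment_delay_hours(karma, b=0.03):
    """24h at karma 0, roughly 8h at karma 37, about 1h by karma ~106."""
    return 24.0 * math.exp(-b * max(karma, 0))
```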
Problem: Limiting the number of posts doesn’t limit the number of comments, so they’d still be able to overwhelm older users with newbie comments or create their own culture in the comments. I think this idea would be ineffective unless, by “posts” you meant “comments” (or added some similar plan for comments).
I meant “comments” by “posts.” I’ll edit the grandparent to be clearer.
Thanks, Vaniver. The OP has been updated. I also used find to see whether there were other ambiguities around the word “post” in the OP. Caught a few. (:
Limiting the posts would cause new users to lose momentum. A lot of them might lose steam after joining and give up. That would be risky. Also, because a large proportion would give up, this would filter users. We’d end up with a larger proportion of the type of user persistent enough to tolerate this. I don’t know what that sort of person looks like.
I’ll add the idea to the pile, but I can’t really sell it if those things aren’t addressed.
Your description of the problem seems spot on to me, and most of your proposed solution sounds sensible as well. Using a questionnaire to let users graduate into the advanced section seems a little exploitable, though.
One alternative could be requiring a new user to select one of their recent comments for “admission review”. If the “reviewers” agree that the comment is unusually good, they let the user in, otherwise they give some guidance on what can be improved, and let the user try again in a week or so. That may also have the side effect of improving the quality of discussion in the easy section, as users try to write comments that are good enough for “admission review”.
Why is my questionnaire exploitable?
The problem with having old users review new users’ comments is:
We haven’t verified that they’d be willing to do this.
If a lot of new users come in all at once, that would be a chore.
This might actually scare off old users. Or create a backlog of comments to go through that prevents new users from participating. There’s a high risk of this going totally dysfunctional. Unless you see something about it that I don’t?
As someone that still considers themself to be a “newbie”, I actually have a few thoughts on this. (I know my account is actually quite old- I discovered this site and spent a day or two on it when I was avoiding studying for a test in college, but then I forgot about it and didn’t happen to rediscover the site for well over a year).
I don’t really have a feel for how often you guys get new users stumbling in here and posting, but I have to say that at least for me, wandering into the discussion section in the beginning sent me running right back to the sequences. I think that unless you come in primed with a background in logic or debate (and maybe even then), most of the types of people you’re interested in attracting will realize that they are missing crucial background knowledge and go try to get that knowledge before they engage.
So I suppose that if you have someone that doesn’t have the patience to try and at least spend a few days reading over the core sequences and learning the vocabulary then downvotes could be one way to indicate to them that they’re not ready to discuss at the level of the group. So I would suspect that as others have mentioned in the comments, it would only really be a problem if you’ve got a bunch of new users at the same time talking to each other and not getting that effect.
I think that one way of addressing it would be to create separate areas for people who want to talk about the old stuff and people that want to talk about the new stuff. I know that personally there were many times as I read through the sequences that I wanted to make a comment or ask a question that wasn’t always already addressed in the comments when the article came out. Those were often very insightful and helped to clarify some of the questions that I had, but I sort of felt like the time for contributing to them had passed. But if you had an area dedicated to people that were working through the sequences it would somewhat sequester the newer users to work through the basics without making them feel like they were being shuttled off to the kids table. And of course the more experienced users could show up and make suggestions, raise the level of discussion, etc.
To me it seems like we already have easy and hard sections; Discussion and Main. I’m fairly new so that may not be how others treat them. I do think it would be beneficial to dedicate some discussion areas to specialty topics. Some of the recent articles (the theoretical UDT ones for instance) would benefit from being grouped in a separate section to preserve context since related articles may be few and far between. I imagine the meetup notices would benefit as well.
If LW does implement a newbie discussion section I think it will be important to ensure that the majority of users actually migrate to the normal sections so that a newbie subculture doesn’t form its own mini Eternal September. I don’t have any good ideas for how to offer incentives to users to move out of a general newbie section into the more appropriate sections for the topics they want to discuss; I think it may be demand driven. By the time a site grows large enough to need separate sections posters will be motivated to choose the most appropriate section to reach their target audience. When a site is small (or slow) enough one section generally suffices. So we could try an experiment and create a few new sections and watch the ratio of new articles in the Main/Discussion sections to the specialty sections. If there are few articles created in the new sections then LW may still be small enough to operate efficiently with only one or two sections.
As I see it, main is not a discussion area, it’s a blog. If you post in main you’re publishing an article. Discussions are discussions, they have standards there, but I don’t think the idea is to post articles. I’m also new.
I don’t. That would completely defeat the purpose. The whole idea of a newbie forum is to sandbox the endless September.
So in essence you don’t really want new users unless they didn’t need the newbie forum in the first place? Maybe I’m misunderstanding you. Do you think it’s beneficial to host an eternal September solely to keep it from leaking into the “important” parts of LW?
No, it’s because anybody who comes that’s willing to learn should have a chance to learn. If we prevent them from joining the regular discussions, they don’t get a chance to learn. If they do join, everyone loses their chance to learn—the old members will leave due to tedium (because that’s where THEY go to learn and they need to be around people who can give them a challenge in order for that to happen) and so there will be no one around to explain everything. Essentially the newbies will be in a forum of their own in any case. If they’re in a forum of their own HERE though, then we can at least figure out a way to explain things to that many people at once, or people will explain as they have time for it, which is slower, but it’s better than the old members leaving the site.
Maybe I’m too optimistic in thinking that most users could eventually migrate into the normal discussion areas. If so, then you’re probably right that just containing the eternal September is the best solution.
Well, eternal September, by definition, means many won’t completely acculturate. If they all acculturate and move up, then the beginner area will be a temporary place for newbies to learn, and there was no eternal September. If there is an eternal September, then the beginner area’s purpose would be to contain it. Those who acculturate would move on, but not everyone would acculturate.
Speculative: the Singularity Summit Australia 2011 was held in late August that year during the National Science Week. Then again, could be a case of post hoc, ergo propter hoc. Could be no cause other than the variance chiming in, which is to be expected from time to time.
What about Methods of Rationality? September 2011 is mid-way through its upswing. I see no easy way to quantify reviews, though, short of manually going through the thousands on FF.net...
Actually, you might find my http://www.gwern.net/hpmor#analysis useful!
Looking at all reviews posted per day, in September 2011, there does in fact seem to be a large spike in number of posted reviews.
Thank you Kawoomba, it sounds like an interesting theory. True that correlation is not causation, but maybe if we map other events to the numbers, we will see a pattern. (:
I definitely agree with you that we should avoid IQ testing.
I generally dislike the names “Hard” and “Easy”, but I don’t have anything better at the moment. Maybe “Beginner” and … “Intermediate”? “Experienced”? I’m not sure.
Also, a minor point. About this:
I understand what you were trying to do with that, but if we’re being accurate, both of those answers are correct.
I think it would prime people that this is something game-like.
Okay, I will reword the question; thanks for pointing that out.
I don’t know what names would be best. I wonder whether the word “Advanced” will put people off and sound elitist. Using the words “Easy” and “Hard” implies that the people in the “Hard” area are choosing to challenge themselves and are putting in more effort to do so. “Beginner” implies that after a while you’re supposed to go to the other forum, when really, everyone learns at their own pace and some people may just prefer for it to be easy. Maybe you don’t like the words because they’re a little bit cute, as if it were a game? Maybe fancier words like “Challenging” or “Difficult” would appeal more?