Engaging Intellectual Elites at Less Wrong
Is Less Wrong, despite its flaws, the highest-quality relatively-general-interest forum on the web? It seems to me that, to find reliably higher-quality discussion, I must turn to more narrowly focused sites, e.g. MathOverflow and the GiveWell blog.
Many people smarter than myself have reported the same impression. But if you know of any comparably high-quality relatively-general-interest forums, please link me to them!
In the meantime: suppose it’s true that Less Wrong is the highest-quality relatively-general-interest forum on the web. In that case, we’re sitting on a big opportunity to grow Less Wrong into the “standard” general-interest discussion hub for people with high intelligence and high metacognition (shorthand: “intellectual elites”).
Earlier, Jonah Sinick lamented the scarcity of elites on the web. How can we get more intellectual elites to engage on the web, and in particular at Less Wrong?
Some projects to improve the situation are extremely costly:
Pay some intellectual elites with unusually good writing skills (like Eliezer) to generate a constant stream of new, interesting content.
Comb through Less Wrong to replace community-specific jargon with more universally comprehensible terms, and change community norms about jargon. (E.g. GiveWell’s jargon tends to be more transparent, such as their phrase “room for more funding.”)
Code changes, however, could be significantly less costly. New features or site structure elements could increase engagement by intellectual elites. (To avoid priming and contamination, I’ll hold back from naming specific examples here.)
To help us figure out which code changes are most likely to increase engagement on Less Wrong by intellectual elites, specific MIRI volunteers will be interviewing intellectual elites who (1) are familiar enough with Less Wrong to be able to simulate which code changes might cause them to engage more, but who (2) mostly just lurk, currently.
In the meantime, I figured I’d throw these ideas to the community for feedback and suggestions.
At the risk of sounding ridiculous, I will self-identify as a member of the intellectual elite since no one else seems to want to.
I’m occasionally engaged in LW and I’m interested in rationality and applied psychology and the idea of FAI.
I don’t think LW is necessarily the best venue for discussing big important ideas. Making a post on the internet is something I might spend 4-5 working hours on. It might even be something I’ll spend a couple days on, and that’s no inconsequential amount of my time. And the vast majority of the people who read whatever post I generate will spend, generously, 15-20 minutes thinking about it. I’m actively working on reading and checking the math in a 300-page textbook in order to make a post on LW six months from now that maybe 100 people will read and almost no one will take seriously. If my day job weren’t writing academic papers with similarly dim readership prospects, this would surely be overwhelmingly demoralizing. There’s a commitment issue here where it doesn’t make sense to invest a lot of time in impressing/convincing LW readers. I have no guarantee that anyone is seriously engaged with whatever idea I present here as opposed to just being entertained, and most of the people reading this forum are not looking for things to seriously engage with. There’s a limit to how many, how big, and how strange the ideas you encounter once a week in a blog can be. They might be entertaining, they might be interesting, but they can’t all change the way you see the world. It takes a lot of time (for my mind at least) to process new ideas and work through all the implications.
LW is set up in such a way that it’s a constant stream of updates, and any given post can expect a week or two of attention, at which point it fades into the background with all the other detritus. But big ideas are hard to grapple with in a week, and so most LW responses are the sort of off-the-cuff suggestions that you get when you expose people to a new idea they don’t fully understand. I’ve been reading LW for 9 months now and I’m still on the fence about FAI. The internet makes publishing much easier, but it doesn’t make thinking any easier. This is, I think, one of the reasons that science hasn’t abandoned publishing in journals, and why there aren’t many elites on the web. Accessing content is already much, much easier than digesting that content. I have whole binders full of papers I need to read and digest that I don’t have time for. And so does everyone else, probably. LW posts are primarily entertainment, and most of the people who post here are doing it for brief applause or to float an idea they haven’t seriously worked on yet.
I’m also less clear as to what sort of content you want that you don’t have. What’s your end goal?
If I had to make code suggestions, I would say that discussions on a single post get too long before anything is resolved. There seems to be no point in commenting once there’s a certain number of comments, and so discussion tends to sort of stall out. I’d be interested to see what the distribution of # of comments on high karma posts looks like and whether there’s a specific number of comments which seems to function as a sort of glass ceiling. I also think that as time goes on things get pushed down the queue and become invisible. The fact that no matter how brilliant your idea is it’s basically got a week in the limelight and then will be forgotten forever isn’t super conducive to using LW to seriously discuss difficult problems.
And this is all off the top of my head, because of course I haven’t seriously thought about this.
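The distribution check suggested above could be sketched quickly, assuming post metadata (karma and comment counts) can be exported; the sample data below is made up purely for illustration:

```python
from collections import Counter

# Hypothetical export of (karma, comment_count) pairs for posts;
# in practice this would come from a site database dump or API.
posts = [
    (45, 12), (60, 310), (52, 87), (71, 154), (48, 33),
    (90, 296), (55, 301), (63, 41), (80, 198), (49, 305),
]

HIGH_KARMA = 50  # assumed threshold for "high karma" posts
BUCKET = 50      # histogram bucket width, in comments

# Bucket the high-karma posts by comment count.
counts = Counter(
    (n_comments // BUCKET) * BUCKET
    for karma, n_comments in posts
    if karma >= HIGH_KARMA
)

# Print the distribution; a pile-up just below some value would
# suggest a "glass ceiling" on discussion length.
for bucket in sorted(counts):
    print(f"{bucket:4d}-{bucket + BUCKET - 1:4d}: {counts[bucket]}")
```

With real data, the interesting signal would be whether the histogram thins out abruptly past some comment count rather than tapering smoothly.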
Thanks! But 100 people is a serious underestimate: far more people read your previous posts on the subject (only a fraction of readers vote, and you got 43 upvotes on your post about nanotechnology). If you wind up with some substantive technical criticisms, I expect Eric Drexler will take time to reply, I for one will be seriously interested, and it will be frequently linked to in discussions about Drexlerian nanotechnology.
The most-viewed LW post has hundreds of thousands of hits, and there are many at 10,000+ (search the link for “pageviews”).
I think that’s fine. Public forums in general aren’t proper places to seriously discuss difficult problems. What they offer is width, not depth.
Your complaint is really that there are not enough smart people who are willing to spend a lot of time and effort grappling with the problems which you are interested in. That’s a very common complaint :-) and I don’t think this problem can be fixed by fiddling with the structure of a web site.
Is this an example of a proposed code change that you feel would solve this issue?
A possible re-ordering of the site could allow for easy routes to posts on a particular topic, be they old or new. Something like a tagging system, but more clear-cut, so that whenever someone adds a new post, it goes straight into a particular section. Also, a post could be regenerated every time its discussion extends past 300 comments, to keep things fresh (keeping the old comments in an easily accessible archive).
Since my views are intended to be part of the discussion here, they are not well thought-through by your standards. My apologies; if you want to stop reading, leplen, you have every right to.
I’m a Melbourne University first-year student, which qualifies me as “intellectual elite” by some descriptions and not by others. My thoughts:
1- Use censored forums, where you have to show some degree of credentials to be allowed to respond. Either make the rules prevent somebody from getting in at all (to avoid jealousy), or make intellectual content merely a criterion to post. These standards would be much higher than the current Less Wrong karma standards.
2- Have permanent “spotlight” posts.
3- Have significant numbers of “reinforcement” posts for the masses, designed to encourage them to read said posts and gradually change themselves bit by bit. These posts won’t be mere padding, as they will focus on delving into how to change one’s life to adapt to the things learned in the “top-level” posts.
These require a lot of institutional change, which requires Eliezer to get behind the idea and its advantages. If he does, however, making changes to the site to solve these problems is trivial.
Hm. Let me throw out several points in random order:
-- I don’t think LW is a “general-interest” forum. Not even “relatively”. However, that’s fine—there are really no such things as general-interest forums, because their lack of focus kills them. What you have, actually, are online communities, some of which spend their time chatting about whatever in the general sections of their forums. But that general section is just for overflow; the community itself is formed and kept together by something that binds much tighter than general interest.
-- If I rephrase your post along the lines of “LW is a web-based club for smart people. How do we get more smart people to join our club?”—would you object?
-- Size matters. In particular, online communities have a certain optimal size for cohesiveness—be too small and it’s just a few old-timers making inside jokes; grow too big and you drown in a cacophony of noise. I’ve seen online communities mutate into something quite different from the original through massive growth. That may be fine in the grand scheme of things, but the original character is lost.
-- While attracting the “elite”, how are you going to get rid of hoi polloi? If people arrive, set up camp in LW, and start discussing Jennifer Aniston’s butt and what a horrible hangover they have today after being gloriously trashed yesterday, what are you going to do about it?
-- There is a correlation between “being highly successful in real life” and “being able to avoid wasting time chattering away on the ’net”.
-- I think I would support some additional granularity to this site (subreddit style), especially if we get some population growth. Nothing like Reddit itself, of course, but the existence of two parts and two parts only seems to be an artifact from the olden days (when you went to school up the hill both ways).
-- And finally, the important question: what do you want to achieve? Is it just having more smart people around to talk to, or is there more? In particular, with a Pinky and the Brain flavour?
Quoting this because IMO it’s the most important of the lot. Almost all the people I think of as ‘old guard’ barely post anymore because they’re too busy working at CFAR and/or working on personal projects.
Retirees have the wisdom of experience, are seasoned writers, and have few external obligations. Contributing to LW would not be a waste of their time or abilities.
How many retirees post here?
Do people know retired professors or other smart retired people who would do well on Less Wrong?
As one data point, my father has been retired for 7 years. He got a PhD in physics and then became a software engineer after deciding he didn’t really enjoy research. He’s interested in LessWrong-y topics like rationality, optimal philanthropy, and some of the areas of philosophy that are often discussed here. He’s read and enjoyed some of the articles I’ve linked him to on LessWrong. He should be a shoo-in, right?
But he didn’t grow up in a time when online communities were a thing. They’re just not part of his life and he has no interest in joining one.
Just curious: do you know he has no interest, or do you assume he has no interest?
Yeah, I realized that while writing it. You’re right—I don’t know for sure that he has no interest at all. Although it is true that he hasn’t made an account here despite reading and enjoying some posts here.
I have also never heard him mention any other online communities, and I talk to him often enough that I’d expect it to come up.
I think you may be underestimating the amount of gentle hand-holding necessary for someone to develop an affordance, and think it might be worth seriously presenting it to him as a potential hobby.
StackExchange sites solve this problem by gradually increasing user powers with karma. Then even if the old guard spends less time online, their actions are more visible.
On the other hand, there are not as many “actions” one can do on LW. And we probably wouldn’t want to limit things like “announcing a new meetup” to old users.
Here is a list of possible LW actions that could require some karma threshold:
upvoting comments
downvoting comments
publishing articles
upvoting articles
downvoting articles
editing wiki
commenting in troll threads
moving articles between Discussion and Main
Beyond that, I don’t know. Perhaps users with huge karma could get ×2 or ×3 multipliers when voting, but more than that would probably be a bad idea.
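As a rough sketch of how such karma gates and vote multipliers might be wired up (every threshold number and action name below is illustrative, not a proposal for specific values):

```python
# Illustrative karma gates; none of these numbers are official.
KARMA_THRESHOLDS = {
    "upvote_comment": 1,
    "downvote_comment": 10,
    "publish_article": 20,
    "upvote_article": 1,
    "downvote_article": 25,
    "edit_wiki": 100,
    "comment_in_troll_thread": 50,
    "move_article": 500,  # move between Discussion and Main
}

def can_perform(action: str, karma: int) -> bool:
    """Return True if a user with the given karma may perform the action."""
    return karma >= KARMA_THRESHOLDS.get(action, 0)

def vote_weight(karma: int) -> int:
    """Vote multiplier for very-high-karma users, capped at x3."""
    if karma >= 100_000:
        return 3
    if karma >= 10_000:
        return 2
    return 1

print(can_perform("edit_wiki", 150))   # a 150-karma user may edit the wiki
print(vote_weight(12_000))             # this user's votes count double
```

The point of the cap in `vote_weight` is the concern above: small multipliers reward the old guard’s visibility without letting any single account dominate the vote totals.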
I laughed at that.
It strikes me as somewhat double-edged though, insomuch as Pinky and the Brain never actually succeed at their nightly plan (as far as I know). And since TRY often never gets further than “talk about”, we might ask how the conversation actually contributes to the ultimate goal. So far I would say LW has had a net positive effect therein; I expect we’ll continue to pay attention to that effect and shape it consciously.
Double-edged is good :-)
Part of the problem is that a lot of LW cultural dynamics are built around status competition, but we don’t really have many “rationality lessons” in the canon for redirecting such things very well as a group. The entire “rationality dojo” framing (with implications of pretend conflict and tournaments and ranking and so on) has always seemed like it was strategically designed to appeal to people who are male and under 30. From a world-saving perspective there’s a lot to be said for that framing in terms of inspiring young men to do positive things with their passion and personal energy, but it also carries with it some of the problems I see in the existing comments...
With the dojo framing, you’ll tend to appeal to people who haven’t leveled up yet, but who want to, and some people who haven’t leveled up have a chip on their shoulder when they start out. This chip is visible in the sort of reaction that runs “How dare you use the word ‘elite’ without irony, as though to suggest there are people who are objectively more awesome than I am!” (And generally, I suspect there is at least a sequence’s worth of content on the issues that were touched upon when The Level Above Mine was written, but it’s complicated to talk about too much and I’m afraid to do much more than gently point in that direction for now.)
For what it is worth, when I read the title I was thinking you meant the goal was to find “people who are more awesome than even our highest status posters”. Like, how would we get Christof Koch to make an account and hang out for a bit? How do we get congress people to make accounts and hang out for a bit? And so on...
When Kaj reviewed their book I took the opportunity to help set up the Q&A with Harpending and Cochran. That seemed to work pretty well. Hanson even stooped to commenting on his sister blog, I think maybe because we had someone it was profitable for him to interact with, and we were dealing in a more empirical way than normal with moderately controversial and high-impact questions. (Also, this suggests another optimization strategy: look at Hanson’s comments, figure out what made it worth his time to comment, and then try to encourage more of that.)
The book Q&A could potentially work as a formula rather than a one-time event: find new, interesting, and somewhat controversial books. Have someone with a good mind and high karma write a solid intellectual review, and then, if the book is good enough for the review to be positive, invite the author to come for a Q&A. Either highly managed (“the top N comments will be answered at the author’s leisure later”) or else by having the author create an account and get their hands dirty. Personally, I’d respect the dirty-hands approach, but I could see how some authors might be intimidated by the prospect and maybe not think the payoff was worth the risk… it would be tricky to balance things, I suspect...
Part of the problem with this strategy is that it would take non-trivial elbow grease applied to social networking, and social networking with strategic overtones is often exhausting… people generally don’t do bizdev for fun, but rather for a real salary. Maybe if some interaction with the authors was part of the role then a few volunteers could be found who were willing and able to do it?
Subtopics, so that FAI, personal efficiency, and effective altruism, for example, could be tracked separately by people who are interested in each.
Different functionality for different types of posts: meetup planning, casual discussion, quotes repositories, welcome threads, advice repositories, etc. You might also add a method for adding and voting on excellent articles from outside LW. As-is, all functions are handled by the same post/nested-thread format, which is not necessarily the best suited for each one.
Better layout design. It’s best to get a design expert on this, but my sense is that the front page, and also other pages, are not laid out in a clear and appealing way.
Social-networking integration. People use Facebook, blogs, etc. to connect nowadays, so make it easy for LW members to do this. E.g., users could optionally add links to FB and other social networks in their profiles, and you could make it easy to share/like/+1 a post.
Rework the Discussion/Main distinction. As-is, this is very unclear. As best I can tell, those who are supposed to post to Main know it, and everyone else is supposed to post to Discussion, after which the mysterious Lords of LessWrong promote a few posts. Is that how it is? In any case, a better way can be found.
These suggestions make me wonder whether the next version of LessWrong should run on Discourse.
It certainly would save on development time.
Assuming that it won’t need extensive reworking to bring LW’s existing features over, at least.
True.
One FB feature that would absolutely rock on LW is the ability to tag users in posts. If you think some discussion is good for MrsX, and/or you want ver opinion, it should be easy to tag another user.
Upvoted for many sensible suggestions.
+1 because of the first point. Right now we are using this catch-all Reddit style “discussion” forum to encompass absolutely everything and it is a mess.
To minimize the burden on someone you really want to participate you could conduct an audio interview with him and then post the transcript and link to the audio.
College professors who have just published a book will be especially open to writing a post that will generate publicity for their book. Tell them the post will go live on the day that Amazon starts selling their book and you will urge readers who like the book to write an honest review of it for Amazon. LW could develop a reputation as a place where if you participate and write a high quality book, lots of people will write online reviews of the book.
I’m skeptical. Suppose it is true—it doesn’t follow that there’s a realistic possibility of broadening the appeal, much less making it the “standard” general-interest discussion hub. I think there are dozens of reasons it’s unlikely, but off the top of my head: there are many publications and forums already motivated to do so who have deeper pockets, and it is incredibly hard to corral contributors in such a way that they generate high value content for you without wanting to captain their own ships.
But that’s supposing it’s true. An outside observer might reasonably see a little bit of hubris in the claim that LW is the highest-level discussion forum on the entire internet. And I think it’s pretty obvious that it’s not general interest. It’s a very narrow set of interests: cognitive bias, AI, philosophy, cryogenics, philanthropy, evo psych, economics, lifehacking. And those interests are approached from a very unique ideological perspective—most people just aren’t into cryogenics, whether that’s rational or not. Even people very interested in one or more of these topics may not be interested in the others, and they’re very likely to be interested in other topics, which LW does not have room for. Such as celebrity gossip, or sous-vide cooking, or hip hop.
But… can you name any higher-level discussions forums for me? Those who tell me LW isn’t the best discussion hub always fail to point me to a better one.
I think you mean “cryonics.”
Yes, I meant cryonics. Thanks.
You realize best and better are subjective in this case, right? I mean, you could maybe make an argument about “higher-level” being objective, but you’re never going to win that fight. People will just walk away, which is the higher-level response.
EDSBS and Power and Bulk focus on college football and powerlifting, respectively. TNC’s Horde pulls in commentary on everything from the Thirty Years War to Egypt’s current conflict. They all shade into occasional discussions of Poetry and the Human Condition. I’m not a contributing member at any of these sites, so I don’t have a dog in this fight. But LW is not even the only site with extensive, didactic, deeply felt fanfiction. All forums developed past a certain point express their values through epic.
I read a tiny fraction of the internet, like most of us, so I’m sure there are myriad discussions out there I would enjoy even more.
There’s no disputing taste, and higher-level sounds a lot like an expression of taste to me so, by all means, enjoy LW. This seems like a great place for people interested in the things LW is interested in. I would totally buy a claim that this is the best forum for rationalist transhumanists interested in AI. Just remember, whenever you think that everyone would acknowledge your place’s awesomeness if only they knew about it, that, as is true for each person, The Place Where You Are From Sucks.
Even granting the premise (LW as the highest-level general discussion forum on the internet), the question remains of how that compares to all the other venues out there. And, honestly, it still doesn’t come close to what I can get at a good academic conference, or even over dinner with colleagues. Your competition isn’t Reddit, it’s all the places where elites (however constructed) are already hanging out and minimizing noise. For this, focus is a feature, not a bug.
Even for the internet, you pointed out the value of more focused forums (which is where I’ll generally go for higher level discussion). But I don’t consider this place “relatively-general-interest”. I come here when I want FAI or rationality discussions. So, one suggestion would be to consider that your goal of strengthening the general interest aspect might be ignoring your strengths.
I don’t know what an ‘elite’ is, but I try to engage here on narrow topics I know something about, and find it a generally frustrating experience.
If you are who I think you are, then you have a PhD in computer science, which indicates that you are probably fairly intellectually elite. If you have found engaging on LW to be generally frustrating, then your input on the matter may be helpful.
Ha! Well, I often engage here on broad topics I know little about, and still find it generally frustrating.
How so? I’m new here, so I am interested in hearing different viewpoints about this site.
I have a similar experience (though I comment on different areas, to be sure) and similar impressions. I find that this place is of questionable end-use value as a general-interest forum due to the founder effect of an extremely specific goal and worldview brought by its initial members, the homogeneity of culture, goals, and life experience on the part of its higher-ups, and a very strong set of shared assumptions about how the world works which is often founded more in pleasing fiction and ideology than in actually trying to figure out reality.
In other words, the exact same thing as any web forum made up of humans. With the exception that it is often much, much more (annoyingly) self-important than most, due to the subset who honestly, in all seriousness, somehow believe their small subset of upright talking monkeys is saving the universe. It can definitely be a fun and occasionally useful place, but it sure ain’t the most useful thing in the world, and I honestly have no idea how anyone could possibly get that thought. Nothing special, move along (unless you happen to enjoy it).
On the other hand, my technical explanations of stuff tend to go over with fewer hitches here than in other electronic spots, and when they do have hitches, those tend to come from extreme familiarity with a technical context vastly different from the one I come from, which leads to misleading inferences.
Thanks for replying. What you described fits my superficial impressions as well, but since I have not been an LW member for very long, I do not want to judge too hastily.
Well, I’m an intellectual non-elite, but I’ve got an opinion.
LessWrong is great. But it isn’t “general interest”. There is some very solid (and entertaining) content on rationality, but transhumanism/friendly AI and all its manifold implications set the tone for a good portion of everything on here.
To your proposed projects:
Isn’t that called a magazine?
LW’s jargon is part of its charm… and, I think, its effectiveness in communicating ideas without engaging the previous connotations and biases of its readers. Of course, it probably does interfere with LW gaining more widespread, mainstream appeal among “intellectual elites” so, to that end, get rid of it.
Generally, I think LW is awesome. But “highest quality on the internet”? Yowza. It’s a big internet and that’s a big supposition. (Is there a name for a bias where you become a bit too thrilled with your own online forum’s greatness?)
I’d say keep doing what y’all are doing. By their nature, online forums seem to fill relatively narrow-interest niches, and I think LW does a great job of that.
LessWrong = Highest-quality-FAI-no-politics-allowed-genius-filled-online-forum. Of 2013.
If you have suggestions I’d like to hear them.
Forgive me, but the premise of this post seems unbelievably arrogant. You are interested in communicating with “intellectual elites”; these people have their own communities and channels of communication. Instead of asking what those channels are and how you can become part of them, you instead ask how you can lure those people away from their communities, so that they’ll devote their limited free time to posting on LW instead.
I’m in academia (not an “intellectual elite”, just a lowly grad student), and I’ve often felt torn between my allegiances to the academic community vs. the LessWrong community. In part, the conflict exists because LessWrong frames itself as an alternative to academia, as better than academia, a place where the true intellectuals can congregate, free from the constraints of the system of academic credibility, which unfairly penalizes autodidacts, or something. Academia has its problems, of course, and I agree with some of the LessWrong criticisms of it. But academia does have higher standards of rigor: peer review, actual empirical investigation of phenomena instead of armchair speculation based on the contents of pop science books, and so on. Real scientific investigation is hard work; the average LW commenter seems too plagued by akrasia to put in the long hours that science requires.
So an academic might look at LW and see a bunch of amateurs and slackers; he might view autodidacts as people who demand that things always be their way and refuse to cooperate productively with a larger system. (Such cooperation is necessary because the scientific problems we face are too vast for any individual to make progress on his own; collaboration is essential.) I’m not making all this up; I once heard a professor say that autodidacts often make poor grad students because they have no discipline, flitting back and forth between whatever topics catch their eye, and lacking the ability to focus on a coherent program of study.
Anyway, I just figured I’d point out what this post looks like from within academia. LessWrong has repeatedly rejected academia; now, finally, you are saying something that could be interpreted as “actually, some academics might be worth talking to”. But instead of conceding that academia might have some advantages over LW and thus trying to communicate with academics within their system, you proclaim LessWrong to be “the highest-quality relatively-general-interest forum on the web” (which, to me, is obviously false) and then you ask actual accomplished intellectuals to spend their time conversing with a bunch of intelligent-but-undereducated twenty-somethings who nonetheless think they know everything. I say that if members of LW want to communicate with intellectual elites, they should go to a university and do it there. (Though I’m not sure what to recommend for people who have graduated from college already; I’m going into academia so that I don’t have to leave the intellectually stimulating university environment.)
I realize that this comment is awfully arrogant, especially for something that’s accusing you of arrogance. And I realize that you are trying to engage with the academic system by publishing papers in real academic journals. I just think it’s unreasonable to assume that “intellectual elites” (both inside and outside of academia) would care to spend time on LW, or that it would be good for those people if they did.
Attracting academics to Less Wrong is not incompatible with approaching them through academic channels (which MIRI has been doing), and does not require separating them from academic communities (which I doubt MIRI intends to do).
Point me to where Luke denied that academia has any advantages over LW. If you’re going to claim that LW is obviously not “the highest-quality relatively-general-interest forum on the web”, it would help your case to provide an obvious counterexample (academic channels themselves are generally not on the web, and LW has some advantages over them, even if the reverse is also true). LW is also not as homogeneous as you appear to believe; plenty of us are academics.
It is at least as unreasonable to claim without justification that it is impossible to attract intellectual elites to LW, or that it would be bad for those people if they did.
You’re straw-manning here. Not conceding isn’t the same thing as denying. To not concede something, one just has to omit the concession from one’s writing. But this is just quibbling. The real issue is the attitude, or the arrogance, that LW may have with respect to academia. Nobody wants to waste time justifying themselves to a bunch of arrogant amateurs after all.
Anyway, some web channels where academics hang out:
MathOverflow
LambdaTheUltimate
The arXiv
StackExchange
The N-Category Cafe http://golem.ph.utexas.edu/category/
ScienceBlogs
(Cracked.com probably does a better job of being a smart, general interest forum than Less Wrong, it’s a great deal more popular at least. But being the highest quality popular forum is a bit like being the smartest termite in the world. Specialized forums are where the elite action is.)
Another channel where academics hang out:
Less Wrong
What’s up with this dichotomy between LW and academia? I’m sure plenty of people on here have high-level degrees or work in some academic field.
Also: What are a few examples of the arrogance you see against academia on this forum? I would actually express the opposite view, and say that LW is pretty friendly to academia, with people citing mainstream books and articles all the time, etc. Not much fringe stuff going on here as far as I can tell.
Some of the people on LW have academic degrees or work at academic jobs, but I can’t think of many active posters who seem to be on the track of becoming the sort of academician whom other academicians will recognize and pay attention to.
My impression of the typical LessWronger is someone who might be clever enough to be a run-of-the-mill academic worker, but who didn’t get on with the program where they’d basically need to put the majority of their time and output into the academic machine to get any hope of establishing a career. The equally clever people in academia don’t have spare time to hang out at LW and actually do eventually get quite a lot better at their chosen thing than the average LW’er at any of their miscellaneous interesting things of the week. Meanwhile, LW is the akrasia culture, where the sort of highly focused and high-achieving people who end up making a name for themselves in modern academia are invisible, and the people hanging out here have no way of picking up their cultural habits. Instead, there’s a large peer group of low-achieving procrastinators who like to post interesting forum messages to identify with and unconsciously learn habits from.
I’m not sure why anyone would expect a post about trying to attract academics to LW to mention that academia has some advantages over LW. It’s just not relevant to the subject. The fact that MIRI has been increasingly making use of academic channels is an implicit concession that they have advantages.
Ok, yes, there are web-based academic channels. StackExchange is even a good contender for highest-quality relatively-general-interest forum on the web.
(Cracked? Are you kidding?)
I would love to locate and learn how to integrate into more interesting high-signal channels! If anyone feels like they wouldn’t be polluted with a little attention from LWers, would you mind sharing the ones you know?
Nice rant :-) A bit overboard, though—may I make a suggestion? Read it again, but replace “LW” with “internet discussion forum”. That should put your statements like “LessWrong frames itself as an alternative to academia” or “LessWrong has repeatedly rejected academia” into proper perspective.
LOL
You do realize that LW has no shortage of grad students and even gasp! actual academics who read and post here?
Increasingly I post my thoughts to Facebook for the following very simple reason: If I don’t like a comment on my status thread, I click ‘x’ on it, and then it’s gone without a trace. Intellectual elites who are not teenagers living in Wyoming will often have other mailing lists, or even other live human beings, from whom they can get consistently high-quality conversation with none of the dreck that emerges when all of real life’s checks and balances are disengaged.
This isn’t to say that to engage intellectual elites you must offer an ‘x’ button on all comments on their post, just like Facebook does—though that would be a good start at not having them walk away and go someplace where there’s only intelligent people to talk to. I’ve been wondering if it would be possible to design a less democratic karma system which would serve the same function, though more in a context of “What is the successor to Wikipedia?” or “What is the successor to peer review?” with trying it as a successor to Less Wrong just being one possible way of testing out the latter more important functions. One also notes that many academic types have self-contradictory beliefs about ‘censorship’ which will prevent them from clicking ‘x’ on comments on their own posts, so that they instead go somewhere else with more heavily selected people or somewhere that Internet folk can’t comment at all, so that it is desirable if it is not the academic who has to click ‘x’, or if the academic is not the only person who can click ‘x’.
I finally note that to get people to stick around, you have to offer them a pleasant experience in the short-term and continuing gains to their life in the long term. Facebook offers the former but not the latter.
This. I don’t really have a burning desire for a general interest high-powered messageboard because I get lots of my general interest feedback from college friend/google reader hivemind/friends of friends/etc. And that discussion ends up funnier and more personalized, since my friends have more context about my questions and more shared allusions. When I need information they can’t supply, I go to a specialized forum or use my own blog and facebook to put out a general call for help from people I may not know I know.
I wonder whether there’d be any sense in implementing kill lists as an option: if you wanted to, you could put any user on your ignore list, after which their comments (and any responses to them) would show up to you as hidden, just as if the comments had been heavily downvoted.
It would act as a bit of a compromise between letting everyone just delete anyone else’s comments, and being forced to see everyone’s comments by default.
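A minimal sketch of how such a per-user kill list might filter a comment thread. All field names, and the assumption that parents appear before children in the list, are illustrative choices for this sketch, not an actual LW schema:

```python
# Hypothetical per-user "kill list": a comment is hidden for a given viewer if
# its author is on that viewer's ignore list, or if any ancestor comment is
# hidden (so replies to ignored users collapse too, just as replies to
# heavily downvoted comments do).

def hidden_comments(comments, ignored_authors):
    """comments: list of dicts with 'id', 'parent' (None for top level),
    and 'author', ordered so parents appear before their children.
    Returns the set of comment ids to hide for this viewer."""
    hidden = set()
    for c in comments:
        if c["author"] in ignored_authors or c["parent"] in hidden:
            hidden.add(c["id"])
    return hidden

thread = [
    {"id": 1, "parent": None, "author": "alice"},
    {"id": 2, "parent": 1, "author": "troll"},
    {"id": 3, "parent": 2, "author": "bob"},   # reply to the ignored user
    {"id": 4, "parent": None, "author": "carol"},
]
print(hidden_comments(thread, {"troll"}))  # {2, 3}
```

Because the hidden set is computed per viewer, everyone else’s view of the thread is untouched, which is what distinguishes this from site-wide deletion.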
Great minds.
Ah, your comment was so down on the page that I didn’t see it before replying. :-)
Meaning that you’re posting them more often there rather than here? or is the tradeoff against posting someplace else?
If the tradeoff is against LW this is surprising to me because I find the quality of comments and ease of navigation on LW to be much better than on FB, including on your FB posts, which I follow, and this is presumably after you’ve already deleted the worst comments.
Maybe the difference is that on LW I don’t have to read all the replies to what you write and so can easily find the top comments, but you’re reading all the replies in both cases so that LW’s vote up feature isn’t providing you the same advantage that it’s providing your readers like myself.
This sounds a great deal like the Knol-and-Citizendium-vs.-Wikipedia disagreement over whether subject-matter experts should have to put up with having their work edited by uncredentialed dogs. It seems that exposure — actually getting readers and responses — is more relevant than tight authorial control in creating things that people actually find worth reading and working on.
It’s easy to cite Knol and Citizendium as failures of scholarly curation, but what of Springer’s Encyclopedia of Mathematics? I know at least five epic-level mathematicians who wouldn’t dream of writing an article for any random website, but who jumped at the chance to write a section of the EOM.
Or the Stanford Encyclopedia of Philosophy, which tends to be my first resource to visit if there’s a new philosophical topic I’m interested in.
Does anyone read what they write? I did not know of the EOM until you mentioned it, although I do have occasion to look up mathematical topics now and then. I have never known Google to turn up a link to it, and checking a few searches now, it’s nowhere in the results. Wikipedia is always in the first page, and usually Mathworld also.
Yes? One can usually find a concise introduction to X in it, and it’s typically easier than doing a lit review oneself.
That’s Springer for you. They’re not exactly new media geniuses.
Isn’t what you’re asking for basically traditional moderation? That can work to some extent, but I’ve also observed issues with bias, cliquishness, non-transparency, and general abuse of power in heavily moderated forums, so there’s room for improvement.
These are all obviously problems with any typical democratic karma system.
Those issues don’t tend to be as bad with a karma system because the power is more distributed, so there’s less of an issue with any given individual voter acting corruptly. Karma voting can suffer from bandwagoning and group-think though.
Previously.
Individual voters acting corruptly. Previously. And previously.
I’m not saying they don’t do it, I’m saying it has less effect than biased moderation.
Are you going to explain why, or just keep re-asserting it?
This is just based on my personal observations from spending a lot of time on both traditionally moderated forums and on reddit and LW. In particular I feel more open to express myself on a forum where downvotes are the main form of punishment rather than banning.
He did. You are being disingenuous.
ShardPhoenix did not come close to establishing that you could expect moderation to be biased enough that it would be worse.
ShardPhoenix explained. Whether you or paper-machine happen to disagree with that explanation is an entirely different question from whether he explained. As such, this question is logically rude.
ShardPhoenix did not explain this (i.e. why those effects are stronger, not that they exist), and that was what the supposedly-logically-rude comment was in reference to.
Providing some set of effects that enter into a complex interplay and then asserting that they dominate is an incomplete argument. Noticing this is nothing like the examples of logical rudeness in your link. It’s not approaching this either.
If you believe that, I’ve got an invisible dragon in my garage up for sale.
Agreed. Tolerating dreck kills communities. We’d be better off if the bottom 70% of comments were invisible. But the ratio of negative to positive feedback on Less Wrong is already enough to turn good people away, so I rarely downvote.
I would trim the fat far more aggressively if pruning comments didn’t feel like punishing people.
That’s easy to test. Make a private club—an online community where you have to be invited to join and will be kicked out if you don’t measure up. Do you expect such a private club to be a good replacement for LW?
Come to think of it, do you happen to know ANY active, vibrant, useful communities which aggressively expel people not only for being trolls or assholes, but just because they aren’t good enough?
Yes… but I’m not allowed to talk about them on the wide open internet where random people might hear about them.
You could talk about ones that used to be active and useful in the past but have since died.
Good on you.
I’m not suggesting that removing 70% of commenters would lead to a more vibrant community. I am suggesting that removing (or hiding) 70% of comments would lead to a more vibrant community. Intelligent commentary and discussion drown when articles accumulate hundreds of comments, for example.
My grandparent thread was inspired by Eliezer’s, but after I pared it down to a couple of sentences it really isn’t about attracting elites anymore. I’ll flesh the idea out in a comment on the Open Thread (and later link to it here), if you’d like to continue discussion.
ETA I’ve fleshed out the idea on the Open Thread, so let’s please move discussion there.
I think after you make a habit of removing 70% of comments, about 70% of your commenters would decamp for better pastures. Not to mention the quis custodiet ipsos custodes? problem.
Let me offer you an alternative: an ignore list. Anyone can make invisible any comment or any commenter he dislikes or thinks not worth his time, but the vanishing act is for his eyes only; the rest of the visitors still see everything there is. Each can tailor the appearance of the site to his individual taste.
That’s not a perfect solution for a variety of reasons, but I think it’s better than site-wide pruning of “unworthy” comments.
You can’t create an algorithm for generally promoting good comments—that would have to be an artificial intelligence that could distinguish a good comment from a bad one. You can only create algorithms that make it more easy or more difficult to protect the community values… whatever they are.
Imagine a website with 10 people, where an 11th person comes and writes a good comment. But for some irrational reasons, the original 10 people all dislike the comment. Does the system allow them to remove the comment? Yes or no?
If you say “yes”, you have the “quis custodiet ipsos custodes” situation. But if you say “no”, then the situation will be exactly the same if the 11th person posts a genuinely bad comment… the original 10 people will not be allowed to remove it. Which is bad, and much more frequent.
That’s not a good solution! It means that if there are hundreds of trollish comments on the website, regardless of how all my friends downvote them, I still have to see all of them. Too much noise.
We will have to disagree about that.
I explicitly do NOT want other people to filter my information input. Don’t take this as an absolute—I’m fine with spam filters—but at this point in this particular context we do not have “hundreds of trollish comments” and what’s often downvoted is what the local population disagrees with.
I don’t want another echo chamber.
I believe it’s because we are a relatively unknown website. We had a few trolls in the past, but they gradually went away or had their accounts deleted. With more fame, this could change… although until that happens, I cannot provide exact data.
Private bittorrent trackers come to mind. Though over there, “good enough” is not measured by quality of conversation, but by your ability to keep up a decent ratio.
The problem is that this isn’t as well correlated with karma score as one would like.
From an elite perspective, most of LW discussion would consist of trivia, internal affairs, uninformed opinion, and superficial disputation. The community might have curiosity value because of its idiosyncrasies, or more than curiosity value for someone who had an independent reason to be interested in its themes or practices; so a few such people might study it once, and a rarer few might lurk, but hardly anyone of that status is going to become a regular.
The term “intellectual elite” rubs me the wrong way. It makes me think of Mensa and other groups with too much ego for their own good. Isn’t there something more neutral? “Elitism” is not a good label to be stuck with.
Suppose we’re looking for something like “the sort of people who get frequently cited by other people they don’t know”. What word would you prefer to use to label a group like this?
This seems like a good definition, but the term that comes to mind is “celebrity”, not “elite”.
I don’t remember seeing many Britney Spears quotes in Nature. It does get a bit tricky and circuitous with just who’s doing the quoting though. Political talk show hosts probably get quoted by lots of people, for example, but don’t seem to be the people we’re looking for here.
Okay, how about this:
Intellectual elites are the largest cultural cluster of such people who get approvingly cited by both other elites and non-elites and whose sayings people would approvingly cite even if they did not know anything about the person saying the thing, merely on the strength of the substance of the thing.
This would rule out infamous people, who get referred to a lot but not approvingly; populist celebrities, unless they manage to form their own large clique of mutual affirmation; and run-of-the-mill celebrities whose sayings are only interesting because the person saying them is famous.
You’re defining intellectual elites basically through social/memetic influence and I’m not sure that’s the right approach.
Defining some sort of intrinsic intellectual quality at a useful level to someone who isn’t already expected to know the concept we’re trying to define seems a lot harder than looking at memetic influence. Are there other big failure modes than getting dominated by popular populists, and are there obvious problems with the idea of screening off the populists by also measuring the interconnected appreciation in the hopefully true elite cluster?
Thinking about this a bit more, it does seem like it might just get us the cluster of people running the mass media instead of academia. Then again, the mass media refers to academia a lot more than academia refers to the mass media, so perhaps we could still get somewhere by following the flow through the clusters.
Depends on your goal. If your goal is to influence the culture, this is a useful definition.
Luke’s goal seems to be “Engaging Intellectual Elites at LW”. As I mentioned somewhere in the comments, I don’t know what Luke’s terminal goal is.
I thought it meant “people with specialist training” as opposed to the non-theoretical parts of LW that are about everyday instrumental rationalism.
To me ‘elites’ means ‘high status’. But this does not necessarily correlate with ‘high competence’.
I have the feeling that by “intellectual elites” the OP just means “non-stupid people” (for specific values of “stupid”) :-)
Ignore user feature.
This gives most of the benefit of the “give people facebook censorship power” preference of Eliezer without the catastrophic game theoretic implications of doing that in an environment where you cannot choose who you subscribe to. I expect the ability to block users to drastically cut down both on perceived and actual low value content on the site and significantly improve user experience.
In this vein I’d find some sort of personal user-tagging system (similar to what RES offers) quite useful. I sometimes get users mixed up based on writing style or tone, and something like this would help distinguish that-user-I-found-obnoxious-but-quite-insightful from that-user-who-was-just-obnoxious.
This would also solve several recurring drama festivals. Nice suggestion.
It can be difficult sometimes to convince A that they need to block B for the sake of their own sanity.
It’s also a feature that can be implemented as browser plugin if anyone with that skill cares—with or without endorsement from on high.
If A ignores B, should we allow B to respond to the comments of A?
On the one hand, allowing invisible responses gives B the last word in everything.
On the other hand, disallowing invisible responses encourages B to make off-topic responses somewhere else (as followed the wake of the troll tax).
Yes. If other people also do not like B’s comments then they are free to ignore B too. If the other readers do like B’s responses then chances are A really is speaking bullshit and B correcting them is a desirable outcome.
I think people stopped doing that later. During the first days of troll tax it felt like a cool rebellion, but later it became merely a trivial inconvenience.
I think that in some situations individual settings may help, but generally global settings are more useful.
Imagine a site with 100 users where 1 obvious troll comes and starts posting. Which option is better: (a) the first five users downvote the troll’s comments so they become invisible for the rest of the users, or (b) each of the 100 users sees the troll’s comments and must remove them individually? Now imagine dozen trolls.
I believe the former is better, because it requires 5% of the work to achieve the same result. And we need to get a good work-to-result ratio, especially because as the site becomes more popular, it will attract more of the worst kind of users. There are people ready to post thousands of stupid comments. There are people ready to make a new user account every few weeks (perhaps using the same name and appending a new number) to get out of everyone’s personal killfiles. There are people ready to use proxy sites to register dozens of user accounts. I saw them on other websites. The more popular a site is, the more it is exposed to them. So it would be good to have mechanisms to deal with them automatically, globally.
For example, current LW would be vulnerable to the following kind of attack: someone (one person using proxy servers, or an organized group—just imagine that we attracted the attention of a larger mindkilled group and we seriously pissed them off) registers hundreds of user accounts, posts some comments, upvotes each other, accumulates a ton of karma, downvotes everyone else. A complete site takeover, possible to be done by a simple script.
There is a simple mechanism that would prevent that: Don’t give new users rights to upvote. A new user may only post comments, until they get for example 20 karma from the existing users; and only then they are allowed to upvote others. For more safety, introduce a time limit: you have to get 20 karma points and then wait another week, and only then you are allowed to upvote. -- A similar strategy is used by Stack Exchange: users get rights gradually, so there is a limited damage new users can do. You pay for your rights by contributing the content. And it seems to work.
EDIT: And if you want to invite someone important, who wouldn’t have the patience with the rules, you (website admin) can simply create an exception for them, so they can post their article immediately.
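The gating rule described above can be sketched in a few lines. The threshold values, field names, and the whitelist mechanism for invited guests are assumptions of this sketch, not LW’s actual implementation:

```python
from datetime import datetime, timedelta

# Hypothetical gradual-rights rule: a user may upvote only after earning a
# karma threshold AND waiting a further delay after first crossing it.
# An admin-maintained whitelist bypasses the rule for invited guests.

KARMA_THRESHOLD = 20
WAIT_AFTER_THRESHOLD = timedelta(weeks=1)

def can_upvote(user, now, whitelist=frozenset()):
    if user["name"] in whitelist:  # admin exception for important invitees
        return True
    crossed = user["threshold_crossed_at"]  # set when karma first hit 20
    return (user["karma"] >= KARMA_THRESHOLD
            and crossed is not None
            and now - crossed >= WAIT_AFTER_THRESHOLD)

now = datetime(2013, 7, 1)
newbie = {"name": "new_user", "karma": 3, "threshold_crossed_at": None}
regular = {"name": "regular", "karma": 40,
           "threshold_crossed_at": now - timedelta(days=10)}
print(can_upvote(newbie, now))                           # False
print(can_upvote(regular, now))                          # True
print(can_upvote(newbie, now, whitelist={"new_user"}))   # True
```

The point of the extra waiting period is that a script registering sockpuppet accounts must both contribute content good enough to earn karma from established users and then sit idle for a week per account, which makes the mass-takeover attack above much more expensive.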
Question: Someone (such as me, but it seems likely others may feel the same) reads this and asks themselves “Am I an intellectual elite?” or “Am I engaging enough on Less Wrong?”
How do you want that person to answer those questions?
I mean, there are at least a few failure modes I can imagine people falling into upon seeing that appeal:
A: Someone isn’t really all that intellectually elite, and they think “Ah! I’m elite, Luke wants me to post more.” Posts More, increases noise, decreases quality of discussion
B: Someone really is that intellectually elite, and they think “Well, I may have coauthored a paper, but I’m no elite like Yudkowsky.” Lurks, stays disengaged.
C: Someone really is that intellectually elite, and they really are engaging plenty, and they think “Lukeprog still wants more? I’m spending too much time posting at work as it is!” Disengages from exhaustion.
How do you avoid those problems, bearing in mind that not all of them are necessarily equally problematic or likely, and that it seems likely the list isn’t exhaustive?
(Ninja Edit) As an example of the list not being exhaustive, a quick review of comments that had occurred after I typed most of this but before I submitted implies I should add the following:
D: Those questions are ill-formed, and the premise behind them needs to be rejected altogether.
Note, personally, D is actually one of the reasons I comment less often on a thorough level. It can be fairly time consuming to cover all of the bases for a comment and make sure I’m not assuming things that aren’t in evidence before I even start talking. But as per the above, I’m not even sure if me trying to check my comments for unwarranted assumptions to that level is desired.
E: Someone who thinks of themselves as intellectually elite, thinks of Eliezer and LW as not intellectually elite (no degree; writes fanfic), and considers the request to be Luke seeking to borrow “elite” credibility for non-credible ideas. Posts yet another blog post demeaning LW.
F: Someone who thinks of the notion of “intellectual elite” as a social problem, an effort to privilege some kinds of mental work (getting rich people to pay you to write philosophy papers) over other mental work (say, organizing religious charities, labor unions, or political movements), and considers the request to be a symptom of LW’s lack of spiritual, social, or political consciousness. Posts yet another blog post demeaning LW.
Is the “engaging” in the title a verb or an adjective?
The title should read “Engaging Engaging Intellectual Elites...”
You cite MathOverflow as an excellent community, and I agree. Joel Spolsky (Joel on Software, and later co-founder of StackOverflow) gave a talk on The Cultural Anthropology of Stack Exchange and how they attracted and maintain communities. The talk starts off with a criticism of deeply nested replies. I disagreed with that criticism, but found the rest of the talk concrete and lucid.
One issue, and I haven’t read the other comments to see whether it’s already been brought up, is that it seems hard to justify time expenditure on LessWrong to elites whose time is valuable, or at least to the people that elites need to justify their time expenditure to. For example, time on MathOverflow can be justified to research mathematicians (and/or to grant committees) by the promise that it is a space to ask and answer questions in research mathematics and therefore to advance the cause of mathematics in general. It’s less clear to me what the justification for spending time on LessWrong is to, say, a smart business executive. (Or is this not the kind of audience you have in mind by the term “intellectual elite”?)
For example, neither Luke himself nor Anna Salamon spend much time posting on lesswrong, despite their affiliations and relevant interests. The same reasoning likely applies to others.
One thing which might be useful is hover text (title attributes) for the links throughout the sequences. The cross-references can be overwhelming—it’d be nice to hover over a link and see a quick summary of what’s being linked to and why it’s relevant.
As I see it the question isn’t so much about elites and non-elites, but about problem solvers and storytellers. Storytellers give you a reason to want to cross the ocean; problem solvers make sure that you have a boat that doesn’t sink. Much of the internet is populated by storytellers—someone might do a lot of problem solving in their day job, and then write stories about it in a blog.
MathOverflow, the top site cited as what Less Wrong should be aiming for, is very much about problem solving—you post a problem and can hope to get an answer from some really bright people within a couple of hours. But Jonah Sinick asked why more winners of mathematical prizes don’t post their thoughts for all to see—well maybe that isn’t the way to win prizes.
It’s hard to see where Less Wrong fits into this division. One would think that rationality would be about getting away from the stories that we have been told and getting down to problem solving—new ways to deal with akrasia, poverty or death. Less Wrong looks like it ought to be the public (i.e. storytelling) face of an enterprise which goes in for such problem solving—that is what internet forums often are. But when you dig deeper, you just tend to find more storytelling. I would guess that Eliezer has excellent problem solving skills, but the Eliezer we see is very much a storyteller.
In the end it’s hard to see what Less Wrong is about. Informal discussion is all very well, but to get more interest from elsewhere it needs more of a sense of focus—what sort of problems should we be looking at and what is being done to tackle them.
When I saw the title, “Engaging Intellectual Elites at Less Wrong” I thought—woohoo, they’re interested in being interesting again!
I return to Less Wrong very frequently but the topics don’t seem to interest me as much as they did years ago. Most articles seem to be about meet-ups, and with some thought I could identify why some of the other topics (such as the ones about optimal charity) are less interesting to me.
I thought the topics on metaphysics were really fascinating. I’m not sure that’s the correct term but metaphysical what-is-the-structure-of-the-universe-possibly-like and what are different ways I can imagine my role in the system in ways that stretch my brain? This seemed something very unique about the content here at Less Wrong when I started spending time here.
I find interesting stuff on the ‘Recent on Rationality Blogs’ section often and enjoy watching TED talks.
...I’ve been wondering for a while if this disconnect in interest means I belong to a different subset than the typical target of this blog.
(The last sentence is the most sincere and subtle way I could think of to distance myself from the immodesty of self-identifying as an intellectual elite. In contrast, I appreciate leplen’s cavalier approach in the opening sentence.)
Fascinating topics include:
considering death and exploring ways that can be felt about it at different emotional/cognitive distances
the simulation hypothesis, levels of reality, layers of maps versus territory, etc
understanding values, stability of values, subjective versus extrinsic morality, wire-heading
If one thinks of LW not as the final destination to which intellectual elites should be attracted, but as a useful and productive entry point for LW-associated organisations (whether formally affiliated or loosely linked), as well as for LW itself, it seems to me that things are largely on the right track. Judging from the welcome threads there seems to have been a great deal of success in recruiting young members especially, but not only, through HPMoR. And often there is a youthful feel to discussions here, quite appropriately as many young members are discovering a like-minded community for the first time and learning how one can move beyond Spock-rationality to something deeper—and establishing a reputation in the community at the same time.
But at the same time others seem to be slowly becoming more engaged in LW, or at least aware of it, thanks to the greater engagement of LW and LW-associated organisations with the more “mainstream”. From a different perspective the question might be, what does it gain an academic researcher with an existing reputation in the “mainstream” to participate here? And, is this something most LW members would want? After all LW has long had a strongly-defended reputation as an outsider organisation—would more involvement by “mainstream elite” take away those aspects of LW that make it what it is? As a “mainstream” academic myself I think this would be highly likely.
There also seems to be some kind of tension here—LW would like to attract more intellectual elites while at the same time MIRI is reducing efforts in the same direction (see the MIRI 2013 strategy) due to developments in the academic mainstream. Does this indirectly speak to the same issue?
I don’t see much downside in being seen as becoming more mainstream. LW isn’t itself a mainstream academic organization, but it’s not like we’re all fundamentalist hipsters.
That’s not actually true. MIRI is putting less effort into grassroots movement-building, but is actually putting more effort into attracting people from the academic elite. The name change was partially motivated by the need to appeal to mainstream academics, and MIRI has been fairly successful at attracting academics with little or no prior exposure to LW to their logic research workshops.
I initially had an extremely negative emotional response to this post. Then I realized you actually point out exactly why I had such a response with point number 2.
Less Wrong has a language problem.
To someone first coming to read the discussion forum with only a vague idea of what Less Wrong represents (maybe even slightly biased towards thinking of the community as an elitist cultish ivory tower exercise in intellectual masturbation), this post, to be short, doesn’t help.
What makes this sort of funny is that your actual message is just, “We want more high quality content producers.” which is completely harmless and the implicit goal of any fledgling online community aiming to grow. But you wrap this message in the exclusionary, “We want more smart people.”
I’d argue that you achieve both goals by just attracting more people to the community. Consider something like the sprawling Somethingawful forums. The only barrier to entry is $10.00, and there’s a thriving science and academics subforum.
Definitely remove the jargon. This would help on the cult side of things. And for pete’s sake, don’t go fishing for “Intellectual Elites”, just try and grow the community—elites will come in due time.
I’m a 2001-regged user on Something Awful and I recently quit entirely (after having been drifting away for a while) due to frustration with how irrational that place is. I don’t want LW to become like that.
Are you suggesting that LW should evolve towards something resembling SA? 8-0 That’s an… unusual idea.
Luke was asking for “high quality relatively general interest forums”. Somethingawful is damn near foundational as a general interest forum community, and has high quality, albeit niche subforums.
The point wasn’t that Lesswrong should be more like Somethingawful, the point was that attracting a larger user-base will incidentally achieve Luke’s goal.
I will admit that my knowledge of SA and its environment and culture is limited, but nothing I’ve seen would cause me to think of them as particularly “high quality”.
I don’t know what his goal is.
I haven’t read the SA forums regularly in a while, but AFAIK they’re like Reddit: the good stuff’s tucked away in (some of) the lower traffic specialist subforums, while the more popular, general subforums naturally lie at the lowest common denominator level because of the sheer number of people reading & posting.
My webpages are sometimes linked on the SA forums (usually in relation to either nootropics or Neon Genesis Evangelion; I’ve poked around the relevant threads looking for useful points, citations, or criticism). Nothing about them ever made me think that the SA forums were unusually insightful or intelligent.
This is why veteran SA goons never read the general discussion page. =)
SA is a good forum as long as you consider “Hey everybody look at this asshole!” a good style of argument.
With “asshole” generally defined as someone arguing unpopular positions.
Often it’s just people being stupid, and the approach works quite nicely. But then it also plays out that way pretty much every time someone does it.
I can’t believe that no one has mentioned Ask Metafilter (http://ask.metafilter.com/). There seem to be a large number of people there capable of intelligent discussion (not quite at the level of Less Wrong), but they cover a far broader range of topics.
How do you measure or assess a person’s “metacognition”?
For that matter, how do you measure or assess your own metacognition?
I think LW would profit immensely from a team of professional moderators who
take a thread with lots of comments and turn it into a structured overview of arguments and counterarguments (and give an evaluation of how open or complete the discussion is)
mark open discussion points and important aspects that haven’t gotten proper attention yet
rate comments in a way that is more informative than mere up/downvotes, possibly along more than one dimension (relevance / logical structure / quality of the central argument)
promote high-quality comments, especially if they put a new perspective on a topic that had seemed to be closed
take interesting spin-off discussions and promote them as threads of their own
Of course, funding these moderators might run into this problem:
http://lesswrong.com/lw/3h/why_our_kind_cant_cooperate/
But if we don’t get funding organized, we really suck as rationalists.
That sounds awfully expensive. Also, it might be genuinely hard to find someone who can do that competently enough to do more good than harm.
One lower-cost way to do this is to explicitly ask people to expand a comment into a Main-level post, or to ask people to pause a disagreement and expand it into a dialogue in Discussion. This is a technique I use to curate comments on my blog. It helps highlight high-quality material without making everyone trawl through the threads.
How exactly are you measuring “quality”? Is it an objective metric that we can apply to any blog or forum? If so, why don’t you write a piece of code that will scrape any blog or forum and output its quality score? That would be a good place to start, IMO.
That problem is AI-complete.
Firstly, I’m not convinced that this is the case (though I am open to being convinced). Secondly, what does “quality” mean if this is true? Does it mean “a forum that lukeprog personally likes”, or is there more to it?
See Zen and the Art of Motorcycle Maintenance. Or, alternatively, skip it, because it’s horribly overrated.
I don’t see the relevance. The notion of “quality” as an independent substance, as it is presented in the book, has little bearing on lukeprog’s post.
FWIW, I did enjoy Zen and the Art of Motorcycle Maintenance. The philosophy in the book is shoddy and full of holes, of course, but it’s not a book about philosophy; it’s a book about one man’s struggle with love and madness.
Then perhaps MIRI should find some smart people with good math skills to create the AI.