Intellectual insularity and productivity
Guys, I’d like your opinion on something.
Do you think LessWrong is too intellectually insular? What I mean by this is that we very seldom seem to adopt useful vocabulary or arguments or information from outside of LessWrong. For example, all I can think of is some of Robin Hanson’s and Paul Graham’s stuff. But I don’t think Robin Hanson really counts, since LessWrong used to be part of Overcoming Bias.
The community seems to not update on ideas and concepts that didn’t originate here. The only major examples fellow LWers brought up in conversation were works that Eliezer cited as great or influential. :/
Another thing: I could be wrong about this, naturally, but it seems clear that LessWrong has not grown. I’m not talking numerically. I can’t put my finger on major progress made in the past two years. I have heard several other users express similar sentiments. To quote one user:
I notice that, in topics that Eliezer did not explicitly cover in the sequences (and some that he did), LW has made zero progress in general.
I’ve recently come to think this is probably true to a first approximation. I was checking out a blogroll and saw LessWrong listed as Eliezer’s blog about rationality. I realized that essentially it is. And worse, this makes it a very crappy blog, since the author doesn’t post new updates any more. Originally the man had high hopes for the site. He wanted to build something that could keep going on its own, growing without him. It turned out to be a community mostly dedicated to studying the scrolls he left behind. We don’t even seem to do a good job of getting others to read the scrolls.
Overall there seems to be little enthusiasm for actually systematically reading the old material. I’m going to share my take on what I think is a symptom of this. I was debating which title to pick for my first ever original-content Main article (it was originally titled “On Conspiracy Theories”) and made what at first felt like a joke but then took on a horrible ring of truth:
Over time the meaning of an article will tend to converge with the literal meaning of its title.
We like linking articles, and while people may read a link the first time, they don’t tend to read it the second or third time they run across it. The phrase is eventually picked up and used outside its appropriate context. Something that was supposed to be shorthand for a nuanced argument starts to mean exactly what “it says”. Well, not exactly: people still recall it as a vague applause light. Which is actually worse.
I cited “Politics is the Mind-Killer” as an example of this. In the original article, Eliezer basically argues that gratuitous politics, political thinking that isn’t outweighed by its value to the art of rationality, is to be avoided. This soon came to mean that it is forbidden to discuss politics in Main and Discussion articles, though it does live on in the comment sections.
Now, the question of whether LessWrong remains intellectually productive is separate from the question of whether it is insular. But I feel both need to be discussed. Even if our community wasn’t growing, if it wasn’t insular either, it could at least remain relevant.
This site has a wonderful ethos for discussion and thought. Why do we seem to be wasting it?
Intellectual insularity happens because we don’t read other sources en masse, so we can’t discuss them. Sure, good books get mentioned in posts, but that hasn’t created any collective action. What could?
Proposal: At the beginning of the month, let’s choose and announce a “book of the month”. At the end of the month, we will discuss the book. (During the month, discussing the book should probably be forbidden, to avoid spoilers and to avoid discouraging people who haven’t read it yet.)
Have we grown as a website? I don’t know—what metric do you use? I guess the number of members / comments / articles is growing, but that’s not exactly what we want. So, what exactly do we want? First step could be to specify the goal. Maybe it could be the articles—we could try to create more high-quality articles that would be very relevant to science and rationality, but also accessible for a random visitor. Seems like the “Main” part of the site is here for this goal, except that it also contains things like “Meetups” and “Rationality Quotes”.
Proposal: Refactor LW into more categories. I am not sure how exactly, but the current “Main” and “Discussion” categories feel rather unnatural. (Are they supposed to simply mean: higher importance / lower importance?) A quick idea: Announcements for information about SIAI and upcoming meetups; Forum for repeating topics (open discussion, rationality quotes, media thread, group diary); Top Articles for high-voted articles, and Articles for the remaining articles. In this view, our metric could be to have enough “Top Articles”, though of course having more meetups is also great.
Also, why are Eliezer’s articles so good? He chose one topic and gradually developed it. It was not “hit and run” blogging, but more like teaching lessons at school. Only later, another topic. That’s why his articles literally make sequences; most other articles don’t.
Proposal: We could choose one topic to educate other people about, such as mathematics or statistics or programming, and write a series of articles on this topic. (This could also be done by one person.) It is important to have the articles in sequence, with a smooth learning curve, so they don’t overwhelm the layman immediately.
The common factor to all three proposals is: some coordinated action is necessary. When LW was Eliezer’s blog, he did not need to coordinate with himself, but he was making some strategic decisions. To continue LW less chaotically, we would need either a “second Eliezer” (for example Luke wrote a sequence), or a method to make group decisions. Group coordination is generally a difficult problem—it can be done, but we shouldn’t expect it to happen automatically. (One possible solution could be to pay someone to write another sequence.)
Upvoted. I think that refactoring LW is a strong move, but it’s also one which has been discussed for a while and hasn’t happened. I think that’s because there’s never been a well-presented case for new sections, but the site admins are the ones to talk to about that.
I like this idea but it seems like it’s on the wrong side of the 80⁄20 value/effort split. badger’s summary of EPHJ is one twentieth of the length of the book it summarizes, but contains at least half of the value one gets from reading that book.
Kaufman’s Personal MBA comes to mind as another thing to model off of. He’s read hundreds of business books and has distilled them down to create a mostly complete business education in 400 pages. The book reads like the blog: an explanation of one part in a few pages, and then on to the next part, with the parts fitting together to make a lean system.
Perhaps a summary contest? Identify some book as a valuable addition to LW, and announce a contest with a prize and deadline for posts that summarize the book or possibly posts that turn the book into a sequence. (The candidate posts might get their own section, with the best one or a hybrid of the best ones being pushed to main, so that people don’t have to see three or four of the same thing if they don’t want to.)
Why not post a list of such valuable or potentially valuable books and see if anyone has already read them and is willing to do a quick skim and summarise?
I should probably add that I’m opposed to the idea of a summary contest because it will cost a relatively large number of people a lot of time and gain them very little.
Summaries aren’t too useful. On the other hand, commentaries and in-depth discussion might be useful. For example, I’ve occasionally thought of doing a chapter by chapter discussion of Good and Real, with additional material like a Haskell implementation of his Quantish universe (since I don’t really understand it).
Please do this. I’m finding it impenetrable.
Mmm. Active reading of quality books is its own reward; the prize is for sharing the notes, and for bringing the option to attention. It seems fine compared to a book club, but I agree that it’s generally an economic model that favors the buyer over the producers.
Unless you’re talking about fiction I’m not sure why spoilers matter. Better to encourage people to discuss parts they don’t understand.
I think we could rewrite Eliezer’s articles. I would disagree with the statement that they are “so good”. The material is great, of course, but the way he goes about conveying it is not for everyone. I can’t really see a whole cohesive structure as I am going through, and frequently I am not sure what point he is making. His use of parable just obfuscates the point for me; his constant referral to his story “The Simple Truth” in Map and Territory really bothered me, because that story was difficult for me to get through and I just wanted to see his point in plain text. I still have trouble organizing LW material into an easy-to-think-about structure. What I am looking for is something more resembling a textbook: very structured, somewhat dry writing (yes, I actually prefer that), maybe some diagrams. I’d do it, but I am not sure I have a strong enough understanding of the material to do so.
Isn’t that precisely the end goal of SIAI?
(#EliezerYudkowskyFacts)
I think this is a great analysis, and I like the specific proposals. I got involved in LW only about a year ago, and while I read through pretty much all of the Sequences, I felt a bit left out that I couldn’t participate while they were originally unfolding. A “book of the month” program, or else some kind of coordinated discussion of specific topics, could go a long way toward allowing that kind of ongoing participation.
I also really like the proposed re-categorization of posts. I’m never quite sure what’s supposed to go where, and it seems like a lot of the most important stuff (like this post) ends up in Discussion. To state the problem more generally, there seems to be a natural divide in thing-space between “procedural posts” (announcements, meet-ups, quotes, etc.) and “substantive posts” (basically everything else). But we presently group “high-level” substantive posts along with procedural posts, rather than with other substantive posts, which seems awkward. Seems better to first make the basic distinction between procedure and substance, and then find a way to identify the high-level substantive posts within that category.
On the subject of coordinated action, who actually has the authority to make this kind of change, and what sort of process would that entail? I don’t really know much about the current LW org chart, but these sorts of concerns seem to be coming up with increasing frequency, so I’d like to figure out how we can actually do something about it.
Speaking of OB, we have an expansive list of Eliezer’s posts organized by topic, but no such sequence exists for Robin Hanson. His posts on status-seeking are incredibly important for human rationality.
I propose that we produce a sequence devoted to RH’s posts. If someone who has read most of his posts can point me in the right direction, I volunteer to do it. My summer is off from classes, so I just have work and my private projects, and a public project would be a good way for me to signal usefulness to LW, OB, and the communities associated with them.
EDIT: RH gave me his blessing. I’m reading OB; I’ve just crossed into 2007. Writing down major themes and interconnections as I go.
I think he has an unfortunate tendency to treat status as a golden hammer, attempting to explain everything in terms of status, whether or not it’s a good explanation.
His over-eager application of it to everything new he hears does not greatly diminish the usefulness of his foundational work on the general effects of status. Those are the posts Karmakaiser would want an index of.
Of all the things you and I agree on, I never thought this point would be one.
It wouldn’t surprise me at all if Robin himself agreed with this criticism.
Sure, it would increase his status among rationalists. :D
If you’re willing to spend your summer doing whatever Less Wrong thinks is high-value, are you taking suggestions? It’s plausible that there is higher value stuff than this proposal, possibly much higher value.
My education/math level is college sophomore, with an intro C class and Calc I under my belt, so anything I do would have to be basic grunt work. My private projects were just going to be getting a head start on fall classes with SICP and Calc II.
Within those parameters of my own usefulness, yeah, I’m cool doing stuff for LW/SIAI/CMR. PM me if you have suggestions.
When can we expect the first post of this series?
I’ll be frank.
The black dog is stronger than I. My mother died in May, and I took on these projects in an attempt to stay busy and not depressed. Unfortunately, all I did was damage my reputation by appearing flighty and flaky. After losing about $60 on Beeminder, I realized that any special projects must be put on hold until I sort myself out. It’s a good project to do, Hanson has a lot of unique insights, but I cannot, for now at least, do them.
That is so horrible to hear! My condolences.
Well, it wasn’t for this, but I did compile 24 of the better posts on medicine at http://www.gwern.net/Drug%20heuristics#fn11 (as an illustration of the low marginal value of much medicine).
I got sidetracked by Evil Real Life Issues of Local Importance and didn’t make much progress in July on a comprehensive post about a single topic. (My first post was going to be on the theme “If truth is Lovecraftian (that is, there exist real basilisks), why use truth?”, using the recent post as a fulcrum and other posts to support the theme.) That approach is proving too complex for my available time commitment, so I’m just going to break it up into shorter posts. That should allow me to get something out next week and produce a summary of large-ish chunks of posts and how they relate to each other, assuming the Evil Local Issues have truly gone away and I can get some hobby, non-work-focused stuff done. Depressingly, I didn’t have as much time for summer projects as I thought, so I’ll only be able to sum up a small amount of the whole; classes begin in the fall, and then I’ll be juggling work and school and simply won’t have energy for anything but short posts.
Sorry for the delay, but it looks like Hofstadter’s Law has struck me. You’ll most likely see progress intermittently, with larger swaths being done during academic breaks. I honestly thought my summer would be freer; again, whoops.
Robin tags his own posts by topic.
EY also posts links to other useful posts of his for reference, but I find reading the sequences in indexed order easier than reading by tags or in chronological order. Every blogger has important ideas that they want to convey, and sometimes tags don’t do everything you need them to. As with EY’s posts, the karma/voting system was installed late in his blogging career, so his early posts in particular may be unduly ignored.
I imagine it’d be boring to index your own blog posts into an ebook-like format, since you already know your ideas. Since I haven’t read all of OB, it might even be fun for me to do it. I wouldn’t be procrastinating by reading OB anymore. It’d be working. Yay!
I’ll let RH be the final arbiter, since it’s his blog, and just email him asking if he wants something like this done. I’m a bored undergrad in need of a project, so why not?
Don’t ask if he wants it done. Ask if he has any objection to you writing something like this for LW.
Eh, it’s his blog so I’d feel better making it for his site and just linking his index in the sequences.
Right, but the point is to expand the intellectual horizons of LessWrongers. At the very least, to familiarize them with the origin of and arguments for many ideas they probably run across on this site all the time. A sequence that just appears in the wiki will have little immediate impact and more of a delayed effect (since sequence readers will also start reading the Hanson sequences).
They will show up in “New on Overcoming Bias”, so I think they will still be read live, though they might not be discussed on LW. Also, if they are posted on OB, I think RH will probably be more willing to give feedback, either before they are posted or in the comment section, which would much improve the quality of such articles.
But OB is now his personal blog; your writing posts there might seem like sending the signal that he wants OB to go back to what it was.
Maybe cross-post them? Though that might cause some odd feelings for those who read both OB and LW: “Oh no, where do I comment? I can’t decide!” Those who don’t read OB (many new posters probably aren’t even familiar with it) will also be more willing to comment on it here than there.
Cross-posting sounds good. Every important sequence index could follow a “Human’s Guide to Words”-style summary.
At the bare minimum, once you are done, a promoted Main post called “The Hanson Sequences” or something like that might work to grab people’s attention.
I really hope RH approves something like this. His thinking has been a great behind-the-scenes influence on many key rationalists and LWers. But since his posts are harder to conveniently cite, people straw-man his arguments (they especially like to do this with the status theories) or, worse, misapply or misuse them.
My email included a link to this comment chain so he’ll be aware of all this. I’ll let him dictate how and where the summary is posted for the RH sequences.
Oh, of course. I was just thinking out loud.
No problem. If he approves, I’ll post a top-level Discussion thread letting people know what’s going on, what shape he would like it to take, etc.
We tag articles by topic on LW too.
Considering his ideas have had such a great influence on us, why don’t we have summary posts of his positions to frame proper sequences? To give an example: a summary post like this one of Robin’s positions on a given topic.
Either I’m badly misunderstanding you, or your post is at odds with a great many facts about LessWrong and other internet communities. A few examples:
What??? LW is constantly citing and discussing science, philosophy, and other stuff that didn’t originate on LW. Indeed, most of The Sequences consists of stuff that didn’t originate on LW, as do almost all of my posts, as does lots of other LW content.
LW has made progress on many topics that Eliezer talked about on LW: decision theory, the science of human values, and more. Finding examples outside the topics Eliezer raised may be difficult because (1) Eliezer covered so many topics, and (2) Eliezer’s Sequences define the major subject matter of the blog. (E.g. we haven’t made progress on French politics because that’s not a topic of the blog.)
Yvain, myself, Anna Salamon, and many others have written hundreds of useful and well-liked posts since The Sequences. In what sense is it “Eliezer’s blog”? It’s also untrue that Eliezer no longer writes updates.
Sure, LW could be better, but what are you comparing to? Every time I try to have a conversation outside LW/OB I am slapped in the face by how much worse other communities tend to be. LessWrong is, by internet community standards, extremely high in intellectual productivity and non-insularity.
So… what am I missing? Have I misunderstood what you’re saying?
Yes, Less Wrong is better than all other places. But I hope you will agree that this is not an optimistic prognostication. I do not think we are doing particularly well if you just look at how we are doing, rather than comparing this place to other places.
I’d like to remind you of some of the words from my favorite essay, which is also one of your favorite essays:
I do not think we are doing the best we possibly can, and I think that is very bad.
I agree, but these are salient exceptions, not the rule. It is “Eliezer’s blog” in the sense that The Sequences are the most important thing here, but people are barely reading them (or so I hear).
They are so very, very rare, though. And the others you listed, indeed many of the others who made good contributions at all, have all but stopped.
I don’t think your point is strong evidence for your conclusion, unless you are directly observing insularity and low intellectual productivity when you visit other websites. In which case, it seems more prudent to just say that directly. It’s entirely possible that conversations on LW/OB are better, but are better only because (some) people have read the sequences.
Edit: Clarity
Intellectual productivity from the last two weeks:
Conspiracy theories as agency fictions
Armstrong’s thoughts on measuring optimization power
Open problems related to Solomonoff Induction
Loebian cooperation
Optimizing affection
Problematic problems for TDT
Avoiding motivated cognition
Several posts from How to Purchase AI Risk Reduction
Non-insularity from the last two weeks:
Gut bacteria and thought
Combining causality with algorithmic information theory
Debate on ethical careers
Zietsman on voting
Thick and thin
Iterated Prisoner’s Dilemma contains strategies that dominate any evolutionary opponent
Central planning is intractable
Computer science and programming links
Naturally many individuals will update. But as memories fade, I think the influence of articles like the cited ones will over time mostly remain only in thick, hard-to-communicate ways, such as how they calibrate some rationalist’s heuristics. My complaint isn’t that we fail to note or bring up interesting ideas; my complaint is that we fail to propagate them through the community the way we propagated original articles. We as a subculture don’t update. I also mentioned that we don’t propagate the original articles as well as we should. Ideas originating off-site on average get less debate and are seldom built on further. As several readers have pointed out, this might be ameliorated by better indexing. I suspect a big reason for this is that high-quality posts not in a sequence tend to be orphaned and more seldom read.
Concerning the cited productivity: reading the sequences and then reading everything since the sequences is a disappointing exercise. I do especially enjoy your work, and say Yvain’s, and yes, Eliezer’s core is the result of several years, perhaps even a decade, of low-intensity independent research and thought. It is enhanced by several early high-quality community members filling in the gaps and extending it, but still: I find it surprising that a much larger LessWrong has been unable to leverage enough crowd-sourcing, or even mine enough talent from readers who already spend large amounts of time on it, to make as much progress as EY did. To give a specific example of a failure to leverage brains: the LessWrong wiki is very useful, but it does not match EY’s original hopes by a long shot.
Did EY eat all the low-hanging fruit? Seems unlikely, but maybe he did. Regardless, we don’t seem to be in the process of standing on his shoulders.
How many of these will be referenced by anyone in two years time?
This is a good question. As of now, probably none.
We should be careful about what conclusion we draw from that. I have two ideas: (1) they all suck (for some value of ‘suck’); (2) LW is structured the wrong way for cumulative productivity.
Indexing is key, I think.
I think the problem is that these posts aren’t well-indexed, so they tend to get forgotten once they fall off the recent posts pages.
And the recent posts page is moving too fast.
(Which would not be such a problem if we had separate lists for articles like these and for the other articles.)
Wow. That answers that question. (I had previously been somewhat more convinced by the insular/unproductive discussion. It would seem I was too vulnerable to persuasion towards discontent. Oops.)
Well, as the great Iezer-el son of AIXI once wrote in the Scroll “Why Our Kind Can’t Concentrate”, LWers tend to be biased towards contrarianism, criticism, and anti-authoritarianism. So you’re hardly alone.
I may be missing the joke, but I think you are referring to “Why Our Kind Can’t Cooperate”.
It was a typo, but then I realized it was an equally valid way of describing the consequences of our biases: we can’t concentrate on any particular theory or approach...
I know. I’ve even found myself lamenting at times that here I too often find myself in the role of defending the orthodoxy. It’s highly unnatural!
This list noticeably lacks any historical analysis. My sense is that history studies on the level of Bureaucracy or The Politics of the Prussian Army would be met with indifference or disfavor. Analysis like that in Aramis, or the Love of Technology would be met with disfavor or outright hostility.
When the topic is human social engineering (like raising the sanity line), this is not evidence that members of this community are likely to be able to do the impossible.
I disagree with ‘noticeably’; it also lacks any civil engineering analysis.
I should probably phrase this point nicer.
I think a good knowledge of history is essential to successfully performing massive changes to society (like raising the sanity line). Even though good historical analysis is very difficult, and prone to significant bias, its importance to the task makes its absence worthy of remark.
Do you think civil engineering analysis is necessary for this task in the same way? Honestly, I think analogizing raising the sanity line to civil engineering is moving backwards.
A study of history is no doubt useful for ensuring massive change attempts do not fail in obvious ways, but that’s not to say it’s essential, nor that it’s important enough to make the list.
In Chapter 7 of MoR, Harry thinks the following:
The answer to that last question is NO. It would not be easy to fix the trends that led to the Reign of Terror. Believing it would be easy is an error on par with believing that there is strong empirical evidence of the existence of God. Believing that it might be easy after a little investigation is on par with believing that Friendliness is an easy problem. If Harry had spent 1⁄4 as much effort learning European history as he spent learning high-end physics, he’d know that already.
I assert that raising the sanity line is a harder problem than preventing the Reign of Terror once the French deposed Louis XVI. Not knowing history makes it essentially impossible to avoid otherwise obvious pitfalls. Reasonable folks could disagree about how much history to study, but total absence of investigation of history is not a rational amount given the stated goals.
I don’t exactly disagree, but I’m concerned you might be downplaying the bias you mention in an ancestor. My study of the field’s been fairly casual (and focused more on archaeological than historical methodology), but I’ve seen enough to know that academically respectable analyses vary wildly, and generally tend to line up with identity-group membership on the part of their exponents; most of the predictive power of history as a field also seems to lie in interpretation rather than in content. To make matters worse, we don’t have time to verify historical interpretations empirically; few respectable ones make significant predictions that’re valid on timescales less than a few decades.
If we’re interested in making predictions about the future based on the historical record, therefore, we’re left with the problem of choosing an interpretation based on its own internal characteristics. We do have some heuristics to work with, like simplicity and lack of post-facto revisions around major changes in the past, but solving this problem in a reliable way looks to me like it might be Friendliness-complete. And the consequences of failure are scarcely less dire than failing at Friendliness itself, if we’re using it to inform our approach to the latter problem.
I agree with you about how difficult the problem of finding unbiased history is—the problem is probably harder than gwern suggested. At best, this problem is Friendliness-complete, in that if Omega gave us a solution to Friendliness, it would include a solution to this problem. And I’m not optimistic that the best case is true.
I think solving the problem is a prerequisite to solving Friendliness. It’s probably a prerequisite for a rigorous understanding of how CEV or its equivalent will work. The fact that the community (and SIAI to a lesser extent) think this type of analysis is irrelevant is terribly disturbing to me.
Why do you believe this?
The FAI project is about finding the moral theory that is correct,(1) then building potential AGIs so that they implement that process of making decisions. I’m not aware of anything other than history that is a viable candidate to be evidence that a particular moral theory is correct.
Further, a FAI would need the capacity to predict how a human society would react to various circumstances or interventions. Again, history is the only data on how human societies react.
(1) I acknowledge the need to taboo “correct” in this context in order to make progress on this front.
It’s possible that you’re using “correct” to mean something completely different than I would use it to mean, but I don’t see how history is supposed to be evidence that a moral theory is correct. Are you saying that historically widespread moral theories are likely to be correct?
This is something that the AI is supposed to figure out for itself, not something that would be hardcoded in (at least not in currently favored designs).
I find the idea that ‘studying history is valuable for trying to do big things’ counterintuitive. I think it would be valuable for you to try to share your intuition as a post. I would find a set of several examples (perhaps of the form “1) big idea 2) historical evidence of why this idea won’t work well”) very useful for getting a sense of what you’re talking about. I’d also like to see some discussion of why mere discussion of object level lessons (say for example, “coordinating large groups of people is hard”) isn’t as good as discussing history.
Until someone does this, I doubt we’ll see much historical discussion.
Because society, unlike, say, physics, is a thick problem; the only way to have any chance of making reasonable decisions is to calibrate yourself by knowing a lot of history.
I’m sorry, I wasn’t clear. I meant “unless you are directly observing insularity and low intellectual productivity when you visit other websites”.
Thanks for writing this post! This is something that I’ve noticed and have been trying to actively fight. (DA sequence, Thinking and Deciding review.)
A pithy way to express this is along the lines of a sense that more is possible: we have a sense that more is out there. For example, I was researching an entirely unrelated post which began with a reference to the Litany of Gendlin, and from that learned who Eugene Gendlin was, then found and read his book on therapy, which seems like it might be as useful for dealing with akrasia as it is for other problems. There was this whole well of value there, and as far as I can tell the most LW drew out of that well was a paragraph that made for a nice poem.
Right now, LW is very philosophical, and seems like it’s for AGI researchers by AGI researchers. I think that’s a very good foundation to build off of, but it doesn’t feel complete to me. I would very much like to see more types of people involved in rationality (decision analysts, psychologists, atheist activists, scientists in general, etc.) as active participants in the conversations here. It would be awesome for us if Jonathan Baron started posting here, for example, but he’s not going to unless LW pays for the time it takes. It worried me that a number of the experts XiXiDu interviewed visited the site after the interview piqued their interest, and were turned off by commenters’ responses to their answers.
I’m advocating growth, but do care about making it sustainable and positive growth. If LW is more things to more people, then it’s not as optimized for the narrow group that it’s optimized for now, and careful thought needs to be put into how, when, and why to grow. As to why, it seems obvious that there’s a lot out there right now that LW readers would want to see but aren’t seeing because no one is showing it to them.
(I’ve got more to say, but it’ll have to wait until after I get back from a meeting.)
It is. As it turns out, the thing I thought I invented called RMI is basically the same as Gendlin’s Focusing; I’d just never heard of it and came up with a version of my own. Nowadays, I recommend that book to my new students if they have trouble learning the method from my materials. (Like Gendlin, I’ve noticed that some people seem to just already know how to do it, or pick it up almost immediately; the rest need varying amounts of practice and training to do it successfully.)
In and of itself, I do not consider “focusing” (boy is that the wrong name for the process) to be a panacea or even much of a cure for anything, let alone everything. It’d be like saying that a screwdriver is a cure for your television set not working. All it really does is let you open up the access panel and have a look in… or in Gendlin’s case, provide an opportunity for the therapist to have a look in and offer some suggestions of what to tweak in there. If you’re going to do more with it than poke around randomly, it helps to have some schematics and assembly diagrams of what you’re working on.
(Btw, the reason I say “focusing” is the wrong name for the process, is because what most people would think of as a mental act of “focusing” would lead them to do almost the exact opposite of what is required to succeed at it. I wish he’d called it, I don’t know… searching? grasping? contemplating? I suppose those wouldn’t have sold a book as well, but then, it’s not a book for people who need to focus, either, so, go figure!)
Definitely; I would have gone for something like “listening” or “discovering.” (I think when I explain it to people, I’ll start off with the rider-elephant model of the conscious-unconscious brain, and then call it listening to your elephant.)
Yep. He might’ve had an even bigger bestseller with a name like “Listening To Your Inner Self” or “The Wisdom Within” or some such.
Going somewhat back to the topic at hand, one of the best things about LW over the years has been finding out about stuff like this, prospect theory, and a whole bunch of other topics in research that I otherwise wouldn’t have heard of and incorporated into my work. I’d still be spending a lot of time trying to come up with exercises to teach what Gendlin already has in his book, for example.
Listening seems to be a bad word when you want someone to focus on something kinesthetic.
“Gendling”.
Is there a word you like better than “listening” and “focusing”? Maybe “attuning”?
I’m fond of “attending”.
I haven’t read Gendlin, but got my way of interacting with emotions from other sources.
When trying to explain it to someone I think it can be useful to teach by example. “Where in your body do you feel the emotion? Put your hand on that spot.”
The hand is good feedback for knowing that the person understood what you want from them. It also helps them be more aware of the emotion.
From there it depends on what I want to do. If the goal is simply about knowledge it can be useful to let the person describe what they are feeling.
I don’t know whether having a word to describe the process helps for implementation in a way where it becomes your default way of dealing with emotions.
“Monitoring”? (I’m not actually familiar with the subject.)
It just occurred to me in the other thread that he may have meant it more in the photographic sense of focusing a lens on an image until it becomes clear rather than in the conventional sense of concentrating.
1) Insularity: I actually don’t think LW is all that insular. Users often link to science articles, ask for opinions on other writers, discuss films and books, etc. Exactly what set of sites or communities is LW being compared to here when you call it insular?
2) Growth (in terms of users): This is quantifiable. http://www.google.com/trends/?q=less+wrong Looks like a big jump at the beginning of 2011, perhaps when HPMoR took off, and fairly constant since. Anyway, I’m not sure that becoming big in terms of raw users is all that much of a goal, although high-quality users certainly is (at least to me).
3) Growth (in terms of articles): I agree this is a problem. There are weird incentives with karma for main vs discussion for getting promoted and such, which probably turns off people from writing a “series” of posts.
4) Organization of content in useful chunks: Also agree that this is a problem. Though we often talk about Anki, the actual Anki flashcards available are quite poor (as I found when I tried to download ones for cognitive bias). Same with the organization of the so-called sequences.
I think #1 and #2 are not that important. I think #3 and #4 are ultimately site formatting problems. There have been many suggestions made, like tweaks to the karma system, subreddits, etc. Given the ethos of the sequences, I’m surprised that some of these changes haven’t been tested in a trial period to see whether they improve the quality of the content. That seems the obvious play.
For me, (1) and (2) are linked. I dabbled on LW and presented some of my own ideas in the comment sections. None of them piqued anyone’s interest, even when they were on pillar topics like FAI. I stopped being interested in LW, because:
EY stopped being as active, and no one with his clarity and perspective took his place as an article writer. I didn’t see as many interesting ideas to talk about.
I wasn’t able to engage others in the comment sections. I didn’t see anyone I was on the same wavelength with to talk about the ideas I did see.
I don’t just come to a site like LW to self-improve. I come to engage with intelligent, rational people. I don’t get the new site layout. The “Posts” vs “Discussion” split appears totally arbitrary now that they are parallel. Is this place a wiki or a forum or a social news site? Everything is very unfocused, and there isn’t enough of a userbase to keep that many interesting discussions going. I check in maybe once a month now, and it looks more and more like a knowledge-management site than a discussion forum.
I agree that it can be difficult to get a start commenting on LW. The karma system favors regulars, because people skip over comments whose user names they do not recognize and/or have low votes, and this is a self-reinforcing process. Again I think experimenting, in this case with the way comments are presented, could be beneficial.
Fair enough. It is definitely a bit of a turn-off to get downvotes with no comments, but every community has their common ways of communicating.
I actually prefer Luke as an article writer. Eliezer is the better writer in terms of clarity and language skills, but Luke is a better researcher and brings up a lot of interesting ideas.
I agree that Main isn’t very active lately, but Discussion tends to have fairly good discussions.
It definitely seems like Main is an announcement section for meetups, and Discussion is where discussions happen.
I’ll check out some of Luke’s articles!
Insular in the sense of being incapable of adopting an idea created elsewhere even when useful.
I wasn’t concerned by this. But yes, the article was a bit ambiguous on that. I’ve edited to try to fix this. So I guess we are in agreement on everything but point one. :)
I agree with Luke that LW is not insular in this sense, at least compared to any alternative I’ve seen. I’d be willing to bet that if we found a comparison site (such as Reddit), we would have more outgoing links.
Posting & discussing a link is something that in practice overlaps but isn’t identical with people updating on the material behind the link enough for it to become part of the expected background knowledge on LW.
Reddit is all about outgoing links. Insularity may be a complaint because so many users have experience with Hacker News and Reddit, which lack insularity through their very structure.
A traditional forum might be a better comparison.
I agree with pretty much all of this. This is slightly off-topic, but:
I don’t just think we should be discussing new arguments that fall within our cluster of topics—IMO, we should be branching out even more. For a while now, a handful of LWers have been arguing in comments like this one that we need a much wider range of scholarship, and that subjects outside of LW’s typical math/science cluster—yes, even those icky-looking liberal artsy ones—are worth studying. This seems like a pretty reasonable suggestion. After all, the site is overwhelmingly male, white, atheist, young, consequentialist (or wannabe-consequentialist), transhumanist and heavily math/science focused. Heck, that’s a near-perfect description of me. As a result, there’s a natural tendency for us to be ignorant of certain subjects, and consequently to discount them. E.g., for me, subjects like anthropology are unknown unknowns: I don’t know what the field is even about or how relevant it is to LW-style rationality topics.
Trouble is, we run the risk of falling into the typical autodidact failure mode of being recklessly overconfident about these topics after reading introductory material or opinion pieces on each subject. Personally, I have no idea how to go about studying something like history, and I’d most likely step on an intellectual land mine. What would be really awesome is if people on LW with expertise could point us towards appropriate introductory texts, answer questions, or even teach other LWers.
My impression is that LW is well off the cliff already, given the replies to posts by or links to the AI researchers who are skeptical of the UFAI issue. The quantum physics reaction is no better. There is a lot of noise here from the well-meaning and overconfident amateurs who refuse to identify as such.
The fact that the sequence you are referring to is known as the “Quantum Mechanics” sequence is evidence of the failure of that sequence to achieve its goal.
It was called that by its author, wasn’t it?
If I write a book with the primary goal of teaching calculus, then spend 2⁄3 of the book arguing that Leibniz invented calculus before Newton, I’m likely to fail at teaching calculus.
If I title the book, “Leibniz was Right,” I’m just compounding the error, right?
You could be right. We are beyond hope. You should abandon us and leave us to our abysmal failure and move on to other communities where you don’t feel the need to constantly insult everyone. All will benefit!
I’ve noticed this when reading certain things. Half-formed thoughts like “huh this might make a good LW article if I compared it to Sequences Lesson #452”, usually followed by “but it doesn’t have enough math” or “it’s not rigorous enough” or something similar.
This is a good inclination. But my personal take, having written many, many research papers for students of the humanities, is that most of it is fairly worthless dreck. The humanities have been strongly influenced by critical theory, to their detriment.
Most of critical theory is dreck. But most dreck is not critical theory.
My primary concern is with the community not updating on good outside ideas about the stuff we are already interested in, so this is a bit OT. But don’t worry: yours is a related concern, and something I think is worth attention.
For a concrete example of something I think we’re missing: I watched just enough game theory videos to realise the field is full of useful ideas that would be valuable to the kind of things LW discusses, but a relatively small proportion of LW people seem to use those ideas.
I’m an example of what I’m criticising here—I know there’s lots of important information in that field I don’t know.
Your post is all generalizations, with almost no specific examples. I think I disagree with most of the generalizations, but it would take an equally long post for me to explore why for each generalization.
In any case, you don’t make any recommendations for how Less Wrong users should change their behavior. I might agree with those if you had made them. Here are some possible policy prescriptions based on your complaints:
If you come across a cool concept Less Wrong is unfamiliar with, share it. I agree with this prescription.
If you want to write about something, consider doing some scholarship and figure out what people outside Less Wrong have already said about it, and citing it. I agree with this prescription.
Systematically assign Less Wrong users to consume other reading materials and discussion sites to find the best concepts and share them on Less Wrong. This seems like it might be promising, I don’t know.
Stop voting up posts unless they look like a typical EY sequence post. I disagree with this prescription. A typical EY sequence post is way longer than it needs to be and doesn’t cite any studies.
Try harder to write posts about topics EY covered. This seems like a silly heuristic. We should write posts about whatever topics are most profitable. If you want to argue that there are profits to be made in further exploration of specific topics EY covered, go ahead. If you think this is true for all topics EY covered, I suspect you are suffering from a halo effect around EY. The fact that a topic has been discussed in the past should, all else equal, make us less inclined to discuss it, since more of what can usefully be said has already been said. Basic rationality posts on Less Wrong are (probably) rare because basic rationality was already covered a lot in the sequences.
Choose an accurate title for your post so people will not do too badly by assuming it says exactly what the title says. I agree with this prescription.
Reread a post in full before linking to it. I’m not sure whether I agree with this prescription. Ideally every post would have a summary at the beginning, and it would be acceptable to read the summary only before linking if you’d read the full post in the past.
Discuss politics more outside the comments. I’m not sure whether I agree with this prescription. Discussing politics for its own sake seems low value to me because Less Wrong doesn’t have enough people to be influential, but it could be useful if it’s going to be a rationality exercise or example somehow. I suspect EY’s idea of using long past rationality failures as examples is a bad one because most people are way more familiar with contemporary rationality failures.
I recommend in future posts of this sort you attempt to take a random sample of Less Wrong posts and discuss them, so you will actually have evidence to support your claims.
I do this all the time and see others do so as well. Unfortunately it doesn’t seem to propagate the same way Main articles written by Yvain or EY do, even when the writing is of comparable quality. There seems little sense in accompanying quality texts written by outside authors with more than some additional commentary or emphasis. Why duplicate labour and rewrite something that is already okay?
The “low hanging fruit” posts that recently popped up in discussion seemed like a promising trend to me. I want a lot more of people noticing how to do something slightly more optimally and posting it to discussion.
Maybe this is because more people see articles that are in Main?
Policy prescription: Allow posting of links in Main. I agree; whether something goes in Main should be based on how useful and important it is, not superficial considerations like whether you need to click a link to read it.
I agree. Maybe a policy where link articles over, say, 20 karma go there, or perhaps a once-a-month “best links” summary?
You’re in denial.
I’m in denial about what? Could you be more specific? I like getting frank criticism but your current statement is a little too general to be useful from my perspective.
It seems to me that you have pigeonholed me even when I said I only thought I disagreed. I can feel this impinging on my rationality. I didn’t have a firm stance on anything before your comment, but now saying “Konkvistador is right” seems like backing down, and I’m somewhat averse to doing it. (Nothing that would be very difficult to overcome, I’m just explaining how I think you’ve made me less rational.)
I do disagree with your assessment that LW has made very little progress. As Luke pointed out below, we frequently have new work being done, and references to work being done by other people. I suspect the source of this feeling is that nearly all of the progress lies within a very small field. This blog is a mix of human rationality (specifically, improving the way we think), and Artificial Intelligence.
If you are like me (and I have a slight suspicion that you do fit into this category), you don’t actually pay that much attention to the discussions of Artificial Intelligence. I really don’t particularly care that we’ve found a slightly better way to make computers perform Solomonoff Induction. It isn’t part of the fields I work in, and to actually understand it, I would have to study AI to a much greater depth than appears useful to me.
In lukeprog’s list, if I’m being as generous as I can about which of the two categories the topics fall in, I wind up with 8 posts on rationality and 8 on Artificial Intelligence. (I did count things like learning to program as rationality, even though they’re edge cases). I do think it’s worth noting that the amount of rationality material in the “intellectual productivity” section is less than 1⁄2.
So, if you aren’t really paying that much attention to the AI posts, that means that about half the posts in the last two weeks haven’t been of much value to you. Then we have to consider that not all of the “rationality” material is particularly interesting to you.
And so we wind up with a blog where the only people to whom a large fraction of the posts on this website are particularly relevant are people working in or with a strong interest in Artificial Intelligence. Which is a bit of a shame, since that’s probably nowhere near all of the rationalists in the world.
Can you think of anything specific from outside of LW that we should have updated on, but haven’t?
Thanks for this post. I agree with a lot of it, and that with which I disagree, I still think is important to discuss. I have several tangential thoughts. I’m not sure how coherently organized this comment will be, but I’ll try.
Intellectual Productivity: I agree this is a problem. I think there are a number of factors about LW that, in some ways, discourage intellectual productivity. I’ve said it before, but if I want to read something technical, I’ll read the sequences, since I haven’t finished them. As a result, I read very little of main. Raemon mentions that he doesn’t read main as much because it is more difficult to get to. Ultimately though, I think the biggest problem is that main is not as interesting as discussion. Reading main, like reading the sequences, takes much more work. You have to actually think and digest what you’re reading. Discussion is much easier to browse through.
In addition, I think people are far more hesitant to post in main. I know I am. I’ve never made a main post, although I’ve thought of a number that might be appropriate.* Most of these potential posts, I worry, are A) trivial insights for most LWers even though they just occurred to me, B) not substantial enough to write more than a couple of paragraphs on, C) based on my own introspection rather than literature searches, or D) simply boring. I suspect other users have similar hesitations about just not being good enough for main.
On the flip side, at least LW is beating the pants off of similarly trafficked sites when it comes to intellectual productivity.
I don’t think insularity is necessarily a problem. Others have beat this horse already though, so I’ll let it lie.
With regard to titles, I do think that tends to happen. I linked a friend to “Why Are Individual IQ Differences OK?”, thinking it talked much more about race and much less about religion, until I reread it and was quite surprised. So, positive data point.
*I would very much appreciate feedback on potential posts I’ve thought about writing. If any of these seem interesting to you, please let me know, since that will help answer my questions above. General descriptions of the three I am most likely to write are below.
An Intuitive Explanation of Many Worlds: The Quantum Physics Sequence (which, admittedly, I haven’t finished yet, but obviously would before writing this) did not intuitively seem to necessarily lead to Many Worlds, as Eliezer seemed to suggest. It was not at all obvious to me. What flipped the switch was an entirely different line of reasoning: quantum physics says everything is a wave. You and the desk next to you aren’t really different things; you’re just different parts of the same wave function. Now what happens when parts of the wave function evolve sentience? The me that is where I am now can’t observe the probability blob that is me anywhere else, but that doesn’t mean that there is zero probability that I’m there.
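To make that branching intuition concrete, here is a minimal toy sketch (my own illustration with an assumed two-level “system” and “observer”; it illustrates the intuition rather than arguing for any interpretation). Observation is modeled in the standard von Neumann way as a CNOT interaction that correlates the observer with the system; afterward the joint wave function consists of two orthogonal branches with zero amplitude overlap:

```python
import numpy as np

# Toy branching sketch: a "system" qubit in superposition and an
# "observer" qubit that starts in a ready state. Observation is
# modeled as a CNOT that correlates the observer with the system.
system = np.array([1, 1]) / np.sqrt(2)  # superposed wave
observer = np.array([1, 0])             # hasn't looked yet

state = np.kron(system, observer)       # joint wave function, basis |s o>

# CNOT with the system as control: |10> -> |11>, |11> -> |10>
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

after = cnot @ state
print(after.round(3))  # [0.707 0.    0.    0.707]
# The result is (|0, saw-0> + |1, saw-1>) / sqrt(2): two orthogonal
# branches. The observer-component that saw outcome 0 has zero
# amplitude overlap with the branch that saw outcome 1, which is the
# "I can't observe the probability blob of me elsewhere" intuition.
```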
Reduce Anger by Evaluating Monetary Value of Time. One of the things I think about often when driving is road rage. I find it interesting from an ev-psych perspective. For example, I notice that if I look at a driver’s face, my frustration with em dramatically decreases, because I realize ey is a person, and not a car. This doesn’t generalize very well outside of driving, but another principle seems generally applicable to me. Drivers tend to get frustrated when other drivers cost em very small amounts of time. If you’re stuck behind a car going 15 mph slower than you would (let’s say 35 vs 50), even if you are stuck there for five miles, you’re still only losing around two and a half minutes of time. If you value your time at $20/hr, that’s less than a dollar. I suspect that people subconsciously overvalue time and undervalue money because it is a status signal—having lots of time is low status and having lots of money is high status.
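A quick check of that arithmetic (a throwaway sketch; the $20/hr figure is just the assumed value of your time from above):

```python
# Time and money lost to a slower driver, using the numbers above.
distance_miles = 5
my_speed_mph, their_speed_mph = 50, 35
dollars_per_hour = 20  # assumed value of your time

hours_lost = distance_miles / their_speed_mph - distance_miles / my_speed_mph
print(f"{hours_lost * 60:.1f} minutes lost")          # ~2.6 minutes
print(f"${hours_lost * dollars_per_hour:.2f} lost")   # ~$0.86
```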
Observations and conclusions from my journal, which ended up looking nothing like I described it as in that post. It’s basically an excel spreadsheet, where I tracked well… everything.
Road Rage seems to me to be a symptom of drivers’ inability to communicate with each other. All you have are your lights and your horn. The usual methods for defusing conflicts are impossible—when in a car, you can’t make requests of, apologize to, or even say “thank you” to another driver. There’s no room for politeness in the system, and the lack of standard social feedback makes people feel like others are treating them with contempt, which in turn makes them angry.
On a side note, the earliest description of what might be called “road rage” occurs in the Ancient Greek tragedy Oedipus the King; Oedipus and his biological father, neither knowing the other’s identity, get into a fight over whose chariot has right-of-way.
I don’t know what the customs are where you are, but in the U.K., one politely waves to a driver who has politely let one through, to acknowledge their politeness.
Re #1, I think your argument is a nice intuition pump to motivate Many Worlds (MW), but I don’t think it engages seriously with the main alternative interpretations (which was the biggest problem in the Eliezer sequence). When you say “Quantum physics says everything is a wave” you are almost on MW already: you have excluded by fiat epistemic interpretations of the wave function, as well as Bohmian interpretations, hidden retrocausal variables, etc.
I am most definitely interested in #3, and hope you write it. This would be my pick.
I would read #2. I can see it possibly leading to some interesting comment discussions.
#1 would just be noise to me (but in this regard I am likely to be in the LW minority).
I concur. I read the sequences, then I read every post from the end of the sequences until that time (May 2011). I was amazed just how little seemed to have been taken in even from the posts on LW since the end of the sequences.
I have faint hopes the Center for Modern Rationality can seed a new set of community norms.
Hm. Now that you say it, I think I’ve definitely read some excellent non-Eliezer articles on Less Wrong. But not as systematically. Are they collated together (“The Further Sequences”) anywhere? I mean, in some sense, “all promoted articles” is supposed to serve that function, but I’m not sure that’s the best way to start reading. And there are some good “collections of best articles”. But they don’t seem as promoted as the sequences.
If there isn’t one already, maybe there should be a bit of work in collecting the best articles by theme, and seeing which of them could do with some revising to make whatever is (in retrospect) the best point more clear. Preferably enough revising (or just disclaimers) to make it clear that they’re not the Word of God, but not so much that they become bland.
People have indeed started on this, but we could probably do with more. Go for it :-)
Where?
http://wiki.lesswrong.com/wiki/Sequences#Sequences_by_Others
What are some examples? Which posts from your reading have you noticed in particular that haven’t been absorbed sufficiently as subcultural memes? What is it that I may have missed and could benefit from going back and reviewing?
Luke lists some recent ones here.
I’m confused; is your criticism that posts after the sequences failed to introduce new ideas the way the sequences did, or that they didn’t stick in the community’s collective memory?
They introduced new ideas and failed to stick in the community memory. Why is not clear. (I could easily come up with just-so stories to retrospectively explain it, of course.)
One just-so story: The sequences are mentioned everywhere as The Way To Read Less Wrong; random archive posts are not. Therefore a larger fraction of LW has read the sequences.
Yeah, that’s the one that occurred to me too. The site is stuck in 2008. Even Eliezer doesn’t necessarily think the stuff he wrote word for word there any more.
Also the sequences are permalinked, one click away from the front page.
As per our discussion on irc, I agree!
I am a defender of “read the sequences”. People should!
One step I’ve found often interesting that builds on EY’s posts is to… read some of the stuff he mentions! For example, Influence is a great book.
In particular, we need more maths around here. I am totally displaying the problem I’m complaining about, but for example there is a great shortage of game theory. And I’d like to see people work through Khan Academy or similar.
If someone compressed the salient points into something that is 10% or less in size, this would even be plausible.
The sequences are already a summary. Summarizing too much more risks committing the usual sins of science journalism.
I find it difficult to believe that the average commenting LWer couldn’t spare the time to read the major sequences. That may be the issue for some, sure; but is it the dominant factor?
I’m probably committing the typical mind fallacy here, and possibly also showing the privilege of having spare time. When I found LW, I devoured the sequences over a few days, then re-read them slower, fascinated. But I’m a pretty avid reader of both books and blogs, so substituting the sequences for other things I would have read was neither much of an opportunity cost nor a disruption to my personal habits. If I had been substituting reading the sequences for some other activity, it might have been more of both.
And this was before I’d encountered the “you should read the sequences” meme, so there wasn’t any interference from the “assigned reading” complex.
But still — I wonder if instead of pushing “you should read the sequences” we should push “the sequences are pretty damn awesome”.
I think you mean Typical Mind Fallacy, expecting too much that people are like you. Mind Projection Fallacy is projecting features of maps, like uncertainty, onto the territory.
You’re right. Fixed.
Are you serious? A summary that is as long as a multi-volume novel (apparently 4000 printed pages or so)? Feel free to look up the definition of the word summary.
The published literature on heuristics and biases alone is rather larger than that.
Let me wiki it for you:
Absorbing the sequences requires weeks of concentrated study and then at least a few follow-ups.
This is not at all how the sequences are written.
On your math point:
Patrick offered in September last year to do tutoring:
http://lesswrong.com/lw/7vd/free_tutoring_in_mathprogramming/
Maybe we should build a network of people who’d apply enough peer pressure and guidance to replicate the level of pressure present in a classroom, to get stuff done and learn math. We shouldn’t overload Patrick, but would it be helpful to have an LW-affiliated University of Reddit or Udemy course, or even just a Skype class?
Maybe a list of all the material Eliezer has ever recommended would be useful. It wouldn’t do much for insularity, but at least we could start asking people to read actual books and articles not included in the sequences.
And yes I do agree people should “read the sequences”. I try to promote this with frequent linking to the specific articles, hopefully setting up tab explosions, but I fear I may have just contributed to overuse of titles as phrases.
I actually started on a list of all the rationality references in HPMoR as a project. I’m somewhere around Chapter 40 but haven’t worked on it in a while.
That sounds great! I recall someone asking for interesting rationality-related blogs. A Main article that summarizes all the book recommendations in the sequences (yes, there are writers besides Eliezer) and in HPMOR, as well as a short summary of all those blogs, would be a good first step towards solving this.
It should also include the rationale for why we should read them; that would be a good start to solving much of this.
Perhaps, after that is published, we could hold a three-month challenge to find the best concept from such material that LessWrong should update on but hasn’t.
Post this in discussion as a link.
As a note for anyone interested: Khan Academy covers everything from math to biology, chemistry, physics, and a whole lot of other topics. I personally find that the methods employed there are very useful in learning those topics, but YMMV.
First—upvoted, completely agree.
I think “insular” is a bad phrasing, though. It implies “doesn’t listen to outside sources.” The issue is more that the sequences are VERY prominent, and NOTHING else approaches that prominence. The wiki could ostensibly aspire to that goal, but in its current state comes nowhere close.
It’s not that the community is closed off to new or external ideas; it’s that the community has this very prominent stone tablet of the Sequences, a largely STATIC piece of content. The solution seems obvious: we simply need to bring the other material up into prominence (as a number of people have suggested).
You seem to grasp the ideas here; I’m just trying to highlight the linguistic issue with your original phrasing, since a lot of people seem to be objecting that we’re NOT insular. Because… we’re not. We just have this big stone tablet that takes up a lot of the skyline and draws attention away from the non-insular parts of the community :)
One reason people aren’t so big on linking the sequences is that Eliezer has, for some time now, been writing a book consolidating the sequences into a much more readable text. I think people are waiting until that comes out to bug everyone to read Eliezer’s work.
Update: The books are on hold because we need Eliezer for other projects. If we’re lucky, a professional writer will be able to take Eliezer’s (substantial) work and finish it (with minimal input from Eliezer). We have a retainer with one writer who has written at least one bestseller so far, and will take a crack at Eliezer’s books once he finishes his current project, probably late this fall.
Gah! That is no doubt a good decision all things considered but still frustrating to hear as a prospective reader!
Whatever this project is, it must be something truly exceptional, to preempt a useful project presumably close to publication.
From what I understand (guess), Eliezer is doing a lot of work on the new center for rationality spinning off from SI.
I wish there were some transparency about what he is involved in and to what degree. Not a wholly unreasonable thing to ask from a non-profit. Hopefully major donors have better visibility into the SI/EY day-to-day operations.
EY should start a webshow called “A Day in the Life of EY”, which is just a webcam attached to his head, so we can know more about his personal life than he already shares with us xD
But he ought not go to France if he does.
(I am chagrined to admit that when I first heard about that event, it was just described as a “Toronto professor,” and I thought “How unfortunate!” When I later discovered it was Mann I was like “Oh, well, that explains it then.”)
I have no idea what you are referring to.
Ack! Wrong link!
That’s embarrassing.
Link fixed.
At least, that’s what he tells his HPMoR readers!
Eliezer Yudkowsky.
Better yet,
-Eliezer Yudkowsky
(Yes, which allows Konkvistador to upgrade his confidence significantly beyond ‘guess’ to ‘know with reasonable confidence because it is stated and there isn’t much reason for it to be a lie’.)
Fiction bestseller or non-fiction bestseller?
Non-fiction pop-sci.
Posting my idea from irc here too. We should look for ways to make the claims of this post more concrete and testable. I propose crawling the site to create a LW citation index. We can then make measurements—which new posts are picked up by the LW community? Does everyone always refer back to EY, or do we talk about the new stuff? etc
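A minimal sketch of what the crawl could look like, in Python; the POST_LINK pattern is my guess at LW’s post-URL format, and a real crawler would need rate limiting and error handling beyond the polite delay here:
    import re
    import time
    import urllib.request

    # Matches links to LW posts, e.g. http://lesswrong.com/lw/7vd/free_tutoring_in_mathprogramming/
    POST_LINK = re.compile(r'href="(https?://(?:www\.)?lesswrong\.com/lw/[0-9a-z]+/[^"#]*)"')

    def extract_citations(post_url):
        """Return the set of LW post URLs linked from one post's page."""
        html = urllib.request.urlopen(post_url).read().decode("utf-8", "replace")
        return set(POST_LINK.findall(html))

    def build_citation_index(post_urls, delay=1.0):
        """Map each post URL to the set of post URLs it cites."""
        index = {}
        for url in post_urls:
            index[url] = extract_citations(url) - {url}  # ignore self-links
            time.sleep(delay)  # be polite to the server
        return index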
This. From the citation matrix it would be easy to calculate internal LessWrong “Page Rank” and find the most influential pages. The only problem is, once this methodology is used and known, people will start behaving differently.
There are some technical details about the exact choice of the model. The simplest version would analyze only articles: each article has the same initial value (karma is ignored), and only links from article to article are considered (links from comments are ignored). (Multiple links from page A to page B are treated as a single link.) This is the easiest to do.
If we want to include the karma, the simplest way would be to treat articles with zero or negative karma as non-existent (remove them from the model). I am not completely certain how to treat higher karma. If I understand it correctly, Google Page Rank simulates a random user who with probability 85% clicks a random link on the page, and with probability 15% chooses a new starting page from a uniform distribution—we could replace the uniform distribution with a weighted distribution where positive karma is the weight of the page. I am not sure how much the results would be sensitive to the “15%” value.
If we want to include comments in the model, considering their karma is IMHO inevitable; otherwise it would be too easy to game the system (by writing new comments on highly ranked pages). But the comments are not new nodes in the graph (that wouldn’t work, because almost nobody links to an average comment; and it would be wrong to treat a comment in the article as a comment linked by the article), so perhaps they could be treated as a part of the article. A link from a comment would be like a link from the article, just weaker. How much weaker is determined by the comment’s karma compared with the article’s karma. For example a link in a 5-karma comment below a 20-karma article would be treated as a 0.25 link. (If the same link is in more comments, only the best weight is taken. If the comment has higher karma than the article, the link strength is capped at 1.0.)
Here is the pseudocode, written out as runnable Python (it assumes article objects with karma, links, and comments attributes, matching the model above):
    def lw_pagerank(all_articles, iterations=50):
        # Only articles with positive karma exist in the model.
        articles = [a for a in all_articles if a.karma > 0]
        article_set = set(articles)
        total_karma = sum(a.karma for a in articles)

        # Link weights: a body link counts 1.0; a comment link counts
        # min(1.0, comment karma / article karma); duplicates keep the max.
        links = {}
        for a in articles:
            weights = {t: 1.0 for t in a.links if t in article_set}
            for c in a.comments:
                if c.karma <= 0:
                    continue
                clink = min(1.0, c.karma / a.karma)
                for t in c.links:
                    if t in article_set:
                        weights[t] = max(weights.get(t, 0.0), clink)
            links[a] = weights

        # PageRank with a karma-weighted 15% teleport: start from karma share,
        # then repeatedly flow 85% of each article's rank along its links.
        rank = {a: a.karma / total_karma for a in articles}
        for _ in range(iterations):
            new_rank = {a: 0.15 * a.karma / total_karma for a in articles}
            for src in articles:
                total_links = sum(links[src].values())
                if total_links == 0:  # "0.0 / 0.0 = 0.0": no links, no flow
                    continue
                for target, weight in links[src].items():
                    new_rank[target] += 0.85 * rank[src] * weight / total_links
            rank = new_rank
        return rank
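For illustration, it might be used like this (crawl_lw() is a hypothetical function standing in for whatever produces the article objects; the .title attribute is likewise assumed):
    # Hypothetical usage: print the twenty most influential articles.
    articles = crawl_lw()  # assumed crawler, not an existing function
    ranks = lw_pagerank(articles)
    for article in sorted(ranks, key=ranks.get, reverse=True)[:20]:
        print(f"{ranks[article]:.4f}  {article.title}")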
Has this gone anywhere?
As far as I know, no.
And mysteriously the author of this article leaves the site, after finding himself advised by a “top poster” to do so since his contributions are “harmful”. Gee, what convenient timing.
Maybe he failed a private struggle session? Though obviously new reasons are probably in the works; who knows, they may even be posted here! Joy.
If I understand it correctly, the author specifically asked a top poster for an opinion, and after receiving a negative judgement, decided to stop writing “until further notice”.
Therefore, please, let’s not make this a drama; let’s not make it something it obviously is not. Even if you happen to disagree with the negative judgement, there is a huge difference between “hey you, I don’t like your recent comments, get out!” and the exchange “how do you like my recent comments?”, “honestly, I don’t”, “then I guess I need some rest”.
Having said that, I think that this article was a positive contribution, even if it perhaps started with the wrong premises. For example I had a feeling that LW is not exactly what I would like it to be, and I am glad that someone else said it first, and gave me a convenient opportunity to speak my mind. Later from other comments I see that some criticism was wrong. But for me this just means that it is easy to misidentify the exact cause of the discontent, and perhaps a discussion like this can help to identify it.
For example to me it now seems that the website is generally OK, it should just be better organized, because there is a lot of different stuff in “Discussion” and most of it gets quickly scrolled away and forgotten. LW may seem less productive than it really is, because it is a bit disorganized.
To the extent that this implies that LessWrong is reacting badly to valid criticisms (and I apologise if you’re not implying that)…
That state of affairs (“good post” → “we don’t like” → “please leave”) seems less likely than “bad post” → “we don’t like” → “please leave”. You have to posit that the post’s claims are true and that we react badly to these true claims, whereas the “bad post” state doesn’t require the claims to be true and gets “react badly” as a given. I tentatively accept your hypothesis as possible, but I’d like a little more evidence before I consider it plausible.
This motivates me to not postpone further the small discussion post I was thinking of writing about a useful bit of vocabulary (from outside of Less Wrong) suggested for adoption. Expect it later tonight. Don’t expect anything too special, mind you.
Edit to add: Actually it seems I’ll have to postpone this a day or two, apologies.
It has been almost two months yet I’d still very much like to see that small discussion post! (^_~)
It greatly grew (and greatly slowed) in the writing, but I just posted the first portion of it to Main.
Great! Looking forward to future ones.
This seems obviously wrong to me, so I probably don’t understand what you mean. Once you remove the ideas of Hofstadter, Jaynes, Drescher, Kahneman, Pearl, Dawkins, Asimov, Nozick, etc… from Less Wrong, there isn’t a whole lot left. Am I wrong?
These ideas were all presented in the sequences. New ideas from outside LessWrong, when not packaged in a semi-original synthesis post by a LWer, don’t propagate. If you think about it that way, this seems a waste. Why duplicate effort if the original author of the idea did it right?
Also, in the past two years, while we do link to new ideas, they don’t seem to propagate and are hardly ever referenced a year later. Sequence posts, however, are.
E.g., despite two to five posts on the matter and many comments, there seems to be a huge disconnect between how folk like Wei_Dai, cousin_it, Vladimir_Nesov, &c. interpret Solomonoff induction, and how average LW commenters interpret Solomonoff induction, with the latter group echoing a naive, broken interpretation of the math and thus giving newer people mistaken ideas. It’s frustrating because probability theory is one of few externally-credible things that sets LW’s epistemology apart and yet a substantial fraction of LW folk who bring up algorithmic probability do so for bad reasons and in completely inappropriate contexts. Furthermore because they think they understand the math they also think they have special insight into why the person they disagree with is wrong.
For example? (I don’t recall average users mentioning the subject all that much, right or wrong.)
I haven’t seen this as applied to Solomonoff induction.
I suppose I meant “relatively average”. Anyway I don’t know where to find examples off the top of my head, sorry.
IIRC I’ve seen it two to five times, so this specifically is not a big deal in any case.
I’ve seen more general errors pertaining to algorithmic probability much more often than that, sometimes committed by high-status folk like lukeprog, who wrote a post (sequence?) allegedly explaining Solomonoff induction.
Thank you. While I don’t recall the examples myself I believe your testimony regarding the two to five examples you’ve noticed. I expect I am much more likely to notice such comments in the future given the prompting and so take more care when parsing.
I can see why that would be disconcerting.
Yes, but they did not “originate here” (which is what I was responding to in the part I quoted).
When thinking of insularity, the first example of content from LW that came to my mind was a post where someone supported what might be called “LW-native” politics, and the most highly upvoted comment was a defense of the “LW-outsider” politics. This seems incredibly non-insular.
When thinking of productivity, my first thought was of the intense lamentation over social problems that I saw all over the site when I joined, compared with the very positive social interactions that people seem to have had, and to be having, at meetups.
So in short… I agree with Luke. Either I don’t understand you at all or I entirely disagree.
I have rarely if ever experienced websites (short of, say, Wikipedia) where people used scientific sources to figure out what to post. I would love to see an example of another site where that is an established social norm.
All that said, I also agree that Less Wrong is not organized in a way that is optimal for how I (and apparently most people) use it. I think Viliam’s Forum/articles/announcements structure would make more sense and be more in line with how I use the site.