[Meta] The Decline of Discussion: Now With Charts!
[Based on Alexandros’s excellent dataset.]
I haven’t done any statistical analysis, but looking at the charts I’m not sure it’s necessary. The discussion section of LessWrong has been steadily declining in participation. My fairly messy spreadsheet is available if you want to check the data or do additional analysis.
Enough talk, you’re here for the pretty pictures.
The number of posts has been steadily declining since 2011, though the trend over the last year is less clear. Note that I have excluded all posts with 0 or negative Karma from the dataset.
The total Karma given out each month has similarly been in decline.
Is it possible that there have been fewer posts, but of a higher quality?
No, at least under initial analysis the average Karma seems fairly steady. My prior here is that we’re just seeing fewer visitors overall, which leads to fewer votes being distributed among fewer posts for the same average value. I would have expected the average karma to drop more than it did—to me that means that participation has dropped more steeply than mere visitation. Looking at the point values of the top posts would be helpful here, but I haven’t done that analysis yet.
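If you want to reproduce or extend this from the spreadsheet, here is a minimal sketch of the per-month aggregation, assuming the data is exported as a posts.csv with hypothetical “date” and “karma” columns (the actual column names will depend on the export):

```python
# Minimal sketch of the per-month aggregation described above, assuming a
# posts.csv export with hypothetical "date" and "karma" columns.
import pandas as pd

posts = pd.read_csv("posts.csv", parse_dates=["date"])
posts = posts[posts["karma"] > 0]  # drop 0- and negative-Karma posts, matching the charts

monthly = posts.groupby(posts["date"].dt.to_period("M"))["karma"].agg(
    posts="count",         # number of posts per month
    total_karma="sum",     # total Karma given out per month
    average_karma="mean",  # average Karma per post
)
print(monthly)
```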
These trends are very disturbing to me, as someone who has found LessWrong both useful and enjoyable over the past few years. They raise several questions:
What should the purpose of this site be? Is it supposed to be building a movement or filtering down the best knowledge?
How can we encourage more participation?
What are the costs of various means of encouraging participation—more arguing, more mindkilling, more repetition, more off-topic threads, etc?
Here are a few strategies that come to mind:
Idea A: Accept that LessWrong has fulfilled its purpose and should be left to fade away, or allowed to serve as a meetup coordinator and repository of the highest quality articles. My suspicion is that without strong new content and an online community, the strength of the individual meetup communities may wane as fewer new people join them. This is less of an issue for established communities like Berkeley and New York, but more marginal ones may disappear.
Idea B: Allow and encourage submission of rationalism, artificial intelligence, transhumanism etc related articles from elsewhere, possibly as a separate category. This is how a site like Hacker News maintains high engagement, even though many of the discussions are endless loops of the same arguments. It can be annoying for the old-timers, but new generations may need to discover things for themselves. Sometimes “put it all in one big FAQ” isn’t the most efficient method of teaching.
Idea C: Allow and encourage posts on “political” topics in Discussion (but probably NOT Main). The dangers here might be mitigated by a ban on discussion of current politicians, governments, and issues. “Historians need to have had a decade to mull it over before you’re allowed to introduce it as evidence” could be a good heuristic. Another option would be a ban on specific topics that cause the worst mindkilling. Obviously this is overall a dangerous road.
Idea D: Get rid of Open Threads and create a new norm that a discussion post as short as a couple sentences is acceptable. Open threads get stagnant within a day or two, and are harder to navigate than the discussion page. Moving discussion from the Open Threads to the Discussion section would increase participation if users could be convinced that it was okay to post questions and partly-formed ideas there.
The challenge with any of these ideas is that they will require strong moderation.
At any rate, this data is enough to convince me that some sort of change is going to be needed in order to put the community on a growth trajectory. That is not necessarily the goal, but at its core LessWrong seems like it has the potential to be a powerful tool for spreading rational thought. We just need to figure out how to launch it into its next evolution.
My friend kytael (not his real name, but his Less Wrong handle) has been on Less Wrong since 2010, has been a volunteer for CFAR, and lived in the Bay Area for several months as part of the meatspace rationalist community there. For a couple of years, I was only a lurker on Less Wrong, and occasionally read some posts. I didn’t bother to read the Sequences, but I had already studied cognitive science, and I attended lots of meetups where the Sequences were discussed, so I understand much of the canon material of Less Wrong rationality, even if I wouldn’t use the same words to describe the concepts. It’s only in the last year and a bit that I got more involved in my local meetup, which motivated me to get involved in the site. I find myself agreeing with lots of the older Sequence posts, and the highest quality posters (lukeprog, Yvain, gwern, etc.) from a few years ago, but I too am deeply concerned about the decline of vitality on Less Wrong, as I have only started to get excited about its online aspects.
Anyway, I asked kytael more or less the same question.
He replied: “I think the best way to view Less Wrong is as an archive.”
Since he was tapped into the Bay Area rationalist community, but was also a user of Less Wrong from outside of it, his observations put him in an especially good position to hypothesize about why use of this website has declined.
First of all, the most prominent figures of Less Wrong have spread their discussions across other websites, where the popular users who used to spend more time on Less Wrong now discuss things. Scott’s/Yvain’s Slate Star Codex is probably the best example of this, another being the Rationalist Masterlist. Following a plethora of blogs is much more difficult than just going through this one site, so for newer users of Less Wrong, or those of us who haven’t had the opportunity to know users of this site more personally, following all this discussion is difficult.
Second of all, the most popular and prolific users of Less Wrong have integrated more publicly, and now use social media. Ever since the inception of the CFAR workshops, users of Less Wrong have flocked to the Bay Area in throngs. They all became fast friends, because the atmosphere of CFAR workshops tends to do that (re: anecdata from my attendance there, and that of my friends). So, everyone connects via the private CFAR mailing lists, or Facebook, or Twitter, or they start businesses together, or form group homes in the Bay Area. Suddenly, once these people can integrate their favorite online community and subculture with the rest of their personal lives, there isn’t a need to communicate with others only via lesswrong.com, the awkward blog/forum-site.
Finally, Eliezer Yudkowsky and the others who started Less Wrong had already reached the conclusion that the best, ‘most rational’ thing for them to do was to reduce existential risk. Eliezer Yudkowsky wrote the Sequences as an exercise for himself to re-invent clear thinking to the point where he would be strong enough to start tackling the issue of existential risk reduction, because he wasn’t yet prepared for it in 2009. Secondarily, he hoped the Sequences would serve as a way for others to catch up to his speed, and approach his level of epistemology, or whatever. The instrumental goal of this intent was obviously to get more people to become awesome enough to tackle existential risk alongside him. That was five years ago. As a community goal, Less Wrong was founded as dedicated to ‘refining the art [and (cognitive) science] of human rationality’. However, the personal goal for its founders from what was SIAI, and is now MIRI, was to provide a platform, a springboard, for getting people to care about existential risk reduction. Now, as MIRI enters its phase of greatest growth, the vision of a practical ‘rationality dojo’ finally exists in CFAR, and with increased mutual collaboration with the Future of Humanity Institute, the effective altruism community, and global catastrophic risk think tanks, those who were the heroes of Less Wrong use the website less as they’ve gotten busier and their priorities have shifted.
They wanted to start a community around rationality, to improve their own lives, and those of others. Now they have it. So, those of us remaining can join these other communities, or try something new. The tools for those who want this website to flourish again remain here in the old posts: Eliezer, Luke, and Scott, among others, laid the groundwork for us to level up as they have. So, aside from everything else, a second generation could stage a revival of Less Wrong, where new topics that aren’t mind-killing can be explored. If those of us who care do the hard work to become the new paragon users of Less Wrong, we can reverse its Eternal September.
After this primary exodus from Less Wrong, others occurred as well. I personally know one user who had some of the most upvoted, and some featured, posts on Less Wrong until he stopped using this website, and deleted his account. Now, he interacts with other rationalists via Twitter, and is more involved with the online Neoreaction community. It seems like a lot of Less Wrong users have joined that community. My friend mentioned that he’s read the Sequences, and feels like what he is thinking about is beyond the level of thinking occurring on Less Wrong, so he no longer found the site useful. Another example of a different community is MetaMed: Michael Vassar is probably quite busy with that, and brought a lot of users of Less Wrong with him in that business. They probably prioritize their long hours there, and their personal lives, over taking time to write blog posts here.
Personally, my friends from the local Less Wrong meetup, and I, are starting our own outside projects, which also involve students from the local university, and the local transhumanist, and skeptic, communities as well. Send me a private message if you’re interested in what’s up with us.
Isn’t there something inherently self-destructive about a website that teaches “winning”? I mean, when people start winning in their lives, they probably spend less time debating online...
If someone starts a startup, they have less time to debate online. If someone joins a rationalist community in their area, they also spend less time online, because they spend more time in personal interactions. Even if you just decide to exercise 10 minutes every day, and you succeed, that’s 10 minutes less to spend online.
(I don’t consider myself very successful in real life, since my ambitions are much higher than where I am now, and I still remain in the LW top contributor list only because my time spent on other websites dropped by an order of magnitude.)
Unless your (instrumental) goal is to write something online, as was Eliezer’s case. Which suggests that we should write about the things we care about (as long as they can be enjoyed by people who try to be rational). You know, something to protect, without the affective spirals.
So instead of trying to increase the debates on LW (which is a lost purpose per se, unless pleasant procrastination is the goal), the right question is: What is the thing you care about? Is there a topic so important to you, that you are willing to spend your time learning it and becoming stronger? (Is it compatible with rational thinking, or is it just a huge affective spiral?) If you have an important topic, and it can be approached rationally, then that’s exactly the thing you should write about… and LW is one of those places where you could publish it.
Maybe the thing stopping you is thinking “but this isn’t about rationality; it is about X”. Well, drop that thought. This is exactly the difference between the Sequences-era LessWrong and the new LessWrong. Eliezer wrote the meta stuff, and he himself admits that he “concentrated more heavily on epistemic rationality than instrumental rationality, in general” (because that was related to his main issue: programming the AI). You don’t have to write this stuff again. (Well, unless you feel extremely qualified to; but you probably don’t.) That was Eliezer’s calling; you write about your calling. It would perhaps be best for the community if you were an expert on overcoming akrasia, creating communities, teaching or testing rationality, and similar instrumental rationality topics; but if you are not an expert there, you don’t need to pretend. Write about the stuff you know. At least write the first article and see the reactions (worst case, you will republish it on your blog later).
Upvoted. My thoughts:
For full disclosure, I don’t consider myself very successful in real life either, and my ambitions are also much higher than where I am now. This is a phenomenon that my friends from the Vancouver rationalist meetup have remarked upon. My hypothesis is that Less Wrong selects for a portion of people who are looking to jump-start their productivity to a new level of lifestyle, but mostly selects for intelligent but complacent nerds who want to learn to think about arguments better, and like reading blogs. Such behavioral tendencies don’t lend themselves to getting out of an armchair more often.
Mr. Bur, I don’t know if you’re addressing me specifically, or generally the users reading this thread, but, like Mr. Kennaway, I agree wholeheartedly. I personally don’t feel extremely qualified to rewrite the core of Less Wrong canon, or whatever. I want to write about the stuff I know, and it will probably be a couple of months before I start attempting to generate high-quality posts, as in the interim I will need to study in more depth the topics which I care about, and which I perceive not to have been thoroughly covered by a better post on Less Wrong before. I believe the best posts in Discussion in recent months have been based on specific topics, like Brienne Strohl’s exploration of memory techniques, or the posts discussing the complicated issues of human health and nutrition. By fortuitous coincidence, Robin Hanson has recently captured well what I believe you’re getting at.
My prior comment got a fair number of upvotes for the hypothesis about why there was an exodus from Less Wrong of the first generation of the most prominent contributors to Less Wrong. However, going forward, my impression of how remaining users of Less Wrong frame the purpose of using it is a combination of Mr. Bur’s comment above, and this one.
Note: edited for content, and grammar.
Any blog selects for people who like reading blogs. :D
LW is about… let’s make it a simple slogan… improving your life through better thinking in a community.
Which is like your hypothesis, with the detail that those nerds want to experience a supportive environment. Specifically, an environment that will support them in correct thinking (as opposed to: “you just have to think positively, imagine a lot of success, and the universe will send it to you” or: “don’t think about it too much, join this get-rich-quick scheme”), and in their clumsy attempts at improving their productivity (neither: “just be yourself, relax, learn to accept your situation”, nor: “too much talk and no action, either show me some amazing results right now or shut up”).
Same here. I would like to write about education in general, and math education specifically. But to make it better than just random opinions, random memories, and random links to “Scenes From The Battleground”, I need to read some more materials and gather information.
Agreed wholeheartedly.
All purposes seek their own destruction. You achieve a goal and continue on to further things. Even purposes to provide an ongoing service will decay as the world changes around it and new methods must be found.
What is LessWrong to be? A thing that was, or a thing that still has a role? And if the latter, what is that role and who will drive it, given that the founders and several of the former leading lights have moved on to other loci of activity?
Creating rationalist communities—a work that has to be done offline, by different people at different places, but we can coordinate and share success stories here.
Rationality curriculum—I would love to read a progress report from CFAR. When they have some materials that other people can use, that’s again a work for everyone in their own city.
Other than that, I think we should try to apply rationality in things we care about, whatever that is. For example, I am interested in computer programming: I would like to know whether some programming languages are really better than others, or whether that’s just an affective death spiral. As a reader, I think that reading about most topics where the author knows what they talk about and tries to be rational, would be interesting.
[WARNING: GOOEY PERSONAL DETAILS BELOW]
I became part of much of the meatspace rationalist community before I started more frequently using Less Wrong, so I integrate my personal experience into how I comment here. That’s not to say that I use my personal anecdotes as evidence for advice for other users of this site; I know that would be stupid. However, if you check my user history on Less Wrong, you’ll notice that I primarily use Less Wrong as a source of advice for myself (and my friends, too, who don’t bother to post here, but I believe should). Anyway, Less Wrong has been surprisingly helpful, and insightful. This has mostly been since 2012-13, well after the point when most of you seem to consider Less Wrong to have started declining. So, I’m more optimistic about Less Wrong’s future, but my subjective frame of reference is having good experiences with it after it hit its historical peak of awesomeness. So, maybe the rest of you users here concerned (rightfully so, in my opinion) about the decline of discussion on Less Wrong have hopped on a hedonic treadmill that I haven’t hopped on yet. I believe the good news from this is that I feel excited, and invigorated, to boost Less Wrong Discussion in my spare time. I like these meta-posts focused on solving the Less Wrong decline/identity-crisis/whatever-this-problem-is, and I want to help. In the next week, I’ll curate another meta-post summarizing, and linking to, all the best posts in Discussion in the last year. Please reply to me if this idea seems bad or unnecessary, to stop me from wasting my time writing it up.
I’m not really a very old user, maybe three years (and after becoming more active in real-life meetups I switched to an alt that used my real name, so I’m not as inactive as I look, though still pretty inactive these days). But I have to say, it subjectively feels like the quality of everything on lesswrong is lower than it was when I joined.
And I’ll tell you what I perceive the difference to be:
1) All my favorite writers stopped writing here. I have to go elsewhere to find their content. Previously, I felt that most of the stuff I read here was at a level above me in terms of insightfulness and level of philosophical rigor… and now, with a few exceptions, I don’t.
2) The user-base shifted such that it was no longer a homogeneous entity which I labeled as an in-group. People here don’t just automatically share my outlook on morality, epistemology, “free will”, consciousness, and even politics anymore. Previously, the core sequences were pretty in-line with what I initially believed, and the entire userbase shared those views. That’s not to say I don’t value diversity of opinion, but there is something special about a group that agrees with you on every core issue. The inferential distance just keeps growing wider and wider.
3) I can’t quite put my finger on it, but somehow commenting here gradually began to feel more like I was arguing a viewpoint, rather than cooperative mutual discovery. If I want to argue with people who are wrong on the internet (heh, username), I can go do that anywhere.
Of course, this doesn’t mean LW is objectively worse. I was fairly impressed when I first started reading, so this might simply be regression to the mean (as in, maybe LW was always a random walk, and I first joined because it hit the right buttons for me at some point in that walk, and now it’s moving away from that point). Or it might just be rosy retrospection.
I’ve kind of accepted A...lesswrong kind of began “dying” for me a year ago. But I don’t recommend it as an action...it’s an unfortunate thing to happen.
I like option B the best. A lot of the good stuff I find here these days tends to be links to other things. C and D will probably increase the numbers we’re measuring, but won’t actually raise the content quality. With respect to C, political discussion is wonderful on its own, but it has the side effect of causing the types of people who are primarily interested in talking about politics (as opposed to science or philosophy) to speak up more often, which drives down quality. With respect to D, that’s not changing anything other than the location of the activity. Who cares if it’s all in one thread or not? I think I disagree about it being harder to navigate.
I just want to second point (3) that you made. Constructive commenting, where you acknowledge the strengths of the post, and then point out the flaws and also suggest how to fix the flaws might go a long way in incentivizing discussion.
To elaborate further, the sheer absence back then of the blue-green, “arguments are soldiers” mentality (no doubt helped by the fact that everyone had a large set of shared core premises) made every conversation seem like a step forward. People did nit-pick excessively just as we do today, but somehow it felt like the good kind of nitpicking.
But it’s not that people were nicer or more diplomatic. They were a lot more playful back then, but they would still be fairly blunt even by internet standards. It was something else...I think it was simply that they were more interested in getting to the right answer than they were in “winning” in the rhetorical sense.
I guess to put it dramatically, LW has gradually been consumed by the Dark Arts, which has caused a lot of quality people to get bored and leave.
I’d really like idea D. Open threads aren’t terrific for developing ideas due to the navigation and visibility problems.
Seems to me like idea D will create navigation and visibility problems in Discussion. It will be like a Facebook wall.
And if the Discussion page is full of quickly-scrolling two-sentence “articles”, nobody will write anything decent there anymore.
If we have less content, then we simply have less content. Either accept it, or write some content you would like to see. But increasing the number of articles artificially, by converting each Open Thread comment into a separate article, that is a lost purpose.
Open threads don’t have to be banished. But cluttering Discussion is hardly a problem. I can still see threads from mid-May on the first page; these threads already have dramatically lower activity (any thread has dramatically lower activity after a few days). It’s a long way to the point where thread discovery is a problem.
Maybe have a third place to post “articles” with drastically lower entry requirements? Something like an open forum next to main and discussion.
Somehow I feel bad about “articles with drastically lower entry requirements”. If something is really, really… how to say it… for example if it’s just a hyperlink to some article without providing a summary, or an idea expressed in one sentence and then “Discuss!”… uhm, just no. That’s for a Facebook status, or a chat. If someone is too lazy to write, then perhaps they just shouldn’t write; maybe it’s better to have less content than lazily written content.
However, I wouldn’t object against having a separate “Open Forum” category (as in: “Main”, “Discussion”, “Open Forum”), moving all Open Threads into the Open Forum, and perhaps even creating a new Open Thread every day (preferably automatically by a bot).
I would also encourage users with upvoted (say, karma of about 10 or more) top-level Open Thread comments to rewrite those comments as “Discussion” (potentially even “Main”) articles. I mean, they already received positive feedback from the community, so there is no need to fear. But I mean rewriting, not merely reposting. I don’t want to encourage sloppy posting, only to encourage users. (To prevent the “maybe I will spend an hour polishing the article, but probably no one will like it anyway, so why bother” thinking.)
The problem is that LW is not getting many impressive posts from either talented outsiders or regular commenters who managed to graduate into quality article writers.
I suspect EY making Roko flame out in 2010 set up a dynamic where fluff posts and snarky comments are fine, but being an outsider who can write quality posts about something actually interesting is not, since that sets you up for a similar loss of face. People who could write the sort of content LW was supposed to be getting can do so just as well at their own blog which isn’t subject to random bursts of MIRI autocracy, and most seem to do that.
Also, LW doesn’t really have an incentive structure for going above and beyond writing stuff that the average reader will upvote after a quick read. There isn’t a ladder that takes posters who are hooked on the local feedback systems from the status of “frequent commenter who receives mostly upvotes” up to producing increasingly impressive content.
Many of the most-influential, highly-respected people in SIAI/MIRI circles don’t read LessWrong much, or post but don’t make comments. I’m thinking of Eliezer, Michael Vassar, Carl Schulman, and Peter deBlanc, but if you look at the MIRI team, you’ll see mostly names I don’t recognize from LW. lukeprog does, now, and that’s great, but I suspect that within the MIRI org chart, spending time here costs a bit of status. It signals that you’re one of the followers rather than a leader. Replying to comments may also cost status if you perceive your status as higher than the person you’re responding to.
I also think earlier LW had more discussion about futurism, transhumanism, and artificial intelligence, and those things brought people in. More importantly, people engaged with those topics had specific questions that had answers.
The voting system favors posts that don’t have anything offensive or that you can disagree with over posts that are interesting and hence controversial.
Maybe upvotes on Discussion-level posts should get more karma than a comment. Perhaps something like 3-5 karma per upvote.
It is always frustrating to see a good comment on your Discussion level post receive more karma than your post itself. A decent Discussion-level post is at least an hour’s worth of work; a good comment is more like 10 mins. The person who steps up and sets the stage should be rewarded.
See also http://slatestarcodex.com/2014/06/04/open-thread/#comment-95554
I think the most interesting part was the discussion about 4chan.
The well-kept garden thing obviously hasn’t succeeded as planned, so should we be aspiring for some kind of 4chan for rationalists?
I disagree; LW has succeeded far better than, say, SL4, and better than OB. Despite having just a tiny fraction of the population and activity and a heavily restricted set of topics, SL4 was a much less pleasant forum to use.
Do you think LW has succeeded because EY attempted to make it a well-kept garden?
I think it’s one of the factors, along with making posting easier, more explicitly trying to foster a community (how many SL4 or OB meetups were there ever?), the content contribution of the Sequences, and using a forum software with pervasive moderation.
I wouldn’t mind seeing an experimental sub-reddit here that made all comments anonymous while keeping the voting, so that you get some of the benefits of anonymity without being as noisy as 4chan.
4Chan isn’t all that special. It’s wildly popular, sure, but most of that is because of first-mover advantage: it was the first English-language community with the right incentive structure (in particular, normative anonymity, easy image insertion, and fast-moving auto-expiring threads) to hit critical mass.
I don’t think the initial core community of anime perverts has all that much to do with its eventual culture, either. Certainly if it had been seeded with a different type of American geek we’d have ended up with something pretty similar, albeit with more naked pictures of Dejah Thoris and fewer of Rei Ayanami; but I don’t even think a counterfactual 4Chan for fraternity bros would have looked very different, except of course that bros don’t spend enough time at their computers for that to work. 4Chan’s format doesn’t provide any enforcement mechanisms strong enough for the seed culture to shape its evolution much, so what we’re seeing now is more like the middle of an attractor defined by the incentives embedded in the format.
For making good posts, what kind of pr0n should we reward rationalistfags (ratfags for short) with?
X-D
I once posted to Main (http://lesswrong.com/lw/6uw/how_to_enjoy_being_wrong/).
Afterwards, I felt bad about it somehow—like I had done something wrong or unappreciated—despite having a substantially positive karma balance on that post. I think the reason was that almost all the top-level comments were neutral or negative and there was not much encouraging discussion, and I think the post might have been moved to Discussion—it was certainly not promoted.
It’s actually interesting to go back and look at that because I now realize that that was a reasonably successful post and probably should have encouraged me further. Instead it did not. I wonder if something similar has happened to others.
Looks to me like you were a victim of a culture of hyperdeveloped cynicism and skepticism. It’s much easier to tear things down and complain than to create value, so we end up discouraging anyone trying to make anything useful.
I think a big issue is that the big contributors of the past (lukeprog, EY, Yvain, gwern, Kaj_Sotala, etc.) aren’t writing articles here anymore, and there is no similarly good and popular writer doing the same today. There is no purpose in coming here, except for the Open Threads. Posting and making comments isn’t very fun because you always have to watch what you say.
Anything that requires many people to change their habits probably isn’t going to happen. Changing norms is difficult for the same reason, so idea D is possible, but a bit hard.
I think a combination of [two suggestions quoted from elsewhere in the thread] and generally allowing a freer discussion about related “fun” issues could work, but I don’t know how you would go about implementing the first one. Like I said, changing norms and habits is a bit difficult, and making an announcement to the effect of “Be more co-operative and cheerful” is probably going to work as well as announcing “Be nice and don’t bully!” in schools. Just relaxing moderation wouldn’t be enough. Because honestly, I haven’t even noticed any moderation here; there’s just the feeling of it.
There’s the possibility of starting from scratch and making the subreddit /r/lesswrong more active. Then changing habits and community norms wouldn’t be such a big problem. LW-related subreddits like /r/rational and /r/HPMOR are already pretty active, so expanding this “LW cluster” on reddit would be a natural progression. But I feel like even if that happened, it wouldn’t be a satisfying solution for many.
I haven’t posted here for that long but I think that a more co-operative and cheerful style of discussion would strengthen the community, encourage people to post more, and ultimately strengthen the rationalist cause. In short, it would seem that adopting such more benevolent norms would be the rational thing to do...
Agreed. Pedagogy progresses best when earnest ideas receive earnest and constructive feedback.
I would agree, but “adopting benevolent norms” is probably not so easy to do in practice. Nasty people can be very subtle in their attempts to demean and put down other people. Probably a lot of the time they aren’t even aware of it themselves. A concerted effort to improve the level of benevolence would be very costly in time and energy and would invite endless meta-debate.
Trike should be able to check this. My guess is that the site has a much higher view rate than 2-3 years ago, from both casual and registered users, but maybe they do not vote as often as before (or are more negative than before?), keeping the average karma/post steady.
Also, does the median karma per post match the mean, or is the latter skewed by high-karma outliers?
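Under the same assumption of a posts.csv export with “date” and “karma” columns (hypothetical names), a minimal sketch of that skew check: months where the mean karma sits well above the median are being propped up by a few high-karma outliers.

```python
# Hypothetical skew check: per-month mean vs. median karma, assuming the same
# posts.csv export with "date" and "karma" columns as in the earlier sketch.
import pandas as pd

posts = pd.read_csv("posts.csv", parse_dates=["date"])
stats = posts.groupby(posts["date"].dt.to_period("M"))["karma"].agg(
    mean="mean", median="median"
)
# Months where the mean is far above the median are skewed by outliers.
print(stats[stats["mean"] > 1.5 * stats["median"]])
```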
I have access to LW google analytics. Traffic on LW has trended down since its peak in 2012, but not as steeply downwards as Discussion posts… perhaps a 15% drop.
My pet theory is the same as the one I’ve always offered: the LW user moderation is too heavy-handed, writing LW posts isn’t that much fun, and there’s a culture of “how dare you write that post” (e.g. “was this really appropriate for (Main|Discussion)? it really should have gone in (Discussion|an Open Thread)” is a common refrain). And there’s been a kind of deflationary phenomenon where what was once appropriate for Main becomes appropriate for Discussion becomes appropriate for an open thread (e.g. this was a featured post in the early days of LW; nowadays a link with explanatory text is frequently an open thread post). I think we should try (a) telling people in threads like these they should write up interesting post ideas if they have them (to save LW!) and (b) going friendly/easy on those who do write posts.
Note that something like this has been discussed as a problem since 2011.
The nice thing about user moderation in the form of voting is that it’s easy to throw a lot of content at the forum and see what sticks… it will get filtered automatically. So why not do that?
After I look at the old Main or Discussion, I mostly remember the best posts and am hesitant to post lower quality stuff. Not sure if this is a common sentiment.
Consider that your internal estimate of the quality of the post you will make is noisy. For example, I did not expect this to become my most popular post, and I did not anticipate all the negative feedback I received on this post (also potentially interesting to note: although that second post ended at +5, I think it was lower than 0 at times and overall I felt pretty punished for writing it).
Oh, I agree that my estimate of how my post would be rated is piss-poor, judging by how my comments are rated. I was talking about how I personally feel about their quality vs how I personally feel about the quality of the best posts. Maybe I’ll dig through my drafts and post something after a slight polishing, just to see what happens.
Sounds excellent!
What do you base your guess of higher readership upon? e.g. there’s been no new HPMOR this year, so no n00bs from that.
Well, the surveys have consistently shown growth, although IIRC the last survey showed less growth.
Growth in survey answers, presumably? We’re talking about hits here, which includes the lurking masses. And should be a much more solid number, if we have it.
And growth in status of the survey.
If you mean growth in the status associated with taking the survey, which is what actually matters for gaining responses, then I’m not sure about that. I haven’t run the regression I’d need to generalize, but my own ritual “I took the survey” responses have gained me less karma each year.
There are other population dynamics that could explain this, but they all look a little far-fetched to me.
It would be great to see some numbers, and which parts of the site they’re viewing. For this particular question, we’d want to look at the visits to the Discussion page, rather than visits to general parts of the site such as the Sequences or Wiki.
The open threads are popular and valuable. Please don’t destroy the good part of this site in your efforts to fix the bad.
B—posting links to articles—is already possible. It’s fallen out of fashion, I’m not sure why. So far as I remember, link posts in Discussion went over well enough so long as there was a substantial excerpt or a summary rather than just the link.
That’s the problem. Posting a summary is a trivial (or not so trivial) inconvenience.
That’s interesting—you’ve got 3 karma points. When I post a link, I usually add an excerpt or summary on utilitarian grounds, since I think it’s less total work for me (who’s already read the link and knows something about it) to give some indication of why other people should be interested than for a number of people to click the link and check it out for themselves.
I gather the media thread isn’t a good enough place for posting links.
I was, for a period, a major submitter of links to Hacker News. The process for doing that with the bookmarklet they provide is literally two clicks and 10 seconds. How many of each is it for LW today?
Not seeing a summary is a sufficient inconvenience that I ignore the link.
But what does it matter if 1% of all links that should end up here, actually do? Hacker news is a proven model, people not clicking without summaries isn’t an issue.
A proven model of what, though? I don’t read Hacker News (or reddit, or 4chan), because every time I’ve looked around those places, I’ve seen nothing worth staying for, just shiny distraction.
If Less Wrong has declined, what has it declined from and what do people want it raised to?
It has declined from high quality intellectual discussion (admittedly, of questionable direct value in every day life) to . . . basically crickets.
All the main content creators/conversation starters have gone off to other projects, or formed local meatspace communities that suck up their time.
Possibly an impressive record for LW.
Maybe we’ve discovered that open threads are the most valuable discussions and the rest of the site isn’t worth it?
One could also say that that is where they prefer to spend their time.
Yes, that seems to be true. I didn’t mean to cast it as a negative thing.
All I’m saying is that we have a supply problem, and you’re raising a demand issue. Also, the issue you’re raising is based on an anecdote that seems sufficiently niche as to not be worth the tradeoff (i.e. not solving the supply issue). If you have evidence of generality of the demand for summaries, I’d like to see it.
It’s a frequent complaint (and not just by me) when people post links without summaries.
Personally, if someone wants me to read something, they’d better tell me what it is first, or I just ignore it.
Complaint isn’t actually a high enough barrier. If I had a waiter serve me breakfast every morning in bed, and suddenly I had to go to the kitchen for it, you bet I’d complain. The question is, would people not visit links based on the title alone?
In any case, I’ve explained this enough times that I think I’ve done as much as I could have. I’ll just leave it at this.
Similarly to what some others have written, my attitude toward LessWrong is that it would best thrive with this model:
1. Embrace the Eternal September.
If LessWrong is successful at encouraging epistemic and especially instrumental rationality, people who have benefited from the material here will find less value in staying and greater opportunities elsewhere. LessWrong doesn’t need to be a place to stay any more than does a schoolhouse. Its purpose could be to teach Internet users rationality skills they don’t learn in ordinary life or public school, and to help them transition into whatever comes next after they have done so.
Since culture is always changing, to best aid new waves of people, the Sequences will need to be scrapped and crafted anew on occasion.
2. Aim lower.
Eliezer had motives in writing the Sequences in the way he did, and he also had a very narrow background. It has often been noticed that the demographics here are absurdly skewed toward high-IQ people. My presumption is that our demographics are a consequence of how things like the Sequences are written. For example, Eliezer’s supposedly “excruciatingly gentle” introduction to Bayesianism is in fact inaccessible for most people; at least it was difficult for me as a high-but-not-very-high IQ person with (not-recent) years of statistics training, and I pointed friends toward it who simply gave up, unable to make progress with it. A new Sequences could do well to have multiple entry points for people of different backgrounds (i.e. abandon the programmer jargon) and ordinary IQs.
3. Extend higher.
If we want to keep longtime participants from moving on, then we have to give them additional value here. I can’t give advice here; I feel I’ve already learned more theoretical rationality here than I can effectively ingrain into habit.
Upvoting #2, way after the fact.
I gave my own reasons for mostly abandoning this site in a post.
There were additional specific factors, some involving Eliezer’s high-handed interventions to remove or downgrade things I’d posted without, I think, considering them carefully. A big one was when gwern responded to a post of mine with a vicious attack, not on my post, but on me as a person. I replied with something to the effect of, “As a rationalist, you should recognize that attacking someone has a cost, so what exactly is the benefit to you here?” He responded by saying that he just felt like it.
That wasn’t what bothered me. What bothered me was that his comment was cruel and senseless, exactly the opposite of what this website is supposed to encourage—yet this denunciation of rationality in his personal behavior had more upvotes than downvotes. That showed me that this website isn’t really about rationality, at least not to most of those who read and vote.
I feel a little bad about admitting that his personal attack succeeded in his goal of reducing my presence here. But it wouldn’t have, if the LW community hadn’t assisted.
I disagree strongly with this characterization and feel this comment simply continues the pattern that I was criticizing. For some examples:
Cyan on Goetz’s statistical understanding: http://lesswrong.com/lw/jj6/using_vs_evaluating_or_why_i_dont_come_around/aerh
Goetz’s argument against Shakespeare: http://lesswrong.com/lw/j24/to_like_or_not_to_like/a1pj
the ‘vicious attack’: http://lesswrong.com/lw/h75/optimal_rudeness/8rfh
and let’s not forget Goetz’s classic http://lesswrong.com/lw/h56/the_universal_medical_journal_article_error/ (which you can see he’s still complaining about in the above comment)
If people think I’m wrong, not just unnecessarily insulting, well, go through Goetz’s comment and post history, starting at the beginning, and see if my summary does not strike you as fitting better than a narrative of baseless persecution. Cyan’s mention of ‘narcissistic injury’ is right on the money.
This would be a good thing to encourage—though I’m not sure that allowing submissions from elsewhere is the best way to achieve it. I assume that the “Rerunning the Sequences” thing that was going on a while back was an attempt at this—but the reposts didn’t seem to get much attention or discussion. For various reasons, people are less receptive to content that is old—heuristics imply that it will be less salient on average, the author is less available for feedback/interaction, absorbing old information takes more discipline, there is less discussion, etc… It would be better to have posts which rehashed the main ideas in the Sequences, which would solve this problem. Additionally, it would add new perspectives, insights and refinements—it wouldn’t be completely redundant.
Another issue that LW has is being contrarian for the sake of being contrarian. This is really off-putting to me, and most likely to other users as well. Encouraging redundancy would alleviate this problem since people would be able to gain status/distinguish themselves more easily without being contrarian.
I really dislike this idea. It takes effort to read political stuff properly, and it also drives away newcomers. There is more than enough discussion of politics in the blogs surrounding LW, if that is something you want to discuss.
Problem 1: The subject area is defined too narrowly. Instead of limiting ourselves to “refining the art of human rationality”, I would like the forum to allow any content which is interesting to an audience of atheist humanists who favor solving problems through a rational / analytic approach and who cherish a rationalist style of discourse. This also applies to how the forum markets itself outside.
Problem 2: Much of the time, the forum feels too much like a battle arena and too little like a community. In particular, I felt great disillusionment with LessWrong after my proposal to restrict downvotes to traditional use-cases of moderation received vehement opposition. Possible improvements:
Add a lot of community features to the site. For example, integrate the google groups for LW business networking and LW parenting into the site (there is currently no way for newcomers to find them). Create a platform for LW couch surfing. LW crowdfunding. Subforums for people seeking advice anonymously. Et cetera.
Revamp the Karma system. For example, go for something more like StackExchange (e.g. you can’t downvote a comment, you can only “flag as inappropriate”).
Publish much more of the stuff going on in meetups to the site. For example, videos. Maybe we also can allow people to participate in meetups remotely through e.g. Google hangout.
I believe that friendly behavior and not downvoting are two different things, but these ideas seem mixed together in some proposals.
I would prefer if LW became more friendly, and less like a “battle arena”. I mean, when I meet with rationalists at meetups, I am so happy, and I love them all… so why don’t my words here reflect it? This is a thing that needs to be fixed, and that I need to be reminded of more often.
But upvoting and downvoting is different from that. Votes != words. Clicking upvote a dozen times is not an equivalent of saying “I love you”. We need more warm speech, not indiscriminate upvoting. At least this is how I feel about it. My idea of a better LW is a place with warmer discussion, not a place where hyperlinks without a summary get upvoted. That would be solving a wrong problem.
Wouldn’t that be like comments on Facebook? I am afraid it would incentivize people to post controversial comments. These days a comment with 3 upvotes and 0 downvotes has a higher score than a comment with 7 upvotes and 10 downvotes; without downvoting it would be the other way round.
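To make the comparison concrete, here is a toy sketch of the two scoring rules; this is an illustration, not LW’s actual implementation:

```python
# Toy comparison of the two scoring rules discussed above; an illustration
# only, not LW's actual karma implementation.
def net_score(up: int, down: int) -> int:
    return up - down  # current rule: downvotes subtract

def upvotes_only(up: int, down: int) -> int:
    return up  # rule with downvoting removed

print(net_score(3, 0), net_score(7, 10))        # 3 vs -3: uncontroversial comment wins
print(upvotes_only(3, 0), upvotes_only(7, 10))  # 3 vs 7: controversial comment wins
```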
Yes. And even more generally—if your (rationality-related, but not necessarily) activities are in the real world, then write about them here. Tell us what happened at your meetup. Tell us about things you debated in your google group. Etc.
Some people tried that; the problem is you can’t have more than circa 10 people in a hangout, and even then it goes very slowly. :(
I’m not convinced that’s a bad thing. It certainly would help avoid groupthink or forced conformity. And if someone gets upvoted for posting controversial argument A, then someone can respond and get even more votes for explaining the logic behind not-A.
So, what is your opinion on neoreaction, pick up artists, human biodiversity, capitalism, and feminism?
Just joking, please don’t answer! The idea is that in a debate system without downvotes this is the thread where strong opinions would get many upvotes… and many people frustrated that they can’t downvote anymore, so instead they would write a reply in the opposite direction, which would also get many upvotes.
We wouldn’t have groupthink and conformity. Instead, we would have factions and mindkilling. It could be fun at the beginning, but after a few months we would probably notice that we are debating the same things over and over.
There certainly needs to be some way to moderate out things that are unhelpful to the discussion. The question is who decides and how do they enforce that decision.
Other rationalist communities are able to discuss those issues without exploding. I assume that Alexander/Yvain is running Slate Star Codex as a benevolent dictatorship, which is why he can discuss hot button topics without everything exploding. Also, he doesn’t have an organizational reputation to protect—LessWrong reflects directly on MIRI.
I agree in principle that the suggestion to simply disallow downvotes would probably be counterproductive. But how are we supposed to learn to be more rational if we can’t practice by dealing with difficult issues? What’s the point of having discussions if we’re not allowed to discuss anything that we disagree on?
I guess I think we need to revisit the question of what the purpose of LessWrong is. What goal are we trying to accomplish? Maybe it’s to refine our rationality skills and then go try them out somewhere else, so that the mess of debate happens on someone else’s turf?
As I write this comment I’m starting to suspect that the ban on politics is in place to protect the reputation of MIRI. As a donor, I’m not entirely unsympathetic to that view.
If this comment comes off as rambling, it’s because I’m trying not to jump to a conclusion. I haven’t yet decided what my recommendation to improve the quantity and quality of discussion would be.
There’s no ban in place on discussing politics. We do have highly controversial discussion about far out political ideas like neoreactionism.
Indeed, this ban relates to how we discuss topics, not what topics we discuss.
An example would be Eliezer’s “Traditional Capitalist Values”, or the suicide bombers mentioned in other articles in the Sequences.
Which of the proposed solutions do you prefer?
[pollid:701]
I think this would work better with each question asked separately, since they are not mutually exclusive.
Idea A: Accept that LessWrong has fulfilled its purpose and should be left to fade away, or allowed to serve as a meetup coordinator and repository of the highest quality articles. [pollid:702]
Idea B: Allow and encourage submission of rationalism, artificial intelligence, transhumanism etc related articles from elsewhere, possibly as a separate category. [pollid:703]
Idea C: Allow and encourage posts on “political” topics in Discussion. [pollid:704]
Idea D: Get rid of Open Threads and create a new norm that a discussion post as short as a couple sentences is acceptable. [pollid:705]
Personally, I fall on the ‘all of the above(except idea A)’ side of the fence. I primarily use LessWrong for the Main board, as it is an excellent source of well-edited, well-considered articles, containing interesting or useful ideas. I want the remainder of the site to thrive because if there is not a large, active userbase and new users being attracted, then I would expect to see the types of content I want to see become less frequent. All of these ideas seem like good things to do, keeping in mind that if these do not actually support the goal of making good Main articles more frequent, then they are not good things, and it seems possible that some of these could backfire.
Allow more and more varied posts to survive without karma bombing. My favorite other idea is to charge downvoters one karma point for each downvote, as is done on stackexchange.
I was actually surprised that (down)voting doesn’t cost karma. Free downvotes create an imbalance, so I now see charging for them as beneficial. It is more the policy people follow when voting that is the problem.
Did anything specific happen (or stop happening) in the fall of 2011?
I also notice that the “featured articles” are for the most part re-postings of old articles. When did this start?
Meetups were ramping up around then.
There are certain signalling risks as you broaden the discussion topics, so I’d specifically vote against C. The single-Discussion-forum, social-norm, and karma model works very well the more targeted the discussion is, and frays as the topics become more varied.
I’d be interested to see the number or distribution of users creating top-level Discussion opening posts. There are advantages to having a number of high-profile advocates who write well, but there are downsides as well—if there are three people /really/ good at starting discussions, you get great discussions until one of them has a busy workweek. If there are thirty people only mediocre at it, the highs are as high, but there’s significantly less loss if one person has their internet connection go down.
At least as a user, the Top Contributor List on the right side of this page is somewhat discouraging.
As a user who has partially-finished posts and ideas for posts, but has not published them, another matter is that it’s not terribly clear where and how to start. The Sequences have the greatest promotion and are the most obvious place to look within the site hierarchy, but they seem to be (probably intentionally) built to avoid such metacontent. The FAQ page actually has some useful tips, but it’s fairly ‘deep’ in the site hierarchy, and is written with an expert’s knowledge of ‘how’ things work and thus lacks things like a “what tags are commonly used” bit. Filtering by Top tends toward inside baseball due to certain mechanics. There is no obvious list of rules (what is a core LessWrong topic?), even as the rules are not static (the recent “no hypothetical violence/illegal action” rules). I’m very unsure how well the site tolerates discussion on well-traveled ground, or to what degree folk are expected to search for similar pre-existing topics before posting. The first rule-like post I could find is Well-Kept Gardens Die By Pacifism which… uh, is not the most encouraging.
Of course, the social norms discouraging content exist for reasons.
I think LWers and online communities in general should realize that, just like any other economic good, they are subject to obsolescence.
Also, the reason for the decline in activity does not necessarily need to be treated to make LW popular again. A refocus might just be more effective.
Just like any product, an online community needs to know its niche and market that niche heavily.
I think the most effective step is for the group to make an effort and decide what direction it wants to give the site: be it a repository, a place for discussing rationality and AI within or without the boundary traced by the Sequences, and so on, and then make this the best place for that kind of information.
When do you imagine guns or butter will finally be obsolete?
Which is to say I don’t think there is any such general rule of obsolescence in economics. And especially when it comes to a branded thing which can be modified, obsolescence would then seem to be a choice of the designers and not a fundamental phenomenon to be reckoned with.
Guns or butter? Probably never. A gun or some brand of butter? You do the math...
Just as LW is not “the online community”, but just a (very specific) online community. And it needs to cater to its audience, otherwise… Its luck is that it’s very well focused; we need to recognize this and make it more marketable.
Technology is a big component in every model of economic growth, and by definition technological progress makes some other things obsolete.
Once everyone uploads :-P
I’ve never managed to figure out how the transition from “population can double in a few decades” to “population can double as fast as a forkbomb” is supposed to reduce scarcity.
The community has a strong in-group feeling to it. I have read most of the sequences and visited the site on and off for a few years. I always feel like an outsider.
I feel like there is a higher standard than there is on other websites that I typically comment on. It makes me feel less inclined to post and comment when I otherwise might. Even in a very casual place like the IRC, I still feel like an outsider. It feels intimidating to join a conversation, compared to a regular IRC channel, or a random forum.
I can’t say that these things are bad. Strong communities and high standards can be good.
I think discussion as currently practiced puts off a class of potential new lesswrongers. I would expect discussion in a non-faith-based rationalist community to include regular challenges to almost all core beliefs. Instead, if a new-ish lesswronger comes in and posts an unpopular interpretation of something, she is karma-bombed out of there.
I think a rational community needs to distinguish between “garbage” and “annoying.” Garbage can perhaps beneficially be taken out (although in a hierarchical message system it isn’t that important to remove it).
But with “annoying,” I think tolerance of annoying ideas may be an important mechanism for building a following.
I think we have the growth statistics in Discussion I would expect from a forum so invitingly named but which displays such a strong and negative reaction to alien concepts.
The change that appeals to me is making a downvote cost the downvoter 1 karma point. This is how it is done in the stackexchange family of boards. This plus a tiny bit of moderation keeps those boards in quite high quality.
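For concreteness, here’s a minimal sketch of that mechanic. The names and constants are hypothetical stand-ins, not LW’s or Stack Exchange’s actual values:

```python
# Hypothetical sketch of a "downvotes cost the voter" rule.
# The constants are illustrative, not Stack Exchange's real numbers.
DOWNVOTE_COST = 1     # karma the downvoter pays
DOWNVOTE_PENALTY = 2  # karma the downvoted author loses

def downvote(voter_karma: int, author_karma: int) -> tuple[int, int]:
    """Charge the voter as well as the author, making downvotes scarce."""
    if voter_karma < DOWNVOTE_COST:
        raise ValueError("not enough karma to downvote")
    return voter_karma - DOWNVOTE_COST, author_karma - DOWNVOTE_PENALTY
```

The point is just that a nonzero price makes drive-by karma-bombing more expensive than a considered disagreement.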
Points refuted a thousand times get dull to refute the 1001st time; it’s hard for a community to realise that for the newcomer it’s new and relevant, even if for the community it isn’t even Tuesday.
Of course, this sort of thing is where a FAQ may be useful.
As something perhaps related to this… is it possible LW became dogmatic and stubborn over time and it generated the sort of place that wasn’t that interesting to follow because nearly everything had already been said?
I’d like to believe I came to accept a lot of the LW common views as I dug into the Sequences and realized many misconceptions I held about reality. (Perhaps there is some bias I’m unaware of that is causing me to believe I’m less biased than I am?) But I’ve noticed that EY, among others here who agree with him, seemed to dig into their views more deeply as time went by.
A couple examples:
Conjunction Fallacy and the Linda Problem—It just isn’t difficult to see that this is less a case of people’s weakness at comprehending formal mathematical probabilities than of socially functioning adults’ desire to engage in non-awkward conversations. If I remember right, EY wrote some long post whose tl;dr (paraphrase) was that the conjunction fallacy must exist because it’s been studied a lot.
“Lifeism” on LW—It’s weird to me that some folks (who are very familiar with the typical mind fallacy) cannot accept that some people wouldn’t want to live forever. Lacking the option to exist indefinitely just isn’t that big a deal to some people, and this is—it seems to me—the sort of thing that many on LW seem(ed) intent on proving was mathematically in error.
Dust Specks v. Torture—Kooky—no matter how often I’m told to shut up and multiply.
Anyway, the one reason I liked LW is that it was really smart people who were willing to change their views based on evidence and the pursuit of reality. I don’t claim to have done some sort of exhaustive study of all the material here (nor could I, since a good-sized chunk of it is above my head), but I think it suffers from all the same sorts of problems and biases typical internet community hiveminds do. And maybe that just got old and annoying to people?
I do believe that people have done experiments specifically to test this interpretation, and found that the Conjunction Fallacy does actually exist, in basically the way the Linda Problem suggests it does. That is, it’s not just “but we repeated the experiment and are sure there’s the effect we measured” but “we considered alternative explanations, and did experiments to confirm or disconfirm those explanations, and they’re disconfirmed.”
I think you are right. And I think the conjunction fallacy as a weakness to intuit probability is real. (The Monty Hall problem vexes my intuition once about every 18 months)
But I think it was vastly overstated and does not apply to “real life” situations in nearly the same way as in testing environments.
If someone approaches me and says “John is super athletic and 7.5 feet tall. Which is probably true? John is a bank teller? Or John is a bank teller and played NBA basketball...”
I’ll think they’re probably telling me about John, in part, because he has achieved something noteworthy—like playing NBA basketball.
I hardly give a damn whether I’m right about this random person’s probability quiz regarding John. I’ll just be polite and give it a quick guess.
It might occur to me that the person is mistaken… and that this person’s being mistaken is more probable than a 7.5-foot super-athletic guy having turned down big cash and fame to be a bank teller.
At any rate, to my recall, some LWers just seemed to be out of touch with the real world on this one.
It’s certainly easier to demonstrate in testing environments. But I think the mistake of using ‘representativeness’ to judge probability does come up quite a bit in real life situations.
But… it’s still a conjunction! You shouldn’t think a claim about John becomes more likely when another constraint is put on it. You might ask “did the first John never play in the NBA, or does that cover both cases?”
Typically, human minds are set up to deal with stories, not math, and using stories when math is appropriate is a way to leave yourself easily hackable. (Mentioning the NBA should not make you think there are more bank tellers, or that bank tellers are more athletic!)
Your reply is a good example—not to pick on you—of what I’m talking about.
Of course it’s “still a conjunction.” Of course the formal probability is lower in the case of the conjunction, regardless of whether John is 10 feet tall and can fly. But in the real world, good instrumental rationality involves the audacity to conclude that John is an NBA basketball player despite the clues in the question. The answer might be that the questioner is wrong about John, and that isn’t a valid option in the lab.
I’m pretty confident that I understand your position, and to me it looks like you’re falling exactly into the trap predicted by the fallacy. Would it be a good use of our time for me to explain why? (And, I suppose, areas where I think the fallacy is dangerous?)
Sure.
No. If you did reply with this to someone who approached you in a social situation, you’d be more likely to “lose” than if you were just polite and answered the question with your best guess.
It is socially awkward to do labwork in real world social environments. So, while your follow up questions might help you win in correctly identifying the highest probability for John’s career path, you’d lose in the social exchange because you would have acted like a weirdo.
It’s good to be aware of the conjunction fallacy. It’s good to be aware of lots of the stuff on LW. But when you go around using it to mercilessly pursue rationality with no regard for decorum, you end up doing poorly in real life.
The real heart of the conjunction fallacy is mistaking P(A|B) for P(B|A). Since those look very similar, let’s try to make them more distinct: P(description|attribute) and P(attribute|description), or representativeness and likeliness*.
When you hear “NBA player,” the representativeness for ‘tall and athletic’ skyrockets. If he was an NBA player, it’s almost certain that he’s tall and athletic. But the reverse inference- how much knowing that he’s tall and athletic increases the chance that he’s an NBA player- is much lower. And while the bank teller detail is strange, you probably aren’t likely to adjust the representativeness down much because of it, even though there are probably more former NBA players who are short or got fat after leaving the league than there are former NBA players that became bank tellers. (That is, you should pay as much attention to 1% probabilities as you should to 99% probabilities when doing Bayesian calculations, because both represent similar strengths of evidence.)
As details increase, the likeliness of a story cannot increase, assuming you’re logically omniscient (which is obviously a bad assumption). If I say that I’m wearing green, and then that I’m wearing blue, it’s more likely that I’m wearing just green than wearing green and blue, because in any case in which I’m wearing both, I’m wearing green. This is the core idea of burdensome details.
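To pin this down, here’s a toy calculation contrasting representativeness with likeliness, and showing that a conjunction can never beat its conjuncts. Every number is invented purely for illustration:

```python
# All numbers are invented for illustration.
p_nba = 1e-5             # prior: a random adult is an ex-NBA player
p_tall_given_nba = 0.95  # representativeness: P(tall & athletic | NBA)
p_tall = 0.005           # base rate: P(tall & athletic)

# Likeliness runs the inference the other way, via Bayes' rule,
# and is dragged down by the tiny prior:
p_nba_given_tall = p_tall_given_nba * p_nba / p_tall
print(f"P(tall | NBA) = {p_tall_given_nba:.2f}")  # 0.95
print(f"P(NBA | tall) = {p_nba_given_tall:.4f}")  # ~0.0019

# And a conjunction can never exceed either conjunct:
p_teller = 0.01             # invented P(John is a bank teller)
p_nba_given_teller = 0.001  # invented P(ex-NBA | bank teller)
p_both = p_teller * p_nba_given_teller
assert p_both <= p_teller   # holds for any conditional <= 1
```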
So let’s talk examples. When an insurance salesman comes to your door, which question will he ask: “what’s the chance that you’ll die tomorrow and leave your loved ones without anyone to care for them?” or “what’s the chance that you’ll die tomorrow of a heart attack and leave your loved ones without anyone to care for them?” The second question tells a story—and if your estimate of dying is higher because they specified the cause of death (which necessarily leaves out other potential causes!), then by telling you a long list of potential causes, as well as many vivid details about the scenario, the salesman can get your perceived risk as high as he needs it to be to justify the insurance.
Now, you may make the omniscience counterargument from before- who is to say that your baseline is any good? Maybe you thought the risk was zero, but on second thought it’s actually nonzero. But I would argue that the way to fix a fault is by doing the right thing, not a different wrong thing. You say “Wow, that is scary. But what’s the actual risk, in numeric terms?”, because if you don’t trust yourself to estimate what your total risk of death is, then you probably shouldn’t trust yourself to estimate your partial risk of death.
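As a toy version of the salesman’s trick (numbers made up): unpacking “death” into vivid causes lets the per-cause estimates sum past any coherent total.

```python
# Made-up numbers: each vivid, unpacked cause gets a padded estimate.
coherent_total = 0.001  # what you'd estimate for plain "die this year"

unpacked_estimates = {  # per-cause guesses, each inflated by the story
    "heart attack": 0.0006,
    "car crash": 0.0005,
    "cancer": 0.0007,
    "household accident": 0.0003,
}

story_sum = sum(unpacked_estimates.values())
print(story_sum)                   # ~0.0021
print(story_sum > coherent_total)  # True: the parts outgrow the whole
```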
*I use infrequently used terms to try to make it clear that I am referring to precisely defined mathematical entities.
Agreed that it’s a good idea to be polite. Disagreed that the conjunction fallacy is just because people are polite. There are lots of experiments where people are just getting the formal math problem wrong or being primed into giving strange estimates.
But even if we suppose that the person is trying to ‘steelman the question,’ that is a dangerous thing to do in real life. “Did you get the tickets for Saturday?” She must mean Friday, because that’s when we’re going. “Yes, I got the tickets.” Friday: “I’m outside the theater, where are you?” “At work; we’re going tomorrow! …you got the tickets for tomorrow, right? Because now the show is sold out.”
Yes, it’s a good social skill to judge the level of precision the other person wants in the conversation. Responding to an unimportant anecdote with a “well actually” is generally seen as a jerk move. But if you’re around people who see it as a jerk move to insist on precision when something meaningful actually depends on that precision, then you need to replace those people.
And if they were intentionally asking you a gotcha, and you skewer the gotcha, that’s a win for you and a loss for them.
Huh? First, Linda’s occupation in the original example is trivial, since I don’t know Linda and could not care less about what she does for a living.
And “replacing” people is not how life works. To be successful, you’ll need to navigate (without replacing) all types of folks.
This sounds weird to me. Who does this?
Anyway… I get the conjunction fallacy. There are plenty of useful applications for it. I still think the core of how it is presented around here is goofy. Of course additional conjunctions = lower probability. And yep, that isn’t instantly intuitive so it’s good to know.
Agreed. That’s why I gave a non-trivial example for the broader reference class of ‘steelmanning questions’ / ‘not noticing and pursuing confusion.’
Disagreed. Replacing people is costly, yes, but oftentimes the costs are worth paying.
It is one of many status games that people can play, and thus one that people sometimes do play.
There’s a certain irony to saying this right after you got done talking about the typical-mind fallacy.
“Torture vs. Dust Specks” is one of my least favorite posts on LW, but not because I disagree with the community’s conclusions. (I do in letter but not in spirit; I’d pick “specks” as it’s stated, but that’s because my idea of the pain:suffering mapping, while consequentialist, is non-utilitarian.) Rather, it’s proven to be inflammatory far out of proportion to its value as a teaching tool: new posts under it tend to generate more uninformative controversy than actual trolling does, even though they’re almost always sincere. Almost as bad, we tend to get hung up on useless details of the scenario, even though transposing the core dilemma into any consequentialist ethic (and a number of non-consequentialist ones) should be trivial.
A sane community would have realized this, shut the monster up in the proverbial attic, and never spoken of it again. We’ve instead decided to hold a party in said attic whenever it comes up, with the monster as the star attraction.
Ha. Good point. :)
Perhaps we largely agree, though I think dust specks was a more terrible option to choose for the thought experiment than you seem to. It doesn’t work. At all. It’s not even interesting… and it was kooky to my mind that so many people were pretending this was some sort of real ethical dilemma.
If you had something that was actually painful to compare the torture to, then you’d have a more difficult putt. As it was, the LWer was presented with something that wasn’t even a marginal inconvenience (a dust speck), told to “shut up and multiply” by a big number to arrive at a true and unbiased view of ethics… and people actually listened and agreed with this viewpoint.
It might be the culty-est moment of LW. Blindly justifying utter nonsense in the mind of the hive. (It reminds me of the Archer meme… “Do you want people to dismiss you as a crankish cult? Because that’s how you get people to dismiss you as a crankish cult.” Ha.)
Note: I just mistyped a word and had to delete one letter… how much torture is that marginal inconvenience worth according to DSET (Dust Speck Ethics Theory)? ;)
Okay, guess it falls on me to bring out a party hat for the monster.
I don’t really want to get into the details; there’s a thread for that and it isn’t this one. But I’ll just briefly note that “Specks” is nothing more or less than what you get when you actually take utilitarianism (or some of its relatives) seriously. It breaks if you don’t treat all discomfort as a single moral evil (or if a dust speck doesn’t register as discomfort), or if you don’t treat everyone’s discomfort as commensurate, but that’s precisely what utilitarianism does—and as a serious ethical theory it’s much older than LW.
The dilemma’s ill-posed in several ways, yes; it’s been proven many times over to mindkill people; and in any crowd other than this one it’d be a reductio of utilitarianism. But the logic does make sense to me; I just don’t buy the premises.
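For what it’s worth, a sketch of that aggregation. The disutility units are invented, and 3^^^3 is far too large to compute with, so a merely astronomical stand-in shows the structure:

```python
# Invented units; 3^^^3 dwarfs any representable number, so use a stand-in.
N = 10**30                # stand-in for 3^^^3 dust-speck recipients
speck_disutility = 1e-12  # premise: a speck registers as nonzero discomfort
torture_disutility = 1e9  # 50 years of torture, on the same commensurate scale

total_speck_harm = N * speck_disutility       # 1e18
print(total_speck_harm > torture_disutility)  # True: "shut up and multiply"

# The conclusion lives or dies with the premises: set speck_disutility
# to zero, or refuse to put the two on one scale, and it never follows.
```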
(Incidentally, I’m still not sure whether Eliezer was going for a positive answer. Hardline utilitarianism seems at odds with what he’s written elsewhere on the subject of suffering, particularly in Three Worlds Collide—and note that he never takes an explicit position, he just says that it’s obvious.)
I feel that dubious honor goes to the moment when we elected to use an invented word for “cult” in order to decrease the search engine presence of “Less Wrong” + “cult”.
I have no problem with people who don’t want to live forever (or even for an incredibly long time). Part of my transhumanism is that people should be allowed to die on their own terms. Sure, it makes me sad that my family will one day die, but it’s not my place to make that decision for them.
What I do have a problem with is people dismissing anti-deathism without giving proper arguments (mostly just accepting the status-quo) or telling me I also should accept death as a neutral or positive thing.
Shelly Kagan was helpful to me in being more accepting of death.
I grew up Evangelical Christian and I’m often fascinated by what I view as a case of something like death denial in the LW/cryonics/transhumanism crowd. It reminds me of the people I knew who embraced religion as a death transcendence mechanism.
There is gratuitous pain that often accompanies the dying process. Plus, loved ones will miss you—that sucks. But “death” is just a transition to non-existence. If you stop existing—and are unaware of your non-existence—that seems utterly neutral by any measure. (The only counterargument I remember is some sort of opportunity-cost plea whereby staying alive allows you to accumulate more utilons and fuzzies… therefore death = bad.)
Further, from an evolutionary standpoint, it seems we should be aware that the bias against death is likely extremely strong, since any species without a strong “anti-death” drive likely died out. It’s part of the reason it irked me that some on LW argued so vehemently that death is rationally bad.
That “tiny bit” of moderation is vastly more than is done here. You complain about discussion threads getting downvoted, but these threads aren’t closed, deleted, or otherwise removed nearly as often as their stackexchange cognates.
Karma is worthless. Charging karma for downvotes will change nothing.
For a new visitor with low-ish karma, downvotes mean losing the ability to post or comment, or something like that. That’s what I remember; I stopped posting negative-karma magnets long enough ago that I haven’t dipped into that range in a while. But the point is what happens to new visitors.
1) These threads and comments are hidden; you have to click to look at them, as though unpopularity were a predictor of basiliskitude.
2) Stackexchange sites are more purposeful (help on Java, help on Android, etc.) than “discussion” should be, so a higher level of editing makes sense there. With an easy ability to jump from thread to thread, lesswrong should not require pruning so much as it requires labeling and easy navigation.
I think discussion needs to be more “social.” We have the main board for the highly edited works of art. If you want active thinkers who are not already lesswrongers to hang around here and possibly become sufficiently infected, I think you need to have a place where they can shoot the shit, hack at the feet of the icons just in case they are clay.
I base my opinion on my own experiences as a student at Caltech and as a professor afterward. Perhaps I did not publish my beer-soaked ideas about quanta in journals, but I sure as hell found a lively discussion of these and other philosophical considerations readily available, and their availability made me a smarter physicist.