Thoughts on “Operation Make Less Wrong the single conversational locus”, Month 1
About a month ago, Anna posted about the Importance of Less Wrong or Another Single Conversational Locus, followed shortly by Sarah Constantin’s http://lesswrong.com/lw/o62/a_return_to_discussion/
There was a week or two of heavy activity by some old-timers. Since then there’s been a decent array of good posts, but nothing quite as inspiring as that first week, and I don’t know whether to think “we just need to try harder” or to change tactics in some way.
Some thoughts:
- I do feel it’s been better being able to quickly see a lot of the community’s posts in one place
- I don’t think the quality of the comments is that good, which is a bit demotivating.
- On Facebook, lots of great conversations happen in a low-friction way, and when someone starts being annoying, the person whose Facebook wall it is has the authority to delete comments with abandon, which I think is helpful.
- I could see the solution being either to continue trying to incentivize better LW comments, or to just have LW be the “single locus for big important ideas”, with the discussion to flesh them out still happening in more casual environments
- I’m frustrated that the intellectual projects on Less Wrong are largely siloed from the Effective Altruism community, which I think could really use them.
- The Main RSS feed has a lot of subscribers (I think I recall “about 10k”), so having things posted there seems good.
- I think it’s good to NOT have people automatically post things there, since that produced a lot of weird anxiety/tension about “is my post good enough for Main? I dunno!”
- But there’s also no clear path to get something promoted to Main, nor a clear sense of which things are important enough for Main
- I notice that I (personally) feel an ugh response to link posts and don’t like being taken away from LW when I’m browsing LW. I’m not sure why.
Curious if others have thoughts.
Two notes on things going on behind the scenes:
Instead of Less Wrong being a project that’s no org’s top focus, we’re creating an org focused on rationality community building, which will have Less Wrong as its primary project (until Less Wrong doesn’t look like the best place to have the rationality community).
We decided a few weeks ago that the LW codebase was bad enough that it would be easier to migrate to a new codebase and then make the necessary changes. My optimistic estimate is that it’ll be about 2 weeks until we’re ready to migrate the database over, which seems like it might take a week. It’s unclear what multiplier should be applied to my optimism to get a realistic estimate.
I think it’s better to be somewhat separate from CFAR. CFAR has their own priorities, which could make LW neglected.
Oooh.
So there will be the React-based UI, the Meteor middle layer, and some database (Mongo?) in the back? Who will host the server?
If you are already talking about migrating the database, do you have the front end pretty much ready, then?
You have to be careful about switching over with an incomplete feature set as LW isn’t terribly healthy and the transition shock might turn out to be very hazardous...
Apollo/GraphQL. I expect us to pay a typical hosting company to host the server; it’s unclear yet who.
Yes and no; Telescope’s core is already fully functional and has a roughly similar data structure to Reddit, and so we can move posts to posts and linkposts to linkposts and comments to comments. So that part of the migration seems clear, and is the sort of thing that Trike has already done before (in moving from Overcoming Bias to Less Wrong).
But our customized version of Telescope will probably handle them differently than the core does. If, for example, we want to move from Less Wrong’s HTML post creation to Markdown post creation, then we need to convert all the old posts (stored as HTML) into Markdown source for those posts. And until we have the Markdown post creation the way we want it, it doesn’t make sense to actually code that conversion.
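To make that concrete, a rough sketch of what the conversion step could look like (this assumes the posts are available as HTML strings and uses Python’s html2text library purely as an illustration; the actual migration may work quite differently):

```python
# Illustrative only: one way the old-post conversion step might look.
# Assumes posts are available as HTML strings; html2text is just one possible converter.
import html2text

def post_html_to_markdown(html_body: str) -> str:
    converter = html2text.HTML2Text()
    converter.body_width = 0  # keep paragraphs on single lines instead of hard-wrapping
    return converter.handle(html_body)

# for post in old_posts:
#     post["body_markdown"] = post_html_to_markdown(post["body_html"])
```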
Yeah, I’m worrying about this. Switching before it’s better than current LW is bad; switching once it’s better than current LW is okay but might waste the “reopening!” PR event; switching once it’s at the full feature set is great but possibly late.
Perhaps switch once it’s as good, but don’t make a big deal of it? Then make a big deal at some semi-arbitrary point in the future with the release of full 2.0.
How about doing a public beta for a month or two, with a warning that afterwards everything posted on the new server will be deleted (including new user accounts, etc.), the data from the old server will be imported, the old server will become read-only, and the new server will become the official one?
Since you can embed Markdown in HTML, you might find that you don’t need to convert the posts.
Is this something you need more people to help with?
It’s very possible that I’m confused here, or missing a cool technical trick. (And also maybe we should have separate Markdown and HTML editors, instead of forcing everyone to use one, which would also make it trivial to import all the old posts while still moving future posts mostly to Markdown.)
More people would be appreciated! Email me for details.
Inline HTML is valid in Markdown:
https://daringfireball.net/projects/markdown/syntax#html
It sounds like the original Markdown has some extra restrictions on how you close the outermost HTML tag, but I suspect most parsers ignore that part of the “spec”.
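For instance, a quick check with the Python-Markdown package (the “legacy-post” markup below is made up for illustration) should show block-level HTML surviving untouched:

```python
# Minimal demonstration that block-level HTML passes through a Markdown renderer untouched.
# Uses the Python-Markdown package; the <div class="legacy-post"> markup is invented.
import markdown

mixed_source = """Some *Markdown* text.

<div class="legacy-post">
  <p>An old post stored as raw HTML could sit here unchanged.</p>
</div>

More Markdown after the HTML block."""

print(markdown.markdown(mixed_source))  # the <div> block is emitted as-is
```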
Do you know if you’ll be able to maintain their familial relationships as well?
We picked Telescope because it has a threaded commenting system, as opposed to systems like Discourse.
I’m curious what plans you have re: open source accessibility on the new codebase?
It might be cool to get the minimum viable version up and running, with a focus on making the documentation necessary to contribute really good, and then do a concerted push to get people to make various improvements.
That may not work, but it’d be an obvious time to try for it.
Like the current codebase, it’ll be hosted on GitHub.
I started blogging recently. Some rationalists have apparently found it good. I would like to make the posts I’m writing contributions to the LessWrong project, and I would like to cross-post rather than link my posts. But I’m only comfortable doing so under a pseudonym, because of options I want to leave open for things to discuss on my blog.
I have therefore recently created this new account. I think I need 10 karma or something to post though. So, soon (TM). When I get around to posting some helpful comments to earn it.
It looks as if your decision to say “I will earn some karma” rather than “I am new here, please upvote this and give me some karma” has led to everyone upvoting your comment. I find this both amusing and pleasing.
With downvotes disabled, getting 10 karma is easier than ever. So one can be pretty confident about this.
I badly miss downvotes. There’s a lot of stuff I think just needs to be downvoted into oblivion and things aren’t going to be good until we can do that again.
I don’t think things were particularly good a month ago when we had downvotes.
Removing low-quality content has increasing marginal utility, like removing drains on your attention; you’re not going to notice a big difference until most of the low-quality content is gone. Getting downvotes back is one tool for removing low-quality content but plausibly others are needed. It would be great if most of the posts in Discussion were high-quality, for several reasons, e.g. people feel more like Discussion is a place they could put their highest-quality content.
The difference between having 50% bad content and 30% bad content isn’t just those 20 percentage points of bad content; it’s also the contributions from all those who would keep visiting if they anticipated a 30% chance of seeing bad content but would not keep visiting if they anticipated a 50% chance.
I think there is an argument that the effort lost steam when our best response to downvote abuse was shutting off downvotes as a “temporary stopgap” measure.
Many, arguably most, of the consequences of downvotes don’t show up in the immediate term. Habits and expectations take time to change, posters choose whether or not to leave altogether, and so on.
There are some annoying spammy things like the “click” guy, some Gleb stuff that’s bad, and some other spammy stuff. But I think it’s a stretch to think that that’s the main problem.
Getting rid of all of that would still leave the site looking mostly as it does now, but emptier.
Agreed on link posts. I wish posters had to write a sentence or two explaining why I should follow the link, and to jumpstart the comment thread.
One of the problems is that if you are on LW you are probably interested in discussion. So being taken to the link takes you away from any comments, if there are any. I would prefer that clicking on the post took you to the page with the comments, with another click necessary to get to the link.
I think part of the problem is that you can’t give a description with link posts, so the best you can do is add a comment with a description.
This should be changed. In the meantime, I only click on link posts with at least one comment, when the comment indicates some reason for me to do so.
I think a serious issue with posting content on Less Wrong, and why I don’t do it beyond link posts, is that Less Wrong feels like a ghetto, in that it’s a place only for an outcast subset of the population. I don’t feel like I can just share Less Wrong articles to many places because Less Wrong lacks respectability in wider society and is only respectable with those who are part of the LW ghetto’s culture.
This doesn’t mean the ghetto needs to be destroyed, but it does suggest that many of our brightest folks will seek other venues for expression that are more respectable, even if it’s dropping (rising) to the neutral level of respectability offered by an anonymous blog. We might come home and prefer to live in LW (the discussions), but an important part of our public selves is oriented towards participating with the larger world.
Maybe as a reader you’d like Less Wrong to be a better place to read things again, just as the average person living in a ghetto may prefer for its luminaries to continue to focus their efforts inward and thus make the ghetto better on average, but as a writer Less Wrong doesn’t feel to me like a place I want to work unless I don’t think I can make myself respectable to a wider audience.
+1; I think this is a major part of my reluctance to write top-level posts in Discussion.
Do you think the Less Wrong of, say, two years ago was less ghetto-ish?
no
This could be estimated by the number of new users per month. (Excluding the sockpuppets.)
Not quite. I think there’s a three-way distinction to be made here, between being (1) small (not many users), (2) niche-y (users are unusual in some way), and (3) creepy (users are unusual in some highly displeasing way). If LW “feels like a ghetto” for an “outcast subset of the population” that “lacks respectability”, I think that’s #2 or #3 rather than just #1, and I’m curious exactly what gworley has in mind.
I wouldn’t have come up with #2 and #3, but those are definitely related to the issue.
That’s mostly a CSS problem. The respectability of a linked LW article would, I think, be dramatically increased if the place looked more professional. Are there any web designers in the audience?
Not quite. In some corners of the ’net LW has… a reputation.
Yes, I know. I bet Islamists don’t think highly of it either.
I bet Islamists don’t think of it.
One general suggestion to everyone: upvote more.
It feels a lot more fun to be involved in this kind of community when participating is rewarded. I think we’d benefit by upvoting good posts and comments a lot more often (based on the “do I want this around?” metric, not the “do I agree with this poster” metric). I know that personally, if I got 10-20 upvotes on a decent post or comment, I’d be a lot more motivated to put more time in to make a good one.
I think the appropriate behavior is, when reading a comment thread, to upvote almost every comment unless you’re not sure it’s a net positive to keep around; then downvote if you’re sure it’s bad, or don’t touch it if you’re ambivalent. Or, alternatively: upvote comments you think someone else would be glad to have read (most of them), don’t touch comments that are just “I agree” without meat, and downvote comments that don’t belong or are poorly crafted.
This has the useful property of being an almost zero effort expenditure for the users that (I suspect) would have a larger effect if implemented collectively.
I think it would be horrible practice. Gold stars for everyone!
If the upvotes become really plentiful they would lose most of their value. You’ll just establish a higher baseline (“What, my comment didn’t get +20? Oh, how unmotivating!”)
I disagree. The point is that most comments are comments we want to have around, and so we should encourage them. I know that personally I’m unmotivated to comment, and especially to put more than a couple minutes of work into a comment, because I get the impression that no one cares if I do or not.
What’s wrong with gold stars for everyone who makes a non-spammy, coherent point?
Inflation.
If everyone gets a gold star for almost every post, gold stars lose any value.
I think this makes grossly false assumptions about how human psychology actually works.
Imagine applying that logic to, for example, computer games. Hey, let’s get rid of all the achievements and ranks that are handed out willy-nilly to people who just turn up and play the game. Instead, you will now only get any recognition for your efforts if you are a lot better than average.
It’s funny how successful games almost always hand out lots of ‘inflationary’ gold stars just for turning up and playing. To build the user-base, you give people rewards for their efforts, not punishments for falling in the bottom 90%.
The goal of successful computer games is to maximize the playerbase without regard to the quality of that playerbase (with exceptions made for people who drive other players away—cheaters, harassers, etc.). If a reasonably docile idiot shows up and clicks where he is expected to click, the game would be happy to reward him with a variety of virtual goodies. Drool—click—star! - drool—click—star! - drool...
Notably the goals of LW are different. I don’t think we should reward people for just showing up. I think we should actively filter idiots out, docile or not. I don’t want more posts—I want more high-quality posts which you shouldn’t expect if you’re rewarding quantity. A pile of mediocre posts, festooned with gold stars, will make LW just another island of mediocrity in the big ’net ocean.
But what is the incentive for people to take the considerable time and trouble to write high-quality posts if there is virtually no-one here to read them, except perhaps the most extreme anal nitpickers?
If you optimize the system for zero “idiots”, you have to be careful that your system doesn’t converge to the trivial solution of having no-one at all commenting or posting, or at least something close to that, where you have a small number of very negative people sniping at anyone who says anything, whilst simultaneously bemoaning the lack of content.
Sure, that’s a failure mode that exists. Going to any end of the spectrum is rarely a good idea. But we started with discussing the inflation of incentives. If even a mediocre post gets gold stars, what’s the incentive to write an extra special better-than-the-usual post? Looks inefficient to me, you get the same gold stars but spend more effort :-/
Well it seems reasonable to reward a mediocre post with upvotes, and a great post with more upvotes and promotion to main (and if main ceases to exist, there could be similar forms of recognition for excellence).
The same is true of comments.
People who are capable of producing great posts and comments will be incentivized to do so, as long as they end up above the mediocre stuff.
For me, there are two disincentives against trying to produce great posts: firstly that I will get sniped on something noncentral to my point (and it is a lot of work to vet a post to take preventative measures against all possible snipes), secondly that LW is kind of lacking in high-quality critics who will engage/my post will be somewhat ignored even if it is good.
Why would you care? The desire to be liked by everyone is a trap. So somebody snipes your post on something noncentral—just ignore the comment.
Part of the skill of writing a great post is tarting it up so it appeals to high-quality critics :-D If no one engages, you were not interesting enough.
Maybe, or maybe the high quality critics left the site, or maybe they are here occasionally but since there’s no easy way to sort and filter posts they missed it in a sea of poor quality posts…
So you’re a pretty princess, it’s just that all knights are away battling dragons and there are so many peasants milling around it’s easy to get lost in the crowd.
I heard there is a site which talks about rationality and defines it as winning… :-/
Yes so locally one wins by leaving LW, not posting here and instead going off to walled gardens on rationalist Facebook, which seems to be what people have actually done.
Well then.
The desire to get positive social feedback is a fundamental part of human psychology that isn’t negotiable. If people get mostly negative feedback at LW, then it shouldn’t surprise you if people just don’t come here any more.
Rationality for humans, as opposed to Vulcans, requires building systems that encourage humans to take part in activities that build rationality, and that means using a realistic model of human motivation, not a wishful-thinking model.
Note the “by everyone” part.
The desire to get positive social feedback should not make you dependent on approval and it should not make you unable to ignore chunks of feedback which you think are useless.
One should be a Vulcan because that way one would be more rational. But since one is actually a human, one should make plans around human responses to negative feedback; humans are very VERY sensitive to negative social feedback and tend to retreat from it. So if you make LW a site that is great for non-existent Vulcans who don’t really care whether feedback is negative or positive, then that site will suck on planet Earth.
Some humans are very sensitive.
Being “very VERY” sensitive is basically a mild disability, similar to being always anxious or having a minor phobia. It is neither a default nor a desirable characteristic.
There are a LOT of humans who will respond to unjustified (in their opinion) and useless criticism with “fuck off” and not give it a second thought.
Sure, but I think the crux of the disagreement isn’t really whether some people respond badly to criticism or not. The crux is whether it is a good idea to set up a site where the majority of contributions (say the top 75%) are met with positive reinforcement such as upvotes, perhaps with the top 10% getting some kind of extra-special positive reward, or whether one should instead give the majority of contributions (say the bottom 75%) some form of negative reinforcement.
I claim that the former will result in a site with more high-quality content, all other things being equal, on average. And I think the reason for this is to do with human psychology and community dynamics.
I haven’t seen this proposed by anyone. Are you sure you’re not fighting a strawcreature?
It doesn’t have to be that specific number or way of doing things—the general point is “do we mostly punish or mostly reward”.
I haven’t seen anyone propose “mostly punish” either.
I was saying, basically, that IMHO the median post doesn’t deserve an upvote, especially from everyone.
Things that are extra-good should get upvotes, things that are extra-bad should get downvotes, and things that are meh-just-average should get neither. Of course everyone makes this evaluation differently, so the number of upvotes is essentially the count of people who thought that your post/comment was extra-good. If no one thought you were extra-good (or extra-bad), I think you deserve a zero score.
Zero is neutral—it’s not punishment.
What exactly do you mean by “should” here? Is it “should” as in the empirical claim “should = these actions will maximise the quality and number of users”, or is it some kind of deontological claim like “should because u/Lumifer inherently believes that a mediocre post/comment should map to 0”?
I ask because it is plausible that
- the optimal choice of mapping is not mediocre → 0, where we judge optimality by the consequences for the site
- you and others are inherently pissed off by people posting an average comment and getting +1 for it
Oh, here “should” means “in my arrogant opinion it wouldn’t be horrible if”.
The answers to your two plausibles are (1) maybe, I don’t know. Depends on what you are optimizing for, anyway; (2) Nope. I’m not pissed off, inherently or otherwise.
OK, forget the phrase “pissed off”. What I am trying to get at is deontology vs. consequences.
I don’t have a deontological rule which says “No rewards for the mediocre” though I do think that rewarding mediocrity is rarely a good idea (on consequentialist grounds).
The difference between our approaches is the difference in focus. In the trade-off between attracting more people and becoming more popular vs maintaining certain exclusivity and avoiding an Eternal September, you lean to the popularity side and I lean to the exclusivity side.
I do not think that this is the tradeoff that we are actually facing. I think that in order for the site to be high quality, it needs to attract more people. Right now, in my opinion, the site is both empty and low quality, and these factors currently reinforce each other.
So do you think we’re facing any trade-off, or the direction is clear and we just need to press on the gas pedal?
I think that at some point adding more “free gold stars”, i.e. upvotes, badges etc to people would look silly and be counterproductive, but we are nowhere near that point, so we should push the gas pedal, aim to upvote every non-terrible post at least somewhat, upvote decent posts a lot and create new levels of reward—something like lesswrong gold—for posts that are truly great.
We should limit downvotes substantially, or perhaps permanently remove the downvote and replace it with separate buttons for “I disagree but this person is engaging in (broadly) rational debate” and “This is toxic/spammy/unrepentantly dumb”.
These buttons should have different semantics. For example, “I disagree but this person is engaging in (broadly) rational debate” might be easy to click but not actually make the post go down the page. “This is toxic/spammy/unrepentantly dumb” might be more costly to click, for example with a limited budget per week and an additional confirmation dialog, perhaps with a mandatory “reason” field that is enforced, but it would actually push the post down the way downvotes currently do, or perhaps even more strongly than downvotes currently work.
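To illustrate the asymmetry I have in mind (the names, budgets, and penalties below are invented for illustration, not a concrete design):

```python
# Invented numbers and names, purely to illustrate the proposed asymmetry between the two buttons.
from dataclasses import dataclass

@dataclass
class VoteKind:
    label: str
    sort_penalty: int       # how strongly a vote of this kind pushes the post down the page
    weekly_budget: int      # how many such votes one user may cast per week
    requires_reason: bool   # whether a mandatory "reason" field is enforced

DISAGREE_BUT_CIVIL = VoteKind("I disagree, but this is rational debate",
                              sort_penalty=0, weekly_budget=1000, requires_reason=False)
TOXIC_OR_SPAM = VoteKind("This is toxic/spammy/unrepentantly dumb",
                         sort_penalty=10, weekly_budget=5, requires_reason=True)
```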
It may be worth noting that the way to get to “upvote every non-terrible post at least somewhat” while still leaving space to “upvote decent posts a lot” is … to have at least some of your users be much more miserly with their upvotes. Even if your goal is to have most things end up on +1 or so, you should be glad that there are users like Lumifer (and, for what it’s worth, me) who mostly only upvote things they think are unusually good. (If everyone upvotes almost everything, you have no way to distinguish better-than-average posts or comments from average ones.)
Yes, there is a point at which more upvoting starts to saturate the set of possible scores for a comment, but we are nowhere near that point IMO. And if we were, I think it would be much better to add a limited-supply super-upvote to the system.
I agree that having more choices than just up/downvote would be useful.
I am not sure how you will persuade the locals to upvote everything non-terrible. That’s a rather radical change in the culture. Instead, in the spirit of this, can I suggest automation? A small script could randomly upvote all posts which didn’t have one of your downvote-equivalent buttons pressed. The rate of upvoting is adjustable and declines with time.
If you want to make it even better, let users pick a waifu and make it so that at certain thresholds she pops up, breathlessly exclaims “Oh, that was so great! I’m so glad you’re here!”, flashes you, smiles, and disappears. We can call her Clipp… um, probably that’s a bad idea :-/
On a bit more serious note, the problem of attracting and incentivizing users is… well explored. There was that thing called Farmville and you can look at any decent freemium game for contemporary examples. How to addict users to little squirts of dopamine is big business. The problem, of course, is the kind of crowd you end up attracting. If you offer gold stars, you end up with people who like gold stars.
Everyone likes gold stars, but not everyone likes decision theory, rationality, philosophy, AI, etc. Even if we were as good as Farmville at dopamine, the Farmville people wouldn’t come here instead of Farmville, because they’d never have anything non-terrible to say.
Now we might start attracting more 13 year-old nerds… but do we want to be so elite that 13 year old nerds can’t come here to learn rationality? The ultimate eliteness is just an empty page that no-one ever sullies with a potentially imperfect speck of pixels. I think we are waaaay too close to that form of eliteness.
A comment being non-spam and coherent is considered a bare minimum around here. A rule of upvoting nearly everything would induce noise. With the current schema, an upvote is a signal of quality, or a way to say ‘more like this’ (not necessarily even ‘I agree’); that provides a strong signal of quality discourse, which would otherwise be lost.
I’m generally in favor of this. One obstacle is that I don’t like putting effort into finding things to upvote; there might be good comments being made on bad top-level posts that I’m ignoring, and I don’t particularly want to wade through a bunch of bad top-level posts to find and upvote the best comments on them.
I wonder if we could find a scalable way of crossposting Facebook and G+ comments, the way Jeff Kaufmann does on his blog? (See the comments: https://www.jefftk.com/p/leaving-google-joining-wave)
That would lower the friction substantially.
I personally would favour any approach that minimizes the amount of discourse that happens in walled gardens like Facebook and Google+.
Walled gardens are probably necessary for honest discussion.
If everything is open and tied to a meatspace identity, contributors have to constantly mind what they can and can’t say and how what they’re saying could be misinterpreted, either by an outsider who isn’t familiar with local jargon or by a genuinely hostile element (and we’ve certainly had many of those) bent on casting LW or that contributor in the worst possible light.
If everything is open but not tied to an identity, there’s no status payoff for being right that’s useful in the real world—or if there is, it comes at the risk of being doxed, and it’s generally not worth it.
The ideal would probably be a walled garden with no real-name policy. I’ve considered writing a site along these lines for some time, with many walled gardens and individually customizable privacy settings like Facebook, but I’m not sure what model to base the posting on: should it look like a forum, like Facebook/Reddit, like Tumblr, or what?
I make the suggestion precisely because we will definitely lose that war.
I don’t think that is in any way certain. There are PLENTY of high-quality indexable discussion sites on the web as counterexamples (HN, Reddit).
Even if it were, we should not go from “we will lose this war” to “we should behave now as if the war has already been lost”.
Reddit/HN seem like examples of extreme success, we should probably also not behave as if we will definitely enjoy extreme success.
Okay, I think I know why I don’t like link posts. It’s because I can’t perform a single click that gets me to both the content and the comments. Instead I need to click twice: once to see the content and once to access the comments on the content. I feel slightly betrayed by the interface when I click on the title of the post and my usual expectations about getting to see both a post and its comments aren’t satisfied.
Reddit, Hacker News, and similar sites work on the “title goes to source, comments go to comments” model. I suspect it will be more expectation-violating overall to have different behavior here. (I agree that there is an expectation shift in going from LW as a place with only self-posts to a place with linkposts.)
For the record, Hacker News and Reddit also annoy me every time this happens.
I do think it was a fine design choice given that it does seem to work for a lot of people.
But I’d personally rather have a convention where you click the link, and then see a typical discussion page with short summary / conversation prompt, followed by comments all in one place.
Maybe we can build a user setting for this (excluding the summary)? Or, actually, if we’re already building a system to allow users to edit tags (a la Stack Overflow), maybe it wouldn’t be terrible to let users edit the summary (a la wiki).
It’s awesome that you guys are really considering ways to incorporate changes people want.
I wonder, since you’re going to have to put a lot of work into the refurbishing project and resources are finite, would it be worth generating some kind of survey for members to take about what kind of features/alterations/options they’d most like to see? I ask because it occurs to me that soliciting ideas in open threads, while absolutely useful as far as encouraging discussion and exchange of ideas goes, might present a patchy or unduly-slanted picture of what the majority of members want. Prolific commenters (like me!) might dominate the discussion, or certain ideas might look more important because they generate a lot of discussion. A survey of some sort might give you clearer data.
That’s not to say you should necessarily do things because the majority want them, this isn’t a democracy as far as I’m aware and some popular requests might be unworkable. It just could be useful to know. Of course you’re better placed to determine if it’s worth the effort.
(Also, this isn’t in any way prompted by Raemon’s point about the link posts; it was your reply about possible implementation options that put it in my head.)
Do you feel the “link post ugh”?
Google suggests nothing helpful to define Keganism, and that Keganites are humans from the planet Kegan in the Star Wars Expanded Universe. Could you point me to something about the Keganism you’re referring to?
FWIW I view a lot of the tension between/within the rationality community regarding post-rationality as usually rooted in tribal identification more than concrete disagreement. If rationality is winning, then unusual mental tricks and perspectives that help you win are part of instrumental rationality. If some of those mental tricks happen to infringe upon a pristine epistemic rationality, then we just need a more complicated mental model of what rationality is. Or call it post-rationality, I don’t really care, except for the fact that labels like post-rationality connotationally imply that rationality has to be discarded and replaced with some other thing, which isn’t true. Rationality is and always was an evolving project and saying you’re post- something that’s evolving to incorporate new ideas is getting ahead of yourself.
In other words, any valid critique of rationality becomes part of rationality. We are Borg. Resistance is futile.
I only heard this phrase “postrationality” for the first time a few days ago, maybe because I don’t keep up with the rationality-blog-metaverse that well, and I really don’t understand it.
All the descriptions I come across when I look for them seem to describe “rationality, plus being willing to talk about human experience too”, but I thought the LW-sphere was already into talking about human experience and whatnot. So is it just “we’re not comfortable talking about human experience in the rationalist sphere, so we made our own sphere”? That is, a cultural divide?
That first link writes “Postrationality recognizes that System 1 and System 2 (if they even exist) have different strengths and weaknesses, and what we need is an appropriate interplay between the two.”. Yet I would imagine everyone on LW would be interested in talking about System 1 and how it works and anything interesting we can say about it. So what’s the difference?
I’m not a massive fan of the ‘postrationality’ label but I do like some of the content, so I thought I’d try and explain why I’m attracted to it. I hope this comment is not too long. I’m not deeply involved but I have spent a lot of time recently reading my way through David Chapman’s Meaningness site and commenting there a bit (as ‘lk’).
One of my minor obsessions is thinking and reading about the role of intuition in maths. (Probably the best example of what I’m thinking of is Thurston’s wonderful On Proof and Progress in Mathematics.) As Thurston’s essay describes, mathematicians make progress using a range of human faculties including not just logical deduction but also spatial and geometric intuition, language, metaphors and associations, and processes occurring in time. Chapman is good on this, whereas a lot of the original Less Wrong content seems to have rather a narrow focus on logic and probabilistic inference. (I think this is less true now.)
Mathematical intuition is how I normally approach this subject, but I think this is generally applicable to how we reason about all kinds of topics and come to useful conclusions. There should be a really wide variety of literature to raid for insights here. I’d expect useful contributions from fields such as phenomenology and meditation practice (and some of the ‘instrumental rationality’ folk wisdom) where there’s a focus on introspection of private mental phenomena, and also looking at the same thing from the outside and trying to study how people in a specific field think about problems (apparently this is called ‘ethnomethodology’.) There’s probably also a fair bit to extract more widely from continental philosophy and pomo literature, which I know little about (I’m aware there’s also lots of rubbish).
There’s another side to the postrationality thing that seems to involve a strong interest in various ‘social technologies’ and ritual practices, which often shades into what I’ll kind-of-uncharitably call LARPing various religious/traditional beliefs. I think the idea is that you have to be involved pretty deeply in some version of Buddhism/Catholicism/paganism/whatever to gain any kind of visceral understanding of what’s useful there. From the outside, though, it still looks like a lot of rather uncritical acceptance of the usual sort of traditional rubbish humans believe, and getting involved with one particular type of this seems kind of arbitrary to me. (I exclude Chapman from this criticism, he is very forthright about what he think is bad/useless in Buddhism and what he thinks is worth preserving.) It’s probably obvious at this point that I don’t at all understand the appeal of this myself, though I’m open to learning more about it.
Obviously different people do things for different reasons, but I infer that a lot of people started identifying as post-rationalist when they felt it was no longer cool to be associated with the rationalist movement. There have been a number of episodes of Internet drama over the last several years, any one of which might be alienating to some subset of people; those people might still like a lot of these ideas, but feel rejected from the “core group” as they perceive it.
The natural Schelling point for people who feel rejected by the rationality movement is to try to find a Rationality 2.0 movement that has all the stuff they liked without the stuff they didn’t like. This Schelling point seems to be stable regardless of whether Rationality 2.0 has any actual content or clear definition.
How this all feels to me:
When I look at the Sequences, as the core around which the rationalist community formed, I find many interesting ideas and mental tools. (Randomly listing stuff that comes to my mind: Bayes theorem, Kolmogorov complexity, cognitive biases, planning fallacy, anchoring, politics is the mindkiller, 0 and 1 are not probabilities, cryonics, having your bottom line written first, how an algorithm feels from inside, many-worlds interpretation of quantum physics, etc.)
When I look at “Keganism”, it seems like an affective spiral based on one idea.
I am not saying that it is a wrong or worthless idea, just that comparing “having this ‘one weird trick’ and applying it to everything” with the whole body of knowledge and attitudes is a type error. If this one idea has merit, it can become another useful tool in a large toolset. But it does not surpass the whole toolset or make it obsolete, which the “post-” prefix would suggest.
Essentially, the “post-” prefix is just a status claim; it connotationally means “smarter than”.
To compare, Eliezer never said that using Bayes theorem is “post-mathematics”, or that accepting many-worlds interpretation of quantum physics is “post-physics”. Because that would just be silly. Similarly, the idea of “interpenetration of systems” doesn’t make one “post-rational”.
It seems like you are making that error. I’m not seeing anybody else making it.
There’s no reason to assume that the word postrational is only about Kegan’s ideas. The most in depth post that tried to define the term (https://yearlycider.wordpress.com/2014/09/19/postrationality-table-of-contents/) didn’t even speak of Kegan directly.
Calling stage 5 a tool or “weird trick” also misses the point. It’s not an idea in that class.
This would make me a post-rationalist, too.
This wouldn’t.
I guess the second part is more important, because the first part is mostly a strawman.
Not in my experience. It may seem like it now, but that’s because the postrationalists won the argument.
Congratulations on successfully breaking through an open door, I guess.
The last debate I had on the LW open thread, about whether it’s worthwhile to have an internally consistent Bayesian net, would be a practical example of the first conflict.
You have people in this community who think that a Bayesian net can basically model everything that’s important for making predictions and if one spends enough effort on the Bayesian net, intuition is not required.
Not sure if I understand it correctly, but it seems to me like you are saying that with limited computing power it may be better to develop two contradictory models of the world, each one making good predictions in one specific important area, and then simply use the model corresponding to the area you are currently working in… rather than trying to develop an internally consistent model for both areas, only to perform poorly in both (because the resources are not sufficient for a consistent model that works well in both areas).
While the response seems to… misunderstand your point, and suggest something like a weighted average of the two models, which would lead exactly to the poorly performing model.
As a fictional example, it’s like one person saying: “I don’t have a consistent model of whether a food X is good or bad for my health. My experience says that eating it in summer improves my health, but eating it in winter makes my health worse. I have no idea how something could be like that, but in summer I simply use heuristics that X is good, while in winter I use a contradictory heuristics that X is bad.” And the other person replying: “You don’t need contradictory heuristics; just use Bayes and conclude that X is good with probability 50% and bad with probability 50%.”
I don’t have a Bayesian model that tells me how much magnesium to consume. Instead, I look at the bottle with the magnesium tablets and feel into my body. Depending on the feeling my body creates as a response I might take the magnesium tablet at a particular time or not take it.
On the other hand, the way I consume Vitamin D3 is very different. I don’t have a meaningful internal sense of when to take it, but take the dose of Vitamin D3 largely based on an intellectual calculus.
I’m not saying anything about limited computing power. It’s not that I use the felt sense for magnesium dosing because I’m lacking computing power. I also can’t simply plug the felt sense into an abstract model, because that might detach the connection to it or decrease the trust that it needs to work.
Bayesianism is also no superset of logic (predicate calculus). See the Chapman article. Reasoning in the framework of logic can be useful and it’s different than Bayesianism.
Weird, this comment thread doesn’t link to our prior discussion, there must be some kind of mistake. =)
A Bayes net can have whatever nodes I think it should have, based on my intuition. Nobody ever suggested that the nodes of a man-made Bayes net come from anywhere except intuition in the first place.
If I am trying to predict the outcome of some specific event, I can factor in as many “conflicting perspectives” as I want, again using my intuition to decide how to incorporate them.
I want to predict whether it will rain tomorrow. I establish one causal network based on a purely statistical model of rainfall frequency in my area. I establish a second causal network which just reflects whatever the Weather Channel predicts. I establish a third causal network that incorporates astrological signs and the reading of entrails to predict whether it will rain. You end up with three nodes: P(rain|statistical-model), P(rain|weather-channel-model), P(rain|entrails-model). You then terminate all three into a final P(rain|all-available-knowledge), where you weight the influence of each of the three submodels according to your prior confidence in that model. In other words, when you verify whether it actually rains tomorrow, you perform a Bayesian update on P(statistical-model-validity|rain), P(entrails-model-validity|rain), P(weather-channel-model-validity|rain).
You have just used Bayes to adjudicate between conflicting perspectives. There is no law stating that you can’t continue using those conflicting models. Maybe you have some reason to expect that P(rain|all-available-knowledge) actually ends up slightly more accurate when you include knowledge about entrails. Then you should continue to incorporate your knowledge about entrails, but also keep updating on the weight of its contribution to the final result.
(If I made a mistake in the above paragraphs, first consider the likelihood that it’s due to the difficulty of typing this kind of stuff into a text box, and don’t just assume that I’m wrong.)
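A minimal sketch of that weighting-and-updating loop (the probabilities and prior weights below are made-up numbers, and a real Bayes net library would handle this more generally):

```python
# Made-up numbers, purely to illustrate weighting several models and updating the weights.
def update_weights(weights, model_probs, it_rained):
    """Bayesian update of the per-model weights after observing whether it rained."""
    likelihoods = [p if it_rained else 1 - p for p in model_probs]
    posterior = [w * l for w, l in zip(weights, likelihoods)]
    total = sum(posterior)
    return [p / total for p in posterior]

model_probs = [0.30, 0.55, 0.50]  # P(rain) under the statistical, weather-channel, entrails models
weights = [0.35, 0.60, 0.05]      # prior confidence in each model
p_rain = sum(w * p for w, p in zip(weights, model_probs))       # combined forecast
weights = update_weights(weights, model_probs, it_rained=True)  # models that gave rain higher probability gain weight
```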
Part of the reason Chapman’s article doesn’t land for me at all is that he somehow fails to see that interoperating between different systems of meaning and subjectivity is completely amenable to Bayesian thinking. Nobody ever said that intuition is not an important part of coming up with a Bayes network. Both the structure of the network and the priors you put into the network can come from nowhere other than intuition. I’m pretty sure this is mentioned in the Sequences. I feel like Scott defends Bayesianism really well from Chapman’s argument, and if you don’t agree, then I suspect it’s because Chapman and Scott might be talking past each other in some regards wherein you think Chapman is saying something important where Scott doesn’t.
What Scott defends in that post isn’t the notion of a completely consistent belief net. In motte-and-bailey fashion, Scott defends claims that are less strong.
Chapman also wrote the more mathy followup post: https://meaningness.com/probability-and-logic
In Chapman’s model of the world, Scott defends Bayesianism as a level 4 framework against other frameworks that are also level 4 or lower (in the Kegan framework). A person who’s at the developmental stage of level 3 can’t simply go to level 5, but profits from learning a framework like Bayesianism that gives certain clear answers. From that perspective, the person likely needs a few years in that stage to be able to later grow out of it.
Right. You don’t use your brain’s pattern-matching ability to pick the right model; you use a quite simple probabilistic one. I think that’s likely a mistake. But I don’t know whether I can explain to you why I think so in a way that would convince you. That’s why I didn’t continue the other discussion.
Additionally, even when I think you are wrong that doesn’t mean that nothing productive can come out of the belief net experiment.
What can a “level 5 framework” do, operationally, that is different than what can be done with a Bayes net?
I admit that I don’t understand what you’re actually trying to argue, Christian.
Do well at problems that require developing an ontology to represent the problem, like Bongard problems (see Chapman’s post on metarationality).
Yes, fully understanding it would likely mean that you need to spend time understanding a new conceptual framework. It’s not as easy as simply picking up another mental trick.
But in this thread, my point isn’t to argue that everybody should adopt meta-rationality but to illustrate that it’s actually a different way of looking at the world.
Yeah, that’s my thought on post-rationality too.
Part of the issue seems to be that some rationalists strongly reject what has come to be called post-rationality. I’ve certainly gotten plenty of blowback on my exploration of these topics over the last couple of years from rationalists who view it as an anti-rationalist project. It’s hard for me to measure what proportion of the community expresses what views, but there’s a significant chunk of the rationality community that seems to be solidifying into a new form of the antecedent skeptic/scientific rationality culture, one that is unwilling to make space for additional boundary-pushing much beyond the existing understanding of the Sequences.
Maybe these folks are just especially vocal, but it does make the environment more difficult to work in. I’m only now writing very publicly because I finally feel confident enough that I can get away with being opposed by vocal community members. Not all are so lucky, and thus feel silenced unless they can distance themselves from the existing rationalist community enough to create space for disagreement without intolerable stress.
What is “post-rationality”?
Knowing about rationalism plus feeling superior to rationalists :-).
EDITED to add: I hope my snark doesn’t make gworley feel blown-back-at, silenced, and intolerably stressed. That’s not at all my purpose. I’ll make the point I was making a bit more explicitly.
Reading “post-rationalist” stuff, I genuinely do often get the impression that people become “post-rationalists” when they have been exposed to rationalism but find rationalists a group they don’t want to affiliate with (e.g., because they seem disagreeably nerdy).
As shev said, post-rationalists’ complaints about rationalism do sometimes look rather strawy; that’s one thing that gives me the trying-to-look-different vibe.
The (alleged) differences that aren’t just complaints about strawmen generally seem to me to be simply wrong.
Here’s the first Google hit (for me) for “post-rationalist”: from The Future Primeval, a kinda-neoreactionary site set up by ex-LWers. Its summary of how post-rationalists differ from rationalists seems fairly typical. Let’s see what it has to say.
First of all it complains of “some of the silliness” of modern conceptions of rationalist. (OK, then.)
Then it says that there’s more to thinking than propositional belief (perhaps there are rationalists who deny that, but I don’t think I know any) and says that post-rationalists see truth “as a sometimes-applicable proxy for usefulness rather than an always-applicable end in itself” (the standard rationalist position, in so far as there is one, is that truth is usually useful and that deliberately embracing untruth for pragmatic reasons tends to get you in a mess; rationalists also tend to like truth, to value it terminally).
So here we have one implicit strawman (that rationalists think propositional belief is everything), another implicit strawman (that rationalists don’t recognize that truth and usefulness can in principle diverge), something I think is simply an error if I’ve understood correctly (the suggestion that untruth is often more useful than truth), and what looks like a failure of empathy (obliviousness to the possibility that someone might simply prefer to be right, just as they might prefer to be comfortable).
Then it suggests that values shouldn’t be taken as axiomatic fundamental truths but that they often arise from social phenomena (so far as I can tell, this is also generally understood by rationalists).
Then we are told that “some rationalists have a reductionistic and mechanistic theory of mind” (how true this is depends on how those weaselly words “reductionistic” and “mechanistic” are understood) and think that it’s useful to identify biases and try to patch them; post-rationalists, on the other hand, understand that the mind is too complex for that to work and we should treat it as a black box.
Here we may have an actual point of disagreement, but let’s proceed with caution. First of all, the sort of mechanistic reductionism that LW-style rationalists fairly universally endorse is in fact also endorsed by our post-rationalists, in the same paragraph (“while the mind is ultimately a reducible machine”). But I think it’s fair to say that rationalists are generally somewhat optimistic about the prospects of improving one’s thinking by, er, “overcoming bias”. But it is also widely recognized that this doesn’t always work, that in many cases knowing about a bias just makes you more willing to accuse your opponents of it; I think there’s at least one thing along those lines in the Sequences, so it’s not something we’ve been taught recently by the post-rationalists. So I think the point of disagreement here is this: Are there a substantial number of heuristics implemented in our brains that, in today’s environment, can be bettered by deliberate “system-2” calculation? I do think the answer is yes; it seems like our post-rationalists think it’s no; but if they’ve given reasons for that other than handwaving about evolution, I haven’t seen them.
They elaborate on this to say it’s foolish to try to found our practical reasoning in theory rather than common sense and intuition. (This is more or less the same as the previous complaint, and I think we have a similar disagreement here.)
And then they list a bunch of things post-rationalists apparently have “an appreciation for”: tradition, ritual, modes of experience beyond detached skepticism, etc. (Mostly straw, this; the typical rationalist position seems to be that these things can be helpful or harmful and that many of their common forms are harmful; that isn’t at all the same thing as not “appreciating” them.)
So, a lot of that does indeed seem to consist of strawmanning plus feeling superior. Not, of course, all of it; but enough to (I think) explain some of the negative attitude gworley describes getting from rationalists.
Ah, that’s easy. Can I just go straight to being a super-extra-meta-post-rationalist, then?
This is helpful, thanks.
In the “Rationality is about winning” train of thought, I’d guess that anything materially different in post-rationality (tm) would be eventually subsumed into the ‘rationality’ umbrella if it works, since it would, well, win. The model of it as a social divide seems immediately appealing for making sense of the ecosystem.
The best critique of post-rationalism I’ve seen so far. It matches my thought as well. Please consider making this a post so we can all double-upvote you.
While rationality is nominally that which wins, and so is thus complete, in practice people want consistent, systematic ways of achieving rationality, and so the term comes to have the double meaning of both that which wins and a discovered system for winning based around a combination of traditional rationality, cognitive bias and heuristic research, and rational agent behavior in decision theory, game theory, etc.
I see post-rationality as being the continued exploration of the former project (to win, crudely, though it includes even figuring out what winning means) without constraining oneself to the boundaries of the latter. I think this maybe also better explains the tension that results in feeling a need to carve out post-rationality from rationality when it is nominally still part of the rationalist project.
I don’t think it is.
Rationality is a combination of keeping your map of the world as correct as you can (“epistemic rationality”, also known as “science” outside of LW) and doing things which are optimal in reaching your goals (“instrumental rationality”, also known as “pragmatism” outside of LW).
The “rationalists must win” point was made by EY to, basically, tie rationality to the real world and real success as opposed to declaring oneself extra rational via navel-gazing. It is basically “don’t tell me you’re better, show me you’re better”.
For a trivial example, consider buying for $1 a lottery ticket which has a 1% chance of paying out $1000. It is rational to buy the ticket, but the most likely outcome (the mode, in statistics-speak) is that you will lose.
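The arithmetic behind that example, spelled out (just the numbers from the comment above):

```python
# The numbers from the ticket example above, spelled out.
p_win, prize, cost = 0.01, 1000, 1
expected_value = p_win * prize - cost   # +$9 per ticket on average, so buying is the rational choice
modal_outcome = -cost                   # but 99% of the time you simply lose your $1
```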
So, um, how to win using any means necessary..? I am not sure where do you want to go outside of the “boundaries of the latter”.
I’m not sure that’s what people usually mean by science. And most of the questions we’re concerned about in our lives (“am I going to be able to pay the credit in time?”) are not usually considered to be scientific ones.
Other than that minor nitpick, I agree.
Any chance you could be bothered to write a post explaining what you’re talking about, at a survey/overview level?
I strongly encourage you to do it. I’m typing up a post right now specifically encouraging people to summarize fields in LW discussion threads as a useful way to contribute, and I think I’m just gonna use this as an example since it’s on my mind.
Having new conversational focus seems good. Right now, at least, there doesn’t seem to be too much of a common thread in terms of discussion topics or central themes that people focus on.
I, too, have seen some mentions of Kegan, most notably in Benjamin Hoffman’s posts here.
I don’t quite understand constructive developmental theory and having some beginner-friendly discussion would be great.
The things most people are interested in discussing are frowned upon/banned from discussion on LW. That’s why they go to SSC. The world has changed in the past 10 years, and the conversational rules and restrictions of 2009 no longer make sense today.
The rationalsphere, if you expand it to include blogs like Marginal Revolution, is one of the few intellectual mechanisms left to disentangle complex information from the clusterf* of modern politics. Not talking about it here through a clear rationalist framework is a tragedy.
One important difference between LW and SSC: Everyone knows that SSC is Scott’s blog. Scott is a dictator, and if he wants to announce his own opinions visibly, he can post them in a separate article, in a way no one else can compete with. It would be difficult to misrepresent Scott’s opinions by posting on SSC.
LW is a group blog (Eliezer is no longer active here). So in addition to talk about individual users who post here, it also makes sense to ask what does the “hive mind” think, i.e. what is the general consensus here. Especially because we talk here about Aumann agreement theorem, wisdom of crowds, etc. So people can be curious about the “wisdom of the LW crowd”.
Similarly, when a third party describes SSC, they cannot credibly accuse Scott of what someone else wrote in the comments; the dividing line between Scott and his commentariat is obvious. But it is quite easy to cherry-pick some LW comments and say “this is what the LW community actually believes”.
There were repeated attempts to create a fake image of what the LW community believes, coming as far as I know from two sources. First, various “SJWs” were offended that some opinions were not banned here, and that some topics were allowed to be discussed calmly. (It doesn’t matter whether the problematic opinion was a minority opinion, or even whether it was downvoted. The fact that it wasn’t immediately censored is enough to cause outrage.)
Second, the neoreactionary community decided to use these accusations as a recruitment tool, and started spreading a rumor that the rationalist community indeed supports them. There was a time when they tried to make LW about neoreaction, by repeatedly creating discussion threads about themselves. Such as: “Political thread: neoreactionaries, tell me what you find most rational about neoreaction”; obviously fishing for positive opinions. Then they used such threads as “proof” that rationalists indeed find neoreaction very rational, etc. -- After some time they gave up and disappeared. Only Eugine remained, creating endless sockpuppets for downvoting anti-nr comments and upvoting pro-nr comments, persistently maintaining the illusion of neoreaction being overrepresented (or even represented) in the rationalist community.
tl;dr—on LW people can play astroturfing games about “what the rationalist community actually believes”, and it regularly happens, and it is very annoying for those who recognize they are being manipulated; on SSC such games don’t make sense, because Scott can make his opinion quite clear
They can accuse Scott of being the sort of fascist who would have a far-right extremist commentariat [by cherry-picking two or three comments that aren’t completely in approval of the latest Salon thinkpiece]. And they do.
Yep, here is an example.
Can we elect a dictator?
I think we did.
This is the first I’ve heard of that… and I’m not sure of its legitimacy in the eyes of long-time users.
-- Tao Te Ching
To allow the clusterfuck of politics inside you need robust filters against torrents of foam, spittle, and incoherent rage. Generally speaking, this means either wise and active moderation or a full-featured set of tools for the users to curate their own feed/timeline. At the moment LW has neither.
Sincere question: Do you think the SSC comments section accomplishes politics while filtering out foam, spittle, etc.? (Or perhaps the comments section there lends itself more readily to simply ignoring bad comments, in a way a forum doesn’t?)
Having no moderator experience, I guess there is probably a lot on that end that I don’t know.
I think the SSC comments are pretty bad, but I’m not sure they’re any worse on politics than other topics.
FWIW, I was linked to an SSC post today about “race and criminal justice in America”—so, a five-alarm hot-button topic—and I quickly read through about half of a super-long comments section, and it was great. Plenty of debate, minimal spittle, collaborative and civil, fact-based and in good faith.
SSC does quite well with politics. I would guess that some of it is because the discussion is high-brow, some of it is because other users have no problem pointing out that someone is an idiot, but mostly it is because Scott has few compunctions about banning. For example, at some point he basically banned all vocal NRx people because he didn’t want SSC to be primarily seen as a neoreactionary forum.
SSC also has a fairly user-hostile UI which by now I think is deliberate as Scott doesn’t want to shepherd a large community.
I get the impression that SSC comments have managed to do rational debate better than LW does. People who do bad things there are reliably purged by Scott. The topics are interesting which keeps smart people coming.
Take a population of smart people and regularly cull the most dark-arts/mudslinging/anti-epistemology few %.
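Out of curiosity, here is a toy simulation of that culling dynamic. This is a hypothetical sketch, not a claim about how SSC moderation actually works; the “quality” scores and the 2% cull rate are made-up assumptions:

```python
import random

random.seed(0)
# Hypothetical "epistemic quality" scores for a population of commenters.
population = [random.gauss(0, 1) for _ in range(1000)]

CULL_FRACTION = 0.02  # assumed: cull the worst 2% each round

for round_num in range(20):
    population.sort()
    cutoff = int(len(population) * CULL_FRACTION)
    population = population[cutoff:]  # drop the lowest-quality commenters
    avg = sum(population) / len(population)
    print(round_num, round(avg, 3))  # average quality creeps upward each round
```

(Obviously real commenters are not i.i.d. Gaussians, and culling also changes who shows up in the first place, but the direction of the effect is the point.)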
I don’t like the SSC comments much because I feel like most of what I say there gets ignored and buried, but I definitely think that SSC is very good at dealing with politics.
I don’t think LW is, in fact, capable of talking about politics rationally; if it did, it wouldn’t have much influence; and trying will harm its core interests through divisiveness, distraction, drawing bad users, and further reputational damage.
Agreed. I think avoiding politics on LW does more harm than good overall these days, and that people get mindkilled in plenty of other ways even without it. (I personally don’t want to talk about politics on LW, but I’m in favor of other people doing so, especially to the extent that it results in political action.)
This would be a problem with an obvious solution if Discussion was structured anything like a normal forum.
Main is one thing. The “community blog” structure works there. But Discussion in reality functions like a forum and it suffers from the lack of basic, common forum-features like sticky threads, posts bumping based on activity, and the ability to create sub-fora.
If politics had its own sub-forum, people could choose to enter it or not, simple as that. Nothing fancy about it—political discussion available, but cordoned off behind one more click. Same feature could help organize the subject matter more effectively in other areas too. No need to slice it too fine. Say you have a main area for all the general rationality and logic stuff, as well as site-business things like a welcome thread and site-related posts like this one. Then you have a few—two or three, no more than five—sub-fora split into “Science”, “Politics”, “AI” etc.
Now one could argue that the creation of a politics section of any sort would attract a different type of member and that could impact the discourse in other areas. Not saying that’s not a possibility but heck, LW attracts a fair few cranks anyway.
Mind, I don’t know how possible any of these changes are—I’m only arguing their desirability.
Yeah, I agree that it would be really great if Discussion had subreddits.
There is additionally the point that the ban leads to people compartmentalizing rationalist thought practices from politics. How do you become a rationalist political being if you aren’t able to practice rationalist politics in the supportive company of other rationalists?
“How do you get a clean sewer system if you insist on separating it from the rest of the city?”
I’m having trouble parsing the intended meaning. Can you clarify?
I’m not steven0461, but I’m pretty sure the intended meaning is: Asking for a “rationalist political being” is like asking for a “clean sewer”; it’s a contradiction in terms because politics is fundamentally anti-rational. So when you say “How do you become a rationalist political being if …” you have already made a mistake.
(I don’t think I agree; politics is part of the real world and I see no reason to think that rationalists should never find sufficient reason to become involved. I might agree with the more modest claim that most of us most of the time would do well to pay much less attention to politics than we do.)
There is the obvious counterargument of “Try ignoring your sewer system for a few years and see where it gets you”. I suspect that drowning in shit is not a pleasant experience.
Then steven0461 should taboo “politics” and perhaps for the purposes of this thread replace it with “government policy.”
Maybe. But most of us get to influence government policy mostly via involvement in politics, and if (in someone’s opinion) politics is fundamentally anti-rational then they may conclude that almost all rationalists should try to minimize the time and effort and emotional investment they give to government policy.
But I’m engaging in the usually-futile activity of defending the position of someone else with whom I don’t entirely agree, and who is in fact (I assume) here and able to defend himself. So I’ll stop.
I don’t think LW qualifies as a sufficiently supportive company of rationalists, for at least two major reasons: (1) Eugine and his army of sockpuppets, (2) anyone can join, rationalist or not, and talking about politics would most likely attract the wrong kind of people, so even if LW qualified as a sufficiently supportive company of rationalists now, that could easily change overnight.
I imagine that if we could solve the problem of sockpuppets and/or create a system of “trusted users” who could moderate the debate, we would have a chance to debate politics rationally. But I suspect that a rational political debate would be quite boring for most people.
To give an example of “boring politics”: when Trump was elected, half the people on the internet were posting messages like “that’s great, now America will be great again”, half were posting messages like “that’s horrible, now racists and sexists will be everywhere, and we are all doomed”… and there was a tiny group of people posting messages like “having Trump elected increased the value of funds in sectors A, B, and C, and decreased the value of funds in sectors X, Y, and Z, so by hedging against this outcome I made N% money”. You didn’t have to tell these people that rationalists are supposed to bet on their beliefs, because they already did.
Funnily enough, I heard rumors that George Soros placed a big bet on the markets going down after the election and lost very very badly.
I think it is a rather unsympathetic strawman characterization of what a rationalist political debate would be. Even if one could make money purely off of thinking—and I don’t want to debate the efficient market hypothesis here—I would hope that the purpose of the debate would be rational government policies that address the underlying concerns of all sides. For example, what underlying fears and insecurities lead to support for Trump’s anti-immigration, anti-Muslim position? What legitimate basis exists, charitably, for these fears? What potential policy could both address these underlying concerns and be supported by both parties and independents? More to the point, what additional data would be useful to have, in the form of polls that are not currently being conducted or some such?
Nate Silver’s 538 blog is an example of such a rationalist resource, but he only covers politics during election season and there isn’t much community building going on.
Well, I’ve been here two weeks now and it’s been good. Interesting. Learned some things, had some decent discussions.
I don’t mind the links, I just don’t think they should be posted one by one, and I don’t think the post title should be the link. Put the link in the body of the post. And users who like to contribute lots of links to random articles rather than their own blogs—that’s fine, good even, but maybe consider collating a week’s worth into one post. So you might have a few different conversations going on in the comments, so what? Better than half the links posted being a “miss” and sitting there with no comments.
Comment quality. Now look, it’s awful cheek from a newbie like me, I know, but I’ll give my honest opinion because it might be useful to see the perspective of a new member—not a returning old member or a long-time lurker but a really new member. After all, if you want the place to thrive you need to attract and retain new members, right?
It’s not just the number of comments, or even their level of engagement with the main post, it’s the whole tone. There’s this sort of… malaise, for want of a better word, that seems to hang over the place. I sometimes get the sense that people aren’t really enjoying being here. There’s this sort of dry, formal detachment in a lot of the comments and it’s hard to separate out personalities and characters (with several notable exceptions—gjm, for one). Basically, it feels like people either aren’t having fun or don’t want to look like they’re having fun. (Not Lumifer, obviously. Lumifer definitely has fun.) Point is, I came here all enthusiasm, ready to enjoy myself having interesting debates with interesting people—which I have had, but the atmosphere is like, totally harshing my buzz, man.
That’s my two cents, I’ll shut up now.
Well, rationality is hard and demanding work; maybe that is what is confusing you.
It is hard work to put out a small amount of well-researched and well-reasoned material, and very easy to churn out a lot of low-quality material.
I am not suggesting that your material is low quality but I think you could probably move somewhat in the direction -volume +quality.
This is the sort of thing I would downvote if I could. Not helpful. If you want to build and grow a community on a website, you need to make using that website pleasant. StackExchange, for example, understands this very well; so does Reddit.
Pardon me, but I don’t really give a cuss about Effective Altruism. Can’t rationality stand on its own? Yes, there seem to be a lot of EA people here, but the two subjects are different, and the people involved in this community are not a complete subset of the other.
I do too. I don’t know all the reasons, but one is simply web page design. The external page is often slow to load and unpleasant to read in comparison. This often comes with no benefit relative to just having the text in the post on LW.
Additionally, I assume that authors on other sites are a lot less likely to engage in discussion on LW, whether in comments or further posts. That seems like a big minus to me.
A related problem with link posts is that since I get sent to an external site, it’s sometimes annoying to return to LW to upvote the post after reading it.
I agree about comment quality. For most posts there’s a paucity of discussion, and lots of comments seem to be roughly Facebook-level discourse (or lower), except there are far fewer of them than on Facebook.
I also think that links seem to detract from the general LW atmosphere. What about a general policy of either reposting things (if you’re the author) or writing at least a paragraph of discussion on the link before posting it?
Then, link-based posts don’t immediately bounce you away, but you’re free to click on them if the given summary / comments seem sufficiently interesting?
Do you feel the “link post ugh”?
[pollid:1178]