Effectively Less Altruistically Wrong Codex
My post on the fact that incentive structures are eating the central place to be for rationalists has generated 140 comments, yet I see no clear action on the horizon.
I post here again in the hope that it also generates some attempts to shake the ground a bit. Arguing and discussing are fun, and beware of things that are fun to argue.
Is anyone actually doing anything to mitigate the problem? To solve it? To reach a stable end state in the long run where online discussions still preserve what needs to be preserved?
Intelligent commentary is valuable, polls are interesting. Yet, at the end of the day, it is the people who show up to do something who will determine the course of everything.
If you care about this problem, act on it. I care enough to write these two posts.
Ultimately I don’t think that increased discussion is really the most desirable end state. People who actually want to get shit done gradually move away from posting because they realize it’s an incredibly low-return activity. People who cared about AI risk started MIRI, people who cared about habits of mind started CFAR, people who cared about altruism started GiveWell.
Why would you want to spend your time posting on a forum, probably not changing anyone’s mind, when you could be actually working towards something in an organization?
One function of a forum like this is to get the people together who will work on projects like these, but you should HOPE they leave—it means they’re doing something far more valuable with their time.
Ultimately, it means that the people who stay by and large AREN’T the people who are going to do things—and doers come to the site, see that, and realize they can’t find who they’re looking for here.
What I’m saying is—this site has created a number of organizations that actually work towards the values that the forum espouses. Will most of those organizations actually accomplish their goals? Probably not, if you’re playing the odds, but it’s really too early to say. But the fact that those organizations were created really is the end state. The discussions THEY have actually point towards creating change in the real world. Mission accomplished.
The internet is a good medium for discussion because it lets you articulate complex arguments in writing and cite/examine sources. Also internet discussions harvest cognitive surplus from random people taking breaks from their regular work. You get more diverse perspectives than you do with the people in a single organization, you broadcast your conclusions to a larger audience, and it’s easier to draw new people in. More on the advantages of internet discussions.
(There are also reasons the internet is a bad medium for discussion, many of which don’t apply to LW very much.)
I agree with some of this, but it doesn’t change the fact that there are far more high-value things you can do in your downtime (e.g. read a book, meditate, exercise).
I perceive heavily diminishing returns to exercise past the first few hours of exercise a week, and to meditation past the first 15 minutes per day. For books, I would say it depends on the book. This short blog post has more insight in it than most of the books I read as a kid. Many of Paul Graham’s essays would probably be book-length if they were written by a less exceptional writer. In general the best internet writing I read seems more insight-dense than the best book writing I read, but the best internet writing is scattered.
Wow, I would say that in general the book reading I’ve done has been far more fruitful than the internet reading I’ve done—the signal-to-noise ratio in an individual post might be higher, but the difficulty of finding good content is much, much higher online (IME). Additionally, I think the real issue here is the return on posting an article vs. reading a book, not on reading an article vs. reading a book.
What are some of the best books you’ve read? I tend to be pretty disappointed with most books.
Yeah, I think there is low-hanging fruit in assembling collections of great blog posts.
It’s definitely relevant though because internet reading trades off against book reading for the people who will read your post. It’s possible that there is some kind of power law distribution of post quality, and a small number of blog posts generate a large fraction of the value for readers. But it may be hard to predict how much value you will generate for your readers in advance—for example, I didn’t predict this post would be as well-received as it was.
Aren’t you a bit too confident about what is “more high value” for other people?
You’re right, a more objective word might be “productive”.
Basically, you are concerned that LW is not what it was once, and you’d like to see it revived or at least supplanted with an equivalent.
Although I’ve been on LW for a while, I’ve never been that active. However, I still noticed the general decline in volume of discussion in the last year. I spend more time on reddit in r/machinelearning and the various singularity/AGI/AI-related subreddits. Basically I’m interested in AI/future studies but not so much x-rationality, for reasons similar to those outlined by Yvain years ago.
From my perspective the exodus isn’t so bad, because it seems that the community which remains still contains a core of intelligent readers educated/interested in issues I care about.
From the start the main appeal of LW—from my perspective—was the somewhat higher quality of discussion than what you would find in big public subreddits (like /r/Futurology, for example), due in part to the various mechanisms the founders worked out.
To really revive LW, it may need to change significantly. Interests shift over time; online communities form and then disband. I don’t claim to know what changes would increase membership/activity volume, but—not being so interested in x-rationality—I only follow a particular subset of this site’s conversations, and a subset that wouldn’t necessarily benefit from increasing general volume.
My concern is that there is no centralized place where emerging and burgeoning new rationalists, strategists and thinkers can start to be seen and dinosaurs can come to post their new ideas.
My worry is about the lack of centrality, nothing to do with the central member being LW or not.
Well, from what I remember, LW has always been diverse. There is a core that has been interested in AI risk since EY’s early writing and the SL4 mailing list. There is a newer group that HPMoR brought in, and so on. Some of these groups explicitly complained about too many posts in categories they were not interested in.
One of the proposed solutions was to fork LW into subreddits. That’s what reddit does, after all, and it seems to work for them.
What happened instead was the exodus—a fork into separate sites. The EA people have their own forum now. The rationality bloggers hang out on blogs/facebook. The MIRI AI risk people have their own forum as well.
How does centrality in particular help? I mean, it helps a little to have fewer pages to load to get the content that you want, but on the other hand, when posts are too frequent it’s annoying to have to wade through a bunch of stuff you are not interested in. LW of course is now pretty low volume—but if you look back at the history, there was a time when numerous people, at various times, were complaining that there was too much stuff they didn’t like (at least this is how I remember it, but I’m not even bothering to search for example posts).
I just saw Jurassic World—so my mind is having some extra trouble interpreting your use of ‘dinosaurs’. I guess you mean old high quality posters who no longer post?
If you have some ideas you want to write and communicate and get feedback on, then your best bet is probably to write them up first on your own blog, and then submit them for discussion on multiple sites, and then gently link the resulting discussions together. Also, directly emailing people and asking for comment is sometimes useful—totally depends on your goals.
The other strategy is to just write stuff and let the internet figure it out. I don’t blog so much recently, but when I used to blog more I just wrote articles and never bothered with promoting them or even telling anybody about them in any way (not even my friends). Surprisingly, people somehow found some of the better articles regardless, and this even led to some very interesting discussions—basically I met John Smart that way. I even made a little money when one of my articles was turned into a column for a game developer magazine.
So anyway—it’s not clear to me that centralization helps enormously. It has some advantages, but also some serious disadvantages. For example, LW is open, but that does not mean that it is completely free of top-down influence from MIRI/EY—which some writers with very different viewpoints would find annoying/unacceptable.
Could you give a link?
announcement thread
Thanks!
What are the best ones?
That phrase may have overestimated the number of such subreddits—I mainly read r/Singularity (moderated by MIRI people, similar to LW, low volume), and r/artificial. There is an r/agi but it is very low volume. r/futurology is very high volume and future-optimist.
r/machinelearning is the most serious, and the AMAs there are pure gold (Hinton, Bengio, Schmidhuber, LeCun, Ng, etc.). Its main value for me is making it easier to stay up to date on ML/AI, saving most of the trouble of having to read through tons of abstracts from the various conferences.
/r/Futurology is also really annoying because people keep having the same arguments over and over again.
Could creating comprehensive overview pages for the arguments and linking people to them whenever the arguments came up be useful?
It might but most redditors don’t really click links. I find it more useful to ignore them, occasionally skimming the arguments and upvoting the non-stupid comments.
I would also strongly recommend /r/thisisthewayitwillbe
/r/artificial is the official AI subreddit.
In a small attempt to help, I cross-post all my high-quality LW-relevant posts to LW.
This post from one year ago discussed a similar problem. Suggestions for returning LessWrong to a position of centrality included:
Allowing and encouraging more link posts and the discussion of them, on topics of interest to rationalists, such as machine intelligence and transhumanism, as Hacker News does now.
Allow and encourage posts on more political topics in Discussion, but probably not Main. Dangers here could be mitigated by banning discussion of current politicians, governments, and issues, or banning discussion on specific topics. I personally think this wouldn’t work because moderation and banning would need to be strictly enforced, assuming the user base doesn’t naturally follow the ban. Considering LessWrong has a history of fatigue among moderators, doing something like this which may effectively lower the sanity waterline here (for a temporary period) might ruin it more.
Get rid of Open Threads and create a new norm that a discussion post as short as a couple sentences is acceptable.
I think creating new norms is a collective action problem. For whatever reason(s), maybe mostly fear of downvoting, thinking what would be posted isn’t “appropriate enough” for LessWrong, or indifference, no individuals are incentivized to take risks in posting more and more novel content. Or something like that. Generating a new norm of encouraging others to give more upvotes to posts which are on the edge of LessWrong’s Overton window, or “appropriate content” criterion, may again be another collective action problem. Also, that seems risky.
I think some actions were provided in the previous threads, they just weren’t made actionable. John Maxwell made some observations, which could be turned into actions.
Users on Less Wrong could downvote less. I personally use both upvoting and downvoting sparingly on LessWrong, unless a comment or post really stands out as great or awful. This seems like a thing we can’t get a whole community to do.
Instead of merely upvoting a post or comment, leave a comment like “great post”, or whatever positive feedback, as this is more of a motivator. This in turn may incentivize people to post more often over the long term.
I bolded the last one because it seems actionable. I think another bottleneck is that many suggestions to fix this sort of problem revolve around changing site mechanics, level of moderation, and encouragement from popular figures for a change in culture and/or behavior. Nobody seems to think we can fix all these things by contacting Trike Apps (who maintains and builds LessWrong) and asking them to change the site mechanics. I don’t think that would work, anyway. I think if one wants to change how LessWrong works, one needs to contact the moderators of the site, its real owners, or whatnot, and bring proposals directly to them.
LessWrong doesn’t have to have a uniform standard for upvoting and downvoting. For example, upvoting doesn’t differentiate between “interesting” and “correct”. As I have said before, someone might even feel compelled to downvote an interesting speculative idea for fear that other readers might mistake the lack of downvotes for the speculative idea being thought of as mostly correct by other LWers.
To solve it, we could encourage people to use tags, or informal tags such as a tag inside square brackets in the title of the post, to clearly indicate, for example, how certain a poster is about their idea. For example, a post could have a title like this: “A statement [Epistemic state: possible] [Topic: something]” or “A statement [Epistemic state: a speculation] [Topic: something]”. I think it is likely that readers would treat different tags differently; for example, something might still be a curious idea even if it is unpolished and has flaws, so if it were tagged properly (e.g. “unlikely”, “speculation”, or “a spherical cow style model”), it would not merit a downvote, because it would be clear that the post is not going to be mistaken by other readers for one that purports to be accurate and certain. On the other hand, tags like “certain” or “highly likely” would be useful for readers who prefer not having to wade through various speculations and want to read more reliable posts. Of course, if someone tried to pass off their pet idea as a certain fact, they could be downvoted.
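To make the bracket convention concrete, here is a minimal sketch, in Python, of how such title tags could be pulled out of a post title. This is an illustration only; the bracket format and tag names are just the examples from this comment, not an existing LessWrong feature, and the function name is made up for the sketch.

    import re

    # Matches bracketed "[Key: value]" tags anywhere in a post title.
    TAG_PATTERN = re.compile(r"\[([^\[\]:]+):\s*([^\[\]]+)\]")

    def parse_title_tags(title):
        """Return the plain title text and a dict of bracket tags."""
        tags = {k.strip().lower(): v.strip() for k, v in TAG_PATTERN.findall(title)}
        plain = TAG_PATTERN.sub("", title).strip()
        return plain, tags

    text, tags = parse_title_tags(
        "A statement [Epistemic state: a speculation] [Topic: something]")
    print(text)  # A statement
    print(tags)  # {'epistemic state': 'a speculation', 'topic': 'something'}

Something like this could run either as a reader-side convention (readers just see the tags in the title) or, eventually, as site tooling; the sketch only shows that the convention is trivially machine-readable.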
Tags work pretty well on reddit, and folks already use the Link: tag here. However I think that having too many tags or too complex of a tag system could also just contribute to the low volume problem.
In particular I like the idea of tags for “fiction” and possibly for “speculative”. Although, if we are to be completely honest with ourselves, the Sequences contain many posts that should be tagged fiction or speculative. The fiction ones are obvious, but it’s not always obvious which ones are speculative.
Well, a tag system doesn’t have to be strict or predetermined in advance. I think that if posters were allowed to create new tags to express their intent and their certainty about their posts, suitable and expressive tags would likely prevail and become common whereas unexpressive tags would be used only a few times and then fall out of use. People would pick up usage of various tags from observation.
A similar thing happened on Imgur when tags were introduced there; it seems to have worked fine on their end, though obviously it’s a much larger community. I am not certain how the differences there, much less the differences in culture, would affect the adoption of tags as a means of identifying the nature in which a post is intended.
Additionally, it’s worth considering the use of the same system for commenting on posts; there are comment threads out there that I could see having used such a tagging system. The question, then, is whether that would create too much clutter.
Good point—agreed. I will try to remember to come up with an appropriate tag for my next post.
My comment in your thread suggests a number of actions, all of which I endorse:
Be less fearful and more excited about posting things to LW. Polls indicate that LW wants to see more content. So recognize that it’s normal for people to tear your ideas apart and that good posts can have tremendous value.
Be less critical and more appreciative when responding to & voting on toplevel LW posts. (Inversion of the above.) Consider sending appreciative private messages for posts you like.
Experiment with composing your post in a notebook or on Medium.com and only consider sharing your post on LW once you are finished writing it. (If Paul Graham is correct, writing is a good way to clarify your thinking and generate ideas even if you don’t share.)
“Blogging carnivals” for LW with monthly themes, similar to those held on the EA Forum for a while. (Theme suggestions anyone? We could run the first carnival in July.)
Promote all of the above as cultural norms whenever the opportunity presents itself.
By the way, I attach less significance to the LW/SSC/Tumblr diaspora than you do. I’m weakly in favor of a variety of different discussion models being explored. If you’re having trouble keeping track of everything, consider an RSS reader. You can also have a “Distracting Websites” bookmarks folder in your browser that you use Chrome’s “Open All Bookmarks in New Window” feature on when you need a break or you’re relaxing in the evening.
Equilibrium is overrated. The only stable end-state is heat death of the universe.
Instead of trying to maintain a center, move with the changing desires of your audience. If SSC is more attractive to some than LW, it’s ok to be posting/commenting in both places.