Have you done user interviews and testing with people who it would be valuable to have contribute, but who are not currently in the rationalist community? I’m thinking people who are important for existential risk and/or rationality such as: psychologists, senior political advisers, national security people, and synthetic biologists. I’d also include people in the effective altruism community, especially as some effective altruists have a low opinion of the rationalist community despite our goals being aligned.
You should just test this empirically, but here are some vague ideas for how you could increase the credibility of the site to these people:
My main concern is that lesswrong 2.0 will come across as (or will actually be) a bizarre subculture, rather than a quality intellectual community. The rationality community is off-putting to some people who on the face of it should be interested (such as myself). A few ways you could improve the situation:
Reduce the use of phrases and ideas that are part of rationalist culture but are inessential for the project, such as references to HPMOR. I don’t think calling the moderation group “sunshine regiment” is a good idea for this reason.
Encourage the use of standard jargon from academia where it exists, rather than LW jargon. Only coin new jargon words when necessary.
Encourage writers to do literature reviews to connect to existing work in relevant fields.
It could also help to:
Encourage quality empiricism. It seems like rationalists have a tendency to reason things out without much evidence. While we don’t want to force a particular methodology, it would be good to nudge people in an empirical direction.
Encourage content that’s directly relevant to people doing important work, rather than mainly being abstract stuff.
I feel that this comment deserves a whole post in response, but I probably won’t get around to that for a while, so here is a short summary:
I generally think people have confused models about what forms of weirdness are actually costly. The much more common failure mode for online communities is being boring and uninteresting. The vast majority of the most popular online forums are really weird and have a really strong, distinct culture. The same is true for religions. There are forms of weirdness that prevent you from growing, but I feel that implementing the suggestions in this comment in a straightforward way would mostly result in the forum becoming boring and actually stunting its meaningful growth.
LessWrong is more than just weird in a general sense. A lot of the things that make LessWrong weird are actually the result of people having thought about how to have discourse, and then actually implementing those norms. That doesn’t mean that they got it right, but if you want to build a successful intellectual community you have to experiment with norms around discourse, and avoiding weirdness puts a halt to that.
I actually think that one of the biggest problems with Effective Altruism is the degree to which large parts of it are weirdness-averse, which I see as one of the major reasons why EA kind of hasn’t really produced any particularly interesting insights or updates in the past few years. CEA at least seems to agree with me (probably partially because I used to work there and shaped the culture a bit, so this isn’t independent), and tried to counteract this by making the explicit theme of this year’s EA Global in SF about “accepting the weird parts of EA”. As such, I am not very interested in appeasing current EAs’ need for normalcy and properness, and instead hope that this will move EA towards becoming more accepting of weird things.
I would love to give more detailed reasoning for all of the above, but time is short, so I will leave it at this. I hope this gave people at least a vague sense of my position on this.
You’re mainly arguing against my point about weirdness, which I think was less important than my point about user testing with people outside of the community. Perhaps I could have argued more clearly: the thing I’m most concerned about is that you’re building lesswrong 2.0 for the current rationality community, rather than thinking about what kinds of people you want to be contributing to it and learning from it, and building it for them. So it seems important to do some user interviews with people outside of the community who you’d like to join it.
On the weirdness point: maybe it’s useful to distinguish between two meanings of ‘rationality community’. One meaning is the intellectual community of people who further the art of rationality. Another meaning is more of a cultural community: a set of people who know each other as friends, have similar lifestyles and hobbies, like the same kinds of fiction, share in-jokes, etc. I’m concerned that lesswrong 2.0 will select for people who want to join the cultural community, rather than people who want to join the intellectual community. But the intellectual community seems much more important. This then gives us two types of weirdness: weirdness that comes out of the intellectual content of the community is important to keep (ideas such as existential risk fit in here), while weirdness that comes more out of the cultural community, such as references to HPMOR, seems unnecessary.
We can make an analogy with science here: scientists come from a wide range of cultural, political, and religious backgrounds. They come together to do science, and are selected on their ability to do science, not their desire to fit into a subculture. I’d like to see lesswrong 2.0 be more like this, i.e. an intellectual community rather than a subculture.
“We can make an analogy with science here: scientists come from a wide range of cultural, political, and religious backgrounds.”
I’m not persuaded that this is substantially more true of scientists than people in the LW community.
Notably, the range of different kinds of expertise that one finds on LW is much broader than that of a typical academic department (see “Profession” section here).
“They come together to do science, and are selected on their ability to do science, not their desire to fit into a subculture.”
I don’t think people usually become scientists unless they like the culture of academic science.
“I’d like to see lesswrong 2.0 be more like this, i.e. an intellectual community rather than a subculture.”
I think “intellectual communities” are just a high-status kind of subculture. “Be more high status” is usually not useful advice.
I think it might make sense to see academic science as a culture that’s optimized for receiving grant money. Insofar as it is bland and respectable, that could be why.
If you feel that receiving grant money and accumulating prestige is the most important thing, then you probably also don’t endorse spending a lot of time on internet fora. Internet fora have basically never been a good way to do either of those things.
The core of my argument is: try to select as much as possible on what you care about (ability and desire to contribute to and learn from lesswrong 2.0) and as little as possible on stuff that’s not so important (e.g. whether they get references to HPMOR). And do testing to work out how best to achieve this.
By ‘intellectual community’ I didn’t mean ‘high-status subculture’; I was trying to get across the idea of a community that selects on people’s ability to make intellectual contributions, rather than on their fit with a culture. Science is somewhat like this, although as you say there is a culture of academic science which makes it more subculture-like. Stack Overflow might be a better example.
I’m not hoping that lesswrong 2.0 will accumulate money and prestige, I’m hoping that it will make intellectual progress needed for solving the world’s most important problems. But I think this aim would be better served if it attracted a wide range of people who are both capable and aligned with its aims.
My impression is that you don’t understand how communities form. I could be mistaken, but I think communities form because people discover they share a desire rather than because there’s a venue that suits them—the venue is necessary, but stays empty unless the desire comes into play.
“I’m thinking people who are important for existential risk and/or rationality such as: psychologists, senior political advisers, national security people, and synthetic biologists. I’d also include people in the effective altruism community, especially as some effective altruists have a low opinion of the rationalist community despite our goals being aligned.”
Is there something they want to do which would be better served by having a rationality community that suits them better than the communities they’ve got already?
“I think communities form because people discover they share a desire”
I agree with this, but would add that it’s possible for people to share a desire with a community but not want to join it because there are aspects of the community that they don’t like.
“Is there something they want to do which would be better served by having a rationality community that suits them better than the communities they’ve got already?”
That’s something I’d like to know. But I think it’s important for the rationality community to attempt to serve these kinds of people both because these people are important for the goals of the rationality community and because they will probably have useful ideas to contribute. If the rationality community is largely made up of programmers, mathematicians, and philosophers, it’s going to be difficult for it to solve some of the world’s most important problems.
Perhaps we have different goals in mind for lesswrong 2.0. I’m thinking of it as a place to further thinking on rationality and existential risk, where the contributors are anyone who both cares about those goals and is able to make a good contribution. But you might have a more specific goal: a place to further thinking on rationality and existential risk, but targeted specifically at the current rationality community so as to make better use of the capable people within it. If you had the second goal in mind then you’d care less about appealing to audiences outside of the community.
I’m fond of LW (or at least its descendants). I’m somewhat weird myself, and more tolerant of weirdness than many.
It has taken me years and some effort to get a no doubt incomplete understanding of people who are repulsed by weirdness.
From my point of view, you are proposing to destroy something I like which has been somewhat useful in the hopes of creating a community which might not happen.
The community you imagine might be a very good thing. It may have to be created by the people who will be in it. Maybe you could start the survey process?
I’m hoping that the LW 2.0 software will be open source. The world needs more good discussion venues.
“From my point of view, you are proposing to destroy something I like which has been somewhat useful in the hopes of creating a community which might not happen.”
I think a good argument against my position is that projects need to focus quite narrowly, and it makes sense to focus on the existing community given that it’s also already produced good stuff.
Hopefully that’s the justification that the project leaders have in mind, rather than them focusing on the current rationality community because they think that there aren’t many people outside of it who could make valuable contributions.
It seems to me that you want to squeeze a lot of the fun out of the site.
I’m not sure how far it would be consistent with having a single focus for rationality online, but perhaps there should be a section or a nearby site for more dignified discussion.
I think the people you want to attract are likely to be busy, and not necessarily interested in interviews and testing for a rather hypothetical project, but I could be wrong.
Regarding a couple of your concrete suggestions: I like the idea of using existing academic jargon where it exists. That way, reading LW would teach me search terms I could use elsewhere or to communicate with non-LW users. (Sometimes, though, it’s better to come up with a new term; I like “trigger-action plans” way better than “implementation intentions”.)
It would be nice if users did literature reviews occasionally, but I don’t think they’ll have time to do that often at all.
This is a real dynamic that is worth attention. I particularly agree with removing HPMoR from the top of the front page.
Counterpoint: The serious/academic niche can also be filled by external sites, like https://agentfoundations.org/ and http://effective-altruism.com/.