You’re mainly arguing against my point about weirdness, which I think was less important than my point about user testing with people outside the community. Perhaps I could have argued more clearly: the thing I’m most concerned about is that you’re building LessWrong 2.0 for the current rationality community rather than thinking about what kinds of people you want contributing to it and learning from it, and building it for them. So it seems important to do user interviews with people outside the community whom you’d like to join it.
On the weirdness point: maybe it’s useful to distinguish between two meanings of ‘rationality community’. One meaning is the intellectual community of people who further the art of rationality. The other is more of a cultural community: a set of people who know each other as friends, have similar lifestyles and hobbies, like the same kinds of fiction, share in-jokes, etc. I’m concerned that LessWrong 2.0 will select for people who want to join the cultural community rather than for people who want to join the intellectual community, and the intellectual community seems much more important. This distinction gives us two types of weirdness: weirdness that comes out of the community’s intellectual content is important to keep; ideas such as existential risk fit in here. Weirdness that comes out of the cultural community, such as references to HPMOR, seems unnecessary.
We can make an analogy with science here: scientists come from a wide range of cultural, political, and religious backgrounds. They come together to do science, and are selected on their ability to do science, not their desire to fit into a subculture. I’d like LessWrong 2.0 to be more like this, i.e. an intellectual community rather than a subculture.
“We can make an analogy with science here: scientists come from a wide range of cultural, political, and religious backgrounds.”
I’m not persuaded that this is substantially more true of scientists than of people in the LW community.
Notably, the range of different kinds of expertise that one finds on LW is much broader than that of a typical academic department (see “Profession” section here).
“They come together to do science, and are selected on their ability to do science, not their desire to fit into a subculture.”
I don’t think people usually become scientists unless they like the culture of academic science.
“I’d like LessWrong 2.0 to be more like this, i.e. an intellectual community rather than a subculture.”
I think “intellectual communities” are just a high-status kind of subculture. “Be more high status” is usually not useful advice.
I think it might make sense to see academic science as a culture that’s optimized for receiving grant money. Insofar as it is bland and respectable, that could be why.
If you feel that receiving grant money and accumulating prestige is the most important thing, then you probably also don’t endorse spending a lot of time on internet fora. Internet fora have basically never been a good way to do either of those things.
The core of my argument is: try to select as much as possible on what you care about (ability and desire to contribute to and learn from LessWrong 2.0) and as little as possible on what isn’t so important (e.g. whether they get references to HPMOR). And do testing to work out how best to achieve this.
By ‘intellectual community’ I didn’t mean ‘high-status subculture’; I was trying to get across the idea of a community that selects on people’s ability to make intellectual contributions rather than on their fit with a culture. Science is somewhat like this, although, as you say, there is a culture of academic science which makes it more subculture-like. Stack Overflow might be a better example.
I’m not hoping that LessWrong 2.0 will accumulate money and prestige; I’m hoping that it will make the intellectual progress needed to solve the world’s most important problems. But I think this aim would be better served if it attracted a wide range of people who are both capable and aligned with its aims.
My impression is that you don’t understand how communities form. I could be mistaken, but I think communities form because people discover they share a desire rather than because there’s a venue that suits them—the venue is necessary, but stays empty unless the desire comes into play.
“I’m thinking people who are important for existential risk and/or rationality such as: psychologists, senior political advisers, national security people, and synthetic biologists. I’d also include people in the effective altruism community, especially as some effective altruists have a low opinion of the rationalist community despite our goals being aligned.”
Is there something they want to do which would be better served by having a rationality community that suits them better than the communities they’ve got already?
“I think communities form because people discover they share a desire”
I agree with this, but would add that it’s possible for people to share a desire with a community but not want to join it because there are aspects of the community that they don’t like.
“Is there something they want to do which would be better served by having a rationality community that suits them better than the communities they’ve got already?”
That’s something I’d like to know. But I think it’s important for the rationality community to attempt to serve these kinds of people both because these people are important for the goals of the rationality community and because they will probably have useful ideas to contribute. If the rationality community is largely made up of programmers, mathematicians, and philosophers, it’s going to be difficult for it to solve some of the world’s most important problems.
Perhaps we have different goals in mind for LessWrong 2.0. I’m thinking of it as a place to further thinking on rationality and existential risk, where the contributors are anyone who both cares about those goals and is able to make a good contribution. But you might have a more specific goal: a place to further thinking on rationality and existential risk, targeted specifically at the current rationality community so as to make better use of the capable people within it. If you had the second goal in mind, then you’d care less about appealing to audiences outside the community.
I’m fond of LW (or at least its descendants). I’m somewhat weird myself, and more tolerant of weirdness than many.
It has taken me years and some effort to develop a no-doubt-incomplete understanding of people who are repulsed by weirdness.
From my point of view, you are proposing to destroy something I like which has been somewhat useful in the hopes of creating a community which might not happen.
The community you imagine might be a very good thing. It may have to be created by the people who will be in it. Maybe you could start the survey process?
I’m hoping that the LW 2.0 software will be open source. The world needs more good discussion venues.
“From my point of view, you are proposing to destroy something I like which has been somewhat useful in the hopes of creating a community which might not happen.”
I think a good argument against my position is that projects need to focus quite narrowly, and it makes sense to focus on the existing community given that it has already produced good stuff.
Hopefully that’s the justification that the project leaders have in mind, rather than them focusing on the current rationality community because they think that there aren’t many people outside of it who could make valuable contributions.