LW doesn’t like to hear the truth about male/female sexual strategies; we like to have accurate maps here, but there’s a big “censored” sign over the bit of the map that describes the evolutionary psychology of sexuality, practical dating advice, the burgeoning “pick-up” community and an assorted cloud of topics.
The reasons given for this censorship (and I agree with them to an extent) are that talking about these topics offends people and splits the community. LW is more useful, it has been argued, if we just don’t talk about them.
The PUA community includes people who come across as huge assholes, and that could be an alternative explanation of why people react negatively to these topics, by association. I’m thinking in particular of the blog “Roissy in DC”, which is on OB’s blogroll.
Offhand, it seems to me that thinking of all women as children entails thinking of some adults as children, which would be a map-territory mistake around the very important topic of personhood.
I did pick up some interesting tips from PUA writing, and I do think there can be valuable insight there if you can ignore the smell long enough to dig around (and wash your hands afterwards, epistemically speaking).
No relevant topics should be off-limits to a community of sincere inquiry. Relevance is the major reason why I wouldn’t discuss the beauty of Ruby metaprogramming on LessWrong, and wouldn’t discuss cryonics on a project management mailing list.
If discussions around topic X systematically tend to go off the rails, and topic X still appears relevant, then the conclusion is that the topic of “why does X cause us to go off the rails” should be adequately dealt with first, in lexical priority. That isn’t censorship, it’s dependency management.
But in reality, this topic is off-limits. Therefore LW is not a community of sincere inquiry, but nothing’s perfect, and LW does a lot of good.
Interesting. However, in this case, that discussion might get somewhat accusatory, and go off the rails itself.
Got that. I am suggesting that it is off-limits because this community isn’t yet strong enough at the skills of collaborative truth-seeking. Past failures shouldn’t be seen as eternal limitations; as the community grows by acquiring new members, it may grow out of these failures.
To make this concrete, the community seems to have a (relative) blind spot around things like pragmatics, as well as what I’ve called “myths of pure reason”. One of the areas of improvement is in reasoning about feelings. I’m rather hopeful, given past contributions by (for instance) Alicorn and pjeby.
the community seems to have a (relative) blind spot around things like pragmatics
I don’t think that is the reason for the problem. The community doesn’t go off the rails and have to censor discussions about merely pragmatic issues.
It is more that the community has a bias surrounding traditional, storybook-esque morality: roughly, a notion of doing good that seems to have some moral-realist heritage and a heavy tint of political correctness, and that sees the world in black-and-white terms rather than in moral shades of grey. Facts that undermine this conception of goodness can’t be countenanced, it seems.
Robin Hanson, on the other hand, has no trouble posting about the sexuality/seduction cluster of topics. There seems to be a systematic difference between OB and LW along this “moral political correctness/moral constraints” dimension: Robin talks with enthusiasm about futures where humans have been replaced with Vile Offspring, and generally shuns any kind of talk about ethics.
(EDITED, thanks to Morendil)
desire to cling to morals from children’s storybooks
This kind of phrase seems designed to rile (some of) your readers. You will improve the quality of discourse substantially by understanding that and correcting for it. Unless, of course, your goal really is to rile readers rather than to improve quality of discourse.
There is truth to what you say but unfortunately you are letting your frustration become visible. That gives people the excuse to assign you lower status and freely ignore your insight. This does not get you what you want.
This is perhaps one of the most important lessons to be learned on the topic of ‘pragmatics’. Whether you approach the topic through works like Robert Greene’s books on power, war, and seduction, or through the popular social-skills-based self-help communities previously mentioned, a universal lesson is that things aren’t fair, bullshit is inevitable, and getting indignant about the bullshit gets in the way of your pragmatic goals.
There may be aspects of the morality here that are childlike or naive, and I would be interested in your analysis of the subject, since you clearly have given it some thought. But if you are reckless and throw out ‘like theists’ references without thought, your contribution will get downvoted to oblivion and I will not get to hear what you have to say. Around here that more or less invokes the ‘Nazi’ rule.
Edit: No longer relevant.
There is truth to what you say but unfortunately you are letting your frustration become visible.
LOL… indeed.
I am not sure that I am actually, in far mode, so interested in correcting this particular LW bias. In near mode, SOMEONE IS WRONG ON THE INTERNET bias kicks in. It seems like it’ll be an uphill struggle that neither I nor existential risk mitigation will benefit from. A morally naive LW is actually good for X-risks, because that particular mistake (the mistake of thinking in terms of black-and-white morality and Good and Evil) will probably make people more “in the mood” for selfless acts of charity.
I think I agree. If Eliezer didn’t have us all convinced that he is naive in that sense, we would probably have to kill him before he casts his spell of ultimate power.
(cough The AI Box demonstrations were just warm ups...)
There seems to be a systematic difference between OB and LW along this “moral political correctness/moral constraints” dimension
Robin can do what he likes on his own blog without direct consequences within the blog environment. He also filters which comments he allows to be posted. I guess what I am saying is that it isn’t useful to compare OB and LW on this dimension because the community vs individual distinction is far more important than the topic clustering.
I may not have been clear: I meant pragmatics in this sense, roughly “how we do things with words”. I’d also include things like denotation vs connotation in that category. Your comment on “pragmatic issues” suggests you may have understood another sense.
Oh, OK. Linguistic pragmatics. That’s a more fruitful idea.
Curiously, there is ambiguity there and both meanings seem to apply.