...these environments are also self-selecting. In other words, even when the people speaking loudest or most eloquently don’t intentionally discourage participation from people who are not like them / who may be uncomfortable with the terms of the discussion, entertaining ‘politically incorrect’ or potentially harmful ideas out loud, in public (so to speak), signals to people who would be impacted by said ideas that they are not welcome.
Self-selection in LessWrong favors people who enjoy speaking dispassionately about sensitive issues, and disfavors people affected by those issues. We risk being an echo-chamber of people who aren’t hurt by the problems we discuss.
That said, I have no idea what could be done about it.
I’m not sure that anything should be done about it, at least if we look at it from the perspective of society as a whole. (Or rather, we should try to avoid the echo chamber effect if possible, but not at the cost of reducing dispassionate discussion.) If some places discuss sensitive issues dispassionately, then those places risk becoming echo chambers; but if no place does so, then there won’t be any place for dispassionate discussion of those issues. I have a hard time believing that a policy that led to some issue only being discussed in emotionally charged terms would be a net good for society.
Wouldn’t it be possible to minimize signaling given the same level of dispassionate discussion? That is, discourage use of highly emotionally charged/exosemantically heavy words/phrases if a less charged equivalent exists or can be coined and defined.
Say if you have a word X that means Y plus emotional connotation α and thede/memeplex/identity signaling effect β (not that emotional connotation is detached from the thedish/political/identity-wise context of the reader, of course), there’s really no reason to use X instead of Y in dispassionate discussion. To give a concrete example, there’s no reason to use ‘sluttiness’ (denotatively equivalent to ‘sexual promiscuity’ but carrying a generally negative connotational load, signaling against certain memeplexes/political positions/identities (though ideally readers here would read past the signaling load/repress the negative emotional response), and signaling identification with other positions/identities) instead of ‘sexual promiscuity’, which means the same thing but sheds all the emotional and thedish/tribal/whatever baggage.
(That shouldn’t be read as an endorsement of the reasoning toward the same conclusion in the post, of course.)
I don’t believe this is feasible. My impression is that emotional connotations inhere in things, not in words.
Over the decades, society has gone through a whole string of synonyms for “limited intelligence”—none of which are emotionally neutral. Changing terms from “imbecile” to “retarded” to “developmentally disabled” to “special needs” has just resulted in a steady turnover of playground insults. You can’t make an insulting concept emotionally neutral, I think.
The two aren’t contradictory: emotional connotations can inhere in things and words.
The euphemism treadmill is what you get when the emotional connotation inheres in a thing. But what emotional connotation inheres in ‘sexual promiscuity’? Even if it is there (and its recommendation by someone sensitive enough to emotional connotations that inhere in words [from the perspective of a specific thede/tribe] seems to suggest that it isn’t), certainly there’s less negative connotation there than in ‘sluttiness’.
Similarly, it’s possible to find loaded equivalents, or at least approximations, for most (all?) of Mencius Moldbug’s caste terms. (UR is a good place to mine for these sorts of pairs, since he coins emotionally neutral terms to replace, or at least approximate, emotionally loaded terms. Of course, if you use them, you’re signaling that you’ve read Moldbug, but...)
I have a hard time believing that a policy that led to some issue only being discussed in emotionally charged terms would be a net good for society.
But you’re also a white man and have an obvious lack of experience in this situation that functions as an unknown unknown. You’d be wise to be conservative in your conclusions.
As a white man myself, I feel it’s entirely reasonable to refuse to dispassionately discuss the matter of a boot on one’s own face. There are some situations in which it is entirely appropriate to react with the deepest of passions.
If the iron approaches your face, and you believe it is cool, and it is hot, the Way opposes your calm.
As a white man (according to your own beliefs) you can’t understand how women or non-whites feel, so please stop appropriating their cause and speaking for them.
There are people on LW who aren’t white or male, so (according to your own beliefs) you should let them talk, instead of talking from your ignorant position of white male privilege about what you think is better for them. That’s mansplaining, right?
This is a hot iron approaching my face. YOU ARE TELLING ME MY THOUGHTS AND FEELINGS ARE ILLEGITIMATE. That is literally the first step to dehumanizing and murdering me. I can either follow your advice and tell you to fuck off, or I can try to address this disagreement in a reasonable way. Which do you think will go better for me? Which do you think will go better for you? I for one don’t think the adversarial approach of many feminist and pro queer writers is sane. You really should not declare the people you think are extremely powerful and controlling the world to be your sworn enemies. Feminism literally cannot win any victories without the consent of men.
I’ve got a lot of sympathy for your situation—I spent a lot of time freaking out about the complex emotional abuse that anti-racists/certain kinds of feminists go in for.
Still, I found it useful to learn something about assessing the current risk level of an attack just so I don’t go crazy—they’ve spread a lot of misery and they may eventually be politically dangerous, but they aren’t posing the sort of immediate visceral threat you’re reacting to.
We haven’t begun to see the next stage of the fight (or at least, I haven’t seen anything I’d call effective opposition to the emotional abuse), but I recommend steadying yourself as much as possible.
YOU ARE TELLING ME MY THOUGHTS AND FEELINGS ARE ILLEGITIMATE.
Sometimes this is the case. Once you’ve realized this, try not to let it bother you too much. What’s true is already so; denying it doesn’t make it go away, and shouting on the Internet won’t make it go away either.
That is literally the first step to dehumanizing and murdering me.
If you’re worried about this, you’re either a totally normal oppressed persyn, or a paranoid white dude.
If you’re a white dude, you should stop appropriating very real fears that plenty of people face on a daily basis. That’s just bad taste.
I for one don’t think the adversarial approach of many feminist and pro queer writers is sane.
Assuming you’re a white dude, it’s really not your place to tell feminists or queer activists how to do what they do.
Do you see how your privilege has you assuming that you 1. know best and 2. should tell other people how to exist? Not to mention the fact you apparently think men are somehow necessary for feminist collective action.
I agree that this is by far the most interesting part of the piece. IIRC this site is pretty much all white men. Part of it is almost certainly that white men are into this sort of thing, but I can’t help but imagine that if I were not a white man, especially if I were still in the process of becoming a rationalist, I would be turned off and made to feel unwelcome by the open discussion of taboo issues on this website. This has the obvious effect of artificially shifting the site’s demographics, and more worryingly, artificially shifting the site’s demographics to include a large number of people who are the type of person to be unconcerned with political correctness and offending people. I think while that trait in and of itself is good, it is probably correlated with certain warped views of the world. Browse 4chan for a while if you want examples.
I think that between the extremes of the SJW Tumblr view of “When a POC talks to you, shut the fuck up and listen, you are privileged and you know nothing” and the view of “What does it matter if most of us aren’t affected by the problems we talk about, we can just imagine and extrapolate, we’re rationalist, right?” is where the truth probably lies.
Like you said, I have no idea what to do about this. There are already a lot of communities where standard societal taboos of political correctness are enforced, and I think it’s worthwhile to have at least one where these taboos don’t exist, so maybe nothing.
I’m a white man who’s done handsomely in the privilege lottery and I find quite a lot of LW utterly offputting and repellent (as I’ve noted at length previously). I’m still here of course, but in fairness I couldn’t call someone unreasonable for looking at its worst and never wanting to go near the place.
If all you show a person is the worst of LessWrong, then yes, I could see them not wanting to have anything to do with it. However, this doesn’t tell us anything; the same argument could be made of virtually all public boards. You could say the same thing about Hallmark greeting cards.
This is roughly how I feel. There is a lot of good stuff here, and a lot of horrible, horrible stuff that I never, ever want to be associated with. I do not recommend LessWrong to friends.
a lot of horrible, horrible stuff that I never, ever want to be associated with.
As a lurker and relatively new person to this community I’ve now seen this sentiment expressed multiple places but without any specific examples. Could you (or anyone else) please provide some? I’d really like to know more about this before I start talking about Less Wrong to my friends/family/coworkers/etc.
Feel free to PM me if you don’t want to discuss it publicly.
A lot of this content is concentrated among the users who eventually created MoreRight. Check out that site for a concentrated dose of what also pops up here.
But but … he posted a link to that (or some other video of him ranting at the camera), and then was downvoted to oblivion and demolished in the comments, while whining about how he was being oppressed.
Things like that don’t seem remotely mainstream on LW, do they? (I don’t read all the big comment threads …)
If we keep telling ourselves that LW is full of horrible stuff, we start believing it. Then any negative example, even if it happens once in a while and is quickly downvoted, becomes a confirmation of the model.
This is a website with hundreds of thousands of comments. The fact that a few dozen of them are about X doesn’t prove much.
EDIT: And I think threads like this contribute heavily to the availability bias. It’s like an exercise in making all the bad things more available. If you use this strategy as an individual, it’s called depression.
Just imagine that once in a while someone would accuse you of being a horrible human being, and (assuming they had a record of everything you ever did) would show you a compilation of the worst things you have ever done in the past (ignoring completely anything good you did, because that’s somehow irrelevant to the debate) and tell you: this is you, this is why you are a horrible person! Well, that’s pretty much what we are doing here.
That guy is funny. Definitely not someone who would be well respected here. His model of the world is broken and he’s trying to make the world fit his model, instead of the other way around.
I’m at a loss regarding what you must consider ‘horrible’. About the worst example I can think of is the JoshElders saga of pedophilia posts, and it only took two days to downvote everything he posted into oblivion and get it removed from the lists—and even that contained a lot of good discussion in the comments.
If you truly see that much horrible stuff here, perhaps your bar is too low, or perhaps mine is too high. Can you provide examples that haven’t been downvoted, that are actually considered mainstream opinion here?
Most of these are not dominant on LW, but come up often enough to make me twitchy. I am not interested in debating or discussing the merits of these points here because that’s a one-way track to a flamewar this thread doesn’t need.
The stronger forms of evolutionary psychology and human-diversity stuff. High confidence that most/all demographic disparities are down to genes. The belief that LessWrong being dominated by white male technophiles is more indicative of the superior rationality of white male technophiles than any shortcomings of the LW community or society-at-large.
Any and all neoreactionary stuff.
High-confidence predictions about the medium-to-far future (especially ones that suggest sending money).
Throwing the term “eugenics” around cavalierly and assuming that everyone knows you’re talking about benevolent genetic engineering and not forcibly-sterilizing-people-who-don’t-look-like-me.
There should be a place to discuss these things, but it probably shouldn’t be on a message board dedicated to spreading and refining the art of human rationality. LessWrong could easily be three communities:
a rationality forum (based on the sequences and similar, focused on technique and practice rather than applying to particular issues)
a transhumanist forum (for existential risk, cryonics, FAI and similar)
an object-level discussion/debate forum (for specific topics like feminism, genetic engineering, neoreactionism, etc).
High confidence that most/all demographic disparities are down to genes. The belief that LessWrong being dominated by white male technophiles is more indicative of the superior rationality of white male technophiles than any shortcomings of the LW community or society-at-large.
I am not sure how much these opinions really are that extreme, and how much it’s just a reflection of how political debates push people into “all or nothing” positions. Like, if you admit that genes have any influence on a population, you are automatically misinterpreted to believe that every aspect of a population is caused by genes. Because, you know, there are just two camps, the “genes, boo” camp and the “genes, yay” camp, and you have already proved you don’t belong to the former camp, therefore...
At least this is how I often feel in similar debates. Like there is no “genes affect 50% of something” position. There is a “genes don’t influence anything significant, ever” camp where all the good guys are; and there is the “other” camp, with everyone else, including me and Hitler. If we divide a continuous scale into “zero” and “nonzero” subsets, then of course 0.1 and 0.5 and 1 and infinity all get into the same subset. But that’s looking through the mindkilling glasses. I could start explaining how believing that genes can have some influence on thinking and behavior is not the same as attributing everything to the genes, and is completely nothing like advocating a genocide… but I already see all the good guys looking at me and thinking: “Nice try, but you are not going to fool us. We know what you really believe.”—Well, the idea is that I actually don’t.
I don’t even think that having a white male majority at this moment is some failure of the LW community. I mean—just try to imagine a parallel universe where someone else started LW. How likely is it that in the parallel universe it is perfectly balanced by ethnicity and gender? What exactly does your model of reality make you predict?
Imagine that you are a visitor from an alien species and you are told the following facts: 1) Most humans are irrational, and rationality is associated with various negative things, like Straw Vulcans. Saying good things about rationality will get you laughed at. But paradoxically, telling others that they are not very rational is offensive. So it’s best to avoid this topic, which most people do. 2) Asch’s conformity test suggests that women are a bit more likely than men to conform. 3) Asians have a culture that discourages standing out of the crowd. 4) Blacks usually live in the poorest countries, and those living in the developed countries were historically oppressed. -- Now that you know these facts, you are told that there is a new group of people trying to promote rationality and science and technology. As the alien visitor, based on the given data, please tell me, which gender and which race would you bet would be most represented in this group?
If LW remains forever a group of mostly white males, then yes, that would mean that we have failed. Specifically, that we have failed to spread rationality, to increase the sanity waterline. But the fact that LW started with such demographics is completely unsurprising to me. So, is the proportion of other groups increasing on LW? Looking at the surveys for two years, it seems to me that it is. Then the only question is whether it is increasing fast enough. Well, fast enough compared with what? Sure, we could do more about it. Surely, we are not automatically strategic; we have missed some opportunities. Let’s try harder. But there is no point in obsessing over the fact that LW started as a predominantly white male group, or that we didn’t fix the disparities in the society within a few years.
I don’t even think that having a white male majority at this moment is some failure of the LW community
There are other options. I think there exist possible worlds where LW is less-offputting to people outside of the uppermiddleclasstechnophilewhitemaleosphere with demographics that are closer to, but probably not identical to, the broader population. Like you said, there’s no reason for us to split the world into all-or-nothing sides: It’s entirely possible (and I think likely) that statistical differences do exist between demographics and that we have a suboptimal community/broader-culture which skews those differences more than would otherwise be the case.
Edit: I had only skimmed your comment when writing this reply; On a reread, I think we mostly agree.
I’ve definitely experienced strong adverse reactions to discussing eugenics ‘cavalierly’ if you don’t spend at least ten to fifteen minutes covering the inferential steps and sanitising the perceived later uses of the concept.
Good point about the possible three communities. I haven’t posted here much, as I found myself standing too far outside the concepts whilst I worked my way through the sequences. Regardless of that, the more I read the more I feel I have to learn, especially about patterned thinking and reframes. To a certain extent I see this community as a more scientifically minded Maybe Logic group, when thinking about priors and updating information.
A lot of the transhumanist material has garnered very strong responses from friends, though; I’ve stocked up on Istvan paperbacks to hopefully disseminate soon.
My vague recollections of LW-past disagreements, but I don’t have any readily available examples. It’s possible my model is drawing too much on the-rest-of-the-Internet experiences and I should upgrade my assessment of LW accordingly.
The stronger forms of evolutionary psychology and human-diversity stuff. High confidence that most/all demographic disparities are down to genes. The belief that LessWrong being dominated by white male technophiles is more indicative of the superior rationality of white male technophiles than any shortcomings of the LW community or society-at-large.
Any and all neoreactionary stuff.
High-confidence predictions about the medium-to-far future (especially ones that suggest sending money).
Throwing the term “eugenics” around cavalierly and assuming that everyone knows you’re talking about benevolent genetic engineering and not forcibly-sterilizing-people-who-don’t-look-like-me.
I don’t mind #3, in fact the discussions of futurism are a big draw of LessWrong for me (though I suppose there are general reasons for being cautious about your confidence about the future). But I would be very happy to see #1, #2, and #4 go away.
I find stuff like “if you don’t sign up your kids for cryonics then you are a lousy parent” more problematic than a sizeable fraction of what reactionaries say.
What if you qualified it, “If you believe the claims of cryonicists, are signed up for cryonics yourself, but don’t sign your kids up, then you are a lousy parent”?
In discussing vaccinations, how many people choose to say something as conditional as “if you believe the claims of doctors, have had your own vaccinations, but don’t let your kids be vaccinated, then you are a lousy parent”?
No, the argument is that you should believe the value of vaccinations, and that disbelieving the value of vaccinations itself makes your parenting lousy.
Well, I think Eliezer feels the same about cryonics as pretty much all the rest of us feel about vaccines—they help protect your kids from several possible causes of death.
No, the argument is that you should believe the value of vaccinations, and that disbelieving the value of vaccinations itself makes your parenting lousy.
Which is pretty much the same argument as saying that you should baptize your children and that disbelieving the value of baptism itself makes your parenting lousy.
If the belief-set you’re subtly implying is involved were accurate, then it would be.
However, I think we have a “sound” vs “sound” tree-falling-in-the-woods issue here. Is “lousy parenting” a virtue-ethics style moral judgement, or a judgement of your effectiveness as a parent?
Taboo “lousy”, people. We’re supposed to be rationalists.
Which is pretty much the same argument as saying that you should baptize your children and that disbelieving the value of baptism itself makes your parenting lousy.
Exactly, it all depends on the actual value of the thing in question. I believe baptism has zero value, I believe vaccines have lots of value, I’m highly uncertain about the value of cryonics (compared to other things the money could be going to).
A person is expected to say such about X if they believe X has lots of value. So why is it so very problematic for Eliezer to say it about cryonics when he believes cryonics has lots of value?
It’s impolitic and I don’t know how effective it is in changing minds. But then again it’s the same thing we say about vaccinations, so who knows: perhaps shaming parents does work in convincing them. I’d like to see research about that.
perhaps shaming parents does work in convincing them
My prior is that the results will be bi-modal: some parents can be shamed into adjusting their ways, while for others it will only force them into the bunker mindset and make them more resistant to change.
a rationality forum (based on the sequences and similar, focused on technique and practice rather than applying to particular issues)
a transhumanist forum (for existential risk, cryonics, FAI and similar)
an object-level discussion/debate forum (for specific topics like feminism, genetic engineering, neoreactionism, etc).
I’m not sure that would work. After all, Bayes’s rule has fairly obvious unPC consequences when applied to race or gender, and thinking seriously about transhumanism will require dealing with eugenics-like issues.
Think of it as the no-politics rule turned up to 11. The point is not that these things can’t be reasoned about, but that the strong (negative/positive) affect attached to certain things makes them ill-suited to rationalist pedagogy.
Lowering the barrier to entry doesn’t mean you can’t have other things further up the incline, though.
Datapoint: I find that I spend more time reading the politically-charged threads and subthreads than other content, but get much less out of them. They’re like junk food; interesting but not useful. On the other hand, just about anywhere other than LW, they’re not even interesting.
(on running a memory-check, I find that observation applies mostly to comment threads. There’s been a couple of top-level political articles that I genuinely learned something from)
Most of the previous threads on the topic, every time one of these posts comes around. You could find them by much the same process as I could. The HBD fans put me off for a few months.
Small but noisy. They add their special flavour to the tone though, as one of the few places outside their circle of blogs that gives them airtime (much like the neoreactionaries they cross over with).
I wonder if the people in the subthread below going “we may be racists, but let’s be the right sort of racists” understand that this doesn’t actually help much.
Small but noisy. They add their special flavour to the tone though, as one of the few places outside their circle of blogs that gives them airtime (much like the neoreactionaries they cross over with).
Rather, we support our beliefs with rational arguments; the HBD-deniers don’t bother presenting counterarguments (and when they do, they tend to be laughably bad) but instead try to argue that it’s somehow immoral to say and/or believe these things regardless of their truth value.
I’ve not really followed you, but I’ve never once seen you make an argument or even explain what you want. If you tell me something y’all want that you could plausibly achieve without the aid of low-status racists, perhaps I’ll try to put y’all in a separate category.
I’d like people to stop trying to suppress science because of nothing but ideological principles, like the creationists, and let the scientists get on with stuff like finding a cure for Alzheimer’s.
I wonder if the people in the subthread below going “we may be racists, but let’s be the right sort of racists” understand that this doesn’t actually help much.
This could do with some clarification—doesn’t help whom with what? And, by contrast, what would help?
“Fan” is a funny word in this context. It brings to mind people who go around shouting “Yea, Diversity!” non-ironically. Except there are people who more or less do that, it isn’t the HBD crowd, and in fact diversity boosters don’t even really believe in it.
Edit: Sorry, missed the correct comment to reply to.
I’m a white man who’s done handsomely in the privilege lottery and I find quite a lot of LW utterly offputting and repellent
Why? If the answer is, as appears to be the case from context, that we say true things that make you feel uncomfortable, well I recommend treating your feeling of discomfort with the truth rather than the people saying it as the problem. This is a community devoted to rationality, not to making you feel comfortable.
Continuing the argument though, I just don’t think including actual people on the receiving end in the debate would help determine true beliefs about the best way to solve whatever problem it is. It’d fall prey to the usual suspects like scope insensitivity, emotional pleading, and the like. Someone joins the debate and says “Your plan to wipe out malaria diverted funding away from charities that research the cure to my cute puppy’s rare illness, how could you do that?”—how do you respond to that truthfully while maintaining basic social standards of politeness?
Someone affected by the issue might bring up something that nobody else had thought of, something that the science and statistics and studies missed—but other than that, what marginal value are they adding to the discussion?
In my experience, reading blogs from minority representatives (sensible ones) introduces you to different thought patterns.
Not very specific, huh?
Gypsies are the most focused-on minority in my country.
The gypsy blogger, who managed to leave her community, once told a story. Her mother visited her in her home, found frozen meat in her freezer, and almost started crying: My daughter, how can you store meat at home, when there are people who are hungry today? (Gypsies are stereotypically bad at planning and managing their finances, to the point of self-destruction. But before this blog, I did not understand that this makes them virtuous in their own eyes.)
Wouldn’t it be nice to have such people interacting in LW conversations, instead of just linking to them?
Especially for people intending to program friendly AI, who need to understand the needs of other people (although I doubt very much AI will be developed or that MIRI will ever really start coding it. Plus I do not want it to exist. But it is just me.)
Wouldn’t it be nice to have such people interacting in LW conversations, instead of just linking to them?
Yes. It would be nice. I am genuinely uncertain whether there’s a good way to make LW appealing to people who currently dislike it, without alienating the existing contributors who do like it.
Maybe I am naive, but how about some high-status member explicitly stating that we would be very happy if they contributed here?
Eliezer wrote the same thing about women.
http://lesswrong.com/lw/ap/of_gender_and_rationality/
It was not exactly “Women, come, please” but it was clear they would be welcome to participate.
It might have helped.
Or maybe the increased percentage in the census result was due to something else?
How would I know...
If you want to increase your fish-size, articles / comment threads which generate lots of upvotes are a good way to do it. And since your fish-size is small already there’s not much to lose if people don’t like it.
Especially for people intending to program friendly AI, who need to understand the needs of other people (although I doubt very much AI will be developed or that MIRI will ever really start coding it. Plus I do not want it to exist. But it is just me.)
The plan to write an AI that will implement the Coherent Extrapolated Volition of all of humanity doesn’t involve talking to any of the affected humans. The plan is, literally, to first build an earlier AI that will do the interacting with all those other people for them.
That link only explains the concept of CEV as one possible idea related to building FAI, and a problematic one at that. But you’re making it sound like CEV being the only possible approach was an opinion that had already been set in stone.
AFAIK, it’s one idea that’s being considered, but I don’t think there’s currently enough confidence in any particular approach to call it The Plan. “The Plan” is more along the lines of “let’s experiment with a lot of approaches and see which ones seem the most promising”; the most recent direction that that plan has produced is a focus on general FAI math research, which may or may not eventually lead to something CEV-like.
although I doubt very much AI will be developed or that MIRI will ever really start coding it. Plus I do not want it to exist.
Could you elaborate on why you think that way? It’s always interesting to hear why people think a strong AI or Friendly AI is not possible/probable, especially if they have good reasons to think that way.
I think that AI is inevitable, but I think that unfriendly AI is more likely than friendly AI. This is just from my experience in developing software, even in my small team environment where there are fewer human egos and less tribalism/signaling to deal with. Something that you hadn’t thought of is always going to happen, and a bug will be perpetuated throughout the lifecycle of your software. With AI, who knows what implications these bugs will have.
Rationality itself has to become much more mainstream before tackling AI responsibly.
I’m a programmer, and I doubt that AI is possible. Or, rather, I doubt that artificial intelligence will ever look that way to its creators. More broadly, I’m skeptical of ‘intelligence’ in general. It doesn’t seem like a useful term.
I mean, there’s a device down at the freeway that moves an arm up if you pay the toll. So, as a system, it’s got the ability to sense the environment (limited to the context of knowing if the coin verification system is satisfied with the payment), and affect that environment (raise and lower arm). Most folks would agree that that is not AI.
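(To make the basket-and-lever framing concrete, here is a minimal sketch of such a device as a sense-act loop; the class name and the verifier interface are invented for illustration:)

```python
# Hypothetical sketch of the toll gate as a trivial sense-act system;
# the TollGate class and the verifier interface are invented here.

class TollGate:
    def __init__(self, verifier):
        self.verifier = verifier  # the "basket": the device's only sense organ
        self.arm_up = False       # the "lever": its only way to affect the world

    def step(self):
        # The entire "decision matrix" is one conditional: the arm is up
        # exactly when the verifier reports a valid payment.
        self.arm_up = self.verifier.payment_received()
```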
So, then, how can we get beyond that? It is a nonhuman reaction to the environment. Whatever I wrote that we called “AI” would presumably do what I program it to (and naught else) in response to its sensory input. A futuristic war drone’s basket is its radar and its lever is its missiles, but there’s nothing new going on here. A chat bot’s basket is the incoming feed, and its lever is its outgoing text, but it’s not like it ‘chooses’ what it sends out in any sense more meaningful than the toll bot’s decision matrix.
So maybe it could rewrite its own code. But if it does so, it’ll only do so in the way that I’ve programmed it to. The paper clip maximizer will never decide to rewrite itself as a gold coin maximizer. The final result is just a derived product of my original code and the sensory experiences it’s received. Is that any more ‘intelligent’ than the toll taker?
I like to bet folks that AI won’t happen within timeframe X. The problem then becomes defining AI happening. I wouldn’t want them to point to the toll robot, and presumably they’d be equally miffed if we were slaves of the MechaPope and I was pointing out that its Twenty Commandments could be predicted given a knowledge of its source code.
Thinking on it, my knee-jerk criterion is that I will admit that AI exists if the United States knowingly gives it the right to vote (obviously there’s a window where AI is sentient but can’t vote, but given the speed of the FOOM it’ll probably pass quickly), or if the earth declares war (or the equivalent) on it. It’s a pretty hard criterion to come up with.
What would yours be? Say we bet, you and I, on whether AI will happen in 50 years. What would you want me to accept as evidence that it had done so (keeping in mind that we are imagining you as motivated not by a desire to win the bet but a desire that the bet represent the truth)?
More broadly, I’m skeptical of ‘intelligence’ in general. It doesn’t seem like a useful term.
People here have tried to define intelligence in more strict terms. See Playing Taboo with “Intelligence”. They define ‘intelligence’ as an agent’s ability to achieve goals in a wide range of environments.
Anyway, if you define intelligence as the ability to achieve goals in a wide range of environments then it doesn’t really matter if the AI’s actions are just an extension of what it was programmed to do. Even people are just extensions of what they were “programmed to do by evolution”. Unless you believe in magical free will, one’s actions have to come from some source and in this regard people don’t differ from paper clip maximizers.
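(For reference, Legg and Hutter make that informal definition precise; a sketch of their universal intelligence measure, as I understand it:

$$\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi$$

where $\pi$ is the agent, $E$ the set of computable environments, $K(\mu)$ the Kolmogorov complexity of environment $\mu$, and $V_\mu^\pi$ the expected total reward $\pi$ achieves in $\mu$. Simpler environments get more weight, but scoring high requires achieving goals across many environments.)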
What would yours be?
I just think there are good optimizers and then there are really good optimizers. Between these there aren’t any sudden jumps, except when the FOOM happens and possibly from unFriendly to Friendly. There isn’t any sudden point when the AI becomes sentient, and the question of how well the AI resembles humans is just a question of how well the AI can optimize towards this.
Say we bet, you and I, on whether AI will happen in 50 years. What would you want me to accept as evidence that it had done so.
There are already some really good optimizers, like Deep Blue and other chess computers that are far better at playing chess than their makers. But you probably meant when AIs become sentient? I don’t know exactly how sentience works, but I think something akin to the Turing test that shows how well the AI can behave like humans is sufficient to show that AI is sentient, at least in one subset of sentient AIs. To reach a FOOM scenario the AI doesn’t have to be sentient, just really good at cross-domain optimization.
I’m confused. You are looking for good reasons to believe that AI is not possible, per your post two above, but from your beliefs it would seem that you either consider AI to already exist (optimizers) or be impossible (sentient).
I don’t believe sentient AIs are impossible and I’m sorry if I gave that impression. But apart from that, yes, that is a roundabout version of my belief—though I would prefer the word “AI” be taboo’d in this case. This doesn’t mean my way of thinking is set in stone, I still want to update my beliefs and seek ways to think about this differently.
If it was unclear, by “strong AI” I meant an AI that is capable of self-improving to the point of FOOM.
I would pick either some kind of programming ability, or the ability to learn a language like English (which I would bet implies the former if we’re talking about what the design can do with some tweaks).
Someone affected by the issue might bring up something that nobody else had thought of, something that the science and statistics and studies missed—but other than that, what marginal value are they adding to the discussion?
Thinkers—including such naive, starry-eyed liberal idealists as Friedrich Hayek or Niccolo Machiavelli—have long touched on the utter indispensability of subjective, individual knowledge and its advantages over the authoritarian dictates of an ostensibly all-seeing “pure reason”. Then along comes a brave young LW user and suggests that enlightened technocrats like him should tell people what’s really important in their lives.
I’m grateful to David for pointing out this comment, it’s really a good summary of what’s wrong with the typical LW approach to policy.
That said, I have no idea what could be done about it.
I hesitate to suggest this, but I’ve noticed most of the “sensitive but discussed anyway” issues have been on areas where socially weaker groups might feel threatened by the discussion. Criticism of socially strong groups is conspicuously absent, given that LW demographics are actually far-left leaning according to polls.
If the requirement that one must be dispassionate would cut in multiple directions simultaneously (rather than selectively cutting in the direction of socially marginalized groups), then we’d select for “willing to deal intellectually with emotional things” rather than selecting for “emotionally un-reactive to social problems” (which is a heterogeneous class containing both people who are willing to deal intellectually with things which are emotionally threatening and people who happen to not often fall on the pointy end of sensitive issues).
The reason I hesitate to suggest it is that while I do want an arena where sensitive issues can be discussed intellectually without driving people away, people consciously following the suggestion would probably result in a green-blue battleground for social issues.
Well sure, but that doesn’t count because we’re pretty much all atheists here. Atheism is the default position in this social circle, and the only one which is really given respect.
I’m talking about criticisms of demographics and identities of non-marginalized groups that actually frequent Lesswrong.
If we’re allowed to discuss genetically mediated differences with respect to race and behavior, then we’re also allowed to discuss empirical studies of racism, its effects, which groups are demonstrated to engage in it, and how to avoid it if we so wish. If we’re allowed to empirically discuss findings about female hypergamy, we’re also allowed to discuss findings about male proclivities towards sexual and non-sexual violence.
But for all these things, there’s no point in discussing them in Main unless there’s an instrumental goal being served or a broader philosophical point being made about ideas...and even in Discussion, for any of this to deserve an upvote it would need to be really data-driven and/or bringing attention to novel ideas rather than just storytelling, rhetoric, or the latest political drama.
Reactionary views, being obscure and meta-contrarian, have a natural edge in the “novel ideas” department, which is probably why it has come up so often here (and why there is a perception of LW as more right-wing than surveys show).
If we’re allowed to discuss genetically mediated differences with respect to race and behavior, then we’re also allowed to discuss empirical studies of racism, its effects, which groups are demonstrated to engage in it, and how to avoid it if we so wish. If we’re allowed to empirically discuss findings about female hypergamy, we’re also allowed to discuss findings about male proclivities towards sexual and non-sexual violence.
Speaking for myself, I would be happy to see a rational article discussing racism, sexism, violence, etc.
For example, I would be happy to see someone explaining feminism rationally, by which I mean: 1) not assuming that everyone already agrees with your whole teaching or they are a very bad person; 2) actually providing definitions of what is and what isn’t meant by the used terms in a way that really “carves reality at its joints” instead of torturing definitions to say what you want, such as defining sexism as “doing X while male”; 3) focusing on those parts that can be reasonably defended and ignoring, or even being willing to criticize, those parts that can’t.
(What I hate is someone just throwing around an applause light and saying: “therefore you must agree with me or you are an evil person”. Or telling me to go and find a definition elsewhere without even giving me a pointer, when the problem is that almost everyone uses the word without defining it, or that there are different contradictory definitions. Etc.)
Some of my favorite feminist articles are the ones demonstrating actual statistical effects of irrational biases against women, such as http://www.catalyst.org/file/139/bottom%20line%202.pdf talking about women being undervalued as board members, or the ones talking about how gender blind audition processes result in far more women orchestra members.
That alone doesn’t imply agreement with any specific hypothesis about what exactly causes the prejudice, nor with any specific proposal how this should be fixed. That would require more bits of evidence.
In general, I support things that reduce that prejudice—such as the blind tests—where I see no negative side-effects. But I am cautious about proposals to fix it by reversing stupidity, typically by adding a random bonus to women (how exactly is it quantified?) or imposing quotas (what if in some specific situation X all women who applied for the job really were incompetent? just like in some other specific situation Y all men who applied could be incompetent).
Also, there are some Schelling-point concerns, e.g. once we accept it is okay to give bonuses on tests to different groups and to determine the given group and bonus by democratic vote or lobbying, it will become a new battlefield with effects similar to “democracy stops being fair once people discover they can vote themselves more money out of their neighbors’ pockets”. It would be nice to have some scientists discover that the appropriate bonus on tests is exactly 12.5 points, but it is more like the real world to have politicians promising a bonus of 50 points to any group in exchange for their vote, of course each of them having “experts” to justify why this specific number is correct. -- And I would hate to have a choice between a political party that gives me −1000 points penalty and a political party that gives me +1000 points bonus, which I would consider also unfair, and in addition I might disagree with that party on some other topics. And given human nature, I would not be surprised if those −1000 and +1000 parties become so popular among their voters that another party proposing to reset the bonuses back to 0 would simply have no chance.
One thing I would like to see—and haven’t—in regards to opposition to prejudice is work on how to become less prejudiced. That is, how to see the person in front of you accurately, even if you’ve spent a lot of time in an environment which trained you to have pre-set opinions about that person.
Information about an individual screens off information about the group. At least it should. Let’s assume partial success, which is better than nothing. So the key is to get information about the individual. I would just try talking to them.
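(A toy illustration of that screening-off, with invented numbers: a group base rate gives a 0.7 prior that a member has some trait, but a few direct observations of the individual swamp that prior under Bayes’ rule.)

```python
# Toy numbers, invented for illustration only.

def posterior(prior, p_obs_if_trait, p_obs_if_no_trait, observations):
    """Update P(this person has the trait) on a list of boolean observations."""
    p = prior
    for obs in observations:
        l_trait = p_obs_if_trait if obs else 1 - p_obs_if_trait
        l_none = p_obs_if_no_trait if obs else 1 - p_obs_if_no_trait
        p = p * l_trait / (p * l_trait + (1 - p) * l_none)
    return p

prior = 0.7                  # what knowing only the group tells you
obs = [False, False, False]  # three individual observations pointing the other way
print(posterior(prior, 0.8, 0.2, obs))  # ~0.035: the group prior barely matters now
```

Once the individual observations are in, the group label contributes almost nothing.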
I guess the failure of usual anti-prejudice techniques is assuming that all opinions about a group are wrong, i.e. not valid Bayesian evidence. (Of course unless it is a positive opinion about a minority, in which case it hypocritically is okay.) They try to remove the participants’ opinion about a group in general; usually without any success.
I would rather assume that an opinion about the group may be correct, but still, any given individual may be different than the average or the stereotype of their group. Which can easily be demonstrated by letting participants talk about how they differ from the average or the stereotype of various groups they could be classified into. For example, unlike a typical man in my society, I have long hair, I don’t like beer, and I am not interested in watching sport on TV. At this moment, the idea of “the person is not the same as (my idea of) the group” is in near mode. The next step is getting enough specific information about the other person so that the general image of “a random member of group X” can be replaced with some other data. (Depends on situation; e.g. in a group of children I would give many yes/no questions such as “do you have a pet?” and let them raise their hands; and then also they would ask questions. Each bit of information that differs from the assumption, if noticed, could be useful.)
Of course the result could be that people change their opinion about this one specific person, and yet keep their prejudice about their group. Which is an acceptable result for me, but probably not acceptable for many other people. I would reason that a partial success which happens is much better than an idealistic solution that doesn’t happen; and that accepting one exception makes people more likely to accept another exception in the future, possibly weakening the prejudice. But on the other hand, if the original opinion about the average of the group was correct, then we have achieved the best possible result: we didn’t teach people bullshit (which could later backfire on us) and yet we taught them to perceive a person as an individual, different from the average of the group, which was the original goal.
Here’s some empirical research on the actual causes of the pay gap. Executive Summary: The majority of the burden of child rearing still falls on women, and this can be disruptive to their career prospects, especially in high-paying fields like law and business management; childless women and women who work in jobs that allow for flexible hours earn incomes much closer to parity.
Side note: I can’t really tell, but some evidence suggests the total time spent on childcare has increased in the past 40-50 years. Now, when I look at people raised back then and try to adjust for the effects of leaded gasoline on the brain, they seem pretty much OK. So we should consider the possibility that we’re putting pointless pressure on mothers.
Who is the “we” there? I’m not disclaiming responsibility, but I am interested in who these women feel is pressuring them. I’d wager it’s largely a status competition with other women.
As you said, “much closer to parity”. There are probably multiple causes, each responsible for a part of the effect. And as usual, the reality is not really convenient for any political side.
Agreed, but we devote plenty of time to criticizing it, don’t we? (Both reactionary criticism, and the more mainstream criticisms of the media/academia culture)
But the thing about the reactionary lens, especially Moldbug, is that at the end of the day they side with the people in power. Moldbug even explicitly states as much. A central theme of his work is that we shouldn’t keep elevating the weaker and criticizing the stronger, thus creating endless revolution. “Formalism” essentially means “maintaining the status quo of the current power hierarchy”. The only exception to this is the Cathedral itself—because it is a power structure which is set up in such a way that it upsets existing hierarchies.
So the Moldbug/reactionary ideology, at the core, is fundamentally opposed to carrying out the criticism which I just suggested against anyone who isn’t part of “the Cathedral”, which keeps shifting the status quo (hence the meta-contrarianism). It is an ideology which only criticizes the social critics themselves, and seeks to return to the dominant paradigm as it was before the social critics entered the scene.
I’m saying we need more actual real contrarianism, not more meta contrarianism against the contrarians. It is useful to criticize things other than the Cathedral. I’m being a meta-meta-contrarian.
I’m saying we need actual real contrarianism, not meta contrarianism against the contrarians. I’m being a meta-meta-contrarian.
I think I’m a bit confused now.
Let’s say Cathedral is mainstream. Then Moldbug is a contrarian. Then Yvain’s anti-reactionary FAQ is contrarian against a contrarian. Are you saying we need more stuff like Yvain’s FAQ?
Or do you want some actual direct criticism of an existing power structure, maybe something along these lines?
We start with a base. You are saying this is the mainstream US, which you understand to be conservative. So, level 0—US conservatives—mainstream.
Level 1 is the Cathedral, which is contrarian to level 0 and which is US liberals or progressives.
Level 2 are the neo-reactionaries, who are contrarian to level 1 (the Cathedral).
Level 3 is Yvain’s FAQ, which is contrarian to level 2 (the reactionaries).
So we are basically stacking levels where each level is explicitly opposed to the previous one and, obviously, all even layers are sympathetic to each other, as are all odd layers (I find the “meta-” terminology confusing since this word means other things to me, probably “anti-” would be better).
And what you want more of is level 1 stuff—basically left-liberal critique of whatever stands in the way of progress, preferably on steroids.
Do I understand you right?
EDIT: LOL, you simplified your post right along the lines I was extracting out of it...
I don’t mind hearing from any level, as long as things are well cited.
- I’ve sort of gotten bored with level 0, but that could change if I see a bunch of really well done level 0 content. I just don’t often see very many insightful things coming from this level.
- Level 2 holds my interest because it’s novel. When it’s well cited, it really holds my interest. However, it seldom is well cited. That’s okay though—the ideas are fun to play with.
- Level 1 is the level I agree with. However, because I’m very familiar with it and its supporting data, and I hate agreeing with things, it has to work a lot harder to hold my interest.
My perception is that level 2, for reasons described, gets more attention than it merits. The shock value, twisty narrative, and novelty of it make it more interesting to people like me, who like reading compelling arguments even if they don’t completely agree. However, it drives away people who are emotionally affected and/or perceive that they have something to protect from what would happen if those viewpoints were to gain traction.
I was suggesting that maybe increasing good level one posts, which weren’t boring, echo-chamber-ish and obviously true to most people on LessWrong, would remedy this. (I’m taking the LW poll as an indication that most LWers, like me, agree with Level 1)
Edit: Even layers are not necessarily sympathetic to each other, even if they are ideologically aligned. Mainstream conservatives would likely not be sympathetic to the reactionaries’ open racism/sexism etc., and the impression I get is that reactionaries think mainstream conservatives are fighting a losing battle and aren’t particularly bright. There’s really only one odd layer, practically speaking, since Yvain is the only person on hypothetical layer 3.
Hm. I understand you now. However I carve reality in a somewhat different way—we see joints in the territory in different places.
First I would set up level zero as reality, what actually exists now—all the current socio-econo-politico-etc. structures. And then one dimension by which you divide people/groups/movements would be by whether they are more or less content with the current reality or whether they want to radically change it.
Another dimension would be the individual vs. group/community/state spectrum, anarchists being on one end and fans of a totalitarian state on the other.
You can add more—say, egalitarianism vs. some sort of a caste system—as needed.
Getting back to your wishes, I think we have a bunch of socialists here who on a regular basis post critiques of the status quo from the left side (e.g. didn’t we have a debate about guaranteed basic income recently?). On the other hand they do lack in sexiness and edginess :-)
Getting back to your wishes, I think we have a bunch of socialists here who on a regular basis post critiques of the status quo from the left side (e.g. didn’t we have a debate about guaranteed basic income recently?).
I didn’t witness this debate, so maybe you’re right that the advocates for the guaranteed minimum income were in fact socialists. I’d like to note, though, that the idea of a guaranteed basic income has had some currency in libertarian circles as well, advocated by (among others) Friedrich Hayek and Milton Friedman. So I wouldn’t take support for this policy as very strong evidence of a socialist political orientation.
Well, I mentioned socialists because a significant part of LW self-identifies as socialist (see Yvain’s surveys). That, of course, is a fuzzy term with many possible meanings.
But the survey didn’t just say “Socialist”, it said “Socialist, for example Scandinavian countries: socially permissive, high taxes, major redistribution of wealth”.
Hehe, I’ll give you that coherently expressing edgy views is part of what keeps me reading despite fairly strong disagreement... outside view, that’s not actually a point in its favor, of course—as a general heuristic, the boring and conventional people are right and the edgy internet subculture is wrong, even if wrong in novel ways!
as a general heuristic, the boring and conventional people are right and the edgy internet subculture is wrong
I don’t think that’s a particularly useful heuristic. I’d like to offer a replacement: people who actually did something in reality or who point to something existing and working are right more often than people whose arguments are based on imagination and counterfactuals.
Maybe invite blacks or other members of marginalized communities explicitly?
Some time ago, Eliezer wrote a post, which made it clear he would be glad to see more women on LW.
I think his article was well written. Did any of you guys, the opponents of crazier versions of feminism, feel annoyed by that? Later, there were other efforts to drag women here. (It does feel flattering, I tell you.)
Now, the percentage of LW women has grown slightly (too lazy to look up the census result), although we are still a minority.
Given that a large part of LW is drawn from the Bay Area, which IIRC has significantly higher trans density than the at-large 1%, that’s actually under where I would expect.
Wait, 1.3% trans women. Depending on the number of trans men, that may be much closer to representative of the broader likely-to-encounter-LW population. (Which I’d expect to have 2x-5x as many trans people as the general population.)
I don’t think simple invitations are going to make much difference.
If some marginal group didn’t drift here spontaneously because they’re inherently interested in the community, then we must provide them other incentives. Unfortunately this might mean privileging them some way, which to be honest I usually find so unjust and contrary to truth seeking it pisses me off.
Perhaps there are benign forms of such privileging, but none are cognitively available to me at the moment.
What if they visit the website and feel hesitant about whether the atmosphere is welcoming enough for them, considering all the HBD stuff? I do not imply we should censor HBD away, I am interested in it too. If there is some truth to it, we will have to face it sooner or later anyway, taking into account all the DNA sequencing projects etc. In the world outside, I got yelled at for my interest a couple of times; it is in my interest to have clear discussion here, so that I know where things stand. But, anyway, regardless of nature or nurture, all the data agree that there is a significant portion of intelligent individuals in all marginalised groups, and LW would very much benefit from them. If only I could express something like that and not sound creepy… Some analogy of this: http://lesswrong.com/lw/ap/of_gender_and_rationality/
We risk being an echo-chamber of people who aren’t hurt by the problems we discuss.
I don’t see this as a problem, really. The entire point is to have high-value discussions. Being inclusive isn’t the point. It’d be nice, sure, and we shouldn’t drive minority groups away gratuitously.
I mean, I don’t see us trying to spread internet access and English language instruction in Africa so that the inhabitants can help discuss how to solve their malaria problems. As long as we can get enough input about what the problem is actually like, we don’t need to be inclusive in order to solve problems. And in the African malaria case, being inclusive would obviously hurt our problem-solving capability.
Eh, yes and no. This attitude (“we know what’s best; your input is not required”) has historically almost always been wrong and frequently dangerous, deserves close attention, and I think it mostly fails here. In very, very specific instances (e.g. GiveWell-style philanthropy), maybe not, but in terms of, say, feminism? If anyone on LW is interested in tackling feminist issues, having very few women would be a major issue. Even when not addressing specific issues, if you’re trying to develop models of how human beings think, and everyone in the conversation is a very specific sort of person, you’re going to have a much harder time getting it right.
This attitude (“we know what’s best; your input is not required”) has historically almost always been wrong
Has it really? The cases where it went wrong jump to mind more easily than those where it went right, but I don’t know which way the balance tips overall (and I suspect neither do you nor most readers—it’s a difficult question!).
For example, in past centuries Europe saw a great rise in literacy, and a drop in all kinds of mortality, through the adoption of widespread education, modern medical practices, etc. A lot of this seems to have been driven in a top-down way by bureaucratic governments that considered themselves to be working for The Greater Good Of The Nation, and didn’t care that much about the opinion of a bunch of unwashed superstitious hicks.
(Some books on the topic: Seeing Like a State; The Discovery of France … I haven’t read either unfortunately)
I don’t see this as a problem, really. The entire point is to have high-value discussions.
High-value discussions here, so far as is apparent to me, seem to be better described as “high-value for modestly wealthy white and ethnically Jewish city-dwelling men, many of them programmers”. If it turns out said men get enough out of this to noticeably improve the lives of the huge populations outside that demographic (some of which might even contain intelligent, rational individuals or subgroups), that’s all fine and well. But so far, it mostly just sounds like rich programmers signalling at each other.
Which makes me wonder what the hell I’m still doing here; in spite of not feeling particularly welcome, or getting much out of discussions, I haven’t felt like not continuing to read and sometimes comment would make a good response. Yet, since I’m almost definitely not going to be able to contribute to a world-changing AI, directly or otherwise, and don’t have money to spare for EA or xrisk reduction, I don’t see why LW should care. (Ok, so I made a thinly veiled argument for why LW should care, but I also acknowledged it was rather weak.)
Even with malaria nets (which seem like a very simple case), having information from the people who are using them could be important. Is using malaria nets harder than it sounds? Are there other diseases which deserve more attention?
One of the topics here is that sometimes experts get things wrong. Of course, so do non-experts, but one of the checks on experts is people who have local experience.
Even with malaria nets (which seem like a very simple case), having information from the people who are using them could be important.
Even then, is trying to encourage sub-saharan African participation in the Effective Altruism movement really the best way to gather data about their needs and values? Wouldn’t it be more cost effective to hire an information-gathering specialist of some sort to conduct investigations?
The entire point is to have high-value discussions.
Feminism and possible racial differences seem like pretty low-value discussion topics to me… interesting way out of proportion to their usefulness, kind of like politics.
Feminism and possible racial differences seem like pretty low-value discussion topics to me...
That’s an incredibly short-sighted attitude. Feminism and race realism are just the focus of the current controversy. I’m pretty confident that you could pick just about any topic in social science (and some topics in the natural sciences as well—evolution, anyone?) and some people will want to prevent or bias discussions of it for political reasons. It’s not clear why we should be putting up with this nonsense at all.
My argument is: (1) Feminism and race realism are interesting for the same reasons politics are interesting and (2) they aren’t especially high value. If this argument is valid, then for the same reasons LW has an informal ban on politics discussion, it might make sense to have an informal ban on feminism and race realism discussion.
You don’t address either of my points. Instead you make a slippery slope argument, saying that if there’s an informal ban on feminism/race realism then maybe we will start making informal bans on all of social science. I don’t find this slippery slope argument especially persuasive (such arguments are widely considered fallacious). I trust the Less Wrong community to evaluate the heat-to-light ratio of different topics and determine which should have informal bans and which shouldn’t.
“some people will want to prevent or bias discussions of it for political reasons”—to clarify, I’m in favor of informal bans against making arguments for any side on highly interesting but fairly useless topics. Also, it seems like for some of these topics, “people getting their feelings hurt” is also a consideration and this seems like a legitimate cost to be weighed when determining whether discussing a given topic is worthwhile.
There’s obviously a level of exclusivity that hurts our problem-solving as well. At some point a programmer in the Bay Area with $20k/yr of disposable income and 20 hours a week to spare is going to do more than a sub-Saharan African farmer with $200/yr of disposable income, 6 hours a week of free time, and no internet access.
I don’t see how it would actually hurt our problem-solving, though, if we were to try to solicit input from people who don’t have the leisure time or education to provide it. It would be a phenomenal waste of resources, to be sure, but aside from that I don’t see how it would harm the community.
You are positing that folks who are affected by some issues would not participate in frank, dispassionate discussion of these same issues… why exactly? To preserve their ego? It seems like a dubious assumption.
I’m currently dispassionate about racial issues, and can (and have) openly discussed topics such as the possibility that racial discrimination is not a real thing, the possibility that genetically mediated behavioral differences between races exist, and other conservative-to-reactionary viewpoints. Some of those discussions have been on lesswrong, under this account and under an alt, some have been on other sites, and some have been in “real life”.
Prior to the age of ~19, I would have been unable to be dispassionate about issues of race and culture. I would understand the value of being dispassionate and I would try, but the emotions would have come anyway. Due to my racial and cultural differences, I’ve fended off physical attacks from bullies in middle school and been on the receiving end of condescending statements in high school and college, sometimes from strangers and people whom I do not care about, and sometimes from peers whom I liked and from authority figures whom I respected. When it came from someone I liked/respected, it hurt more.
The way human brains work is that when a neutral stimulus (here, racist viewpoints) is repeatedly paired with a negative stimulus (here, physical harm and/or loss of social status), the neutral stimulus can involuntarily trigger pre-emptive anger and defensiveness all on its own. If your experience of people who posited Opinion X was that they proceeded to physically attack you / steal your things / taunt you openly in a social setting, you too would probably develop aversive reactions to Opinion X.
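(If you want the mechanistic version: this pairing-and-unpairing story is roughly the textbook Rescorla-Wagner model of conditioning. Here is a minimal sketch in Python; the learning rate and trial counts are invented for illustration, not fitted to anything.)

    # Sketch of Pavlovian conditioning via the Rescorla-Wagner rule.
    # All parameters are illustrative assumptions.

    def update(strength, outcome, rate=0.3):
        """One trial: move association strength toward the observed outcome."""
        return strength + rate * (outcome - strength)

    v = 0.0  # association between "hearing Opinion X" and "getting hurt"

    # Acquisition: Opinion X is repeatedly paired with a negative outcome (1.0).
    for _ in range(10):
        v = update(v, outcome=1.0)
    print(f"after repeated pairing: {v:.2f}")  # ~0.97: X alone now triggers the reaction

    # Extinction: X keeps occurring with no harm (0.0), e.g. dispassionate
    # discussion in a setting where nobody can actually hurt you.
    for _ in range(10):
        v = update(v, outcome=0.0)
    print(f"after safe exposure: {v:.2f}")  # decays back toward 0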
--
EDIT: just read the linked post. It independently echoes my account:
This is because respect for said arguments and/or the idea behind them is a warning sign for either 1) passively not respecting my personhood or 2) actively disregarding my personhood, both of which are, to use some vernacular, hella fucking dangerous to me personally.
--
The above is an explanation as to why it happens and how it is. I’m not saying it’s justified, or that it ought to be that way. I made a conscious effort to fight down the anger and not direct it at people who were clearly not trying to physically harm me or lower my social status in a group. I think others should do the same.
For an extreme example, in the past an authority figure made a racial joke at my expense in the presence of other students who had previously physically taunted me, thereby validating their behavior—and I took care to not direct the anger at the authority figure (who was simply ignorant of the social status lowering effect of the joke, not maliciously trying to harm me). For a tamer example, I’ve never actually ended a friendship with someone for espousing certain views—I’ve only been angry and forced myself not to say anything until after calming down.
Currently, I don’t feel emotionally angry at all when faced with those views, and I think everyone else should strive for that. However, that doesn’t mean that people who haven’t faced this sort of thing are allowed to simply expect that people who have faced it will have that sort of emotional control. I’m pretty sure I’m an outlier with respect to unusually good emotional control (globally, if not on LessWrong); most people can’t do it. It also really helps that my current social bubble has less of that sort of thing.
That said (and this is where I disagree with the linked poster), I don’t think it’s a good idea to censor views for the sake of not triggering anyone’s emotions. Dispassionate discussion of a topic unpairs the neutral stimulus from the negative stimulus—in fact, I would go so far as to recommend that people who are psychologically similar to myself (intellectually curious, emotionally stable) who have been hurt by racism should spend time talking on the internet to white nationalists and reactionaries, and people who have been hurt by sexism should spend time talking to PUAs / redpill / the “manosphere”. Talking about charged topics in settings where people are powerless to actually hurt you is a great way to remove emotional triggers.
That said, the small but vocal presence of meta-contrarian, reactionary ideology on LW has probably driven away a lot of smart people. There are even dirty tactics at play here—such as the downvoting of every single comment of anyone who explicitly expresses progressive views or challenges reactionary views. I myself am on the receiving end of this nonsense—every post is systematically downvoted by exactly −1 ever since I mentioned some biological evidence about sexual orientation that could be construed as liberal. I think our kind is so partial to contrarians that we actually give people a pass from the downvote simply because they went against the grain, even when the actual ideas aren’t especially insightful. Remember, well-kept gardens die by pacifism—reactionary ideas are fine if they are supported by real evidence and logic held to the same standard you would apply if someone espoused a common viewpoint that is fairly obvious and popular. If it reads like pseudo-intellectual fluff, it probably is. Don’t go easy on it just because it’s contrarian.
If I were to expand this post at a future time, which ideas specifically do you find enlightening / would you say should be expanded? Are there any portions that you think should be slimmed or removed altogether?
Another thing to think about is how to talk about this productively without triggering similar over-heating... although this post wasn’t actually too controversial, so maybe that’s a good sign on that front?
If I were to expand this post at a future time, which ideas specifically do you find enlightening / would you say should be expanded? Are there any portions that you think should be slimmed or removed altogether?
What I think LW could benefit from is an explanation “from the inside” of what leads some people of disprivileged groups to be sensitive to the expression of certain opinions, to ask for “safe spaces” and talk of “triggers”, et cetera. I think you have an evenhanded position that on one side does not ask LW to censor or discourage dispassionate discussions of these opinions, but at the same time enables those who profess them to understand the unintended effects their words can have. Thus the well-intentioned among them (and I am sure there are some, though I share your indignation at those who are not and use underhanded tactics like mass-downvoting) will hopefully be more cautious in their choice of words, and also perhaps realize that requests for “safe spaces” are not necessarily power plays to squash controversy.
I think the last paragraph (except for the first sentence) is the part that could be slimmed or removed; you have registered your protest against mass downvoting and doing it again in a post would distract from the main topic.
Another thing to think about is how to talk about this productively without triggering similar over-heating
Indeed, writing a top-level post about this in a way that does not cause a flamewar is a daunting, perhaps impossible task. I fully understand if, on consideration, you prefer not to do it.
Dispassionate discussion of a topic unpairs the neutral stimulus from the negative stimulus
This is probably not a good argument on LW, but a large part of psychoanalysis is built on this.
Also desensitization therapy in CBT, but they would recommend starting with very small doses of the stimulus. (And I think LW would be at the lower end of the scale.)
You are positing that folks who are affected by some issues would not participate in frank, dispassionate discussion of these same issues… why exactly? To preserve their ego? It seems like a dubious assumption.
This doesn’t really seem like a dubious assumption to me, practically everyone is more motivated to preserve their ego than to think rationally.
It’s hard to be frankly dispassionate when you’re affected by an issue. That tends to encourage self-serving passion.
Oh, that’s quite right. But the original question here is whether they’ll even want to join the conversation at all. To me, it’s not at all clear why they wouldn’t. (And I see this as a mixed bag from a goals perspective, for reasons others have pointed out.)
Because life, of which the Internet is a subset, of which LW is a subset, is full of blowhards who will tell you all about your problems and how you should solve them while clearly not having a trace of a clue about the topic, and life is too short to go seeking them out.
Apposite criticism. Most worrying excerpt:
Self-selection in LessWrong favors people who enjoy speaking dispassionately about sensitive issues, and disfavors people affected by those issues. We risk being an echo-chamber of people who aren’t hurt by the problems we discuss.
That said, I have no idea what could be done about it.
I’m not sure that anything should be done about it, at least if we look at it from whole society’s perspective. (Or rather, we should try to avoid the echo chamber effect if possible, but not at the cost of reducing dispassionate discussion.) If some places discuss sensitive issues dispassionately, then those places risk becoming echo chambers; but if no place does so, then there won’t be any place for dispassionate discussion of those issues. I have a hard time believing that a policy that led to some issue only being discussed in emotionally charged terms would be a net good for society.
Yes, the complaint strikes me as “Stop saying things we don’t like, it might lead to disapproved opinions being silenced!”
Wouldn’t it be possible to minimize signaling given the same level of dispassionate discussion? That is, discourage use of highly emotionally charged/exosemantically heavy words/phrases if a less charged equivalent exists or can be coined and defined.
I get the impression that we’re already pretty much discussing issues in a “less emotionally laden” way, avoiding shocking words, etc., no?
But you’re also a white man and have an obvious lack of experience in this situation that functions as an unknown unknown. You’d be wise to be conservative in your conclusions.
As a white man myself, I feel it’s entirely reasonable to refuse to dispassionately discuss the matter of a boot on one’s own face. There are some situations in which it is entirely appropriate to react with the deepest of passions.
As a white man (according to your own beliefs) you can’t understand how women or non-whites feel, so please stop appropriating their cause and speaking for them.
There are people on LW who aren’t white or male, so (according to your own beliefs) you should let them talk, instead of talking from your ignorant position of white male privilege about what you think is better for them. That’s mansplaining, right?
This is a hot iron approaching my face. YOU ARE TELLING ME MY THOUGHTS AND FEELINGS ARE ILLEGITIMATE. That is literally the first step to dehumanizing and murdering me. I can either follow your advice and tell you to fuck off, or I can try to address this disagreement in a reasonable way. Which do you think will go better for me? Which do you think will go better for you? I for one don’t think the adversarial approach of many feminist and pro-queer writers is sane. You really should not declare the people you think are extremely powerful and controlling the world to be your sworn enemies. Feminism literally cannot win any victories without the consent of men.
I’ve got a lot of sympathy for your situation—I spent a lot of time freaking out about the complex emotional abuse that anti-racists/certain kinds of feminists go in for.
Still, I found it useful to learn something about assessing the current risk level of an attack, just so I don’t go crazy—they’ve spread a lot of misery and they may eventually be politically dangerous, but they aren’t posing the sort of immediate visceral threat you’re reacting to.
We haven’t begun to see the next stage of the fight (or at least, I haven’t seen anything I’d call effective opposition to the emotional abuse), but I recommend steadying yourself as much as possible.
Sometimes this is the case. Once you’ve realized this, try not to let it bother you too much. What’s true is already so; denying it doesn’t make it go away, and shouting on the Internet won’t make it go away either.
If you’re worried about this, you’re either a totally normal oppressed persyn, or a paranoid white dude.
If you’re a white dude, you should stop appropriating very real fears that plenty of people face on a daily basis. That’s just bad taste.
Assuming you’re a white dude, it’s really not your place to tell feminists or queer activists how to do what they do.
Do you see how your privilege has you assuming that you 1. know best and 2. should tell other people how to exist? Not to mention the fact you apparently think men are somehow necessary for feminist collective action.
I agree that this is by far the most interesting part of the piece. IIRC this site is pretty much all white men. Part of it is almost certainly that white men are into this sort of thing but I can’t help but imagine that if I was not a white man, especially if I was still in the process of becoming a rationalist, I would be turned off and made to feel unwelcome by the open dialogue of taboo issues on this website. This has the obvious effect of artificially shifting the site’s demographics, and more worryingly, artificially shifting the site’s demographics to include a large number of people who are the type of person to be unconcerned with political correctness and offending people. I think while that trait in and of itself is good, it is probably correlated with certain warped views of the world. Browse 4chan for a while if you want examples.
I think that between the extremes of the SJW Tumblr view of “When a POC talks to you, shut the fuck up and listen, you are privileged and you know nothing” and the view of “What does it matter if most of us aren’t affected by the problems we talk about, we can just imagine and extrapolate, we’re rationalist, right?” is where the truth probably lies.
Like you said, I have no idea what to do about this. There are already a lot of communities where standard societal taboos of political correctness are enforced, and I think it’s worthwhile to have at least one where these taboos don’t exist, so maybe nothing.
I’m a white man who’s done handsomely in the privilege lottery and I find quite a lot of LW utterly offputting and repellent (as I’ve noted at length previously). I’m still here of course, but in fairness I couldn’t call someone unreasonable for looking at its worst and never wanting to go near the place.
If all you show a person is the worst of LessWrong, then yes, I could see them not wanting to have anything to do with it. However, this doesn’t tell us anything; the same argument could be made of virtually all public boards. You could say the same thing about Hallmark greeting cards.
This is roughly how I feel. There is a lot of good stuff here, and a lot of lot of horrible, horrible stuff that I never, ever want to be associated with. I do not recommend LessWrong to friends.
As a lurker and relatively new person to this community I’ve now seen this sentiment expressed multiple places but without any specific examples. Could you (or anyone else) please provide some? I’d really like to know more about this before I start talking about Less Wrong to my friends/family/coworkers/etc.
Feel free to PM me if you don’t want to discuss it publicly.
A lot of this content is concentrated among the users who eventually created MoreRight. Check out that site for a concentrated dose of what also pops up here.
Politics, eh? I’m confused.
This guy was a pretty big poster on LW, I think. Best example I can come up with, I’m sure there are better ones.
http://www.youtube.com/watch?v=cq5vRKiQlUQ
But but … he posted a link to that (or some other video of him ranting at the camera), and then was downvoted to oblivion and demolished in the comments, while whining about how he was being oppressed.
Things like that don’t seem remotely mainstream on LW, do they? (I don’t read all the big comment threads …)
Oh, okay. For some reason I thought he was fairly respected here.
A lie repeated a hundred times becomes available.
If we keep telling ourselves that LW is full of horrible stuff, we start believing it. Then any negative example, even if it happens once in a while and is quickly downvoted, becomes a confirmation of the model.
This is a website with hundreds of thousands of comments. Just because a few dozen of the comments are about X, it doesn’t prove much.
EDIT: And I think threads like this contribute heavily to the availability bias. It’s like an exercise in making all the bad things more available. If you use this strategy as an individual, it’s called depression.
Just imagine that once in a while someone would accuse you of being a horrible human being, and (assuming they had a record of everything you ever did) would show you a compilation of the worst things you have ever done in the past (ignoring completely anything good you did, because that’s somehow irrelevant to the debate) and tell you: this is you, this is why you are a horrible person! Well, that’s pretty much what we are doing here.
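To put toy numbers on the availability point (both figures below are hypothetical, chosen only for the orders of magnitude):

    # Toy sanity check on the availability argument; numbers are invented.
    total_comments = 300_000  # "hundreds of thousands of comments"
    bad_examples = 40         # "a few dozen" horrible comments

    print(f"share of all comments: {bad_examples / total_comments:.3%}")  # ~0.013%

    # But a call-out thread samples only from the 40, so the impression a
    # reader forms there is based on a sample where the rate is 100%.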
That was awesome!
The dark secrets thread from about a year ago was one of my favorite threads to read.
Any key words I should use to find that one?
http://lesswrong.com/lw/9kf/ive_had_it_with_those_dark_rumours_about_our/
This one too, and maybe some other one I can’t think of at the moment.
A pretty minor poster, but there was someone who was a fan of his who posted a lot of links to him for a while. I think he’s gotten worse.
And thus, more entertaining.
That guy is funny. Definitely not someone who would be well respected here. His model of the world is broken and he’s trying to make the world fit his model, instead of the other way around.
In one of his videos there’s a part where he argues that cigarettes are actually good for you. LOL
I’m at a loss regarding what you must consider ‘horrible’. About the worst example I can think of is the JoshElders saga of pedophilia posts, and it only took two days to downvote everything he posted into oblivion and get it removed from the lists—and even that contained a lot of good discussion in the comments.
If you truly see that much horrible stuff here, perhaps your bar is too low, or perhaps mine is too high. Can you provide examples that haven’t been downvoted, that are actually considered mainstream opinion here?
Most of these are not dominant on LW, but come up often enough to make me twitchy. I am not interested in debating or discussing the merits of these points here because that’s a one-way track to a flamewar this thread doesn’t need.
The stronger forms of evolutionary psychology and human-diversity stuff. High confidence that most/all demographic disparities are down to genes. The belief that LessWrong being dominated by white male technophiles is more indicative of the superior rationality of white male technophiles than any shortcomings of the LW community or society-at-large.
Any and all neoreactionary stuff.
High-confidence predictions about the medium-to-far-future (especially ones that suggest sending money)
Throwing the term “eugenics” around cavalierly and assuming that everyone knows you’re talking about benevolent genetic engineering and not forcibly-sterilizing-people-who-don’t-look-like-me.
There should be a place to discuss these things, but it probably shouldn’t be on a message board dedicated to spreading and refining the art of human rationality. LessWrong could easily be three communities:
a rationality forum (based on the sequences and similar, focused on technique and practice rather than applying to particular issues)
a transhumanist forum (for existential risk, cryonics, FAI and similar)
an object-level discussion/debate forum (for specific topics like feminism, genetic engineering, neoreaction, etc.).
I am not sure how much these opinions are really that extreme, and how much it’s just a reflection of how political debates push people into “all or nothing” positions. Like, if you admit that genes have any influence on a population, you are automatically misinterpreted to believe that every aspect of a population is caused by genes. Because, you know, there are just two camps, the “genes, boo” camp and the “genes, yay” camp, and you have already proved you don’t belong in the former camp, therefore...
At least this is how I often feel in similar debates. Like there is no “genes affect 50% of something” position. There is a “genes don’t influence anything significant, ever” camp where all the good guys are; and there is the “other” camp, with everyone else, including me and Hitler. If we divide a continuous scale into “zero” and “nonzero” subsets, then of course 0.1 and 0.5 and 1 and infinity all get into the same subset. But that’s looking through the mindkilling glasses. I could start explaining how believing that genes can have some influence on thinking and behavior is not the same as attributing everything to the genes, and is completely nothing like advocating a genocide… but I already see all the good guys looking at me and thinking: “Nice try, but you are not going to fool us. We know what you really believe.”—Well, the idea is that I actually don’t.
I don’t even think that having a white male majority at this moment is some failure of the LW community. I mean—just try to imagine a parallel universe where someone else started LW. How likely is it that in the parallel universe it is perfectly balanced by ethnicity and gender? What exactly does your model of reality make you predict?
Imagine that you are a visitor from an alien species and you are told the following facts: 1) Most humans are irrational, and rationality is associated with various negative things, like Straw Vulcans. Saying good things about rationality will get you laughed at. But paradoxically, telling others that they are not very rational is offensive. So it’s best to avoid this topic, which most people do. 2) Asch’s conformity test suggests that women are a bit more likely than men to conform. 3) Asians have a culture that discourages standing out of the crowd. 4) Blacks usually live in the poorest countries, and those living in developed countries were historically oppressed. -- Now that you know these facts, you are told that there is a new group of people who try to promote rationality and science and technology. As the alien visitor, based on the given data, please tell me: which gender and which race would you bet would be most represented in this group?
If LW remains forever a group of mostly white males, then yes, that would mean that we have failed. Specifically, that we have failed to spread rationality, to increase the sanity waterline. But the fact that LW started with such demographics is completely unsurprising to me. So, is the proportion of other groups increasing on LW? Looking at the surveys for two years, it seems to me that yes. Then the only question is whether it is increasing fast enough. Well, fast enough compared with what? Sure, we could do more about it. Surely we are not automatically strategic; we have missed some opportunities. Let’s try harder. But there is no point in obsessing over the fact that LW started as a predominantly white male group, or that we didn’t fix the disparities in society within a few years.
There are other options. I think there exist possible worlds where LW is less off-putting to people outside of the uppermiddleclasstechnophilewhitemaleosphere, with demographics that are closer to, but probably not identical to, the broader population. Like you said, there’s no reason for us to split the world into all-or-nothing sides: it’s entirely possible (and I think likely) both that statistical differences do exist between demographics and that we have a suboptimal community/broader culture which skews those differences more than would otherwise be the case.
Edit: I had only skimmed your comment when writing this reply; On a reread, I think we mostly agree.
I’ve definitely experienced strong adverse reactions to discussing eugenics ‘cavalierly’, i.e. without spending at least ten to fifteen minutes covering the inferential steps and sanitising the perceived later uses of the concept.
Good point about the possible three communities. I haven’t posted here much, as I found myself standing too far outside the concepts whilst I worked my way through the sequences. Regardless of that, the more I read the more I feel I have to learn, especially about patterned thinking and reframes. To a certain extent I see this community as a more scientifically minded Maybe Logic group, when thinking about priors and updating information.
A lot of the transhumanist material has garnered very strong responses from friends, though; I’ve stocked up on Istvan paperbacks to hopefully disseminate soon.
I can’t see this as part of the problem. You don’t have to discuss it, but I’m bewildered that it’s on the list.
I should probably have generalized this to “community-accepted norms that trigger absurdity heuristic alarms in the general population”.
Again, there should be a place to discuss that, but it shouldn’t be the same place that’s trying to raise the sanity waterline.
I don’t think this hypothesis is supported by the evidence, specifically past LW discussions.
Just my vague recollections of past LW disagreements—I don’t have any readily available examples. It’s possible my model is drawing too much on the-rest-of-the-Internet experiences and I should upgrade my assessment of LW accordingly.
Yes, I am specifically talking about LW. With respect to the usual ’net forums I agree with you.
I don’t mind #3, in fact the discussions of futurism are a big draw of LessWrong for me (though I suppose there are general reasons for being cautious about your confidence about the future). But I would be very happy to see #1, #2, and #4 go away.
I find stuff like “if you don’t sign up your kids for cryonics then you are a lousy parent” more problematic than a sizeable fraction of what reactionaries say.
What if you qualified it, “If you believe the claims of cryonicists, are signed up for cryonics yourself, but don’t sign your kids up, then you are a lousy parent”?
I would agree with it, but that’s a horse of a different colour.
In discussing vaccinations, how many people choose to say something as conditional as “if you believe the claims of doctors, have had your own vaccinations, but don’t let your kids be vaccinated, then you are a lousy parent”?
No, the argument is that you should believe in the value of vaccinations, and that disbelieving in the value of vaccinations itself makes your parenting lousy.
Well, I think Eliezer feels the same about cryonics as pretty much all the rest of us feel about vaccines—they help protect your kids from several possible causes of death.
Which is pretty much the same argument as saying that you should baptize your children, and that disbelieving in the value of baptism itself makes your parenting lousy.
If the belief-set you’re subtly implying is involved were accurate, then it would be.
However, I think we have a “sound” vs “sound” tree-falling-in-the-woods issue here. Is “lousy parenting” a virtue-ethics style moral judgement, or a judgement of your effectiveness as a parent?
Taboo “lousy”, people. We’re supposed to be rationalists.
Exactly, it all depends on the actual value of the thing in question. I believe baptism has zero value, I believe vaccines have lots of value, I’m highly uncertain about the value of cryonics (compared to other things the money could be going to).
A person is expected to say as much about X if they believe X has lots of value. So why is it so very problematic for Eliezer to say it about cryonics when he believes cryonics has lots of value?
It’s impolitic and I don’t know how effective it is in changing minds. But then again it’s the same thing we say about vaccinations, so who knows: perhaps shaming parents does work in convincing them. I’d like to see research about that.
My prior is that the results will be bi-modal: some parents can be shamed into adjusting their ways, while for others it will only force them into the bunker mindset and make them more resistant to change.
I’m not sure that would work. After all, Bayes’s rule has fairly obvious unPC consequences when applied to race or gender, and thinking seriously about transhumanism will require dealing with eugenics-like issues.
“rather than applying to particular issues”
That would simply result in people treating Bayesianism as if it’s a separate magisterium from everyday life.
Think of it as the no-politics rule turned up to 11. The point is not that these things can’t be reasoned about, but that the strong (negative/positive) affect attached to certain things makes them ill-suited to rationalist pedagogy.
Lowering the barrier to entry doesn’t mean you can’t have other things further up the incline, though.
Datapoint: I find that I spend more time reading the politically-charged threads and subthreads than other content, but get much less out of them. They’re like junk food; interesting but not useful. On the other hand, just about anywhere other than LW, they’re not even interesting.
(On running a memory-check, I find that observation applies mostly to comment threads. There have been a couple of top-level political articles that I genuinely learned something from.)
Can you provide some links? I haven’t followed what you’ve said previously about this.
Most of the previous threads on the topic, every time one of these posts comes around. You could find them by much the same process as I could. The HBD fans put me off for a few months.
My impression is that the HBD fans are a pretty small minority here. What were your impressions?
Small but noisy. They add their special flavour to the tone though, as one of the few places outside their circle of blogs that gives them airtime (much like the neoreactionaries they cross over with).
I wonder if the people in the subthread below going “we may be racists, but let’s be the right sort of racists” understand that this doesn’t actually help much.
Rather, we support our beliefs with rational arguments; the HBD-deniers don’t bother presenting counter-arguments (and when they do, they tend to be laughably bad) but instead try to argue that it’s somehow immoral to say and/or believe these things regardless of their truth value.
I’ve not really followed you, but I’ve never once seen you make an argument or even explain what you want. If you tell me something y’all want that you could plausibly achieve without the aid of low-status racists, perhaps I’ll try to put y’all in a separate category.
I’d like people to stop trying to suppress science because of nothing but ideological principles, like the creationists, and let the scientists get on with stuff like finding a cure for Alzheimer’s.
I’ll give you two-to-one odds that Derbyshire has not found a promising line of research for an Alzheimer’s cure.
This could do with some clarification—doesn’t help whom with what? And, by contrast, what would help?
Let’s see the results of the survey when they come out.
“Fan” is a funny word in this context. It brings to mind people who go around shouting “Yea, Diversity!” non-ironically. Except there are people who more or less do that; it isn’t the HBD crowd, and in fact diversity boosters don’t even really believe in it.
Edit: Sorry, missed the correct comment to reply to.
Why? If the answer is, as appears to be the case from context, that we say true things that make you feel uncomfortable, well, I recommend treating your discomfort with the truth, rather than the people saying it, as the problem. This is a community devoted to rationality, not to making you feel comfortable.
Truth isn’t enough.
Continuing the argument though, I just don’t think including actual people on the receiving end into the debate would help determine true beliefs about the best way to solve whatever problem it is. It’d fall prey to the usual suspects like scope insensitivity, emotional pleading, and the like. Someone joins the debate and says “Your plan to wipe out malaria diverted funding away from charities that research the cure to my cute puppy’s rare illness, how could you do that?”—how do you respond to that truthfully while maintaining basic social standards of politeness?
Someone affected by the issue might bring up something that nobody else had thought of, something that the science and statistics and studies missed—but other than that, what marginal value are they adding to the discussion?
Aye!
Is that not enough for you? Especially in some discussions which are repetitive on LW?
I’m thinking about the very low prior odds of them coming up with anything unique.
In my experience, reading blogs from (sensible) minority representatives introduces you to different thought patterns.
Not very specific, huh?
Gypsies are the most focused-on minority in my country. A Gypsy blogger who managed to leave her community once described a story. Her mother visited her in her home, found frozen meat in her freezer, and almost started crying: My daughter, how can you store meat at home when there are people who are hungry today? (Gypsies are stereotypically bad at planning and managing their finances, to the point of self-destruction. But before this blog, I did not understand that it makes them virtuous in their own eyes.)
This blog was also enlightening for me.
Wouldn’t it be nice to have such people interacting in LW conversations, instead of just linking to them?
Especially for people intending to program friendly AI, who need to understand the needs of other people (although I doubt very much that AI will be developed or that MIRI will ever really start coding it. Plus, I do not want it to exist. But that is just me.)
Yes. It would be nice. I am genuinely uncertain whether there’s a good way to make LW appealing to people who currently dislike it, without alienating the existing contributors who do like it.
Maybe I am naive, but how about some high-status member explicitly stating that we would be very happy if they contributed here?
Eliezer wrote the same thing about women: http://lesswrong.com/lw/ap/of_gender_and_rationality/ It was not exactly “Women, come, please”, but it was clear they would be welcome to participate. It might have helped. Or maybe the increased percentage in the census result was due to something else? How would I know...
And note that Eliezer did not forbid pickup-artistry discussion and whatever else you guys hold dear.
I could try to write a post similar to the one about women, but I am a small fish in this pond.
If you want to increase your fish-size, articles / comment threads which generate lots of upvotes are a good way to do it. And since your fish-size is small already there’s not much to lose if people don’t like it.
Please do! It would be worth a try (though I’m not totally sure what kind of post you want to write...)
The plan to write an AI that will implement the Coherent Extrapolated Volition of all of humanity doesn’t involve talking to any of the affected humans. The plan is, literally, to first build an earlier AI that will do the interacting with all those other people for them.
That link only explains the concept of CEV as one possible idea related to building FAI, and a problematic one at that. But you’re making it sound like CEV being the only possible approach was an opinion that had already been set in stone.
As far as I understood, it was still the plan as of quite recently (last coupla years). Has this changed?
AFAIK, it’s one idea that’s being considered, but I don’t think there’s currently enough confidence in any particular approach to call it The Plan. “The Plan” is more along the lines of “let’s experiment with a lot of approaches and see which ones seem the most promising”; the most recent direction that that plan has produced is a focus on general FAI math research, which may or may not eventually lead to something CEV-like.
Could you elaborate on why you think that way? It’s always interesting to hear why people think a strong AI or Friendly AI is not possible/probable, especially if they have good reasons to think that way.
I respond to your question for fairness’ sake, but my reasons are not impressive.
Most of it is probably wishful thinking, driven by my desire not to have a powerful AI around. I am scared of the idea.
The fact that people have felt AI is near for some time and we still do not have it.
Maybe the things which are essential for learning are the same ones which make human intelligence limited. For instance, forgetting things.
A vague feeling that biologically based intelligence is so complex that computers are no match.
I think that AI is inevitable, but I think that unfriendly AI is more likely than friendly AI. This is just from my experience developing software, even in my small team environment where there are fewer human egos and less tribalism/signaling to deal with. Something that you hadn’t thought of is always going to happen, and a bug will be perpetuated throughout the lifecycle of your software. With AI, who knows what implications these bugs will have.
Rationality itself has to become much more mainstream before tackling AI responsibly.
I’m a programmer, and I doubt that AI is possible. Or, rather, I doubt that artificial intelligence will ever look that way to its creators. More broadly, I’m skeptical of ‘intelligence’ in general. It doesn’t seem like a useful term.
I mean, there’s a device down at the freeway that moves an arm up if you pay the toll. So, as a system, it’s got the ability to sense the environment (limited to the context of knowing if the coin verification system is satisfied with the payment), and affect that environment (raise and lower arm). Most folks would agree that that is not AI.
So, then, how can we get beyond that? It is a nonhuman reaction to the environment. Whatever I wrote that we called “AI” would presumably do what I program it to (and naught else) in response to its sensory input. A futuristic war drone’s basket is its radar and its lever is its missiles, but there’s nothing new going on here. A chat bot’s basket is the incoming feed, and its lever is its outgoing text, but it’s not like it ‘chooses’, in any sense more meaningful than the toll bot’s decision matrix, what it sends out.
So maybe it could rewrite its own code. But if it does so, it’ll only do so in the way that I’ve programmed it to. The paper clip maximizer will never decide to rewrite itself as a gold coin maximizer. The final result is just a derived product of my original code and the sensory experiences it’s received. Is that any more ‘intelligent’ than the toll taker?
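To make the shape of that argument concrete, here is a sketch; the names and details are invented for illustration. On this view the toll gate, the chat bot, and anything we would call “AI” all have the same type: a fixed function from percept history to action.

    from typing import Callable, List

    Policy = Callable[[List[str]], str]  # percept history -> action

    def toll_gate(history: List[str]) -> str:
        # Sense: the coin verifier's verdict. Act: the arm.
        return "raise arm" if history and history[-1] == "coin accepted" else "wait"

    def chat_bot(history: List[str]) -> str:
        # However elaborate the machinery behind it, the reply is still a
        # fixed function of the incoming feed.
        return "reply to: " + (history[-1] if history else "")

    # "Self-rewriting" doesn't escape this: the successor program is itself
    # an output of the original code applied to the history so far.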
I like to bet folks that AI won’t happen within timeframe X. The problem then becomes defining what counts as AI happening. I wouldn’t want them to point to the toll robot, and presumably they’d be equally miffed if we were slaves of the MechaPope and I was pointing out that its Twenty Commandments could be predicted given knowledge of its source code.
Thinking on it, my knee-jerk criterion is that I will admit that AI exists if the United States knowingly gives it the right to vote (obviously there’s a window where AI is sentient but can’t vote, but given the speed of the FOOM it’ll probably pass quickly), or if the earth declares war (or the equivalent) on it. It’s a pretty hard criterion to come up with.
What would yours be? Say we bet, you and I, on whether AI will happen in 50 years. What would you want me to accept as evidence that it had done so (keeping in mind that we are imagining you as motivated not by a desire to win the bet but a desire that the bet represent the truth)?
People here have tried to define intelligence in more strict terms. See Playing Taboo with “Intelligence”. They define ‘intelligence’ as an agent’s ability to achieve goals in a wide range of environments.
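(For reference: that definition has a formal version, Legg and Hutter’s “universal intelligence” measure. Roughly:

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}

where E is a set of computable environments, K(μ) is the Kolmogorov complexity of environment μ, and V_μ^π is the expected reward the agent π earns in μ. Simpler environments get more weight, and summing over all of E is what cashes out “a wide range of environments”.)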
Your post seems to be more about free will than intelligence as defined by Muehlhauser in the above article. Free will has been covered quite comprehensively on LessWrong, so I’m not particularly interested in debating it.
Anyway, if you define intelligence as the ability to achieve goals in a wide range of environments then it doesn’t really matter if the AI’s actions are just an extension of what it was programmed to do. Even people are just extensions of what they were “programmed to do by evolution”. Unless you believe in magical free will, one’s actions have to come from some source and in this regard people don’t differ from paper clip maximizers.
I just think there are good optimizers and then there are really good optimizers. Between these there aren’t any sudden jumps, except when the FOOM happens and possibly from unFriendly to Friendly. There isn’t any sudden point when the AI becomes sentient, and the question of how well the AI resembles humans is just a question of how well the AI can optimize towards this.
There are already some really good optimizers, like Deep Blue and other chess computers that are far better at playing chess than their makers. But you probably meant when AIs become sentient? I don’t know exactly how sentience works, but I think something akin to the Turing test that shows how well the AI can behave like humans is sufficient to show that AI is sentient, at least in one subset of sentient AIs. To reach a FOOM scenario the AI doesn’t have to be sentient, just really good at cross-domain optimization.
I’m confused. You are looking for good reasons to believe that AI is not possible, per your post two above, but from your beliefs it would seem that you either consider AI to already exist (optimizers) or be impossible (sentient).
I don’t believe sentient AIs are impossible and I’m sorry if I gave that impression. But apart from that, yes, that is a roundabout version of my belief—though I would prefer the word “AI” be taboo’d in this case. This doesn’t mean my way of thinking is set in stone, I still want to update my beliefs and seek ways to think about this differently.
If it was unclear, by “strong AI” I meant an AI that is capable of self-improving to the point of FOOM.
I would pick either some kind of programming ability, or the ability to learn a language like English (which I would bet implies the former if we’re talking about what the design can do with some tweaks).
Thinkers—including such naive, starry-eyed liberal idealists as Friedrich Hayek or Niccolo Machiavelli—have long touched on the utter indispensability of subjective, individual knowledge and its advantages over the authoritarian dictates of an ostensibly all-seeing “pure reason”. Then along comes a brave young LW user and suggests that enlightened technocrats like him should tell people what’s really important in their lives.
I’m grateful to David for pointing out this comment, it’s really a good summary of what’s wrong with the typical LW approach to policy.
(I’m a repentant ex-authoritarian myself, BTW.)
I’m having trouble wrapping my head around that. Could you give an example?
I hesitate to suggest this, but I’ve noticed most of the “sensitive but discussed anyway” issues have been in areas where socially weaker groups might feel threatened by the discussion. Criticism of socially strong groups is conspicuously absent, given that LW demographics actually lean far left according to polls.
If the requirement that one must be dispassionate cut in multiple directions simultaneously (rather than selectively cutting in the direction of socially marginalized groups), then we’d select for “willing to deal intellectually with emotional things” rather than for “emotionally un-reactive to social problems” (which is a heterogeneous class containing both people who are willing to deal intellectually with things which are emotionally threatening, and people who happen not to often fall on the pointy end of sensitive issues).
The reason I hesitate to suggest it is that while I do want an arena where sensitive issues can be discussed intellectually without driving people away, people consciously following the suggestion would probably result in a green-blue battleground for social issues.
There’s lots of talk about religion, which is almost the definition of a socially strong group.
Well sure, but that doesn’t count because we’re pretty much all atheists here. Atheism is the default position in this social circle, and the only one which is really given respect.
I’m talking about criticisms of demographics and identities of non-marginalized groups that actually frequent Lesswrong.
If we’re allowed to discuss genetically mediated differences with respect to race and behavior, then we’re also allowed to discuss empirical studies of racism, its effects, which groups are demonstrated to engage in it, and how to avoid it if we so wish. If we’re allowed to empirically discuss findings about female hypergamy, we’re also allowed to discuss findings about male proclivities towards sexual and non-sexual violence.
But for all these things, there’s no point in discussing them in Main unless there’s an instrumental goal being serviced or a broader philosophical point being made about ideas... and even in Discussion, for any of this to deserve an upvote it would need to be really data-driven and/or bring attention to novel ideas, rather than just storytelling, rhetoric, or the latest political drama.
Reactionary views, being obscure and meta-contrarian, have a natural edge in the “novel ideas” department, which is probably why it has come up so often here (and why there is a perception of LW as more right-wing than surveys show).
Speaking for myself, I would be happy to see a rational article discussing racism, sexism, violence, etc.
For example, I would be happy to see someone explaining feminism rationally, by which I mean: 1) not assuming that everyone already agrees with your whole teaching or else they are a very bad person; 2) actually providing definitions of what is and what isn’t meant by the terms used, in a way that really “carves reality at its joints”, instead of torturing definitions to say what you want, such as defining sexism as “doing X while male”; 3) focusing on those parts that can be reasonably defended, and ignoring or even being willing to criticize those parts that can’t.
(What I hate is someone just throwing around an applause light and saying: “therefore you must agree with me or you are an evil person”. Or telling me to go and find a definition elsewhere without even giving me a pointer, when the problem is that almost everyone uses the word without defining it, or that there are different contradictory definitions. Etc.)
Some of my favorite feminist articles are the ones demonstrating actual statistical effects of irrational biases against women, such as http://www.catalyst.org/file/139/bottom%20line%202.pdf talking about women being undervalued as board members, or the ones talking about how gender blind audition processes result in far more women orchestra members.
For the record, I completely support anonymous evaluation of orchestra members, and many other professions. And students, etc.
This is how quickly I update in favor of feminism when presented rationally. :D
More meta: This is why I think this kind of debate is more meaningful.
Do the results of the blind tests give you some reason to think there might be harder-to-quantify irrational prejudice against women?
Yes.
That alone doesn’t imply agreement with any specific hypothesis about what exactly causes the prejudice, nor with any specific proposal how this should be fixed. That would require more bits of evidence.
In general, I support things that reduce that prejudice—such as the blind tests—where I see no negative side-effects. But I am cautious about proposals to fix it by reversing stupidity, typically by adding a random bonus to women (how exactly is it quantified?) or imposing quotas (what if in some specific situation X all women who applied for the job really were incompetent? just like in some other specific situation Y all men who applied could be incompetent).
Also, there are some Schelling-point concerns: e.g. once we accept that it is okay to give bonuses on tests to different groups, and to determine the given group and bonus by democratic vote or lobbying, it will become a new battlefield, with effects similar to “democracy stops being fair once people discover they can vote themselves more money out of their neighbors’ pockets”. It would be nice to have some scientists discover that the appropriate bonus on tests is exactly 12.5 points, but it is more like the real world to have politicians promising a 50-point bonus to any group in exchange for their votes, each of them of course having “experts” to justify why this specific number is correct. And I would hate to have a choice between a political party that gives me a −1000-point penalty and a political party that gives me a +1000-point bonus, which I would also consider unfair, and in addition I might disagree with that party on some other topics. Given human nature, I would not be surprised if the −1000 and +1000 parties became so popular among their voters that another party proposing to reset the bonuses back to 0 would simply have no chance.
One thing I would like to see—and haven’t—in regards to opposition to prejudice is work on how to become less prejudiced. That is, how to see the person in front of you accurately, even if you’ve spent a lot of time in an environment which trained you to have pre-set opinions about that person.
Information about an individual screens off information about the group. At least it should. Let’s assume partial success, which is better than nothing. So the key is to get information about the individual. I would just try talking to them.
I guess the failure of the usual anti-prejudice techniques is that they assume all opinions about a group are wrong, i.e. not valid Bayesian evidence. (Unless, of course, it is a positive opinion about a minority, in which case it hypocritically is okay.) They try to remove the participants’ opinions about the group in general; usually without any success.
I would rather assume that an opinion about the group may be correct, but still, any given individual may be different from the average or the stereotype of their group. Which can easily be demonstrated by letting participants talk about how they differ from the average or the stereotype of various groups they could be classified into. For example, unlike a typical man in my society, I have long hair, I don’t like beer, and I am not interested in watching sport on TV. At this moment, the idea of “the person is not the same as (my idea of) the group” is in near mode. The next step is getting enough specific information about the other person that the general image of “a random member of group X” can be replaced with some other data. (Depends on the situation; e.g. in a group of children I would ask many yes/no questions such as “do you have a pet?” and let them raise their hands; and then they would also ask questions. Each bit of information that differs from the assumption, if noticed, could be useful.)
Of course the result could be that people change their opinion about this one specific person, and yet keep their prejudice about their group. Which is an acceptable result for me, but probably not acceptable for many other people. I would reason that a partial success which happens is much better than an idealistic solution that doesn’t happen; and that accepting one exception makes people more likely to accept another exception in the future, possibly weakening the prejudice. But on the other hand, if the original opinion about the average of the group was correct, then we have achieved the best possible result: we didn’t teach people bullshit (which could later backfire on us) and yet we taught them to perceive a person as an individual, different from the average of the group, which was the original goal.
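To make the “screens off” point above concrete, here is a minimal sketch with made-up numbers (a normal-normal model chosen for simplicity; nothing here comes from the thread): the group average acts as a prior over an individual’s trait, and even a modest amount of direct observation of the individual pushes the estimate almost entirely away from the stereotype.

    import random

    random.seed(0)
    GROUP_MEAN = {"A": 0.4, "B": 0.6}  # hypothetical group averages
    SIGMA = 0.2                        # assumed within-group spread

    def prior_estimate(group):
        """Best guess about a person given only group membership."""
        return GROUP_MEAN[group]

    def posterior_estimate(group, observations):
        """Best guess after observing the individual directly.

        A normal prior (mean = group average, assumed unit precision)
        combined with normal observation noise gives a precision-weighted
        average; as observations accumulate, the group prior's weight
        shrinks toward zero -- the individual data screens off the group.
        """
        n = len(observations)
        obs_mean = sum(observations) / n
        prior_precision = 1.0
        data_precision = n / (SIGMA ** 2)
        return (prior_precision * GROUP_MEAN[group]
                + data_precision * obs_mean) / (prior_precision + data_precision)

    # A member of group B who is individually quite unlike the group average:
    data = [random.gauss(0.3, SIGMA) for _ in range(20)]
    print(prior_estimate("B"))                      # 0.6 -- stereotype
    print(round(posterior_estimate("B", data), 2))  # close to 0.3

The direction of the effect is the whole point: twenty noisy observations of the individual leave almost no weight on the group average, which is the Bayesian justification for perceiving a person as an individual.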
Here’s some empirical research on the actual causes of the pay gap. Executive summary: the majority of the burden of child rearing still falls on women, and this can be disruptive to their career prospects, especially in high-paying fields like law and business management; childless women and women who work in jobs that allow for flexible hours earn incomes much closer to parity.
Side note: I can’t really tell, but some evidence suggests the total time spent on childcare has increased in the past 40-50 years. Now, when I look at people raised back then and try to adjust for the effects of leaded gasoline on the brain, they seem pretty much OK. So we should consider the possibility that we’re putting pointless pressure on mothers.
Who is the “we” there? I’m not disclaiming responsibility, but I am interested in who these women feel is pressuring them. I’d wager it’s largely a status competition with other women.
As you said, “much closer to parity”. There are probably multiple causes, each responsible for a part of the effect. And as usual, the reality is not really convenient for any political side.
The Cathedral, to use Moldbug’s terminology, is certainly a non-marginalized group and LW is full of its adherents.
Agreed, but we devote plenty of time to criticizing it, don’t we? (Both reactionary criticism, and the more mainstream criticisms of the media/academia culture)
But the thing about the reactionary lens, especially Moldbug, is that at the end of the day they side with the people in power. Moldbug even explicitly states as much. A central theme of his work is that we shouldn’t keep elevating the weaker and criticizing the stronger, thus creating endless revolution. “Formalism” essentially means “maintaining the status quo of the current power hierarchy”. The only exception to this is the Cathedral itself—because it is a power structure which is set up in such a way that it upsets existing hierarchies.
So the Moldbug/reactionary ideology, at its core, is fundamentally opposed to carrying out the criticism I just suggested against anyone who isn’t part of “the Cathedral”, which keeps shifting the status quo (hence the meta-contrarianism). It is an ideology which only criticizes the social critics themselves, and seeks to return to the dominant paradigm as it was before the social critics entered the scene.
I’m saying we need more actual, real contrarianism, not more meta-contrarianism against the contrarians. It is useful to criticize things other than the Cathedral. I’m being a meta-meta-contrarian.
I think I’m a bit confused now.
Let’s say the Cathedral is mainstream. Then Moldbug is a contrarian. Then Yvain’s anti-reactionary FAQ is contrarian against a contrarian. Are you saying we need more stuff like Yvain’s FAQ?
Or do you want some actual direct criticism of an existing power structure, maybe something along these lines?
So the contrarian food chain goes:
Mainstream America (bulk of the American population)
→ radical egalitarian critique of mainstream America (feminists, anti-racists, the Left, Moldbug’s “Cathedral”)
→ reactionary critique of egalitarian movements (Moldbug, Manosphere, human biodiversity, Dark Enlightenment)
→ critique of reactionary anti-egalitarian stances (Yvain, this post).
I’m advocating good old-fashioned contrarianism—stuff like radical egalitarianism, sex positivism, etc.
(No, obviously, not along those lines—but yes, that link is at the correct level of contrarianism.)
OK. Let me try to sort this out.
We start with a base. You are saying this is the mainstream US, which you understand to be conservative. So, level 0 is US conservatives—the mainstream.
Level 1 is the Cathedral, which is contrarian to level 0 and consists of US liberals or progressives.
Level 2 is the neo-reactionaries, who are contrarian to level 1 (the Cathedral).
Level 3 is Yvain’s FAQ, which is contrarian to level 2 (the reactionaries).
So we are basically stacking levels where each level is explicitly opposed to the previous one and, obviously, all even layers are sympathetic to each other, as are all odd layers. (I find the “meta-” terminology confusing, since that word means other things to me; probably “anti-” would be better.)
And what you want more of is level 1 stuff—basically left-liberal critique of whatever stands in the way of progress, preferably on steroids.
Do I understand you right?
EDIT: LOL, you simplified your post right along the lines I was extracting out of it...
I don’t mind hearing from any level, as long as things are well cited.
- I’ve sort of gotten bored with level 0, but that could change if I saw a bunch of really well done level 0 content. I just don’t often see many insightful things coming from this level.
- Level 2 holds my interest because it’s novel; when it’s well cited, it really grips me. However, it seldom is well cited. That’s okay though—the ideas are fun to play with.
- Level 1 is the level I agree with. However, because I’m very familiar with it and its supporting data, and I hate agreeing with things, it has to work a lot harder to hold my interest.
My perception is that level 2, for the reasons described, gets more attention than it merits. The shock value, twisty narrative, and novelty of it make it more interesting to people like me, who like reading compelling arguments even if they don’t completely agree. However, it drives away people who are emotionally affected and/or perceive that they have something to protect from what would happen if those viewpoints were to gain traction.
I was suggesting that maybe increasing good level 1 posts, ones which weren’t boring, echo-chamber-ish, and obviously true to most people on LessWrong, would remedy this. (I’m taking the LW poll as an indication that most LWers, like me, agree with level 1.)
Edit: Even layers are not necessarily sympathetic to each other, even if they are ideologically aligned. Mainstream conservatives would likely not be sympathetic to the reactionaries’ open racism/sexism etc., and the impression I get is that reactionaries think mainstream conservatives are fighting a losing battle and aren’t particularly bright. There’s really only one odd layer, practically speaking, since Yvain is the only person on hypothetical layer 3.
Hm. I understand you now. However I carve reality in a somewhat different way—we see joints in the territory in different places.
First I would set up level zero as reality, what actually exists now—all the current socio-econo-politico-etc. structures. And then one dimension by which you could divide people/groups/movements would be whether they are more or less content with the current reality or whether they want to radically change it.
Another dimension would be the individual vs. group/community/state spectrum, anarchists being on one end and fans of a totalitarian state on the other.
You can add more—say, egalitarianism vs. some sort of a caste system—as needed.
Getting back to your wishes, I think we have a bunch of socialists here who on a regular basis post critiques of the status quo from the left side (e.g. didn’t we have a debate about guaranteed basic income recently?). On the other hand they do lack in sexiness and edginess :-)
I didn’t witness this debate, so maybe you’re right that the advocates for the guaranteed minimum income were in fact socialists. I’d like to note, though, that the idea of a guaranteed basic income has had some currency in libertarian circles as well, advocated by (among others) Friedrich Hayek and Milton Friedman. So I wouldn’t take support for this policy as very strong evidence of a socialist political orientation.
Well, I mentioned socialists because a significant part of LW self-identifies as socialist (see Yvain’s surveys). That, of course, is a fuzzy term with many possible meanings.
But the survey didn’t just say “Socialist”, it said “Socialist, for example Scandinavian countries: socially permissive, high taxes, major redistribution of wealth”.
Hehe I’ll give you that coherently expressing edgy views is part of what keeps me reading despite fairly strong disagreement...outside view, that’s not actually a point in its favor, of course—as a general heuristic, the boring and conventional people are right and the edgy internet subculture is wrong, even if wrong in novel ways!
I don’t think that’s a particularly useful heuristic. I’d like to offer a replacement: people who actually did something in reality or who point to something existing and working are right more often than people whose arguments are based on imagination and counterfactuals.
Ah, sorry for the real time simplification! I realized I was writing spaghetti as soon as I looked it over.
Not a problem, untangling spaghetti (in limited amounts) is fun.
Maybe explicitly invite blacks or other members of marginalized communities?
Some time ago, Eliezer wrote a post which made it clear he would be glad to see more women on LW. I think his article was well written. Did any of you guys, the opponents of the crazier versions of feminism, feel annoyed by that? Later, there were other efforts to drag women here. (It does feel flattering, I tell you.) Now, the percentage of LW women has grown slightly (I’m too lazy to look up the census result), although we are still a minority.
It grew from 3% in 2009 to 8.9% (cis) + 1.3% (trans) in 2012.
1.3% trans! That’s super cool
Given that a large part of LW is drawn from the Bay Area, which IIRC has significantly higher trans density than the at-large 1%, that’s actually under where I would expect.
Wait, 1.3% trans women. Depending on the number of trans men, that may be much closer to representative of the broader likely-to-encounter-LW population. (Which I’d expect to have 2x-5x as many trans people as the general population.)
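Back-of-the-envelope, with assumed numbers (the general-population base rate is contested, and the 2x-5x multiplier is the guess above, so treat this as illustration only):

    # Rough expected trans fraction of the likely-to-encounter-LW population.
    # All inputs are assumptions, not survey data.
    base_rate = 0.005            # assume ~0.5% of the general population
    for multiplier in (2, 5):
        print(f"{multiplier}x density -> {base_rate * multiplier:.1%} expected")
    # -> 1.0% to 2.5%; the observed 1.3% (trans women alone, with trans men
    # uncounted) falls inside that range rather than clearly below it.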
From the 2012 survey results:
Previously discussed here.
OK, still lower than I would expect, then. Somewhat disappointing.
I don’t think simple invitations are going to make much difference.
If some marginalized group didn’t drift here spontaneously because they’re inherently interested in the community, then we must provide them with other incentives. Unfortunately this might mean privileging them in some way, which, to be honest, I usually find so unjust and contrary to truth-seeking that it pisses me off.
Perhaps there are benign forms of such privileging, but none are cognitively available to me at the moment.
What if they visit the website and hesitate over whether the atmosphere is welcoming enough for them, considering all the HBD stuff? I don’t mean to imply we should censor HBD away; I am interested in it too. If there is some truth to it, we will have to face it sooner or later anyway, taking into account all the DNA sequencing projects etc. In the world outside, I got yelled at for my interest a couple of times; it is in my interest to have clear discussion here, so that I know where things stand. But anyway, regardless of nature or nurture, all the data agree that there is a significant portion of intelligent individuals in all marginalized groups, and LW would very much benefit from them. If only I could express something like that and not sound creepy… Something analogous to this: http://lesswrong.com/lw/ap/of_gender_and_rationality/
I don’t see this as a problem, really. The entire point is to have high-value discussions. Being inclusive isn’t the point. It’d be nice, sure, and there’s no reason to drive minority groups away gratuitously.
I mean, I don’t see us trying to spread internet access and English language instruction in Africa so that the inhabitants can help discuss how to solve their malaria problems. As long as we can get enough input about what the problem is actually like, we don’t need to be inclusive in order to solve problems. And in the African malaria case, being inclusive would obviously hurt our problem-solving capability.
Eh, yes and no. This attitude (“we know what’s best; your input is not required”) has historically almost always been wrong and frequently dangerous, and it deserves close attention; I think it mostly fails here. In very, very specific instances (GiveWell-esque philanthropy, for example), maybe not, but in terms of, say, feminism? If anyone on LW is interested in tackling feminist issues, having very few women would be a major issue. Even when not addressing specific issues, if you’re trying to develop models of how human beings think, and everyone in the conversation is a very specific sort of person, you’re going to have a much harder time getting it right.
Has it really? The cases where it went wrong jump to mind more easily than those where it went right, but I don’t know which way the balance tips overall (and I suspect neither you nor most readers do—it’s a difficult question!).
For example, over the past few centuries Europe has seen a great rise in literacy, and a drop in all kinds of mortality, through the adoption of widespread education, modern medical practices, etc. A lot of this seems to have been driven in a top-down way by bureaucratic governments that considered themselves to be working for The Greater Good Of The Nation, and didn’t care that much about the opinions of a bunch of unwashed, superstitious hicks.
(Some books on the topic: Seeing Like a State; The Discovery of France … I haven’t read either unfortunately)
High-value discussions here, so far as is apparent to me, seem to be better described as “High-value for modestly wealthy white and ethnic Jewish city-dwelling men, many of them programmers”. If it turns out said men get enough out of this to noticeably improve the lives of the huge populations (some of which might even contain intelligent, rational individuals or subgroups), that’s all fine and well. But so far, it mostly just sounds like rich programmers signalling at each other.
Which makes me wonder what the hell I’m still doing here; in spite of not feeling particularly welcome, or getting much out of discussions, I haven’t felt like not continuing to read and sometimes comment would make a good response. Yet, since I’m almost definitely not going to be able to contribute to a world-changing AI, directly or otherwise, and don’t have money to spare for EA or xrisk reduction, I don’t see why LW should care. (Ok, so I made a thinly veiled argument for why LW should care, but I also acknowledged it was rather weak.)
My LW reading comes out of my Internet-as-television time, and so does Hacker News. The two appear very similar in target audience.
Out of curiosity, what sites come out of your Internet-as-non-television time?
I live in my GMail. Wikipedia editing, well, really it’s a form of television I pretend isn’t. The rest is looking for something in particular.
So what do you consider a high-value use of your free time?
Even with malaria nets (which seem like a very simple case), having information from the people who are using them could be important. Is using malaria nets harder than it sounds? Are there other diseases which deserve more attention?
One of the topics here is that sometimes experts get things wrong. Of course, so do non-experts, but one of the checks on experts is people who have local experience.
Even then, is trying to encourage sub-Saharan African participation in the Effective Altruism movement really the best way to gather data about their needs and values? Wouldn’t it be more cost-effective to hire an information-gathering specialist of some sort to conduct investigations?
Feminism and possible racial differences seem like pretty low-value discussion topics to me… interesting way out of proportion to their usefulness, kind of like politics.
That’s an incredibly short-sighted attitude. Feminism and race realism are just the focus of the current controversy. I’m pretty confident that you could pick just about any topic in social science (and some topics in the natural sciences as well—evolution, anyone?) and some people will want to prevent or bias discussions of it for political reasons. It’s not clear why we should be putting up with this nonsense at all.
My argument is: (1) Feminism and race realism are interesting for the same reasons politics are interesting and (2) they aren’t especially high value. If this argument is valid, then for the same reasons LW has an informal ban on politics discussion, it might make sense to have an informal ban on feminism and race realism discussion.
You don’t address either of my points. Instead you make a slippery slope argument, saying that if there’s an informal ban on feminism/race realism then maybe we will start making informal bans on all of social science. I don’t find this slippery slope argument especially persuasive (such arguments are widely considered fallacious). I trust the Less Wrong community to evaluate the heat-to-light ratio of different topics and determine which should have informal bans and which shouldn’t.
“some people will want to prevent or bias discussions of it for political reasons”—to clarify, I’m in favor of informal bans against making arguments for any side on highly interesting but fairly useless topics. Also, it seems like for some of these topics, “people getting their feelings hurt” is also a consideration and this seems like a legitimate cost to be weighed when determining whether discussing a given topic is worthwhile.
Maybe I’m being dense, but I don’t see why this is obviously true.
There’s obviously a level of exclusivity that hurts our problem-solving as well. At some point a programmer in the Bay Area with $20k/yr of disposable income and 20 hours a week to spare is going to do more than a sub-Saharan African farmer with $200/yr of disposable income, 6 hours a week of free time, and no internet access.
I don’t see how it would actually hurt our problem-solving, though, if we were to try to solicit input from people who don’t have the leisure time or education to provide it. It would be a phenomenal waste of resources, to be sure, but aside from that I don’t see how it would harm the community.
You are positing that folks who are affected by some issues would not participate in frank, dispassionate discussion of these same issues… why exactly? To preserve their ego? It seems like a dubious assumption.
Anecdote time:
I’m currently dispassionate about racial issues, and can (and have) openly discussed topics such as the possibility that racial discrimination is not a real thing, the possibility that genetically mediated behavioral differences between races exist, and other conservative-to-reactionary viewpoints. Some of those discussions have been on lesswrong, under this account and under an alt, some have been on other sites, and some have been in “real life”.
Prior to the age of ~19, I was unable to be dispassionate about issues of race and culture. I understood the value of being dispassionate and I tried, but the emotions came anyway. Due to my racial and cultural differences, I’ve fended off physical attacks from bullies in middle school and been on the receiving end of condescending statements in high school and college, sometimes from strangers and people whom I did not care about, and sometimes from peers whom I liked and authority figures whom I respected. When it came from someone I liked/respected, it hurt more.
The way human brains work is that when a neutral stimulus (here, racist viewpoints) is repeatedly paired with a negative stimulus (here, physical harm and/or loss of social status), the neutral stimulus can involuntarily trigger pre-emptive anger and defensiveness all on its own. If your experience of people who posited Opinion X was that they proceeded to physically attack you / steal your things / taunt you openly in a social setting, you too would probably develop aversive reactions to Opinion X.
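The pairing/unpairing dynamic is easy to simulate. A toy Rescorla-Wagner-style sketch (my choice of model; the learning rate and trial counts are arbitrary): association strength moves toward the observed outcome on each trial, so repeated pairing builds a trigger and repeated safe exposure decays it.

    LEARNING_RATE = 0.3

    def update(association, outcome):
        """Move association strength a step toward the observed outcome."""
        return association + LEARNING_RATE * (outcome - association)

    v = 0.0
    for _ in range(10):  # acquisition: opinion X paired with a bad outcome
        v = update(v, 1.0)
    print(f"after pairing: {v:.2f}")    # ~0.97 -- strong involuntary trigger

    for _ in range(10):  # extinction: same stimulus, nothing bad follows
        v = update(v, 0.0)
    print(f"after unpairing: {v:.2f}")  # ~0.03 -- the trigger decays

The second loop is the mechanism appealed to further down, where dispassionate discussion is said to “unpair” the stimulus.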
--
EDIT: just read the linked post. It independently echoes my account:
--
The above is an explanation of why it happens and how it is. I’m not saying it’s justified, or that it ought to be that way. I made a conscious effort to fight down the anger and not direct it at people who were clearly not trying to physically harm me or lower my social status in a group. I think others should do the same.
For an extreme example, in the past an authority figure made a racial joke at my expense in the presence of other students who had previously physically taunted me, thereby validating their behavior—and I took care to not direct the anger at the authority figure (who was simply ignorant of the social status lowering effect of the joke, not maliciously trying to harm me). For a tamer example, I’ve never actually ended a friendship with someone for espousing certain views—I’ve only been angry and forced myself not to say anything until after calming down.
Currently, I don’t feel emotionally angry at all when faced with those views, and I think everyone else should strive for that. However, that doesn’t mean that people who haven’t faced this sort of thing get to simply expect that people who have faced it will have that sort of emotional control. I’m pretty sure I’m an outlier with respect to unusually good emotional control (globally, if not on LessWrong); most people can’t do it. It also really helps that my current social bubble has less of that sort of thing.
That said (and this is where I disagree with the linked poster), I don’t think it’s a good idea to censor views for the sake of not triggering anyone’s emotions. Dispassionate discussion of a topic unpairs the neutral stimulus from the negative stimulus—in fact, I would go so far as to recommend that people who are psychologically similar to myself (intellectually curious, emotionally stable) who have been hurt by racism should spend time talking on the internet to white nationalists and reactionaries, and that people who have been hurt by sexism should spend time talking to PUAs / redpill / the “manosphere”. Talking about charged topics in settings where people are powerless to actually hurt you is a great way to remove emotional triggers.
That said, the small but vocal prevalence of meta-contrarian, reactionary ideology on LW has probably driven away a lot of smart people. There are even dirty tactics at play here—such as the downvoting of every single comment by anyone who explicitly expresses progressive views or challenges reactionary views. I myself am on the receiving end of this nonsense: every post of mine has been systematically downvoted by exactly −1 ever since I mentioned some biological evidence about sexual orientation that could be construed as liberal. I think our kind is so partial to contrarians that we actually give people a pass from the downvote simply because they went against the grain, even when the actual ideas aren’t especially insightful. Remember, well-kept gardens die by pacifism: reactionary ideas are fine if they are supported by real evidence and logic, held to the same standard you would apply if someone espoused a common viewpoint which is fairly obvious and popular. If it reads like pseudo-intellectual fluff, it probably is. Don’t go easy on it just because it’s contrarian.
I found this one of the most enlightening posts in this overheated thread and encourage you to expand it into a top-level post.
If I were to expand this post at a future time, which ideas specifically do you find enlightening / would you say should be expanded? Are there any portions that you think should be slimmed or removed altogether?
Another thing to think about is how to talk about this productively without triggering similar over-heating...although this post wasn’t actually too controversial, so maybe that’s a good sign on that front?
What I think LW could benefit from is an explanation “from the inside” of what leads some people of disprivileged groups to be sensitive to the expression of certain opinions, to ask for “safe spaces” and talk of “triggers”, et cetera. I think you have an evenhanded position that on one side does not ask LW to censor or discourage dispassionate discussions of these opinions, but at the same time enables those who profess them to understand the unintended effects their words can have. Thus the well-intentioned among them (and I am sure there are some, though I share your indignation at those who are not and use underhanded tactics like mass-downvoting) will hopefully be more cautious in their choice of words, and also perhaps realize that requests for “safe spaces” are not necessarily power plays to squash controversy.
I think the last paragraph (except for the first sentence) is the part that could be slimmed or removed; you have registered your protest against mass downvoting and doing it again in a post would distract from the main topic.
Indeed, writing a top-level post about this in a way that does not cause a flamewar is a daunting, perhaps impossible task. I fully understand if, on consideration, you prefer not to do it.
This is probably not a good argument on LW, but a large part of psychoanalysis is built on this.
Also desensitization therapy in CBT, but they would recommend starting with very small doses of the stimulus. (And I think LW would be at the lower end of the scale.)
This doesn’t really seem like a dubious assumption to me, practically everyone is more motivated to preserve their ego than to think rationally.
http://imgur.com/ZaYq9Y5
It’s hard to be frank and dispassionate when you’re affected by an issue. That tends to encourage self-serving passion.
Oh, that’s quite right. But the original question here is whether they’ll even want to join the conversation at all. To me, it’s not at all clear why they wouldn’t. (And I see this as a mixed bag from a goals perspective, for reasons others have pointed out.)
Because life, of which the Internet is a subset, of which LW is a subset, is full of blowhards who will tell you all about your problems and how you should solve them while clearly not having a trace of a clue about the topic, and life is too short to go seeking them out.