My above comment was made in a bit of jest, as I hope is clear. Still, some people do make a deep moral issue over “sexist” language, and insofar as they do, moral condemnation of much more heavily gendered languages than English is an inevitable logical consequence.
Regarding the supposed biases arising due to gendered language, do you think that they exist to a significant degree in practice? While it’s not a watertight argument to the contrary, I still think it’s significant that, to my knowledge, nobody has ever demonstrated any cross-cultural correlation between gender-related norms and customs and the linguistic role of gender. (For what that’s worth, of all Indo-European languages, the old I-E gender system has been most thoroughly lost in Persian, which doesn’t even have the he-she distinction.)
Also, when I reflect on my own native language and the all-pervasive use of masculine as the default gender, I honestly can’t imagine any plausible concrete examples of biases analogous to your hypothetical example with height. Of course, I may be biased in this regard myself.
I agree that some people do treat as moral failings many practices that, to my mind, are better treated as mistakes.
I also think that some people react to that by defending practices that, to my mind, are better treated as mistakes.
Regarding the supposed biases arising due to gendered language, do you think that they exist to a significant degree in practice?
I’m not sure.
One way I might approach the question is to teach experimental subjects some new words to denote new roles, and then have them select people to fill those roles based on resumes. By manipulating the genderedness of the name used for the role (e.g., “farner,” “farness,” or “farnist”) and the nominal sex of the candidate (e.g., male or female), we could determine what effect an X-gendered term had on the odds of choosing a Y-sexed candidate.
I have no idea if that study has been performed.
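For concreteness, here’s a minimal simulation sketch (in Python, not anything anyone in this thread actually ran) of how that comparison might be analyzed; the role names, sample sizes, and effect sizes are all invented purely for illustration.

```python
# Hypothetical simulation of the "farner"/"farness" experiment: each subject
# sees one role name and picks one of two otherwise-matched candidates
# (one nominally male, one nominally female). All numbers are assumptions.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)
n_per_condition = 200  # assumed number of subjects per role-name condition

# Assumed probability of picking the female candidate under each role name.
p_female_pick = {"farner": 0.45, "farness": 0.55, "farnist": 0.50}

counts = {}
for role, p in p_female_pick.items():
    picks = rng.random(n_per_condition) < p  # True = female candidate chosen
    counts[role] = (int(picks.sum()), int(n_per_condition - picks.sum()))

# Does the rate of female-candidate selection differ between the
# masculine-sounding and feminine-sounding role names?
table = np.array([counts["farner"], counts["farness"]])  # 2x2 contingency table
chi2, p_value, _, _ = chi2_contingency(table)
print(f"female picks out of {n_per_condition}: "
      f"farner={counts['farner'][0]}, farness={counts['farness'][0]}")
print(f"chi-square p-value, farner vs farness: {p_value:.3f}")
```

Whether a difference like that comes out statistically significant obviously depends on the sample size and the true effect size, both of which are guesses here.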
So, for example, would I expect English-speakers (on average) selecting a candidate for the role of “farness” to select a female candidate more often than for the role of “farner”?
Yes, I think so. Probably not a huge difference, though. Call it 65% confidence in a statistically significant difference.
What’s your estimate? (Or, if you’d rather operationalize the question differently, go for it.)
I was going to write a more detailed reply, but seeing the literature cited in the book linked by Conchis, I should probably read up on the topic before expressing any further opinions. It could be that I’m underestimating the magnitude of such effects.
That said, one huge difficulty with issues of prejudice and discrimination in general is that what looks like a bias caused by malice, ignorance, or unconscious error is often in fact an instance of accurate statistical discrimination. Rational statistical discrimination is usually very hard to disentangle from the various factors that supposedly trigger irrational biases, since all kinds of non-obvious correlations might be lurking everywhere. At the same time, a supposed finding of a factor that triggers irrational bias is a valuable and publishable result for people researching such things, so before I accept any of these findings, I’ll have to give them a careful look.
Agreed that attribution of things like malice, ignorance, error, and bias to people is tricky… much as with evil, earlier.
This is why I reframed your original question (asking me whether I thought gendered language introduced bias to a significant degree) in a more operational form, actually.
In any case, though, I endorse holding off on expressing opinions while one gathers data (for all that I don’t seem to do it very much myself).
My understanding of the relevant research* is that it’s a fairly consistent finding that masculine generics (a) do cause people to imagine men rather than women, and (b) that this can have negative effects ranging from impaired recall, comprehension, and self-esteem in women, to reducing female job applications. (Some of these negative effects have also been established for men from feminine generics as well, which favours using they/them/their rather than she/her as replacements.)
* There’s an overview of some of this here (from p.26).
I wonder if they tested whether individuals suffer similar negative effects from plural generics.