(shrug) “Evil” confuses the issue.
Just to get away from the politics around real-world examples, suppose I speak a language that genders its verbs based on the height of the object—that is, there are separate markings for above-average height, below-average height, and average height.
It’s an empirical question whether, if I’m figuring out whom to hire for a job, asking the question “Whom should we tall-hire?” makes me more likely to hire a tall person than asking “Whom should we short-hire?” If it’s true, it’s true; evil doesn’t enter into it under most understandings of evil. It’s just a fact about the language and about cognitive biases.
If the best available candidate for the job happens to be tall, but I ask myself whom I should short-hire, the way I’m talking about the job introduces bias into my hiring process that makes me less likely to hire the best available candidate. This also isn’t evil, but it’s a mistake.
If my language’s rules are such that this height-based gender-marking is non-optional, then this mistake is non-optional. My native language is, in that case, irreparably bias-ridden in this way.
Suppose I want to hire the best candidates. What can I do then?
Well, one thing I might do is deliberately alternate among “short-hire,” “tall-hire,” and “average-hire” in my speech, so as to reduce the systematic bias introduced by my choice of verb. Of course, if my language forces me to use “short-hire” for an unspecified-height target, then doing that is ungrammatical.
Another option is to make up a new way of speaking about hiring… perhaps borrow the equivalent verb from another language, or make up new words, so I can ask “whom should I hire?” without using a height-based gender marking at all. But maybe, inconveniently, my language is such that foreign loan verbs must also be marked in this way.
A third option is to systematically train myself so I am no longer subject to the selection bias that naive speakers of my language demonstrate. But there are opportunity costs associated with that training process, and maybe I don’t want to bother.
Ultimately, what I do will depend on how important speaking grammatically is to me, how important hiring optimal employees is, and so forth. If I lose significant status or clarity by speaking ungrammatically, I may prefer to hire suboptimal employees.
Should I get offended if someone points that out? Again, it depends on my goals. If I want to improve my ability to choose the best available candidate, then getting offended in that case is counter-productive. If I want to defend my choice to speak traditionally, then getting offended works reasonably well.
My above comment was made in a bit of jest, as I hope is clear. Still, some people do make a deep moral issue over “sexist” language, and insofar as they do, moral condemnation of much more heavily gendered languages than English is an inevitable logical consequence.
Regarding the supposed biases arising due to gendered language, do you think that they exist to a significant degree in practice? While it’s not a watertight argument to the contrary, I still think it’s significant that, to my knowledge, nobody has ever demonstrated any cross-cultural correlation between gender-related norms and customs and the linguistic role of gender. (For what that’s worth, of all Indo-European languages, the old I-E gender system has been most thoroughly lost in Persian, which doesn’t even have the he-she distinction.)
Also, when I reflect on my own native language and the all-pervasive use of masculine as the default gender, I honestly can’t imagine any plausible concrete examples of biases analogous to your hypothetical example with height. Of course, I may be biased in this regard myself.
I agree that some people do treat as moral failings many practices that, to my mind, are better treated as mistakes.
I also think that some people react to that by defending practices that, to my mind, are better treated as mistakes.
Regarding the supposed biases arising due to gendered language, do you think that they exist to a significant degree in practice?
I’m not sure.
One way I might approach the question is to teach experimental subjects some new words to denote new roles, and then have the subjects select people to fill those roles based on resumes. By manipulating the genderedness of the name used for the role (e.g., “farner,” “farness,” or “farnist”) and the nominal sex of the candidate (e.g., male or female), we could determine what effect an X-gendered term had on the odds of choosing a Y-sexed candidate.
I have no idea if that study has been performed.
So, for example, would I expect English-speakers (on average) selecting a candidate for the role of “farness” to select a female candidate more often than for the role of “farner”?
Yes, I think so. Probably not a huge difference, though. Call it 65% confidence in a statistically significant difference.
What’s your estimate? (Or, if you’d rather operationalize the question differently, go for it.)
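For what it’s worth, the proposed design boils down to comparing two proportions: how often the female candidate is chosen under each role name. Here’s a minimal sketch of that analysis in Python, using a stdlib-only two-proportion z-test; the role names and all counts are hypothetical, made up purely for illustration (no such study is being cited):

```python
# Sketch of how the hypothetical "farner"/"farness" study might be analyzed:
# compare the proportion of subjects choosing the female candidate under each
# role name with a two-sided two-proportion z-test (standard library only).
import math

def two_proportion_z_test(x1, n1, x2, n2):
    """Return (z, two-sided p-value) for H0: the two proportions are equal."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)           # pooled proportion under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail area
    return z, p_value

# Hypothetical results: 60 of 100 subjects shown the role name "farness"
# picked the female candidate, versus 45 of 100 shown "farner".
z, p = two_proportion_z_test(60, 100, 45, 100)
print(f"z = {z:.2f}, p = {p:.3f}")
```

(A chi-squared test of independence on the 2×2 table would give an equivalent result; the z form just keeps the sketch dependency-free.)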
I was going to write a more detailed reply, but seeing the literature cited in the book linked by Conchis, I should probably read up on the topic before expressing any further opinions. It could be that I’m underestimating the magnitude of such effects.
That said, one huge difficulty with issues of prejudice and discrimination in general is that what looks like a bias caused by malice, ignorance, or unconscious error is often in fact an instance of accurate statistical discrimination. Rational statistical discrimination is usually very hard to disentangle from various factors that supposedly trigger irrational biases, since all kinds of non-obvious correlations might be lurking everywhere. At the same time, a supposed finding of a factor that triggers irrational bias is a valuable and publishable result for people researching such things, so before I accept any of these findings, I’ll have to give them a careful look.
Agreed that attribution of things like malice, ignorance, error, and bias to people is tricky… much as with evil, earlier.
This is why I reframed your original question (asking me whether I thought gendered language introduced bias to a significant degree) in a more operational form, actually.
In any case, though, I endorse holding off on expressing opinions while one gathers data (for all that I don’t seem to do it very much myself).
My understanding of the relevant research* is that it’s a fairly consistent finding that masculine generics (a) do cause people to imagine men rather than women, and (b) that this can have negative effects ranging from impaired recall, comprehension, and self-esteem in women, to reducing female job applications. (Some of these negative effects have also been established for men from feminine generics as well, which favours using they/them/their rather than she/her as replacements.)
* There’s an overview of some of this here (from p.26).
I wonder if they tested whether individuals suffer similar negative effects from plural generics.