I don’t think I’m at a position where I could give a statement of the feminist critique that a proponent of it would be happy to call their position, but my basic sketch of it is that philosophy and rationality are overconcerned with objective reality, and that we should instead focus on how perceptions are subjective and how we relate to one another. That is, the social significance of a statement or concept is more important than whether or not it is concordant with reality.
Subjective perceptions and the relations between humans are also part of reality.
Of course.
A more charitable phrasing: you view feminism as more concerned with instrumental rationality than with epistemic rationality.
I don’t think this is correct, though. My experience has been that in discussions with feminists who critique rationality (FWCR for short),* we have deep disagreements not on the importance of epistemology, but on the process and goals of epistemology. If something is correct but hurtful, for example, I might call it true because it is correct while a FWCR would call it false because it is hurtful. (One can find ample examples of this in the arguments for egalitarianism in measurement of socially relevant variables.)
One could argue that they’re making the instrumentally rational decision to spread a lie in order to accomplish some goal, or that it’s instrumentally rational to engage in coalition politics which involves truth-bending, but this isn’t a patrissimo saying “you guys should go out and accomplish things,” but a “truth wasn’t important anyway.”
*I am trying to avoid painting feminism with a broad brush, as not all feminists critique rationality, and it is the anti-rationals in particular on which I want to focus.
I’ve never seen this sort of claim, and thought you were talking about, for example, discouraging research on sex differences because people are likely to overinterpret the observations and cause harm as a result. Can you link to an example of the sort of argument you are discussing?
thought you were talking about, for example, discouraging research on sex differences because people are likely to overinterpret the observations and cause harm as a result.
I did have this sort of thing in mind. My claim was that I think it also goes deeper. This article (PM me your email address if you don’t have access to the PDF) splits the criticism into three primary schools, the first of which begins with the content of scientific theories (e.g. racism, sexism, class bias) and from that concludes that rationality is wrong. An excerpt:
If logic, rationality and objectivity produce such theories, then logic, rationality and objectivity must be at fault and women must search for alternative ways of knowing nature. Such arguments often end up privileging subjectivity, intuition, or a feminine way of knowing characterized by interaction with or identification with, rather than distance from, the object of knowledge.
If I’m reading that paragraph right, that’s attributed to Luce Irigaray’s 1987 paper.
The second school criticizes the methodology and philosophy of science, and then the third criticizes the funding sources (and the implied methodology) of modern science. The author argues that each has serious weaknesses, and that we need to build a better science to incorporate the critiques (with a handful of practical suggestions along those lines) but that the fundamental project of science as a communal endeavor is sound. Since I think the author of that paper is close to my camp, it may be prudent to follow her references and ensure her interpretation of them is fair.
my basic sketch of it is that philosophy and rationality are overconcerned with objective reality, and that we should instead focus on how perceptions are subjective and how we relate to one another.
The subjectivity of our perceptions and how we relate to one another are themselves parts of objective reality.
To steelman the position you’re attributing, if philosophy and rationality have been paying too little attention to those parts of objective reality, then they need to focus on those as well as, not instead of, the rest of reality. Or to put that in terms of a concrete example alluded to elsewhere in the thread, nuclear power plants must be designed to be safely operable by real fallible humans.
But they do attend to these things already. Bayesian methods provide objective reasoning about subjective belief. Psychology, not all of which is bunk, deals with (among other things) how we relate to one another. Engineering already deals with human factors.
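To make the first of those claims concrete: Bayes’ rule gives an objective prescription for how a subjective degree of belief should move when evidence arrives. A minimal sketch, with the prior and likelihoods being made-up numbers purely for illustration:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return the posterior P(H|E) from a prior P(H) and the
    likelihoods P(E|H) and P(E|not-H), via Bayes' rule."""
    # Total probability of observing the evidence E.
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Arbitrary illustrative numbers: a 50% prior in hypothesis H,
# with the evidence four times as likely under H as under not-H.
posterior = bayes_update(0.5, 0.8, 0.2)
print(posterior)  # 0.8
```

The prior is as subjective as you like; what the rule fixes objectively is how that belief must change given the evidence.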
my basic sketch of it is that philosophy and rationality are overconcerned with objective reality, and that we should instead focus on how perceptions are subjective and how we relate to one another.
I’d go even further than that, and state that the very notion of an objective reality onto which we can project our “rational” action without regard for social or moral/ethical factors is somewhat peculiar. It seems to be very much a product of the overall notion of λόγος - variously given the meaning of “argument”, “opinion”, “reason”, “number”, “rationality” and even “God” (as in the general idea of a “God’s Eye View”) - that seems to permeate Western culture.
Needless to say, such “logocentrism” is nowadays viewed quite critically and even ridiculed by postmodernists and feminists, as well as by others who point out that non-Western philosophies often held quite different points of view, even within supposedly “rational” schools of thought. For instance, the Chinese Confucianists and Mohists advocated a “Rectification [i.e. proper use] of Names” as the proper foundation of all rational inquiry, which many in the Western tradition would find quite hard to understand (with some well-deserved exceptions, of course).
I don’t see why this post is downvoted. When someone asks for an expression of postmodern thought and someone writes a reply to explain it, you shouldn’t vote it down because you don’t like postmodernism.
The idea that clarity about language is important is very familiar indeed in the Western philosophical tradition. (“It all depends what you mean by …” is pretty much a paradigmatic, or even caricatural, philosopher’s utterance.) It sounds as if the Confucian notion has a rather different spin on it—focusing on terminology related to social relationships, with the idea that fixing the terminology will lead to fixing the relationships—and a bunch of related assumptions not highly favoured among Western analytic philosophers—but I can’t help thinking there’s maybe a core of shared ideas there.
It is very possible that I’m overoptimistically reading too much into the terminology, though. Would any Confucian experts like to comment?
The Chinese Confucianists and Mohists, for instance, advocated a “Rectification [i.e. proper use] of Names” as the proper foundation of all rational inquiry
My understanding of this is that it’s basically map/territory convergence, with a special emphasis on social reality: let “the ruler” be the ruler!
overconcerned with objective reality, and that we should instead focus on how perceptions are subjective and how we relate to one another.
I hope these people are kept far far away from nuclear plants. And regular factories. And machinery. Actually, far away from any sharp objects would be the best...
Some might hope that people who do not allow such a concern to be tempered by other concerns, perhaps of a social and moral/ethical nature, should be kept as far away as possible from any of these objects.
After all, even J. R. Oppenheimer discarded his scientific detachment upon witnessing the first nuclear explosion; instead, he uttered the famous quote: “Now I am become Death, the destroyer of worlds.” (By contrast, a more “rational” person might simply rejoice that his complex calculations predicting that the Earth’s atmosphere would not be burned up in the explosion had been proven correct by experimentation!) And Einstein famously regretted his career as a physicist upon learning of these fateful possibilities, stating that if he had known earlier, he would have chosen to be a watchmaker.
And Einstein famously regretted his career as a physicist upon learning of these fateful possibilities, stating that if he had known earlier, he would have chosen to be a watchmaker.
This is a common misattribution:
http://en.wikiquote.org/wiki/Albert_Einstein#Misattributed
Scroll down to “If only I had known, I should have become a watch-maker.”
Some might hope that people who do not allow such a concern to be tempered by other concerns, perhaps of a social and moral/ethical nature, should be kept as far away as possible from any of these objects.
Concern for what’s real and what’s not should NOT be “tempered by other concerns”. I think you’re confused between descriptive and normative, aka between what is and what should be.
Besides, while you may turn away from learning, say, what happens when you get a certain amount of U-235 packed together, other people won’t. And if at some point later they decide to come and take what used to be yours, well...
I think you’re confused between descriptive and normative, aka between what is and what should be.
These notions are intertwined, rather. “Normative” concerns guide the “descriptive” inquiries we choose to undertake, and provide criteria for what counts as a “successful” inquiry or experiment. Hume stated that reason should be a slave to the passions; by contrast, medieval philosophers viewed “rational” inquiry as a slave to theology, with its cosmology (in the anthropological sense, i.e. what is our “basic, foundational picture”, the way we talk about reality?) and morality.
When we forget about these things, we end up with billions being spent in incredibly complicated experiments on supposedly ‘foundational’ particle physics at the LHC—raising existential risks, such as the possibility of creating a black hole, or a ‘strangelet’. Meanwhile we don’t see anything near the same concern about, say, the animals that are nearest to us in the Hominidae group, many of which are significantly endangered in the wild, despite the obvious potential of knowing so much more about “what it means to be human” by keeping them around and studying them more closely. These are not trivial concerns, contrary to what the supposed primacy of the ‘descriptive’ would imply. To treat them as such is quite dangerous.
When we forget about these things, we end up with billions being spent in incredibly complicated experiments on supposedly ‘foundational’ particle physics at the LHC—raising existential risks, such as the possibility of creating a black hole, or a ‘strangelet’.
This is a common misconception; from Safety of high-energy particle collision experiments on Wikipedia:
Claims escalated as commissioning of the LHC drew closer, around 2008–2010. The claimed dangers included the production of stable micro black holes and the creation of hypothetical particles called strangelets,[1] and these questions were explored in the media, on the Internet and at times through the courts.
To address these concerns in the context of the LHC, CERN mandated a group of independent scientists to review these scenarios. In a report issued in 2003, they concluded that, like current particle experiments such as the Relativistic Heavy Ion Collider (RHIC), the LHC particle collisions pose no conceivable threat.[2] A second review of the evidence commissioned by CERN was released in 2008. The report, prepared by a group of physicists affiliated to CERN but not involved in the LHC experiments, reaffirmed the safety of the LHC collisions in light of further research conducted since the 2003 assessment.[3][4] It was reviewed and endorsed by a CERN committee of 20 external scientists and by the Executive Committee of the Division of Particles & Fields of the American Physical Society,[5][6] and was later published in the peer-reviewed Journal of Physics G by the UK Institute of Physics, which also endorsed its conclusions.[3][7]
The report ruled out any doomsday scenario at the LHC, noting that the physical conditions and collision events which exist in the LHC, RHIC and other experiments occur naturally and routinely in the universe without hazardous consequences,[3] including ultra-high-energy cosmic rays observed to impact Earth with energies far higher than those in any man-made collider.
“Normative” concerns guide the “descriptive” inquiries we choose to undertake, and provide a criteria for what counts as a “successful” inquiry or experiment.
Normative concerns guide which inquiries we choose to undertake but they do not (or should not) affect the outcome of these inquiries.
Notably, the normative concerns do NOT provide criteria for success. The cases where such has been attempted—e.g. Lysenko and genetics in Soviet Russia—are universally recognized as failures. Richard Feynman had a lot to say about this.
These are not trivial concerns
By which criteria do you divide concerns into “trivial” and not?
Normative concerns guide which inquiries we choose to undertake but they do not (or should not) affect the outcome of these inquiries.
But they also guide what counts as success. If your biology research is aimed at developing new bioweapons, then stumbling upon a cure for cancer does not count as a success.