For a LessWronger, the territory is the thing that can disagree with our map when we do an experiment. But for someone living in a “social culture”, the disagreement with maps typically comes from enemies and assholes! Friends don’t make their friends update their maps; they always keep an extra map for each friend.
I figured this was an absurd caricature, but then this thing floated by on tumblr:
So when arguing against objectivity, they said, don’t make the post-modern mistake of saying there is no truth, but rather that there are infinite truths, diverse truths. The answer to the white, patriarchal, heteronormative, massively racist and ableist objectivity is DIVERSITY of subjectivities. And this, my friends, is called feminist epistemology: the idea that rather than searching for a unified truth to fuck all other truths we can understand and come to know the world through diverse views, each of which offers their own valid subjective view, each valid, each truthful. How? by interrupting the discourses of objectivity/normativity with discourses of diversity.
Objective facts: white, patriarchal, heteronormative, massively racist and ableist?
Logic itself has a very gendered and white supremacist history.
Sigh.

These people are clearly unable to distinguish between “the territory” and “the person who talks about the territory”.
I had to breathe calmly for a few moments. Okay, I’m not touching this shit on the object level again.
On a meta level, I wonder how many of the missing rationality skills these people never had, versus how many they once had but lost when they became politically mindkilled.
I remember reading the SEP article on Feminist Epistemology, where I got the impression that it models the world in a somewhat different way. Of course, this is probably one of those cases where the epistemology is tailored to suit the political ideas (and they themselves most likely wouldn’t disagree), and much less vice versa.
When I (or, I suppose, most LWers) think about how knowledge about the world is obtained, the central example is empirical testing of hypotheses, i.e. a situation where I have more than one map of a territory and have to choose one of them. An archetypal example of this is a scientist testing hypotheses in a laboratory.
On the other hand, feminist epistemology seems to be largely based on Feminist Standpoint Theory, which basically models the world as being full of different people who are adversarial to each other and promote different maps. It seems to assume that you cannot easily compare the accuracies of maps, either because they are hard to check or because they depict different (or even incommensurable) things. The central question in this framework seems to be “Whose map should I choose?”, i.e. the choice is not between maps, but between mapmakers. Well, there are situations where I would do something that fits this description very well: e.g. if I were trying to decide whether to buy a product I could not inspect myself, and all the information I had was two reviews, one from the seller and one from an independent reviewer, I would be more likely to trust the latter’s judgement.
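To put the two framings side by side, here is a toy sketch (Python; all the names and numbers are invented for illustration, not taken from anywhere). Choosing between maps means scoring hypotheses directly against observations; choosing between mapmakers means having no direct observations and weighting reports by the presumed reliability of their sources:

```python
# Choosing between maps: score each hypothesis against observed data.
# Hypotheses differ on the probability that a product is defective.
observations = [0, 0, 1, 0, 0]  # 1 = a defect was seen

def likelihood(p_defect, data):
    """P(data | map): product of per-observation probabilities."""
    result = 1.0
    for x in data:
        result *= p_defect if x else (1 - p_defect)
    return result

maps = {"reliable product": 0.05, "lemon": 0.50}
best_map = max(maps, key=lambda m: likelihood(maps[m], observations))

# Choosing between mapmakers: only second-hand reports are available,
# so weight each report by how much its source's incentives distort it.
reports = {"seller": 0.01, "independent reviewer": 0.20}
trust = {"seller": 0.2, "independent reviewer": 0.9}
pooled = sum(trust[s] * reports[s] for s in reports) / sum(trust.values())

print("best map given the data:", best_map)
print("pooled defect estimate from mapmakers:", round(pooled, 3))
```

The first half only works when you can actually look at the territory; the second half is what remains when you cannot, which is exactly the seller-vs-reviewer situation above.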
It seems to me that the first archetypal example is much more generalizable than the second one, and the strange claims cited in Pfft’s comment are what one gets when one stretches the second example to extreme lengths.
There is also Feminist Empiricism, which seems to be based on the idea that since one cannot interpret empirical evidence without a framework, something must be added to an inquiry; and since biases that favour desirable interpretations are something, it is valid to add them (since this is not Bayesian inference, this is different from the problem of the choice of priors). Since the whole process is deemed adversarial (scientists in this model look like prosecutors or defense attorneys), different people inject different biases and then argue that others should stop injecting theirs.
(Disclaimer: I read the SEP article some time ago and wrote about these ideas from memory; it wouldn’t be a big surprise if I misrepresented them in some way. In addition, there are other obvious sources of potential misrepresentation.)
Seems like the essential difference is whether you believe that as the maps improve, they will converge.
A “LW-charitable” reading of the feminist version would be that although the maps should converge in theory, they will not converge in practice because humans are imperfect: the mapmaker is not able to reduce the biases in their map below a certain level. In other words, there is some level of irrationality that humans are unable to overcome today, and the specific direction of this irrationality depends on their “tribe”. So different tribes will forever have different maps, regardless of how much they try.
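To make the convergence claim concrete: under idealized Bayesian assumptions, mapmakers who start from very different priors but update on the same evidence do converge. A minimal sketch (Python; the “tribes” and all numbers are invented for illustration):

```python
import random

random.seed(0)

# Two "mapmakers" with opposite priors about the same territory:
# the probability that a coin lands heads, as Beta(a, b) pseudo-counts.
agents = {"tribe_A": [1, 9], "tribe_B": [9, 1]}  # expect 0.1 vs 0.9

true_p = 0.6  # the territory

for _ in range(1000):
    heads = random.random() < true_p
    for counts in agents.values():
        # Beta-Bernoulli conjugate update: just count heads and tails.
        counts[0 if heads else 1] += 1

for name, (a, b) in agents.items():
    # Posterior mean a / (a + b); both end up near 0.6.
    print(name, "estimates P(heads) =", round(a / (a + b), 3))
```

Shared evidence swamps the priors. The charitable reading above amounts to saying that real humans cannot perform the update step cleanly: some bias survives every update, so the estimates stop short of converging.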
Then again, to avoid a “motte and bailey”: even if there is a level of irrationality that humans are unable to overcome today even when they try, the question is whether the differences between maps are at that level, or whether people use this as a fully general excuse to put anything they like on their maps.
Yet another question would be who exactly the “tribes” are (the clusters of people that create maps with similar biases). Feminism (at least the version I see online) seems to define the clusters by gender, sexual orientation, race, etc. But maybe the important axes are different; maybe e.g. having a high IQ, or studying STEM, or being a conservative, or something completely different and unexpected actually has a greater influence on map-making. This is difficult to talk about, because there is always the fully general excuse that if someone doesn’t have the map they should have, well, they have “internalized” something (a map of the group they don’t belong to was forced on them, but naturally they should have a different map).
On a meta level, I wonder how many of the missing rationality skills these people never had, versus how many they once had but lost when they became politically mindkilled.
Can rationality be lost? Or do people just stop performing the rituals?
Heh, I immediately went: “What is rationality if not following (a specific kind of) rituals?” But I guess the key is the word “specific” here. Rationality could be defined as following a set of rules that happen to create maps better corresponding to the territory, and knowing why those rules achieve that, i.e. applying the rules reflectively to themselves. The reflective part is what would prevent a person from arbitrarily replacing one of the rules by e.g. “what my group/leader says is always right, even if the remaining rules say otherwise”.
I imagine that most people have at least some minimal level of reflection on their rules. For example, if they look at the blue sky, they conclude that the sky is blue; and if someone else said that the sky is green, they would tell them “look there, you idiot”. That is, not only do they follow the rule, but they are aware that they have a rule, and can communicate it. But the rule is communicated only when someone obviously breaks it; that means the reflection is only done in a crisis. Which means they don’t develop the full reflective model, and that leaves open the option of inserting new rules, such as “however, that reasoning doesn’t apply to God, because God is invisible”, which take priority over reflection. I guess these rules have a strong “first mover advantage”, so timing is critical.
So yeah, I guess most people are not, uhm, reflectively rational. And unreflective rationality (I guess on LW we wouldn’t call it “rationality”, but outside of LW that is the standard meaning of the word) leaves one susceptible to having new rules inserted under emotional pressure.
I don’t see why not. It is, basically, a set of perspectives, mental habits, and certain heuristics. People lose skills, forget knowledge, just change—why would rationality be exempt?
Habits and heuristics are what I’d call “rituals.”
I don’t know about that. A heuristic is definitely not a ritual—it’s not a behaviour pattern but just an imperfect tool for solving problems. And habits… I would probably consider rituals to be more rigid and more distanced from the actual purpose compared to mere habits.
Are perspectives something you can lose? I ask genuinely. It’s not something I can relate to.
Sure. You can think of them as habitual points of view. Or as default approaches to issues.
Can rationality be lost?

Sure, when formerly rational people declare some topic off limits to rationality because they don’t like the conclusions that are coming out. Of course, since all truths are entangled, that means you have to invent other lies to protect the ones you’ve already made. Ultimately you have to lie about the process of arriving at truth itself, which is how we get to things like feminist anti-epistemology.
These people are clearly unable to distinguish between “the territory” and “the person who talks about the territory”.
What about that sentence makes you think that the person isn’t able to make that distinction?
If you look at YCombinator, the semantics are a bit different but the message isn’t that different. YCombinator also talks about how diversity is important.

The epistemic method they teach founders is not to think abstractly about a topic and engage with it analytically, but to speak to people and understand their unique experiences and views of the world.

David Chapman’s article on the phenomenon is also quite good.
It’s interesting how the link you posted talks about the importance of using the right metaphors, while at the same time you object to my conclusion that people saying “logic itself has a white supremacist history” can’t distinguish between the topic and the people who talk about the topic.
To explain my position, I believe that anyone who says either “logic is sexist and racist” or “I am going to rape this equation” should visit a therapist.
I believe that anyone who says either “logic is sexist and racist” or “I am going to rape this equation”
Nobody linked here says either of those things. In particular, the original blog post says about logic:
This is not to say it is not useful; it is. But it does not exist in a vacuum and should not be sanctified.
The argument isn’t that logic is inherently sexist and racist and therefore bad, but that it’s frequently used in places where there are other viable alternatives. Using it in those places can be driven by sexism or racism.
The argument isn’t that logic is inherently sexist and racist and therefore bad, but that it’s frequently used in places where there are other viable alternatives.

Such as?
Interviewing lots of people to understand their viewpoints; not having conversations with them to show them where they are wrong, but being non-judgemental. That’s basically what YC teaches.
Reasoning by analogy is useful in some cases.

There’s a huge class of expert decisions that are made via intuition.

Using a technique like Gendlin’s Focusing would be a way to get to solutions that isn’t based on logic.