(To put this another way: it seems like you missed an important part of the thesis of the piece*, which is that there are no interactions between two people with the exact same culture. While it is in fact the case that some people work differently (e.g. Scott’s discussion of high-trust vs. low-trust cultures) and will reliably hear you to be making claims about the context culture if you’re not extremely exact, and therefore it’s important to be clear and careful and say a few more words to delineate your claims about the context culture from your claims about your own personal sense of what-is-ideal …
… while it seems true that you should take that into account, on a practical level, it seems that if you have done all that work, and someone reacts hostilely to you as if you are making some other claim …
… as far as I can tell, in the Berkeley rationalist context culture, the one that most of us agree upon so we can get along with each other, the person who sort of … refused to believe that I meant what I said? … is the one who’s doing something hostile.
Or at least, it seems to me that there’s a principle of “don’t claim you understand better than others what’s going on in their heads” in the shared context of people you and I hang out with. But maybe I’m mistaken? Maybe this is not the case, and in fact that is just another piece of my personal culture?
*or you didn’t miss it yourself, but you’re pointing out that it’s subtle and therefore it gets missed in practice a lot
To be clear, I’m not making the claim that what I described above is an endorsed or correct experience, just how I’ve actually encountered it in practice at times. I’ll try to keep track of my impressions when I encounter this sort of thing in the future, and take what you’ve said here into account.
Or at least, it seems to me that there’s a principle of “don’t claim you understand better than others what’s going on in their heads” in the shared context of people you and I hang out with. But maybe I’m mistaken? Maybe this is not the case, and in fact that is just another piece of my personal culture?
My read on the context culture is that this isn’t very agreed upon, and/or depends a lot on context. (I had a sense that this particular point was probably the thing that triggered this entire post, but was waiting to talk about it until I had time to think seriously about it.)
[Flagging: what follows is my read on the rationalist context culture, which… somewhat ironically can’t make much use of the technique suggested in the OP. I’m trying to stick to descriptive claims about what I’ve observed, and a couple of if-then statements which I think are locally valid]
A founding principle of the rationality community is “people are biased and confused a lot, even smart people, even smart people who’ve thought about it a bit”. So it seemed to me that if the rationality community was going to succeed at the goal of “help people become less confused and more rational”, it’s necessary to have some kind of social move in the space of “I think you’re more confused or blind-spotted than you realize”, at least sometimes.
But it’s also even easier to be wrong about what’s going on in someone else’s head than about what’s going on in your own head. And there are also sometimes incentives to use “I think someone is being confused” as a social weapon. And making a claim like that and getting it wrong carries real costs.
My observations are that rationalists do sometimes do this (in Berkeley and on LW and elsewhere), and it often goes poorly unless there is a lot of trust or a lot of effort is put in, but it doesn’t feel like there’s anything like the collective immune response that I’d expect to see if it were an established norm.
Similar caveats to Ray’s, re: this is more fraught, since here I am trying to describe my observations of the context culture, as opposed to things I’m relatively sure about because they live inside my head. These are not normative statements/shoulds, they’re just “in my experience”s.
it’s necessary to have some kind of social move in the space of “I think you’re more confused or blind-spotted than you realize”, at least sometimes.
Strong agree. It seems to me that the additional bit that makes this prosocial instead of a weapon is something like:
I notice that I’ve got a hypothesis forming, that you’re more confused or blind-spotted than you realize. I started to form this hypothesis when I saw X, Y, and Z, which I interpreted to mean A, B, and C. This hypothesis causes me to predict that, if I hadn’t said anything, you would’ve responded to M with N, which would’ve been miscalibrated for reasons 1 and 2. If I saw you doing G, I would definitely update away from this hypothesis, and certainly G is not the only thing that would shift me. I want to now be open to hearing your response or counterargument; this is not a mic drop.
… where the two key pieces of the above are:
1) distinguishing between a hypothesis and a fact, or between a claim and an assertion. It seems non-rude and at least possibly non-aggressive/non-invalidating/non-weaponized to say “I’m considering [your blindness/biased-ness] among many possibilities,” whereas it seems pretty much guaranteed to be taken-as-rude or taken-as-an-attempt-to-delegitimize to just flatly state “Yeah, you’re [blind/biased].”
2) creating surface area/showing the gears of your hypothesis/sticking your neck out and making what you’ve said falsifiable. There are hints of cruxes not only in G, but also in X, Y, and Z, which someone may convincingly argue you misunderstood or misinterpreted or misremembered.
In the swath of the EA/rationalist community that I have the most exposure to (i.e., among the hundred or so Berkelanders that I’ve interacted with in the past year), the social move of having a hypothesis is one that is acceptable when used with clear care and respect, and the social move of claiming to know is one that is frowned upon. In other words, I’ve seen people band together in rejection of the latter, and I’ve heard many different people on many different occasions say things like my fake quote paragraph above.
This also seems to me to be correct, and is part of what I came here for (where “here” is the rationalist community). I notice that my expectation of such (in swathes of the community where that is not the norm) has gotten me into fights, in the past.
Random additional note: introspection is a skill, and extrospection is a skill, and part of what feeds into my “it seems like this is complicated” belief is that people can be good or bad at both, and common knowledge about who is good or bad at either is hard to establish.