But Alice only touches these issues in the most casual way, in passing, and skates right past them. She should be hammering Bob on this point! Her behavior seems weird, in this context.
I agree with your sense that they should be directly arguing about “what are the standards implied by ‘calling yourself a rationalist’ or ‘saying you’re interested in EA’?”. I think that they are closer to having that argument than not having it, tho.
I think the difficulty is that the conversation they’re having is happening at multiple levels, dealing with both premises and implications, and it’s generally jumbled together instead of laid out cleanly (in a way that makes the conversation more natural if Alice and Bob have context on each other, but stranger to read without that context).
Looking at the first statement by Alice:
You know, Bob, you claim to really care about improving the world, but you don’t seem to donate as much as you could or to use your time very effectively. Maybe you should donate that money rather than getting takeout tonight?
In my eyes, this is pretty close to your proposed starter for Alice:
You are making these-and-such claims, in public, but they’re lies, Bob! Lies! Or, at the very least, deceptions! You’re trying to belong to this community [of EAs / rationalists], but you’re not doing any of the things that we, the existing members, take to be determinative of membership! You claim to be a utilitarian, but you’re clearly not! Words have meanings, Bob! Quit trying to grab status that you’re not entitled to!
The main difference is that Alice’s version seems like it’s trying to balance “enforcing the boundary” and “helping Bob end up on the inside”. She’s not (initially) asking Bob to become a copy of her; she’s proposing a specific concrete action tied to one of Bob’s stated values, suggesting a way that he could make his self-assessments more honest.
Now, the next step in the conversation (after Bob rejected Alice’s bid to both suggest courses of action and evaluate how well he conforms to community standards) could have been for Alice to say “well, I’d rather you not lie about being one of us.” (And, indeed, it looks to me like Alice says as much in her 4th comment.)
The remaining discussion is mostly about whether or not Alice’s interpretation of the community standards is right. Given that many of the standards are downstream of empirical facts (like which working styles are most productive instead of demonstrating the most loyalty or w/e), it makes sense that Alice couldn’t just say “you’re not working hard enough” and instead needs to justify her belief that the standard is where she thinks it is. (And, indeed, if Bob in fact cannot work harder then Alice doesn’t want to push him past his limits—she just doesn’t yet believe that his limits are where he claims they are.)
That is, I think there’s a trilemma: either Bob says he’s not an EA/rationalist/etc., Bob behaves in an EA/rationalist/etc. way, or Bob defends his standards to Alice / whatever other gatekeeper (or establishes that they are not qualified to be a gatekeeper). I think Bob’s strategy is mostly denying the standards / denying Alice’s right to gatekeep, but this feels to me like the sort of thing that they should in fact be able to argue about, instead of Bob being right-by-default. Like, Bob’s first point is “Alice, you’re being rude” and Alice’s response is “being this sort of rude is EA”!
You know, Bob, you claim to really care about improving the world, but you don’t seem to donate as much as you could or to use your time very effectively. Maybe you should donate that money rather than getting takeout tonight?
In my eyes, this is pretty close to your proposed starter for Alice:
Hm, I don’t think those are very close. After all, suppose we imagine me in Bob’s place, having this conversation with the same fictional Alice. I could respond thus:
“Yes, I really care about improving the world. But why should that imply donating more, or using my time differently? I am acting in a way that my principles dictate. You claim that ‘really caring about the world’ implies that I should act as you want me to act, but I just don’t agree with you about that.”
Now, one imagines that Alice wouldn’t start such a conversation with me in the first place, as I am not, nor claim to be, an “Effective Altruist”, or any such thing.[1] But here again we come to the same result: that the point of contention between Bob and Alice is Bob’s self-assignment to certain distinctly identified groups or communities, not his claim to hold some general or particular values.
The remaining discussion is mostly about whether or not Alice’s interpretation of the community standards is right.
Well, there is also all the stuff (specifically called out as important by the OP, in the grandparent comment) about Alice’s needs and wants and so forth.
I think the difficulty is that the conversation they’re having is happening at multiple levels, dealing with both premises and implications, and it’s generally jumbled together instead of laid out cleanly (in a way that makes the conversation more natural if Alice and Bob have context on each other, but stranger to read without that context).
Sure, maybe, but that mostly just points to the importance of being clear on what a discussion is about. Note that Alice flitting from topic to topic, neither striving for clarity nor allowing herself to be pressed on any point, is also quite realistic, and is characteristic of untrustworthy debaters.
Like, Bob’s first point is “Alice, you’re being rude” and Alice’s response is “being this sort of rude is EA”!
If this is true, then so much the worse for EA!
When I condemn Alice’s behavior, that condemnation does not contain an “EA exemption”, like “this behavior is bad, but if you slap the ‘EA’ label on it, then it’s not bad after all”. On the contrary, if the label is accurate, then my condemnation extends to EA itself.
[1] Although I could certainly claim to be an effective altruist (note the lowercase), and such a claim would be true, as far as it goes. I don’t actually do this because it’s needlessly confusing, and nothing really hinges on such a claim.
Right, and then you and Alice could get into the details. I think this is roughly what Alice is trying to do with Bob (“here’s what I believe and why I believe it”) and Bob is trying to make the conversation not happen because it is about Bob.
And so there’s an interesting underlying disagreement, there! Bob believes in a peace treaty where people don’t point out each other’s flaws, and Alice believes in a high-performing-team culture where people point out each other’s flaws so that they can be fixed. To the extent that the resolution is just “yeah, I prefer the peace treaty to the mutual flaw inspection”, the conversation doesn’t have to be very long.
But, like, my impression is that a lot of rationalist culture is about this sort of mutual flaw inspection, and there are fights between people who prefer that style and people who prefer a more ‘peace treaty’ style. I think that’s the same sort of conversation that’s happening here.
Well, there is also all the stuff (specifically called out as important by the OP, in the grandparent comment) about Alice’s needs and wants and so forth.
Sure—in my read, Alice’s needs and wants and so forth are, in part, the generators of the ‘community standards’. (If Alice were better off with lots of low-performers around to feel superior to, instead of with lots of high-performers around to feel comparable to, then one imagines Alice would instead prefer ‘big-tent EA’ membership definitions.)
On the contrary, if the label is accurate, then my condemnation extends to EA itself.
I think this part of EA makes it ‘sharp’ which is pretty ambivalent.
If I’m reading you correctly, the main thing that’s going on here to condemn about Alice is that she’s doing some mixture of:
1. Setting herself as the judge of Bob without his consent or some external source of legitimacy
2. Being insufficiently clear about her complaints and judgments
I broadly agree with 2 (because basically anything can always be clearer) tho I think this is, like, a realistic level of clarity. I think 1 is unclear because it’s one of the points of disagreement—does Bob saying that he’s “interested in EA” or “really cares about improving the world” give Alice license to provide him with unsolicited (and, indeed, anti-solicited!) criticism?
[Noting that Alice would be quick to point out Bob’s interest in not having to change himself would also put a finger on the scales here.]
Right, and then you and Alice could get into the details.
But that’s just the thing—I wouldn’t be interested in getting into the details. My hypothetical response was meant to ward Alice off, not to engage with her. The subtext (which could be made into text, if need be—i.e., if Alice persists) is “I’m not an EA and won’t become an EA, so please take your sales pitch elsewhere”. The expected result is that Alice loses interest and goes off to find a likely-looking Bob.
I think this is roughly what Alice is trying to do with Bob (“here’s what I believe and why I believe it”) and Bob is trying to make the conversation not happen because it is about Bob.
The conversation as written doesn’t seem to me to support this reading. Alice steadfastly resists Bob’s attempts to turn the topic around to what she believes, her actions, etc., and instead relentlessly focuses on Bob’s beliefs, his alleged hypocrisy, etc.
But, like, my impression is that a lot of rationalist culture is about this sort of mutual flaw inspection, and there are fights between people who prefer that style and people who prefer a more ‘peace treaty’ style. I think that’s the same sort of conversation that’s happening here.
Well, for one thing, I’ll note that I’m not much of a fan of this “mutual flaw inspection”, either. The proper alternative, in my view, isn’t any sort of “peace treaty”, but rather a “person-interface” approach.
More importantly, though, any sort of “mutual flaw inspection” has got to be opted into. Otherwise you’re just accosting random people to berate them about their flaws. That’s not praiseworthy behavior.
On the contrary, if the label is accurate, then my condemnation extends to EA itself.
I think this part of EA makes it ‘sharp’ which is pretty ambivalent.
Sorry, I don’t think I get the meaning here. Could you rephrase?
If I’m reading you correctly, the main thing that’s going on here to condemn about Alice is that she’s doing some mixture of:
1. Setting herself as the judge of Bob without his consent or some external source of legitimacy
2. Being insufficiently clear about her complaints and judgments
Yes, basically this.
Let me emphasize again what the problem is:
does Bob saying that he’s “interested in EA” or “really cares about improving the world” give Alice license to provide him with unsolicited (and, indeed, anti-solicited!) criticism?
The behavior I was referring to, specifically, is not rudeness (or else I’d have quoted Alice’s first comment, not her second one), but rather Alice taking as given the assumption that she has some sort of claim on Bob’s reasons for his actions—that Bob has some obligation to explain himself, to justify his actions and his reasons, to Alice. It is that assumption which must be firmly and implacably rejected at once.
Criticism, per se, is not the central issue (although anti-solicited criticism is almost always rude, if nothing else).
Sorry, I don’t think I get the meaning here. Could you rephrase?
I think EA is a mixture of ‘giving people new options’ (we found a cool new intervention!) and ‘removing previously held options’; it involves cutting to the heart of things, and also cutting things out of your life. The core beliefs do not involve much in the way of softness or malleability to individual autonomy. (I think people have since developed a bunch of padding so that they can live with it more easily.)
Like, EA is about deprioritizing ‘ineffective’ approaches in favor of ‘effective’ approaches. This is both rough (for the ineffective approaches and people excited about them) and also the mechanism of action by which EA does any good (in the same way that capitalism does well in part by having companies go out of business when they’re less good at deploying capital than others).
Hmm, I see. Well, I agree with your first paragraph but not with your second. That is, I do not think that selection of approaches is the core, to say nothing of the entirety, of what EA is. This is a major part of my problem with EA as a movement and an ideology.
However, that is perhaps a digression we can avoid. More relevant is that none of this seems to me to require, or even to motivate, “being this sort of rude”. It’s all very well to “remove previously held options” and otherwise be “rough” to the beliefs and values of people who come to EA looking for guidance and answers, but to impose these things on people who manifestly aren’t interested is… not justifiable behavior, it seems to me.
(And, again, this is quite distinct from the question of accepting or rejecting someone from some group or what have you, or letting their false claims to have some praiseworthy quality stand unchallenged, etc.)
Just noting here that I broadly agree with Said’s position throughout this comment thread.