Word of God, as the creator of both Alice and Bob: Bob really does claim to be an EA, want to belong to EA communities, say he’s a utilitarian, claim to be a rationalist, call himself a member of the rationalist community, etc. Alice isn’t lying or wrong about any of that. (You can get all “death of the author” and analyse the text as though Bob isn’t a rationalist/EA if you really want, but I think that would make for a less productive discussion with other commenters.)
Speaking for myself personally, I’d definitely prefer that people came and said “hey we need you to improve or we’ll kick you out” to my face, rather than going behind my back and starting a whisper campaign to kick me out of a group. So if I were Bob, I definitely wouldn’t want Alice to just go talk to Carol and Dave without talking to me first!
But more importantly, I think there’s a part of the dialogue you’re not engaging with. Alice claims to need or want certain things; she wants to surround herself with similarly-ethical people who normalise and affirm her lifestyle so that it’s easier for her to keep up, she wants people to call her out if she’s engaging in biased or motivated reasoning about how many resources she can devote to altruism or how hard she can work, she wants Bob to be honest with her, etc. In your view, is it ever acceptable for her to criticise Bob? Is there any way for her to get what she wants which is, in your eyes, morally acceptable? If it’s never morally acceptable to tell people they’re wrong about beliefs like “I can’t work harder than this”, how do you make sure those beliefs track truth?
Those questions aren’t rhetorical; the dialogue isn’t supposed to have a clear hero/villain dynamic. If you have a really awesome technique for calibrating beliefs about how much you can contribute which doesn’t require any input from anyone else, then that sounds super useful and I’d like to hear about it!
Word of God, as the creator of both Alice and Bob: …
Fair enough, but this is new information, not included in the post. So, all responses prior to you posting this explanatory comment can’t have taken it into account. (Perhaps you might make an addendum to the post, with this clarification? It significantly changes the context of the conversation!)
However, there is then the problem that if we assume what you’ve just added to be true, then the depicted conversation is rather odd. Why isn’t Alice focusing on these claims of Bob’s? After all, they’re the real problem! Alice should be saying:
“You are making these-and-such claims, in public, but they’re lies, Bob! Lies! Or, at the very least, deceptions! You’re trying to belong to this community [of EAs / rationalists], but you’re not doing any of the things that we, the existing members, take to be determinative of membership! You claim to be a utilitarian, but you’re clearly not! Words have meanings, Bob! Quit trying to grab status that you’re not entitled to!”
And so on. But Alice only touches these issues in the most casual way, in passing, and skates right past them. She should be hammering Bob on this point! Her behavior seems weird, in this context.
Now, Bob might very well respond with something like:
“Just who appointed you the gatekeeper of these identities, eh, Alice? Please display for me your ‘Official Enforcer of Who Gets To Call Themselves a Rationalist / EA / Utilitarian’ badge!”
And at that point, Alice would do well to dismiss talking to Bob as a lost cause, and convene at once the meeting of true EAs / rationalists / etc., to discuss the question of public shunning.
Speaking for myself personally, I’d definitely prefer that people came and said “hey we need you to improve or we’ll kick you out” to my face, rather than going behind my back and starting a whisper campaign to kick me out of a group. So if I were Bob, I definitely wouldn’t want Alice to just go talk to Carol and Dave without talking to me first!
That’s as may be, but Bob makes clear right at the start of the conversation (and then again several times afterwards) that he’s not really interested in being lectured like this. He just lacks the spine to enforce his boundaries. And Alice takes advantage. But the “whisper campaign” concern is misplaced.
Of course, as I say above, Alice doesn’t exactly make it clear that this whole thing is really about claiming group membership that you don’t properly have or deserve. Alice frames the whole thing as… well, various things, like policing Bob’s morality for his own good, her own “needs”, etc. She seems confused, so it’s only natural that Bob would also not get a clear idea of what the real point of the conversation is. If Alice were to approach the matter as I describe above, there would be no problem.
But more importantly, I think there’s a part of the dialogue you’re not engaging with. Alice claims to need or want certain things …
I’m not engaging with it because it seems totally irrelevant, and not any of Bob’s concern. Bob’s response to these complaints should be:
“Why are you telling me about any of this? Are you asking for my help, as a favor? Are you proposing a trade, where I help you achieve these goals of yours, and you offer me something I want in return? Or what? Otherwise it seems like you’re just telling me a list of things you want, then proceeding to try to force me to act in such a way that you get those things that you want. What do I get out of any of this? Why shouldn’t I tell you to go take a hike?”
In your view, is it ever acceptable for her to criticise Bob?
Sure. There are lots of contexts in which it’s acceptable to criticize someone. This is really much too broad a question to usefully address.
Is there any way for her to get what she wants which is, in your eyes, morally acceptable?
It sounds like Alice wants to build a community of people, with certain characteristics. This is fine and well. She should focus on that goal (see above), and not distract herself with irrelevancies, like policing the morality of random people who aren’t interested in her project.
If it’s never morally acceptable to tell people they’re wrong about beliefs like “I can’t work harder than this”, how do you make sure those beliefs track truth?
Why should it be any of your business whether other people’s beliefs about whether they can work harder track truth?
You can’t force people to care about the things that you care about. You can, and should, work together with other people who care about those same things, to achieve your mutual goals. That’s what Alice (and all who are like her) should be focusing on: finding like-minded people, forming groups and communities of such, maintaining said groups and communities, and working within them to achieve their goals. Bobs should be excluded if they’re interfering, left alone otherwise.
But Alice only touches these issues in the most casual way, in passing, and skates right past them. She should be hammering Bob on this point! Her behavior seems weird, in this context.
I agree with your sense that they should be directly arguing about “what are the standards implied by ‘calling yourself a rationalist’ or ‘saying you’re interested in EA’?”. I think that they are closer to having that argument than not having it, tho.
I think the difficulty is that the conversation they’re having is happening at multiple levels, dealing with both premises and implications, and it’s generally jumbled together instead of laid out cleanly (in a way that makes the conversation more natural if Alice and Bob have context on each other, but makes it read more strangely without that context).
Looking at the first statement by Alice:
You know, Bob, you claim to really care about improving the world, but you don’t seem to donate as much as you could or to use your time very effectively. Maybe you should donate that money rather than getting takeout tonight?
In my eyes, this is pretty close to your proposed starter for Alice:
You are making these-and-such claims, in public, but they’re lies, Bob! Lies! Or, at the very least, deceptions! You’re trying to belong to this community [of EAs / rationalists], but you’re not doing any of the things that we, the existing members, take to be determinative of membership! You claim to be a utilitarian, but you’re clearly not! Words have meanings, Bob! Quit trying to grab status that you’re not entitled to!
The main difference is that Alice’s version seems like it’s trying to balance “enforcing the boundary” and “helping Bob end up on the inside”. She’s not (initially) asking Bob to become a copy of her; she’s proposing a specific concrete action tied to one of Bob’s stated values, suggesting a way that he could make his self-assessments more honest.
Now, the next step in the conversation (after Bob rejected Alice’s bid to both suggest courses of action and evaluate how well he conforms to community standards) could have been for Alice to say “well, I’d rather you not lie about being one of us.” (And, indeed, it looks to me like Alice says as much in her 4th comment.)
The remaining discussion is mostly about whether or not Alice’s interpretation of the community standards is right. Given that many of the standards are downstream of empirical facts (like which working styles are most productive instead of demonstrating the most loyalty or w/e), it makes sense that Alice couldn’t just say “you’re not working hard enough” and instead needs to justify her belief that the standard is where she thinks it is. (And, indeed, if Bob in fact cannot work harder then Alice doesn’t want to push him past his limits—she just doesn’t yet believe that his limits are where he claims they are.)
That is, I think there’s a trilemma: either Bob says he’s not an EA/rationalist/etc., Bob behaves in an EA/rationalist/etc. way, or Bob defends his standards to Alice / whatever other gatekeeper (or establishes that they are not qualified to be a gatekeeper). I think Bob’s strategy is mostly denying the standards / denying Alice’s right to gatekeep, but this feels to me like the sort of thing that they should in fact be able to argue about, instead of Bob being right-by-default. Like, Bob’s first point is “Alice, you’re being rude” and Alice’s response is “being this sort of rude is EA”!
You know, Bob, you claim to really care about improving the world, but you don’t seem to donate as much as you could or to use your time very effectively. Maybe you should donate that money rather than getting takeout tonight?
In my eyes, this is pretty close to your proposed starter for Alice:
Hm, I don’t think those are very close. After all, suppose we imagine me in Bob’s place, having this conversation with the same fictional Alice. I could respond thus:
“Yes, I really care about improving the world. But why should that imply donating more, or using my time differently? I am acting in a way that my principles dictate. You claim that ‘really caring about the world’ implies that I should act as you want me to act, but I just don’t agree with you about that.”
Now, one imagines that Alice wouldn’t start such a conversation with me in the first place, as I am not, nor claim to be, an “Effective Altruist”, or any such thing.[1] But here again we come to the same result: that the point of contention between Bob and Alice is Bob’s self-assignment to certain distinctly identified groups or communities, not his claim to hold some general or particular values.
The remaining discussion is mostly about whether or not Alice’s interpretation of the community standards is right.
Well, there is also all the stuff (specifically called out as important by the OP, in the grandparent comment) about Alice’s needs and wants and so forth.
I think the difficulty is that the conversation they’re having is happening at multiple levels, dealing with both premises and implications, and it’s generally jumbled together instead of laid out cleanly (in a way that makes the conversation more natural if Alice and Bob have context on each other, but makes it read more strangely without that context).
Sure, maybe, but that mostly just points to the importance of being clear on what a discussion is about. Note that Alice flitting from topic to topic, neither striving for clarity nor allowing herself to be pressed on any point, is also quite realistic, and is characteristic of untrustworthy debaters.
Like, Bob’s first point is “Alice, you’re being rude” and Alice’s response is “being this sort of rude is EA”!
If this is true, then so much the worse for EA!
When I condemn Alice’s behavior, that condemnation does not contain an “EA exemption”, like “this behavior is bad, but if you slap the ‘EA’ label on it, then it’s not bad after all”. On the contrary, if the label is accurate, then my condemnation extends to EA itself.
Although I could certainly claim to be an effective altruist (note the lowercase), and such a claim would be true, as far as it goes. I don’t actually do this because it’s needlessly confusing, and nothing really hinges on such a claim.
Right, and then you and Alice could get into the details. I think this is roughly what Alice is trying to do with Bob (“here’s what I believe and why I believe it”) and Bob is trying to make the conversation not happen because it is about Bob.
And so there’s an interesting underlying disagreement, there! Bob believes in a peace treaty where people don’t point out each other’s flaws, and Alice believes in a high-performing-team culture where people point out each other’s flaws so that they can be fixed. To the extent that the resolution is just “yeah, I prefer the peace treaty to the mutual flaw inspection”, the conversation doesn’t have to be very long.
But, like, my impression is that a lot of rationalist culture is about this sort of mutual flaw inspection, and there are fights between people who prefer that style and people who prefer a more ‘peace treaty’ style. I think that’s the same sort of conversation that’s happening here.
Well, there is also all the stuff (specifically called out as important by the OP, in the grandparent comment) about Alice’s needs and wants and so forth.
Sure—in my read, Alice’s needs and wants and so forth are, in part, the generators of the ‘community standards’. (If Alice were better off with lots of low-performers around to feel superior to, instead of with lots of high-performers around to feel comparable to, then one imagines Alice would instead prefer ‘big-tent EA’ membership definitions.)
On the contrary, if the label is accurate, then my condemnation extends to EA itself.
I think this part of EA makes it ‘sharp’ which is pretty ambivalent.
If I’m reading you correctly, the main thing that’s going on here to condemn about Alice is that she’s doing some mixture of:
Setting herself as the judge of Bob without his consent or some external source of legitimacy
Being insufficiently clear about her complaints and judgments
I broadly agree with 2 (because basically anything can always be clearer) tho I think this is, like, a realistic level of clarity. I think 1 is unclear because it’s one of the points of disagreement—does Bob saying that he’s “interested in EA” or “really cares about improving the world” give Alice license to provide him with unsolicited (and, indeed, anti-solicited!) criticism?
[Noting that Alice would be quick to point out Bob’s interest in not having to change himself would also put a finger on the scales here.]
Right, and then you and Alice could get into the details.
But that’s just the thing—I wouldn’t be interested in getting into the details. My hypothetical response was meant to ward Alice off, not to engage with her. The subtext (which could be made into text, if need be—i.e., if Alice persists) is “I’m not an EA and won’t become an EA, so please take your sales pitch elsewhere”. The expected result is that Alice loses interest and goes off to find a likely-looking Bob.
I think this is roughly what Alice is trying to do with Bob (“here’s what I believe and why I believe it”) and Bob is trying to make the conversation not happen because it is about Bob.
The conversation as written doesn’t seem to me to support this reading. Alice steadfastly resists Bob’s attempts to turn the topic around to what she believes, her actions, etc., and instead relentlessly focuses on Bob’s beliefs, his alleged hypocrisy, etc.
But, like, my impression is that a lot of rationalist culture is about this sort of mutual flaw inspection, and there are fights between people who prefer that style and people who prefer a more ‘peace treaty’ style. I think that’s the same sort of conversation that’s happening here.
Well, for one thing, I’ll note that I’m not much of a fan of this “mutual flaw inspection”, either. The proper alternative, in my view, isn’t any sort of “peace treaty”, but rather a “person-interface” approach.
More importantly, though, any sort of “mutual flaw inspection” has got to be opted into. Otherwise you’re just accosting random people to berate them about their flaws. That’s not praiseworthy behavior.
On the contrary, if the label is accurate, then my condemnation extends to EA itself.
I think this part of EA makes it ‘sharp’ which is pretty ambivalent.
Sorry, I don’t think I get the meaning here. Could you rephrase?
If I’m reading you correctly, the main thing that’s going on here to condemn about Alice is that she’s doing some mixture of:
Setting herself as the judge of Bob without his consent or some external source of legitimacy
Being insufficiently clear about her complaints and judgments
Yes, basically this.
does Bob saying that he’s “interested in EA” or “really cares about improving the world” give Alice license to provide him with unsolicited (and, indeed, anti-solicited!) criticism?
The behavior I was referring to, specifically, is not rudeness (or else I’d have quoted Alice’s first comment, not her second one), but rather Alice taking as given the assumption that she has some sort of claim on Bob’s reasons for his actions—that Bob has some obligation to explain himself, to justify his actions and his reasons, to Alice. It is that assumption which must be firmly and implacably rejected at once.
Criticism, per se, is not the central issue (although anti-solicited criticism is almost always rude, if nothing else).
Sorry, I don’t think I get the meaning here. Could you rephrase?
I think EA is a mixture of ‘giving people new options’ (we found a cool new intervention!) and ‘removing previously held options’; it involves cutting to the heart of things, and also cutting things out of your life. The core beliefs do not involve much in the way of softness or malleability to individual autonomy. (I think people have since developed a bunch of padding so that they can live with it more easily.)
Like, EA is about deprioritizing ‘ineffective’ approaches in favor of ‘effective’ approaches. This is both rough (for the ineffective approaches and people excited about them) and also the mechanism of action by which EA does any good (in the same way that capitalism does well in part by having companies go out of business when they’re less good at deploying capital than others).
Hmm, I see. Well, I agree with your first paragraph but not with your second. That is, I do not think that selection of approaches is the core, to say nothing of the entirety, of what EA is. This is a major part of my problem with EA as a movement and an ideology.
However, that is perhaps a digression we can avoid. More relevant is that none of this seems to me to require, or even to motivate, “being this sort of rude”. It’s all very well to “remove previously held options” and otherwise be “rough” to the beliefs and values of people who come to EA looking for guidance and answers, but to impose these things on people who manifestly aren’t interested is… not justifiable behavior, it seems to me.
(And, again, this is quite distinct from the question of accepting or rejecting someone from some group or what have you, or letting their false claims to have some praiseworthy quality stand unchallenged, etc.)
If Bob asked this question, it would show he’s misunderstanding the point of Alice’s critique—unless I’m missing something, she claims he should, morally speaking, act differently.
Responding “What do I get out of any of this?” to that kind of critique is either a misunderstanding, or a rejection of morality (“I don’t care if I should be, morally speaking, doing something else, because I prefer to maximize my own utility.”).
Edit: Or also, possibly, a rejection of Alice (“You are so annoying that I’ll pretend this conversation is about something else to make you go away.”).
Please reread my comment more carefully. That part (Bob’s “what do I get out of any of this” response) was specifically about Alice’s commentary on her personal wants/needs, i.e. the specifically non-moral aspect of Alice’s array of criticisms.
Word of God, as the creator of both Alice and Bob: Bob really does claim to be an EA, want to belong to EA communities, say he’s a utilitarian, claim to be a rationalist, call himself a member of the rationalist community, etc. Alice isn’t lying or wrong about any of that. (You can get all “death of the author” and analyse the text as though Bob isn’t a rationalist/EA if you really want, but I think that would make for a less productive discussion with other commenters.)
Speaking for myself personally, I’d definitely prefer that people came and said “hey we need you to improve or we’ll kick you out” to my face, rather than going behind my back and starting a whisper campaign to kick me out of a group. So if I were Bob, I definitely wouldn’t want Alice to just go talk to Carol and Dave without talking to me first!
But more importantly, I think there’s a part of the dialogue you’re not engaging with. Alice claims to need or want certain things; she wants to surround herself with similarly-ethical people who normalise and affirm her lifestyle so that it’s easier for her to keep up, she wants people to call her out if she’s engaging in biased or motivated reasoning about how many resources she can devote to altruism or how hard she can work, she wants Bob to be honest with her, etc. In your view, is it ever acceptable for her to criticise Bob? Is there any way for her to get what she wants which is, in your eyes, morally acceptable? If it’s never morally acceptable to tell people they’re wrong about beliefs like “I can’t work harder than this”, how do you make sure those beliefs track truth?
Those questions aren’t rhetorical; the dialogue isn’t supposed to have a clear hero/villain dynamic. If you have a really awesome technique for calibrating beliefs about how much you can contribute which doesn’t require any input from anyone else, then that sounds super useful and I’d like to hear about it!
Fair enough, but this is new information, not included in the post. So, all responses prior to you posting this explanatory comment can’t have taken it into account. (Perhaps you might make an addendum to the post, with this clarification? It significantly changes the context of the conversation!)
However, there is then the problem that if we assume what you’ve just added to be true, then the depicted conversation is rather odd. Why isn’t Alice focusing on these claims of Bob’s? After all, they’re the real problem! Alice should be saying:
“You are making these-and-such claims, in public, but they’re lies, Bob! Lies! Or, at the very least, deceptions! You’re trying to belong to this community [of EAs / rationalists], but you’re not doing any of the things that we, the existing members, take to be determinative of membership! You claim to be a utilitarian, but you’re clearly not! Words have meanings, Bob! Quit trying to grab status that you’re not entitled to!”
And so on. But Alice only touches these issues in the most casual way, in passing, and skates right past them. She should be hammering Bob on this point! Her behavior seems weird, in this context.
Now, Bob might very well respond with something like:
“Just who appointed you the gatekeeper of these identities, eh, Alice? Please display for me your ‘Official Enforcer of Who Gets To Call Themselves a Rationalist / EA / Utilitarian’ badge!”
And at that point, Alice would do well to dismiss talking to Bob as a lost cause, and convene at once the meeting of true EAs / rationalists / etc., to discuss the question of public shunning.
That’s as may be, but Bob makes clear right at the start of the conversation (and then again several times afterwards) that he’s not really interested in being lectured like this. He just lacks the spine to enforce his boundaries. And Alice takes advantage. But the “whisper campaign” concern is misplaced.
Of course, as I say above, Alice doesn’t exactly make it clear that this whole thing is really about claiming group membership that you don’t properly have or deserve. Alice frames the whole thing as… well, various things, like policing Bob’s morality for his own good, her own “needs”, etc. She seems confused, so it’s only natural that Bob would also not get a clear idea of what the real point of the conversation is. If Alice were to approach the matter as I describe above, there would be no problem.
I’m not engaging with it because it seems totally irrelevant, and not any of Bob’s concern. Bob’s response to these complaints should be:
“Why are you telling me about any of this? Are you asking for my help, as a favor? Are you proposing a trade, where I help you achieve these goals of yours, and you offer me something I want in return? Or what? Otherwise it seems like you’re just telling me a list of things you want, then proceeding to try to force me to act in such a way that you get those things that you want. What do I get out of any of this? Why shouldn’t I tell you to go take a hike?”
Sure. There’s lots of contexts in which it’s acceptable to criticize someone. This is really much too broad a question to usefully address.
It sounds like Alice wants to build a community of people, with certain characteristics. This is fine and well. She should focus on that goal (see above), and not distract herself with irrelevancies, like policing the morality of random people who aren’t interested in her project.
Why should it be any of your business whether other people’s beliefs about whether they can work harder track truth?
You can’t force people to care about the things that you care about. You can, and should, work together with other people who care about those same things, to achieve your mutual goals. That’s what Alice (and all who are like her) should be focusing on: finding like-minded people, forming groups and communities of such, maintaining said groups and communities, and working within them to achieve their goals. Bobs should be excluded if they’re interfering, left alone otherwise.
I agree with your sense that they should be directly arguing about “what are the standards implied by ‘calling yourself a rationalist’ or ‘saying you’re interested in EA’?”. I think that they are closer to having that argument than not having it, tho.
I think the difficulty is that the conversation they’re having is happening at multiple levels, dealing with both premises and implications, and it’s generally jumbled together instead of laid out cleanly (in a way that makes the conversation more natural, if Alice and Bob have context on each other, but read more strangely without that context).
Looking at the first statement by Alice:
In my eyes, this is pretty close to your proposed starter for Alice:
The main difference is that Alice’s version seems like it’s trying to balance “enforcing the boundary” and “helping Bob end up on the inside”. She’s not (initially) asking Bob to become a copy of her; she’s proposing a specific concrete action tied to one of Bob’s stated values, suggesting a way that he could make his self-assessments more honest.
Now, the next step in the conversation (after Bob rejected Alice’s bid to both suggest courses of action and evaluate how well he conforms to community standards) could have been for Alice to say “well, I’d rather you not lie about being one of us.” (And, indeed, it looks to me like Alice says as much in her 4th comment.)
The remaining discussion is mostly about whether or not Alice’s interpretation of the community standards is right. Given that many of the standards are downstream of empirical facts (like which working styles are most productive instead of demonstrating the most loyalty or w/e), it makes sense that Alice couldn’t just say “you’re not working hard enough” and instead needs to justify her belief that the standard is where she thinks it is. (And, indeed, if Bob in fact cannot work harder then Alice doesn’t want to push him past his limits—she just doesn’t yet believe that his limits are where he claims they are.)
That is, I think there’s a trilemma: either Bob says he’s not an EA/rationalist/etc., Bob behaves in an EA/rationalist/etc. way, or Bob defends his standards to Alice / whatever other gatekeeper (or establishes that they are not qualified to be a gatekeeper). I think Bob’s strategy is mostly denying the standards / denying Alice’s right to gatekeep, but this feels to me like the sort of thing that they should in fact be able to argue about, instead of Bob being right-by-default. Like, Bob’s first point is “Alice, you’re being rude” and Alice’s response is “being this sort of rude is EA”!
Hm, I don’t think those are very close. After all, suppose we imagine me in Bob’s place, having this conversation with the same fictional Alice. I could respond thus:
“Yes, I really care about improving the world. But why should that imply donating more, or using my time differently? I am acting in a way that my principles dictate. You claim that ‘really caring about the world’ implies that I should act as you want me to act, but I just don’t agree with you about that.”
Now, one imagines that Alice wouldn’t start such a conversation with me in the first place, as I am not, nor claim to be, an “Effective Altruist”, or any such thing.[1] But here again we come to the same result: that the point of contention between Bob and Alice is Bob’s self-assignment to certain distinctly identified groups or communities, not his claim to hold some general or particular values.
Well, there is also all the stuff (specifically called out as important by the OP, in the grandparent comment) about Alice’s needs and wants and so forth.
Sure, maybe, but that mostly just points to the importance of being clear on what a discussion is about. Note that Alice flitting from topic to topic, neither striving for clarity nor allowing herself to be pressed on any point, is also quite realistic, and is characteristic of untrustworthy debaters.
If this is true, then so much the worse for EA!
When I condemn Alice’s behavior, that condemnation does not contain an “EA exemption”, like “this behavior is bad, but if you slap the ‘EA’ label on it, then it’s not bad after all”. On the contrary, if the label is accurate, then my condemnation extends to EA itself.
Although I could certainly claim to be an effective altruist (note the lowercase), and such a claim would be true, as far as it goes. I don’t actually do this because it’s needlessly confusing, and nothing really hinges on such a claim.
Right, and then you and Alice could get into the details. I think this is roughly what Alice is trying to do with Bob (“here’s what I believe and why I believe it”) and Bob is trying to make the conversation not happen because it is about Bob.
And so there’s an interesting underlying disagreement, there! Bob believes in a peace treaty where people don’t point out each other’s flaws, and Alice believes in a high-performing-team culture where people point out each other’s flaws so that they can be fixed. To the extent that the resolution is just “yeah, I prefer the peace treaty to the mutual flaw inspection”, the conversation doesn’t have to be very long.
But, like, my impression is that a lot of rationalist culture is about this sort of mutual flaw inspection, and there are fights between people who prefer that style and people who prefer a more ‘peace treaty’ style. I think that’s the same sort of conversation that’s happening here.
Sure—in my read, Alice’s needs and wants and so forth are, in part, the generators of the ‘community standards’. (If Alice was better off with lots of low-performers around to feel superior to, instead of with lots of high-performers around to feel comparable to, then one imagines Alice would instead prefer ‘big-tent EA’ membership definitions.)
I think this part of EA makes it ‘sharp’, which I feel pretty ambivalent about.
If I’m reading you correctly, the main thing that’s going on here to condemn about Alice is that she’s doing some mixture of:
Setting herself as the judge of Bob without his consent or some external source of legitimacy
Being insufficiently clear about her complaints and judgments
I broadly agree with 2 (because basically anything can always be clearer), though I think this is, like, a realistic level of clarity. I think 1 is unclear because it’s one of the points of disagreement—does Bob saying that he’s “interested in EA” or “really cares about improving the world” give Alice license to provide him with unsolicited (and, indeed, anti-solicited!) criticism?
[Noting that Alice would be quick to point out Bob’s interest in not having to change himself would also put a finger on the scales here.]
But that’s just the thing—I wouldn’t be interested in getting into the details. My hypothetical response was meant to ward Alice off, not to engage with her. The subtext (which could be made into text, if need be—i.e., if Alice persists) is “I’m not an EA and won’t become an EA, so please take your sales pitch elsewhere”. The expected result is that Alice loses interest and goes off to find a likely-looking Bob.
The conversation as written doesn’t seem to me to support this reading. Alice steadfastly resists Bob’s attempts to turn the topic around to what she believes, her actions, etc., and instead relentlessly focuses on Bob’s beliefs, his alleged hypocrisy, etc.
Well, for one thing, I’ll note that I’m not much of a fan of this “mutual flaw inspection”, either. The proper alternative, in my view, isn’t any sort of “peace treaty”, but rather a “person-interface” approach.
More importantly, though, any sort of “mutual flaw inspection” has got to be opted into. Otherwise you’re just accosting random people to berate them about their flaws. That’s not praiseworthy behavior.
Sorry, I don’t think I get the meaning here. Could you rephrase?
Yes, basically this.
Let me emphasize again what the problem is:
Criticism, per se, is not the central issue (although anti-solicited criticism is almost always rude, if nothing else).
I think EA is a mixture of ‘giving people new options’ (we found a cool new intervention!) and ‘removing previously held options’; it involves cutting to the heart of things, and also cutting things out of your life. The core beliefs do not involve much in the way of softness or malleability to individual autonomy. (I think people have since developed a bunch of padding so that they can live with it more easily.)
Like, EA is about deprioritizing ‘ineffective’ approaches in favor of ‘effective’ approaches. This is both rough (for the ineffective approaches and people excited about them) and also the mechanism of action by which EA does any good (in the same way that capitalism does well in part by having companies go out of business when they’re less good at deploying capital than others).
Hmm, I see. Well, I agree with your first paragraph but not with your second. That is, I do not think that selection of approaches is the core, to say nothing of the entirety, of what EA is. This is a major part of my problem with EA as a movement and an ideology.
However, that is perhaps a digression we can avoid. More relevant is that none of this seems to me to require, or even to motivate, being rude in this way. It’s all very well to “remove previously held options” and otherwise be “rough” with the beliefs and values of people who come to EA looking for guidance and answers, but to impose these things on people who manifestly aren’t interested is… not justifiable behavior, it seems to me.
(And, again, this is quite distinct from the question of accepting or rejecting someone from some group or what have you, or letting their false claims to have some praiseworthy quality stand unchallenged, etc.)
Just noting here that I broadly agree with Said’s position throughout this comment thread.
If Bob asked this question, it would show he’s misunderstanding the point of Alice’s critique—unless I’m missing something, she claims he should, morally speaking, act differently.
Responding “What do I get out of any of this?” to that kind of critique is either a misunderstanding, or a rejection of morality (“I don’t care if I should be, morally speaking, doing something else, because I prefer to maximize my own utility.”).
Edit: Or also, possibly, a rejection of Alice (“You are so annoying that I’ll pretend this conversation is about something else to make you go away.”).
Please reread my comment more carefully. That part (Bob’s “what do I get out of any of this” response) was specifically about Alice’s commentary on her personal wants/needs, i.e. the specifically non-moral aspect of Alice’s array of criticisms.