I really care about the conversation that’s likely to ensue here, like probably a lot of people do.
I want to speak a bit to what I hope happens, and to what I hope doesn’t happen, in that conversation. Because I think it’s gonna be a tricky one.
What I hope happens:
Curiosity
Caring
Compassion
Interest in understanding both the specifics of what happened at Leverage, and any general principles it might hint at about human dynamics, or human dynamics in particular kinds of groups.
What I hope doesn’t happen:
Distancing from uncomfortable data.
Using blame and politics to distance from uncomfortable data.
Refraining from sharing true relevant facts, out of fear that others will take them in a politicized way, or will use them as an excuse for false judgments.
This is LessWrong; let’s show the world how curiosity/compassion/inquiry is done!
Thanks, Anna! As a LessWrong mod, I've been sitting and thinking for days now about how to make this conversation go well, and have been stuck on what exactly to say. This intention-setting is a good start.
I think to your list I would add judging each argument and piece of data on its merits, i.e., updating on evidence even if it pushes against the position we currently hold.
Phrased alternatively, I'm hoping we avoid treating arguments as soldiers: accepting bad arguments because they favor our preferred conclusion, rejecting good arguments because they don't support our preferred conclusion. I think there's a risk in cases like this of knowing which side you're on and then accepting and rejecting all evidence accordingly.
Refraining from sharing true relevant facts, out of fear that others will take them in a politicized way, or will use them as an excuse for false judgments.
Are you somehow guaranteeing or confidently predicting that others will not take them in a politicized way, use them as an excuse for false judgments, or otherwise cause harm to those sharing the true relevant facts? If not, why are you asking people not to refrain from sharing such facts?
(My impression is that it is sheer optimism, bordering on wishful thinking, to expect such a thing, that those who have such a fear are correct to have such a fear, and so I am confused that you are requesting it anyway.)
Thanks for the clarifying question, and the push-back. To elaborate my own take: I (like you) predict that some (maybe many) will take shared facts in a politicized way, will use them as an excuse for false or uncareful judgments, etc. I am not guaranteeing, nor predicting, that this won’t occur.
I am intending to myself do inference and conversation in a way that tries to avoid these "politicized speech" patterns, even if it turns out politically costly or socially awkward for me to do so. I am intending to make some attempt (not an infinite amount of effort, but some real effort, at some real cost if needed) to try to make it easier for others to do this too, and/or to ask it of others who I think may be amenable to being asked this, and/or to help coalesce a conversation in what I take to be a better pattern if I can figure out how to do so. I also predict, independently of my own efforts, that a nontrivial number of others will be trying this.
If “reputation management” is a person’s main goal, then the small- to medium-sized efforts I can hope to contribute toward a better conversation, plus the efforts I’m confident in predicting independently of mine, would be insufficient to mean that a person’s goals would be well-served in the short run by following my request to avoid “refraining from sharing true relevant facts, out of fear that others will take them in a politicized way, or will use them as an excuse for false judgments.”
However, I'm pretty sure most people in this ecosystem, maybe everyone, would deep down like to figure out how to actually see what kinds of fucked up individual and group and inter-group dynamics we've variously gotten into, and why and how, so that we can have a realer shot at things going forward. And I'm pretty sure we want this (in the deeper, long-run sense) much more than we want short-run reputation management. Separately, I suspect most people's reputation-management will be kinda fucked in the long run if we don't figure out how to get enough right to make actual progress on the world (vs creating local illusions of the same), although this last sentence is more controversial and less obvious. So, yeah, I'm asking people to try to engage in real conversation with me and others even though it'll probably mess up parts of their/our reputation in the short run, and even though probably many won't manage to join this in the short run. And I suspect this effort will be good for many people's deeper goals despite the political dynamics you mention.
Here's to trying.
It sounds like you are predicting that the people who are sharing true relevant facts have values such that the long-term benefits to the group overall outweigh the short-term costs to them. In particular, it’s a prediction about their values (alongside a prediction of what the short-term and long-term effects are).
I’ll just note that, on my view of the short-term and long-term effects, it seems pretty unclear whether by my values I should share additional true relevant information, and I lean towards it being negative. Of course, you have way more private info than me, so perhaps you just know a lot more about the short-term and long-term effects.
I’m also not a fan of requests that presume that the listener is altruistic, and willing to accept personal harm for group benefit. I’m not sure if that’s what you meant—maybe you think in the long term sharing of additional facts would help them personally, not just help the group.
Fwiw I don’t have any particularly relevant facts. I once tagged along with a friend to a party that I later (i.e. during or after the party) found out was at Leverage. I’ve occasionally talked briefly with people who worked at Leverage / Paradigm / Reserve. I might have once stopped by a poster they had at EA Global. I don’t think there have been other direct interactions with them.
I’m also not a fan of requests that presume that the listener …
From my POV, requests, and statements of what I hope for, aren’t advice. I think they don’t presume that the listener will want to do them or will be advantaged by them, or anything much else about the listener except that it’s okay to communicate my request/hope to them. My requests/hopes just share what I want. The listener can choose for themselves, based on what they want. I’m assuming listeners will only do things if they don’t mind doing them, i.e. that my words won’t coerce people, and I guess I’m also assuming that my words won’t be assumed to be a trustworthy voice of authorities that know where the person’s own interests are, or something. That I can be just some person, hoping and talking and expecting to be evaluated by equals.
Is it that you think these assumptions of mine are importantly false, such that I should try to follow some other communication norm, where I try to advocate only for things that will turn out to be in the other party's interests, or to carefully disclaim when I'm not sure what will be in their interests? That sounds tricky; I'm not people's parents and they shouldn't trust me to do that, and I'm afraid that if I try to talk that way I'll make it even more confusing for anyone who starts out confused like that.
I think I’m missing part of where you’re coming from in terms of what good norms are around requests, or else I disagree about those norms.
you have way more private info than me, so perhaps...
I don’t have that much relevant-info-that-hasn’t-been-shared, and am mostly not trying to rely on it in whatever arguments I’m making here. Trying to converse more transparently, rather.
I’m assuming listeners will only do things if they don’t mind doing them, i.e. that my words won’t coerce people,
I feel like this assumption is false. I do predict that (at least in the world where we didn't have this discussion) your statement would create a social expectation for people to report true, relevant facts, and that this social expectation would in fact move people in the direction of reporting true, relevant facts.
I immediately made the inference myself on reading your comment. There was no choice in the matter, no execution of a deliberate strategy on my part, just an inference that Anna wants people to give the facts, and doesn't think that fear of reprisal is particularly important to care about. Well, probably; it's hard to remember exactly what I thought, but I think it was something like this. I then thought about why this might be, and how I might have misunderstood. In hindsight, the explanation you gave above should have occurred to me (it is the sort of thing that people who speak literally would do), but it did not.
I think there are lots of LWers who, like me, make these sorts of inferences automatically. (And I note that these kinds of inferences are excellent for believing true things about people outside of LW.) I think this is especially true of people in the same reference class as Zoe, and that such people will feel particularly pressured by it. (There are a sadly large number of people in this community who have a lot of self-doubt / not much self-esteem, and are especially likely to take other people's opinions seriously, and as a reason for them to change their behavior even if they don't understand why.) This applies to both facts that are politically-pro-Leverage and facts that are politically-anti-Leverage.
So overall, yes, I think your words would lead people to infer that it would be better for them to report true relevant facts and that any fear they have is somehow misplaced, and to be pressured by that inference into actually doing so, i.e. it coerces them.
I don’t have a candidate alternative norm. (I generally don’t think very much about norms, and if I made one up now I’m sure it would be bad.) But if I wanted to convey something similar in this particular situation, I would have said something like “I would love to know additional true relevant facts, but I recognize there is a risk that others will take them in a politicized way, or will use them as an excuse to falsely judge you, so please only do this if you think the benefits are worth it”.
(Possibly this is missing something you wanted to convey, e.g. you wish that the community were such that people didn’t have to fear political judgment?)
(I also agree with TekhneMakre’s response about authority.)
Those making requests for others to come forward with facts in the interest of a long(er)-term common good could establish norms that serve as assurance or insurance that someone will be protected against potential retaliation against their reputation. I can't claim to know much about setting up effective norms for defending whistleblowers, though.
My requests/hopes just share what I want. The listener can choose for themselves, based on what they want. I’m assuming listeners will only do things if they don’t mind doing them, i.e. that my words won’t coerce people, and I guess I’m also assuming that my words won’t be assumed to be a trustworthy voice of authorities that know where the person’s own interests are, or something.
these assumptions of mine are importantly false
If someone takes you as an authority, then they’re likely to take your wishes as commands. Imagine a CEO saying to her employees, “What I hope happens: … What I hope doesn’t happen: …”, and the (vocative/imperative mood) “Let’s show the world...”. That’s only your responsibility insofar as you’re somehow collaborating with them to have them take you as an authority; but it could happen without your collaboration.
such that I should try to follow some other communication norm
IMO no, but you could, say, ask LW to make a “comment signature” feature, and then have every comment you make link, in small font, to the comment you just made.
Yeah, I also read Anna as trying to create/strengthen local norms to the effect of ‘whistleblowers, truth-tellers, and people-saying-the-emperor-has-no-clothes are good community members and to-be-rewarded/protected’. That doesn’t make reprisals impossible, but I appreciated the push (as I interpreted it).
I also interpreted Anna as leading by example to some degree—a lot of orgs wouldn’t have their president join a public conversation like this, given the reputational risks. If I felt like Anna was taking on zero risk but was asking others to take on lots of risk, I may have felt differently.
Saying this publicly also (in my mind) creates some accountability for Anna to follow through. Community leaders who advocate value X and then go back on their word are in much more hot water than ones who quietly watch bad things happen.
E.g., suppose this were happening on the EA Forum. People might assume by default that CEA or whoever is opposed to candor about this topic, because they’re worried hashing things out in public could damage the EA-brand (or whatever). This creates a default pressure against open and honest truth-seeking. Jumping in to say ‘no, actually, having this conversation here is good, and it seems good to try to make it as real as we can’ can relieve a lot of that perceived pressure, even if it’s not a complete solution. I perceive Anna as trying to push in that direction on a bunch of recent threads (e.g., here).
I'm not sure what I think of Rohin's interpretation. My initial gut feeling is that it's asking community leaders to take too much ownership of the micro, or to baby the community too much, or to spend too much time carefully editing their comments to address all possible errors (with the inevitable result that community leaders say very little, and the things they say are more dead and safe).
It’s not that I particularly object to the proposed rephrasings, more just that I have a gut-level sense that this is in a reference class of a thousand other similarly-small ways community leaders can accidentally slightly nudge folks in the wrong direction. In this particular case, I’d rather expect a little more from the community, rather than put this specific onus on Anna.
I agree there’s an empirical question of how socially risky it actually is to e.g. share negative stuff about Leverage in this thread. I’m all in favor of a thread to try to evaluate that question (which could also switch to PMs as needed if some people don’t feel safe participating), and I see the argument for trying to do that first, since resolving that could make it easier to discuss everything else. I just think people here are smart and independent enough to not be ‘coerced’ by Anna if she doesn’t open the conversation with a bunch of ‘you might suffer reprisals’ warnings (which does have a bit of a self-fulfilling-prophecy ring to it, though I think there are skillful ways to pull it off).
You’re reading too much into my response. I didn’t claim that Anna should have this extra onus. I made an incorrect inference, was confused, asked for clarification, was still confused by the first response (honestly I’m still confused by that response), understood after the second response, and then explained what I would have said if I were in her place when she asked about norms.
(Yes, I do in fact think that the specific thing said had negative consequences. Yes, this belief shows in my comments. But I didn’t say that Anna was wrong/bad for saying the specific thing, nor did I say that she “should” have done something else. Assuming for the moment that the specific statement did have negative consequences, what should I have done instead?)
(On the actual question, I mostly agree that we probably have too many demands on public communication, such that much less public communication happens than would be good.)
I just think people here are smart and independent enough to not be ‘coerced’ by Anna if she doesn’t open the conversation with a bunch of ‘you might suffer reprisals’ warnings
I also would have been fine with “I hope people share additional true, relevant facts”. The specific phrasing seemed bad because it seemed to me to imply that the fear of reprisal was wrong. See also here.
Of course there’s also the possibility that it’s worth it. E.g. because people could then notice who is doing a rush-to-judgement thing or confirmation-bias-y thing. (This even holds if there’s threat of personal harm to fact-sharers, though personal harm looks like something you added to the part you quoted.)
I agree that’s possible, but then I’d say something like “I would love to know additional true relevant facts, but I recognize there are real risks to this and only recommend people do this if they think the benefits are worth it”.
Analogy: it could be worth it for an employee to publicly talk about the flaws of their company / manager (e.g. because then others know not to look for jobs at that company), even though it might get them fired. In such a situation I would say something like “It would be particularly helpful to know about the flaws of company X, but I recognize there are substantial risks involved and only recommend people do this if they feel up to it”. I would not say “I hope people don’t refrain from speaking up about the flaws of company X out of fear that they might be fired”, unless I had good reason to believe they wouldn’t be fired, or good reason to believe that it would be worth it on their values (though in that case presumably they’d speak up anyway).
Thanks. I’m actually still not sure what you’re saying.
Hypothesis 1: you’re saying, stating “I hope person A does X” implies a non-dependence on person A’s information, which implies the speaker has a lot of hidden evidence (enough to make their hope unlikely to change given A’s evidence). And, people might infer that there’s this hidden evidence, and update on it, which might be a mistake.
Hypothesis 2: you’re pointing at something about how “do X, even if you have fear” is subtly coercive / gaslighty, in the sense of trying to insert an external judgement to override someone’s emotion / intuition / instinct. E.g. “out of fear” might subtly frame an aversion as a “mere emotion”.
(Just to state the obvious: it is clearly not as bad as the words “coercion” and “gaslighting” would usually imply. I am endorsing the mechanism, not the magnitude-of-badness.)
I agree that hypothesis 1 could be an underlying generator of why the effect in hypothesis 2 exists.
I think I am more confident in the prediction that these sorts of statements do influence people in ways-I-don’t-endorse, than in any specific mechanism by which that happens.
I hesitated a bit before saying this? I thought it might add a little bit of clarity, so I figured I’d bring it up.
(Sorry it got long; I’m still not sure what to cut.)
There are definitely some needs-conflicts. Between (often distant) people who, in the face of this, feel the need to cling to the strong reassurance that “this could not possibly happen to them”/”they will definitely be protected from this,” and would feel reassured at seeing Strong Condemning Action as soon as possible...
...and “the people who had this happen.” Who might be best-served, if they absorbed that there is always some risk of this sort of shit happening to people. For them, it would probably be best if they felt their truth was genuinely heard, and took away some actionable lessons about what to avoid, without updating their personal identity to “victim” TOO much. And in the future, embraced connections that made them more robust against attaching to this sort of thing in the future.
(“Victim” is just not a healthy personal identity in the long-term, for most people.)
Sometimes, these needs are so different, that it warrants having different forums of discussion. But there is some overlap in these needs (working out what happened, improving reporting, protecting people from cultish reprisals), and I’m not sure that separation is always necessary.
My read of the direction Anna seems to be trying to steer this is "do everything she can to carefully hear out people's stories first." Only later, after people have really really listened, use that to formulate carefully considered harm-reducing actions.
Understanding the issue, in all its complexity, before working on coming up with solutions? I feel pretty on-board with that.
...I admit, I initially chafed a bit? I have some memories of times Anna has leaned a bit more into the former group's needs. Some of her attempts to aim differently this time have felt a little awkward.
I did also get an “ordering other people to ignore politics and be vulnerable” vibe off this, which put my armor up to around my ears. (Something with more of a feel of… “showing own vulnerability to elicit other’s vulnerability,” would have generally felt more natural to me? I think her later responses cycled to this, a little).
...but I’m starting to think that even the awkwardness, is its own sort of evidence? Of someone who is used to wielding frame control, trying to put it aside to listen. And I feel a lot of affection, in seeing it show that she’s working on this.
I would like it if we showed the world how accountability is done, and given your position, I find it disturbing that you have omitted this objective. That is, if I wanted to deflect the conversation away from accountability, I think I would write a post similar to yours.
I would like it if we showed the world how accountability is done
So would I. But to do accountability (as distinguished from scapegoating, less-epistemic blame), we need to know what happened, and we need to accurately trust each other (or at least most of each other) to be able to figure out what happened, and to care what actually happened.
The “figure out what happened” and “get in a position where we can have a non-fucked conversation” steps come first, IMO.
I also sort of don’t expect that much goal divergence on the accountability steps that very-optimistically come after those steps, either, basically because integrity and visible trustworthiness serve most good goals in the long run, and vengeance or temporarily-overextended-trust serves little.
Though, accountability is admittedly a weak point of mine, so I might be missing/omitting something. Maybe spell it out if so?
I also sort of don’t expect that much goal divergence on the accountability steps that very-optimistically come after those steps, either, basically because integrity and visible trustworthiness serve most good goals in the long run, and vengeance or temporarily-overextended-trust serves little.
To clarify: goal divergence between whom? Geoff and Zoe? Zoe and me? Me and you?
This reaction has been predictable for years IMO. As usual, a reasonable response required people to go public. There is no internal accountability process. Luckily things have been made public.
More thoughts:
I really care about the conversation that’s likely to ensue here, like probably a lot of people do.
I want to speak a bit to what I hope happens, and to what I hope doesn’t happen, in that conversation. Because I think it’s gonna be a tricky one.
What I hope happens:
Curiosity
Caring,
Compassion,
Interest in understanding both the specifics of what happened at Leverage, and any general principles it might hint at about human dynamics, or human dynamics in particular kinds of groups.
What I hope doesn’t happen:
Distancing from uncomfortable data.
Using blame and politics to distance from uncomfortable data.
Refraining from sharing true relevant facts, out of fear that others will take them in a politicized way, or will use them as an excuse for false judgments.
This is LessWrong; let’s show the world how curiosity/compassion/inquiry is done!
Thanks, Anna!
As a LessWrong mod, I’ve been sitting and thinking about how to make the conversation go well for days now and have been stuck on what exactly to say. This intention setting is a good start.
I think to your list I would add judging each argument and piece of data on its merits, .i.e., updating on evidence even if it pushes against the position we currently hold.
Phrased alternatively, I’m hoping we don’t: treating arguments as soldiers: accepting bad arguments because they favor our preferred conclusion, rejecting good arguments because they don’t support our preferred conclusion. I think there’s a risk in this cases of knowing which side you’re on and then accepting and rejecting all evidence accordingly.
Are you somehow guaranteeing or confidently predicting that others will not take them in a politicized way, use them as an excuse for false judgments, or otherwise cause harm to those sharing the true relevant facts? If not, why are you asking people not to refrain from sharing such facts?
(My impression is that it is sheer optimism, bordering on wishful thinking, to expect such a thing, that those who have such a fear are correct to have such a fear, and so I am confused that you are requesting it anyway.)
Thanks for the clarifying question, and the push-back. To elaborate my own take: I (like you) predict that some (maybe many) will take shared facts in a politicized way, will use them as an excuse for false or uncareful judgments, etc. I am not guaranteeing, nor predicting, that this won’t occur.
I am intending to myself do inference and conversation in a way that tries to avoids these “politicized speech” patterns, even if it turns out politically costly or socially awkward for me to do so. I am intending to make some attempt (not an infinite amount of effort, but some real effort, at some real cost if needed) to try to make it easier for others to do this too, and/or to ask it of others who I think may be amenable to being asked this, and/or to help coalesce a conversation in what I take to be a better pattern if I can figure out how to do so. I also predict, independently of my own efforts, that a nontrivial number of others will be trying this.
If “reputation management” is a person’s main goal, then the small- to medium-sized efforts I can hope to contribute toward a better conversation, plus the efforts I’m confident in predicting independently of mine, would be insufficient to mean that a person’s goals would be well-served in the short run by following my request to avoid “refraining from sharing true relevant facts, out of fear that others will take them in a politicized way, or will use them as an excuse for false judgments.”
However, I’m pretty sure most people in this ecosystem, maybe everyone, would deep down like to figure out how to actually see what kinds of fucked up individual and group and inter-group dynamics we’ve variously gotten into, and why and how, so that we can have a realer shot at things going forward. And I’m pretty sure we want this (in the deeper, long-run sense) much more than we want short-run reputation management. Separately, I suspect most peoples’ reputation-management will be kinda fucked in the long run if we don’t figure out how to get enough right to do actual progress on the world (vs creating local illusions of the same), although this last sentence is more controversial and less obvious. So, yeah, I’m asking people to try to engage in real conversation with me and others even though it’ll probably mess up parts of their/our reputation in the short run, and even though probably many won’t manage to joint this in the short run. And I suspect this effort will be good for many peoples’ deeper goals despite the political dynamics you mention.
Here’s to trying.
It sounds like you are predicting that the people who are sharing true relevant facts have values such that the long-term benefits to the group overall outweigh the short-term costs to them. In particular, it’s a prediction about their values (alongside a prediction of what the short-term and long-term effects are).
I’ll just note that, on my view of the short-term and long-term effects, it seems pretty unclear whether by my values I should share additional true relevant information, and I lean towards it being negative. Of course, you have way more private info than me, so perhaps you just know a lot more about the short-term and long-term effects.
I’m also not a fan of requests that presume that the listener is altruistic, and willing to accept personal harm for group benefit. I’m not sure if that’s what you meant—maybe you think in the long term sharing of additional facts would help them personally, not just help the group.
Fwiw I don’t have any particularly relevant facts. I once tagged along with a friend to a party that I later (i.e. during or after the party) found out was at Leverage. I’ve occasionally talked briefly with people who worked at Leverage / Paradigm / Reserve. I might have once stopped by a poster they had at EA Global. I don’t think there have been other direct interactions with them.
From my POV, requests, and statements of what I hope for, aren’t advice. I think they don’t presume that the listener will want to do them or will be advantaged by them, or anything much else about the listener except that it’s okay to communicate my request/hope to them. My requests/hopes just share what I want. The listener can choose for themselves, based on what they want. I’m assuming listeners will only do things if they don’t mind doing them, i.e. that my words won’t coerce people, and I guess I’m also assuming that my words won’t be assumed to be a trustworthy voice of authorities that know where the person’s own interests are, or something. That I can be just some person, hoping and talking and expecting to be evaluated by equals.
Is it that you think these assumptions of mine are importantly false, such that I should try to follow some other communication norm, where I more try to only advocate for things that will turn out to be in the other party’s interests, or to carefully disclaim if I’m not sure what’ll be in their interests? That sounds tricky; I’m not peoples’ parents and they shouldn’t trust me to do that, and I’m afraid that if I try to talk that way I’ll make it even more confusing for anyone who starts out confused like that.
I think I’m missing part of where you’re coming from in terms of what good norms are around requests, or else I disagree about those norms.
I don’t have that much relevant-info-that-hasn’t-been-shared, and am mostly not trying to rely on it in whatever arguments I’m making here. Trying to converse more transparently, rather.
I feel like this assumption seems false. I do predict that (at least in the world where we didn’t have this discussion) your statement would create a social expectation for the people to report true, relevant facts, and that this social expectation would in fact move people in the direction of reporting true, relevant facts.
I immediately made the inference myself on reading your comment. There was no choice in the matter, no execution of a deliberate strategy on my part, just an inference that Anna wants people to give the facts, and doesn’t think that fear of reprisal is particularly important to care about. Well, probably, it’s hard to remember exactly what I thought, but I think it was something like this. I then thought about why this might be, and how I might have misunderstood. In hindsight, the explanation you gave above should have occurred to me, that is the sort of thing that people who speak literally would do, but it did not.
I think there are lots of LWers who, like me, make these sorts of inferences automatically. (And I note that these kinds of inferences are excellent for believing true things about people outside of LW.) I think this is especially true of people in the same reference class as Zoe, and that such people will feel particularly pressured by it. (There are a sadly large number of people in this community who have a lot of self-doubt / not much self-esteem, and are especially likely to take other people’s opinions seriously, and as a reason for them to change their behavior even if they don’t understand why.) This applies to both facts that are politically-pro-Leverage and facts that are politically-anti-Leverage.
So overall, yes, I think your words would lead people to infer that it would be better for them to report true relevant facts and that any fear they have is somehow misplaced, and to be pressured by that inference into actually doing so, i.e. it coerces them.
I don’t have a candidate alternative norm. (I generally don’t think very much about norms, and if I made one up now I’m sure it would be bad.) But if I wanted to convey something similar in this particular situation, I would have said something like “I would love to know additional true relevant facts, but I recognize there is a risk that others will take them in a politicized way, or will use them as an excuse to falsely judge you, so please only do this if you think the benefits are worth it”.
(Possibly this is missing something you wanted to convey, e.g. you wish that the community were such that people didn’t have to fear political judgment?)
(I also agree with TekhneMakre’s response about authority.)
Those making requests for others to come forward with facts in the interest of a long(er)-term common good could establish norms that serve as assurance or insurance that someone will be protected against potential retaliation against their reputation. I can’t claim to know much about setting up effective norms for defending whistleblowers, though.
If someone takes you as an authority, then they’re likely to take your wishes as commands. Imagine a CEO saying to her employees, “What I hope happens: … What I hope doesn’t happen: …”, and the (vocative/imperative mood) “Let’s show the world...”. That’s only your responsibility insofar as you’re somehow collaborating with them to have them take you as an authority; but it could happen without your collaboration.
IMO no, but you could, say, ask LW to make a “comment signature” feature, and then have every comment you make link, in small font, to the comment you just made.
I read Anna’s request as an attempt to create a self-fulfilling prophecy. It’s much easier to bully a few individuals than a large crowd.
Yeah, I also read Anna as trying to create/strengthen local norms to the effect of ‘whistleblowers, truth-tellers, and people-saying-the-emperor-has-no-clothes are good community members and to-be-rewarded/protected’. That doesn’t make reprisals impossible, but I appreciated the push (as I interpreted it).
I also interpreted Anna as leading by example to some degree—a lot of orgs wouldn’t have their president join a public conversation like this, given the reputational risks. If I felt like Anna was taking on zero risk but was asking others to take on lots of risk, I may have felt differently.
Saying this publicly also (in my mind) creates some accountability for Anna to follow through. Community leaders who advocate value X and then go back on their word are in much more hot water than ones who quietly watch bad things happen.
E.g., suppose this were happening on the EA Forum. People might assume by default that CEA or whoever is opposed to candor about this topic, because they’re worried hashing things out in public could damage the EA-brand (or whatever). This creates a default pressure against open and honest truth-seeking. Jumping in to say ‘no, actually, having this conversation here is good, and it seems good to try to make it as real as we can’ can relieve a lot of that perceived pressure, even if it’s not a complete solution. I perceive Anna as trying to push in that direction on a bunch of recent threads (e.g., here).
I’m not sure what I think of Rohin’s interpretation. My initial gut feeling is that it’s asking for too much social ownership at the micro level, or asking community leaders to baby the community too much, or to spend too much time carefully editing their comments to address all possible errors (with the inevitable result that community leaders say very little, and the things they say are more dead and safe).
It’s not that I particularly object to the proposed rephrasings, more just that I have a gut-level sense that this is in a reference class of a thousand other similarly-small ways community leaders can accidentally slightly nudge folks in the wrong direction. In this particular case, I’d rather expect a little more from the community, rather than put this specific onus on Anna.
I agree there’s an empirical question of how socially risky it actually is to e.g. share negative stuff about Leverage in this thread. I’m all in favor of a thread to try to evaluate that question (which could also switch to PMs as needed if some people don’t feel safe participating), and I see the argument for trying to do that first, since resolving that could make it easier to discuss everything else. I just think people here are smart and independent enough to not be ‘coerced’ by Anna if she doesn’t open the conversation with a bunch of ‘you might suffer reprisals’ warnings (which does have a bit of a self-fulfilling-prophecy ring to it, though I think there are skillful ways to pull it off).
You’re reading too much into my response. I didn’t claim that Anna should have this extra onus. I made an incorrect inference, was confused, asked for clarification, was still confused by the first response (honestly I’m still confused by that response), understood after the second response, and then explained what I would have said if I were in her place when she asked about norms.
(Yes, I do in fact think that the specific thing said had negative consequences. Yes, this belief shows in my comments. But I didn’t say that Anna was wrong/bad for saying the specific thing, nor did I say that she “should” have done something else. Assuming for the moment that the specific statement did have negative consequences, what should I have done instead?)
(On the actual question, I mostly agree that we probably have too many demands on public communication, such that much less public communication happens than would be good.)
I also would have been fine with “I hope people share additional true, relevant facts”. The specific phrasing seemed bad because it seemed to me to imply that the fear of reprisal was wrong. See also here.
OK, thanks for the correction! :]
Of course there’s also the possibility that it’s worth it. E.g. because people could then notice who is doing a rush-to-judgement thing or confirmation-bias-y thing. (This even holds if there’s threat of personal harm to fact-sharers, though personal harm looks like something you added to the part you quoted.)
I agree that’s possible, but then I’d say something like “I would love to know additional true relevant facts, but I recognize there are real risks to this and only recommend people do this if they think the benefits are worth it”.
Analogy: it could be worth it for an employee to publicly talk about the flaws of their company / manager (e.g. because then others know not to look for jobs at that company), even though it might get them fired. In such a situation I would say something like “It would be particularly helpful to know about the flaws of company X, but I recognize there are substantial risks involved and only recommend people do this if they feel up to it”. I would not say “I hope people don’t refrain from speaking up about the flaws of company X out of fear that they might be fired”, unless I had good reason to believe they wouldn’t be fired, or good reason to believe that it would be worth it on their values (though in that case presumably they’d speak up anyway).
Thanks. I’m actually still not sure what you’re saying.
Hypothesis 1: you’re saying, stating “I hope person A does X” implies a non-dependence on person A’s information, which implies the speaker has a lot of hidden evidence (enough to make their hope unlikely to change given A’s evidence). And, people might infer that there’s this hidden evidence, and update on it, which might be a mistake.
Hypothesis 2: you’re pointing at something about how “do X, even if you have fear” is subtly coercive / gaslighty, in the sense of trying to insert an external judgement to override someone’s emotion / intuition / instinct. E.g. “out of fear” might subtly frame an aversion as a “mere emotion”.
(Maybe these are the same...)
Hypothesis 2 feels truer than hypothesis 1.
(Just to state the obvious: it is clearly not as bad as the words “coercion” and “gaslighting” would usually imply. I am endorsing the mechanism, not the magnitude-of-badness.)
I agree that hypothesis 1 could be an underlying generator of why the effect in hypothesis 2 exists.
I think I am more confident in the prediction that these sorts of statements do influence people in ways-I-don’t-endorse, than in any specific mechanism by which that happens.
Okay.
I hesitated a bit before saying this? I thought it might add a little bit of clarity, so I figured I’d bring it up.
(Sorry it got long; I’m still not sure what to cut.)
There are definitely some needs-conflicts. Between (often distant) people who, in the face of this, feel the need to cling to the strong reassurance that “this could not possibly happen to them”/”they will definitely be protected from this,” and would feel reassured at seeing Strong Condemning Action as soon as possible...
...and “the people who had this happen.” Who might be best-served, if they absorbed that there is always some risk of this sort of shit happening to people. For them, it would probably be best if they felt their truth was genuinely heard, and took away some actionable lessons about what to avoid, without updating their personal identity to “victim” TOO much. And if, in the future, they embraced connections that made them more robust against this sort of thing.
(“Victim” is just not a healthy personal identity in the long-term, for most people.)
Sometimes, these needs are so different, that it warrants having different forums of discussion. But there is some overlap in these needs (working out what happened, improving reporting, protecting people from cultish reprisals), and I’m not sure that separation is always necessary.
My read of the direction Anna seems to be trying to steer this is “do everything she can to clearly hear out people’s stories carefully First.” Only later, after people have really really listened, use that to formulate carefully considered harm-reducing actions.
Understanding the issue, in all its complexity, before working on coming up with solutions? I feel pretty on-board with that.
...I admit, I initially chafed a bit? I have some memories of times Anna has leaned a bit more into the former group’s needs. Some of her attempts to aim differently this time have felt a little awkward.
I did also get an “ordering other people to ignore politics and be vulnerable” vibe off this, which put my armor up to around my ears. (Something with more of a feel of… “showing own vulnerability to elicit other’s vulnerability,” would have generally felt more natural to me? I think her later responses cycled to this, a little).
...but I’m starting to think that even the awkwardness is its own sort of evidence? Of someone who is used to wielding frame control, trying to put it aside to listen. And I feel a lot of affection in seeing signs that she’s working on this.
There’s also the need to learn from what happened, so that when designing organizations in the future the same mistakes aren’t repeated.
I would like it if we showed the world how accountability is done, and given your position, I find it disturbing that you have omitted this objective. That is, if I wanted to deflect the conversation away from accountability, I think I would write a post similar to yours.
So would I. But to do accountability (as distinguished from scapegoating, less-epistemic blame), we need to know what happened, and we need to accurately trust each other (or at least most of each other) to be able to figure out what happened, and to care what actually happened.
The “figure out what happened” and “get in a position where we can have a non-fucked conversation” steps come first, IMO.
I also sort of don’t expect that much goal divergence on the accountability steps that very-optimistically come after those steps, either, basically because integrity and visible trustworthiness serve most good goals in the long run, and vengeance or temporarily-overextended-trust serves little.
Though, accountability is admittedly a weak point of mine, so I might be missing/omitting something. Maybe spell it out if so?
To clarify: goal divergence between whom? Geoff and Zoe? Zoe and me? Me and you?
This reaction has been predictable for years IMO. As usual, a reasonable response required people to go public. There is no internal accountability process. Luckily things have been made public.