Thanks for the clarifying question, and the push-back. To elaborate on my own take: I (like you) predict that some (maybe many) will take shared facts in a politicized way, will use them as an excuse for false or uncareful judgments, etc. I am not guaranteeing, nor predicting, that this won’t occur.
I am intending to do inference and conversation, myself, in a way that tries to avoid these “politicized speech” patterns, even if it turns out politically costly or socially awkward for me to do so. I am intending to make some attempt (not an infinite amount of effort, but some real effort, at some real cost if needed) to try to make it easier for others to do this too, and/or to ask it of others who I think may be amenable to being asked this, and/or to help coalesce a conversation in what I take to be a better pattern if I can figure out how to do so. I also predict, independently of my own efforts, that a nontrivial number of others will be trying this.
If “reputation management” is a person’s main goal, then the small- to medium-sized efforts I can hope to contribute toward a better conversation, plus the efforts I’m confident will happen independently of mine, would not be enough for that person’s goals to be well-served in the short run by following my request to avoid “refraining from sharing true relevant facts, out of fear that others will take them in a politicized way, or will use them as an excuse for false judgments.”
However, I’m pretty sure most people in this ecosystem, maybe everyone, would deep down like to figure out how to actually see what kinds of fucked up individual and group and inter-group dynamics we’ve variously gotten into, and why and how, so that we can have a realer shot at things going forward. And I’m pretty sure we want this (in the deeper, long-run sense) much more than we want short-run reputation management. Separately, I suspect most people’s reputation management will be kinda fucked in the long run if we don’t figure out how to get enough right to make actual progress on the world (vs creating local illusions of the same), although this last sentence is more controversial and less obvious. So, yeah, I’m asking people to try to engage in real conversation with me and others even though it’ll probably mess up parts of their/our reputation in the short run, and even though probably many won’t manage to join this in the short run. And I suspect this effort will be good for many people’s deeper goals despite the political dynamics you mention.
Here’s to trying.
It sounds like you are predicting that the people who are sharing true relevant facts have values such that the long-term benefits to the group overall outweigh the short-term costs to them. In particular, it’s a prediction about their values (alongside a prediction of what the short-term and long-term effects are).
I’ll just note that, on my view of the short-term and long-term effects, it seems pretty unclear whether by my values I should share additional true relevant information, and I lean towards it being negative. Of course, you have way more private info than me, so perhaps you just know a lot more about the short-term and long-term effects.
I’m also not a fan of requests that presume that the listener is altruistic, and willing to accept personal harm for group benefit. I’m not sure if that’s what you meant—maybe you think in the long term sharing of additional facts would help them personally, not just help the group.
Fwiw I don’t have any particularly relevant facts. I once tagged along with a friend to a party that I later (i.e. during or after the party) found out was at Leverage. I’ve occasionally talked briefly with people who worked at Leverage / Paradigm / Reserve. I might have once stopped by a poster they had at EA Global. I don’t think there have been other direct interactions with them.
I’m also not a fan of requests that presume that the listener …
From my POV, requests, and statements of what I hope for, aren’t advice. I think they don’t presume that the listener will want to do them or will be advantaged by them, or anything much else about the listener except that it’s okay to communicate my request/hope to them. My requests/hopes just share what I want. The listener can choose for themselves, based on what they want. I’m assuming listeners will only do things if they don’t mind doing them, i.e. that my words won’t coerce people, and I guess I’m also assuming that my words won’t be assumed to be a trustworthy voice of authorities that know where the person’s own interests are, or something. That I can be just some person, hoping and talking and expecting to be evaluated by equals.
Is it that you think these assumptions of mine are importantly false, such that I should try to follow some other communication norm, where I try more to advocate only for things that will turn out to be in the other party’s interests, or to carefully disclaim when I’m not sure what will be in their interests? That sounds tricky; I’m not people’s parent and they shouldn’t trust me to do that, and I’m afraid that if I try to talk that way I’ll make it even more confusing for anyone who starts out confused like that.
I think I’m missing part of where you’re coming from in terms of what good norms are around requests, or else I disagree about those norms.
you have way more private info than me, so perhaps...
I don’t have that much relevant-info-that-hasn’t-been-shared, and am mostly not trying to rely on it in whatever arguments I’m making here. Trying to converse more transparently, rather.
I’m assuming listeners will only do things if they don’t mind doing them, i.e. that my words won’t coerce people,
This assumption seems false to me. I do predict that (at least in the world where we didn’t have this discussion) your statement would create a social expectation for people to report true, relevant facts, and that this social expectation would in fact move people in the direction of reporting true, relevant facts.
I immediately made the inference myself on reading your comment. There was no choice in the matter, no execution of a deliberate strategy on my part, just an inference that Anna wants people to give the facts, and doesn’t think that fear of reprisal is particularly important to care about. Well, probably; it’s hard to remember exactly what I thought, but I think it was something like this. I then thought about why this might be, and how I might have misunderstood. In hindsight, the explanation you gave above should have occurred to me (that is the sort of thing that people who speak literally would do), but it did not.
I think there are lots of LWers who, like me, make these sorts of inferences automatically. (And I note that these kinds of inferences are excellent for believing true things about people outside of LW.) I think this is especially true of people in the same reference class as Zoe, and that such people will feel particularly pressured by it. (There are a sadly large number of people in this community who have a lot of self-doubt / not much self-esteem, and are especially likely to take other people’s opinions seriously, and as a reason to change their behavior even if they don’t understand why.) This applies to both facts that are politically pro-Leverage and facts that are politically anti-Leverage.
So overall, yes, I think your words would lead people to infer that it would be better for them to report true relevant facts and that any fear they have is somehow misplaced, and to be pressured by that inference into actually doing so, i.e. it coerces them.
I don’t have a candidate alternative norm. (I generally don’t think very much about norms, and if I made one up now I’m sure it would be bad.) But if I wanted to convey something similar in this particular situation, I would have said something like “I would love to know additional true relevant facts, but I recognize there is a risk that others will take them in a politicized way, or will use them as an excuse to falsely judge you, so please only do this if you think the benefits are worth it”.
(Possibly this is missing something you wanted to convey, e.g. you wish that the community were such that people didn’t have to fear political judgment?)
(I also agree with TekhneMakre’s response about authority.)
Those making requests for others to come forward with facts in the interest of a long(er)-term common good could find norms that serve as assurance or insurance that someone will be protected against potential retaliation against their own reputation. I can’t claim to know much about setting up effective norms for defending whistleblowers, though.
My requests/hopes just share what I want. The listener can choose for themselves, based on what they want. I’m assuming listeners will only do things if they don’t mind doing them, i.e. that my words won’t coerce people, and I guess I’m also assuming that my words won’t be assumed to be a trustworthy voice of authorities that know where the person’s own interests are, or something.
these assumptions of mine are importantly false
If someone takes you as an authority, then they’re likely to take your wishes as commands. Imagine a CEO saying to her employees, “What I hope happens: … What I hope doesn’t happen: …”, and the (vocative/imperative mood) “Let’s show the world...”. That’s only your responsibility insofar as you’re somehow collaborating with them to have them take you as an authority; but it could happen without your collaboration.
such that I should try to follow some other communication norm
IMO no, but you could, say, ask LW to make a “comment signature” feature, and then have every comment you make link, in small font, to the comment you just made.