It sounds like you are predicting that the people who are sharing true relevant facts have values such that the long-term benefits to the group overall outweigh the short-term costs to them. In particular, it’s a prediction about their values (alongside a prediction of what the short-term and long-term effects are).
I’ll just note that, on my view of the short-term and long-term effects, it seems pretty unclear whether by my values I should share additional true relevant information, and I lean towards it being negative. Of course, you have way more private info than me, so perhaps you just know a lot more about the short-term and long-term effects.
I’m also not a fan of requests that presume that the listener is altruistic, and willing to accept personal harm for group benefit. I’m not sure if that’s what you meant—maybe you think in the long term sharing of additional facts would help them personally, not just help the group.
Fwiw I don’t have any particularly relevant facts. I once tagged along with a friend to a party that I later (i.e. during or after the party) found out was at Leverage. I’ve occasionally talked briefly with people who worked at Leverage / Paradigm / Reserve. I might have once stopped by a poster they had at EA Global. I don’t think there have been other direct interactions with them.
I’m also not a fan of requests that presume that the listener …
From my POV, requests, and statements of what I hope for, aren’t advice. I think they don’t presume that the listener will want to do them or will be advantaged by them, or anything much else about the listener except that it’s okay to communicate my request/hope to them. My requests/hopes just share what I want. The listener can choose for themselves, based on what they want. I’m assuming listeners will only do things if they don’t mind doing them, i.e. that my words won’t coerce people, and I guess I’m also assuming that my words won’t be assumed to be a trustworthy voice of authorities that know where the person’s own interests are, or something. That I can be just some person, hoping and talking and expecting to be evaluated by equals.
Is it that you think these assumptions of mine are importantly false, such that I should try to follow some other communication norm, where I more try to only advocate for things that will turn out to be in the other party’s interests, or to carefully disclaim if I’m not sure what’ll be in their interests? That sounds tricky; I’m not people’s parents and they shouldn’t trust me to do that, and I’m afraid that if I try to talk that way I’ll make it even more confusing for anyone who starts out confused like that.
I think I’m missing part of where you’re coming from in terms of what good norms are around requests, or else I disagree about those norms.
you have way more private info than me, so perhaps...
I don’t have that much relevant-info-that-hasn’t-been-shared, and am mostly not trying to rely on it in whatever arguments I’m making here. Trying to converse more transparently, rather.
I’m assuming listeners will only do things if they don’t mind doing them, i.e. that my words won’t coerce people,
I feel like this assumption seems false. I do predict that (at least in the world where we didn’t have this discussion) your statement would create a social expectation for the people to report true, relevant facts, and that this social expectation would in fact move people in the direction of reporting true, relevant facts.
I immediately made the inference myself on reading your comment. There was no choice in the matter, no execution of a deliberate strategy on my part, just an inference that Anna wants people to give the facts, and doesn’t think that fear of reprisal is particularly important to care about. Well, probably; it’s hard to remember exactly what I thought, but I think it was something like this. I then thought about why this might be, and how I might have misunderstood. In hindsight, the explanation you gave above should have occurred to me, since that is the sort of thing that people who speak literally would do, but it did not.
I think there are lots of LWers who, like me, make these sorts of inferences automatically. (And I note that these kinds of inferences are excellent for believing true things about people outside of LW.) I think this is especially true of people in the same reference class as Zoe, and that such people will feel particularly pressured by it. (There are a sadly large number of people in this community who have a lot of self-doubt / not much self-esteem, and are especially likely to take other people’s opinions seriously, and as a reason for them to change their behavior even if they don’t understand why.) This applies to both facts that are politically-pro-Leverage and facts that are politically-anti-Leverage.
So overall, yes, I think your words would lead people to infer that it would be better for them to report true relevant facts and that any fear they have is somehow misplaced, and to be pressured by that inference into actually doing so, i.e. it coerces them.
I don’t have a candidate alternative norm. (I generally don’t think very much about norms, and if I made one up now I’m sure it would be bad.) But if I wanted to convey something similar in this particular situation, I would have said something like “I would love to know additional true relevant facts, but I recognize there is a risk that others will take them in a politicized way, or will use them as an excuse to falsely judge you, so please only do this if you think the benefits are worth it”.
(Possibly this is missing something you wanted to convey, e.g. you wish that the community were such that people didn’t have to fear political judgment?)
(I also agree with TekhneMakre’s response about authority.)
Those making requests for others to come forward with facts in the interest of a long(er)-term common good could establish norms that serve as assurance or insurance that the person coming forward will be protected against potential retaliation against their reputation. I can’t claim to know much about setting up effective norms for defending whistleblowers, though.
My requests/hopes just share what I want. The listener can choose for themselves, based on what they want. I’m assuming listeners will only do things if they don’t mind doing them, i.e. that my words won’t coerce people, and I guess I’m also assuming that my words won’t be assumed to be a trustworthy voice of authorities that know where the person’s own interests are, or something.
these assumptions of mine are importantly false
If someone takes you as an authority, then they’re likely to take your wishes as commands. Imagine a CEO saying to her employees, “What I hope happens: … What I hope doesn’t happen: …”, and the (vocative/imperative mood) “Let’s show the world...”. That’s only your responsibility insofar as you’re somehow collaborating with them to have them take you as an authority; but it could happen without your collaboration.
such that I should try to follow some other communication norm
IMO no, but you could, say, ask LW to make a “comment signature” feature, and then have every comment you make link, in small font, to the comment you just made.