“PR” is corrosive; “reputation” is not.
This is in some sense a small detail, but one important enough to be worth write-up and critique: AFAICT, “PR” is a corrupt concept, in the sense that if you try to “navigate PR concerns” about yourself / your organization / your cause area / etc., the concept will guide you toward harmful and confused actions. In contrast, if you try to safeguard your “reputation”, your “brand”, or your “honor,” I predict this will basically go fine, and will not lead you to leave a weird confused residue in yourself or others.
To explain the difference:
If I am safeguarding my “honor” (or my “reputation”, “brand”, or “good name”), there are some fixed standards that I try to be known as adhering to. For example, in Game of Thrones, the Lannisters are safeguarding their “honor” by adhering to the principle “A Lannister always pays his debts.” They take pains to adhere to a certain standard, and to be known to adhere to that standard. Many examples are more complicated than this; a gentleman of 1800 who took up a duel to defend his “honor” was usually not defending his known adherence to a single simple principle a la the Lannisters. But it was still about his visible adherence to a fixed (though not explicit) societal standard.
In contrast, if I am “managing PR concerns,” there are no fixed standards of good conduct, or of my-brand-like conduct, that I am trying to adhere to. Instead, I am trying to do a more complicated operation:
1. Model which words or actions may cause “people” (especially media, or self-reinforcing miasma) to get upset with me;
2. Try to speak in such a way as to not set that off.
It’s a weirder or loopier process. One that’s more prone to self-reinforcing fears of shadows, and one that somehow (I think?) tends to pull a person away from communicating anything at all. Reminiscent of “Politics and the English Language.” Not reminiscent of Strunk and White.
One way you can see the difference is that when people think about “PR,” they imagine a weird outside expertise, such that you need a “PR consultant” or a “media consultant” whose advice you should nervously heed. When people think about their “honor,” it’s more a thing they can know or choose directly, and so it is more a thing that leaves them free to communicate something.
So: simple suggestion. If, at any point, you find yourself trying to “navigate PR”, or to help some person or organization or cause area or club or whatever to “navigate PR,” see if you can instead think and speak in terms of defending your/their “honor”, “reputation”, or “good name”. And see if that doesn’t make everybody feel a bit clearer, freer, and more as though their feet are on the ground.
Related: The Inner Ring, by CS Lewis; The New York Times, by Robert Rhinehart.
I read this post for the first time in 2022, and I came back to it at least twice.
What I found helpful
The proposed solution: I actually do come back to the “honor” frame sometimes. I have little Rob Bensinger and Anna Salamon shoulder models that remind me to act with integrity and honor. And these shoulder models are especially helpful when I’m noticing (unhelpful) concerns about social status.
A crisp and community-endorsed statement of the problem: It was nice to be like “oh yeah, this thing I’m experiencing is that thing that Anna Salamon calls PR.” And to be honest, it was probably helpful to be like “oh yeah, this thing I’m experiencing is that thing that Anna Salamon, the legendarily wise rationalist, calls PR.” Sort of ironic, I suppose. But I wouldn’t be surprised if young/new rationalists benefit a lot from seeing some high-status or high-wisdom rationalist write a post that describes a problem they experience.
Note that I think this also applies to many posts in Replacing Guilt & The Sequences. To have Eliezer Yudkowsky describe a problem you face not only helps you see it; it also helps you be like ah yes, that’s a real/important problem that smart/legitimate people face.
The post “aged well.” It seems extremely relevant right now (Jan 2023), both for collectives and for individuals. The EA community is dealing with a lot of debate around PR right now. Also, more anecdotally, the Bay Area AI safety scene has quite a strange Status Hierarchy Thing going on, and I think this is a significant barrier to progress. (One might even say that “feeling afraid to speak openly due to vague social pressures” is a relatively central problem crippling the world at scale, as well as our community.)
The post is so short!
What could have been improved
The PR frame. “PR” seems like a term that applies to organizations but not individuals. I think Anna could have pretty easily thrown in some more synonyms/near-synonyms that help people relate more to the post. (I think “status” and “social standing” are terms that I hear the younguns using these days.)
Implementation details. Anna could have provided more suggestions for how to actually cultivate the “honor mindset” or otherwise deal with (unproductive) PR concerns. Sadly, humans are not automatically strategic, so I expect many people will not find the most strategic/effective ways to implement the advice in this post.
Stories. I think it would’ve been useful for Anna to provide examples of scenarios from her own life where she used the “honor” mindset, or noticed and steered away from the “PR” mindset.
But Akash, criticizing posts is easy. Why don’t you try to write your own post that attempts to address some of the limitations you pointed out?
I did!
I like this for the idea of distinguishing between what is real (how we behave) vs what is perceived (other people’s judgment of how we are behaving). It helped me see that rather than focusing on making other people happy or seeking their approval, I should instead focus on what I believe I should do (e.g. what kinds of behaviour create value in the world) and measure myself accordingly. My beliefs may be wrong, but feedback from reality is far more objective and consistent than things like social approval, so it’s a much saner goal. And more importantly, it is a goal that encourages genuine change.
What we want is for perceptions to match with what is real, not for perceptions themselves to be manipulated independently of reality.
I’m not sure how I feel about this post.
Here are three different things I took it to mean:
1. There are two different algorithms you might want to follow. One is “uphold a specific standard that you care about meeting.” The other is “avoid making people upset (more generally).” The first algorithm is bounded; the second is unbounded, and requires you to model other people.
2. You might call the first algorithm “Uphold honor” and the second “Manage PR concerns,” and using those names is probably a better intuition-guide.
3. The “avoid making people upset (more generally)” option is a loopier process that makes you more likely to jump at shadows.
I’m not sure I buy #2. I definitely buy #1. #3 seems probably true for many people but I’d present it to people more as a hypothesis to consider about themselves than a general fact.
Reflecting on these, a meta-concept jumps out at me: if you’re trying to do some kind of “PR management” or “social/political navigation” (or, hell, any old problem you’re trying to solve), it can be helpful to try on a few different frames for what exactly you’re trying to accomplish. At a glance, “honor” and “PR” might seem very similar, but they can come with fairly different implementation details and different downstream effects.
Different people might have different intuitions on what “honor” or “protecting your reputation” means, but it’s probably true-across-people that at least some different near-synonyms in fact have different details and flavors and side effects, and this is worth applying some perceptual dexterity to.
As for importance: I do think the general topic of “feeling afraid to speak openly due to vague social pressures” is a relatively central problem crippling the modern world at scale. I know lots of people who express fears of speaking their mind for one reason or another, and a number of them list “this is bad PR” or “bad optics” as an explicit motivation.
I’m not sure how much this post helps, but I think it’s at least a useful pointer, and maybe helpful for people getting “unstuck.” Curious to hear if anyone has concretely used the post.