But if group members are insecure enough, or if there is some limited pool of resources to be divided up that each member wants for themselves, then each member experiences strong pressure to signal their devotion harder and harder, often burning substantial personal resources.
To add to this: if the group leaders seem anxious or distressed, then one of the ways in which people may signal devotion is by also being anxious and distressed. This then makes everything worse: if you're anxious, you're likely to think poorly and to fixate on whatever you believe is wrong, without necessarily being able to do any real problem-solving about it. Anxiety also causes motivated reasoning about how bad everything is, since that reasoning helps maintain the feeling of distress.
In various communities there's often a (sometimes implicit, sometimes explicit) notion of "if you're not freaked out by what's happening, you're not taking things seriously enough". For example, from EA/rationalist circles, this lukeprog post, while not saying that explicitly, reads to me as coming close (I believe that Luke only meant to say that it's good for people to take action, but as phrased, it implies that you need to feel upset in order to take any action):
Over the years, my colleagues and I have spoken to many machine learning researchers who, perhaps after some discussion and argument, claim to think there’s a moderate chance — a 5%, or 15%, or even a 40% chance — that AI systems will destroy human civilization in the next few decades. However, I often detect what Bryan Caplan has called a “missing mood”; a mood they would predictably exhibit if they really thought such a dire future was plausible, but which they don’t seem to exhibit. In many cases, the researcher who claims to think that medium-term existential catastrophe from AI is plausible doesn’t seem too upset or worried or sad about it, and doesn’t seem to be taking any specific actions as a result.
Not so with Elon Musk. Consider his reaction (here and here) when podcaster Joe Rogan asks about his AI doomsaying. Musk stares at the table, and takes a deep breath. He looks sad. Dejected. Fatalistic. [...]
Moreover, I believe Musk when he says that his ultimate purpose for founding Neuralink is to avert an AI catastrophe: “If you can’t beat it, join it.” Personally, I’m not optimistic that brain-computer interfaces can avert AI catastrophe — for roughly the reasons outlined in the BCIs section of Superintelligence ch. 2 — but Musk came to a different assessment, and I’m glad he’s trying.
Whatever my disagreements with Musk (I have plenty), it looks to me like Musk doesn’t just profess concern about AI existential risk. I think he feels it in his bones, when he wakes up in the morning, and he’s spending a significant fraction of his time and capital to try to do something about it. And for that I am grateful.
As a separate consideration, if you consider someone an authority, then you're going to (explicitly and implicitly) trust their assessments of the world to be at least somewhat accurate. So even if you didn't experience social pressure to be distressed yourself, the fact that they were distressed about something you considered them an authority on (e.g. Eliezer on AI risk) makes it likely that you would pick up some of their distress.
In various communities there’s often a (sometimes implicit, sometimes explicit) notion of “if you’re not freaked out by what’s happening, you’re not taking things seriously enough”.
Do you have an example of this from other communities? I can't quickly think of any (I think corrupt leaders often try to give off vibes of being calm, in control, and powerful, not anxious and worried).
And furthermore I basically buy the claim that if you’re not freaked out by our civilization then you don’t understand it.
From my current vantage point I agree that people will imitate the vibe of the leadership, but I feel like you're also saying "and the particular vibe of anxiousness is common for common psychological reasons", and I don't know why you think that or what psychological reasons you have in mind.
And furthermore I basically buy the claim that if you’re not freaked out by our civilization then you don’t understand it.
There’s probably a version of this sentence that I’d be sympathetic to (e.g. maybe “almost everyone’s emotional security relies on implicit assumptions about how competent civilization is, which are false”). But in general I am pretty opposed to claims which imply that there is one correct emotional reaction to understanding a given situation. I think it’s an important component of rationality to notice when judgments smuggle in implicit standards (as per my recent post), which this is an example of.
Having said that, it’s also an important component of rationality to not reason your way out of ever being freaked out. If the audience reading this weren’t LWers, then I probably wouldn’t have bothered pushing back, since I think something like my rephrasing above is true for many people, which implies that a better understanding would make them freak out more. But I think that LWers in particular are more often making the opposite mistake, of assuming that there’s one correct emotional reaction.
Having said that, it’s also an important component of rationality to not reason your way out of ever being freaked out.
Sorry, I'm getting confused and I don't understand this sentence. Are you literally saying that you can't reason your way out of being afraid? Because that would be a terrible guideline, for many reasons.
I can't think of specific quotes offhand, but I feel like I've caught that kind of a vibe from some social justice and climate change people/conversations. E.g. I recall getting backlash for suggesting that climate change might not be an extinction risk.
There’s probably a version of this sentence that I’d be sympathetic to (e.g. maybe “almost everyone’s emotional security relies on implicit assumptions about how competent civilization is, which are false”).
Your suggested sentence is basically what I had in mind.