Epistemic activism
I think LW needs better language to talk about efforts to “change minds.” Ideas like asymmetric weapons and the Dark Arts are useful but insufficient.
In particular, I think there is a common scenario where:
You have an underlying commitment to open-minded updating and possess evidence or analysis that would update community beliefs in a particular direction.
You also perceive a coordination problem that inhibits this updating process for a reason that the mission or values of the group do not endorse.
Perhaps the outcome of the update would be a decline in power and status for high-status people. Perhaps updates in general can feel personally or professionally threatening to some people in the debate. Perhaps there’s enough uncertainty in what the overall community believes that an information cascade has taken place. Perhaps the epistemic heuristics used by the community aren’t compatible with the form of your evidence or analysis.
Solving this coordination problem to permit open-minded updating is made difficult by a lack of understanding or resources, or by sabotage attempts.
When solving the coordination problem would predictably lead to updating, then you are engaged in what I believe is an epistemically healthy effort to change minds. Let’s call it epistemic activism for now.
Here are some community touchstones I regard as forms of epistemic activism:
The founding of LessWrong and Effective Altruism
The one-sentence declaration on AI risks
The popularizing of terms like Dark Arts, asymmetric weapons, questionable research practices, and “importance hacking.”
Founding AI safety research organizations and PhD programs to create a population of credible and credentialed AI safety experts; calls for AI safety researchers to publish in traditional academic journals so that their research can’t be dismissed for not being subject to institutionalized peer review
The publication of books by high-status professionals articulating a shortcoming in current practice and endorsing some new paradigm for their profession. Such books don’t just collect arguments and evidence for the purpose of individual updating, but also create common knowledge about “what people in my profession are talking about.”
I suspect the strongest reason to categorize epistemic activism as a “symmetric weapon” or the “Dark Arts” is as a guardrail. An analogy can be drawn with professional integrity: you should neither do anything unethical, nor do anything that risks appearing unethical. Similarly, you should neither do anything that’s Dark Arts, nor anything that risks appearing like the Dark Arts. Explicitly naming that you are engaging in epistemic activism looks like the Dark Arts, so we shouldn’t do it. Doing so creates a permissive environment for bad-faith actors to actually engage in the Dark Arts while calling it “epistemic activism.” After all, “epistemic activism” presumes a genuine commitment to open-minded updating, which is certainly not always the case.
That’s a legitimate risk. I am weighing it against two other risks that I think are also severe:
The existence of coordination problems that inhibit normal updating is also an epistemic risk, and if they represent an equilibrium or the kinetics of change are too slow, action is required to resolve the problem. The ability to notice and respond to this dynamic is bolstered when we have the language to describe it.
Epistemic activism happens anyway; it just has to be naturalized so that it doesn’t look like an activist effort. I think that creates an incentive to disguise one’s motives, and it breeds confusion and a tendency toward inaction, particularly among lower-status people, when they face an epistemic coordination problem.
Because I think that deference usually leads to more accurate individual beliefs, I think that epistemic activism should often result in the activist having their mind changed. They might be wrong about the existence of the coordination problem, or they might find that there’s a ready rebuttal to their evidence or argument that they find persuasive. That is a successful outcome of epistemic activism: normal updating has occurred.
This is a key difference between epistemic activism and being a crank. The crank often also perceives that normal updating is impaired due to a social coordination problem. The difference is that a crank would regard it as a failure if the resumption of updating resulted in the crank realizing they were wrong all along. And for that reason, cranks often decide that the fact that the community hasn’t updated in the direction they expected is evidence that the coordination problem still exists.
A good epistemic activist ought to have a separate argument and evidence base for the existence of the coordination problem, independent of how they expect community beliefs would update if the problem were resolved. Let’s give an example:
My mom’s anxiety often keeps her, or my family as a whole, from trying new things together (this is the coordination problem). I think they’d enjoy snorkeling (this is the outcome I predict for the update). I have to track the existence of my mom’s anxiety about snorkeling, and my success in addressing it, separately from whether or not she ultimately becomes enthusiastic about snorkeling.
If I saw that, despite my best efforts, my mom’s initial anxiety on the subject remained static, then I might conclude I’d simply failed to solve the coordination problem.
If instead, my mom shifts from initial anxiety to a cogent description of why she’s just not interested in snorkeling (“it’s risky, I don’t like cold water, I don’t want to spend on a wetsuit, I’m just not that excited about it”), then I can’t regard this as evidence she’s still anxious. It just means I have to update toward thinking I was incorrect in predicting anxiety was inhibiting her from exploring an activity she would actually enjoy.
If I decide that my mom’s new, cogent description of why she’s not interested in snorkeling is evidence that she’s still anxious, then I’m engaging in crank behavior, not epistemic activism.
When you communicate an argument, the quality of the feedback you get about its correctness depends on efforts around its delivery, and the shape of those efforts doesn’t depend on whether the argument is correct. The goal of improving the quality of that feedback, so that you can better learn from it, is a check on spreading nonsense.