I am of the opinion that you should use good epistemics when talking to the public or policy makers, rather than using bad epistemics to try to be more persuasive.
Do you have any particular examples as evidence of this? This is something I’ve been thinking a lot about for AI and I’m quite uncertain. It seems that ~0% of advocacy campaigns have good epistemics, so it’s hard to have evidence about this. Emotional appeals are important and often hard to reconcile with intellectual honesty.
Of course there are different standards for good epistemics and it’s probably bad to outright lie, or be highly misleading. But by EA standards of “good epistemics” it seems less clear if the benefits are worth the costs.
As one example, the AI Safety movement may want to partner with advocacy groups who care about AI using copyrighted data or unions concerned about jobs. But these groups basically always have terrible epistemics and partnering usually requires some level of endorsement of their positions.
As an even more extreme example, as far as I can tell about 99.9% of people have terrible epistemics by LessWrong standards, so even expanding to a decently sized movement means filling the ranks with people who will constantly say and think things you believe are wrong.
Agreed. Advocacy seems to me to be ~very frequently tied to bad epistemics, for a variety of reasons. So what is missing to me in this writeup (and indeed, in most of the discussions about the issue): why does it make sense to make laypeople even more interested?
The status quo is that the relevant people (ML researchers at large, AI investors, governments, and international bodies like the UN) are already well aware of the safety problem. Institutions are set up; work is being done. What is there to be gained from involving the public to an even greater extent, poisoning and inevitably simplifying the discourse, and adding more hard-to-control momentum? I can imagine a few answers (not enough being done at present, fear of market forces eventually overwhelming governance, a “democratic mindset”), but none of those seem convincing in the face of the above.
To tie this to the environmental movement: wouldn’t it be much better for the world if it were an uninspiring issue? It seems to me that this would have prevented the anti-nuclear movement from being solidified by the momentum, Extinction Rebellion from promoting degrowth, etc., and instead semi-sensible policies would have been considered somewhere in the bureaucracy of the states.
Whilst having radical groups is normally useful for shifting the Overton window or exploiting anchoring effects, in this case study of environmentalism I think it backfired, from what I can understand, given the polling data showing the public in the sample country already cared about the environment.
Thanks, this is really useful.