I think it’s perfectly sensible to constrain yourself to only make arguments based on true premises, and then optimize your message for convincingness under this constraint. Indeed, I would argue it’s the correct way to do public messaging.
It’s not even at odds with “aim to explain, not persuade”. When explaining, you should be aiming to make your explanations clear to your audience. If your audience will predictably misunderstand arguments of a certain form, due to e.g. political poison, you should mindfully choose arguments that route around the poison, rather than pretending the issue doesn’t exist. Approaches for generating explanations that skip this step aren’t optimizing the message for its audience at all, and therefore aren’t approaches for generating explanations to begin with. They’re equivalent to just writing out your stream of consciousness: messaging aimed at the people you think your audience ought to be, rather than at who they actually are.
That said, I don’t think you can optimize any one of your messages to be convincing to all possible audiences, or even to the majority of the people you ought to be trying to convince. IMO, there should be several “compendiums”, optimized to be convincing to different large demographics. As an obvious example: a Democrats-targeting one and a Republicans-targeting one.
Or perhaps this split in particular is a bad idea. Perhaps an explanation that is deliberately optimized to be bipartisan is needed. But if that is the aim, then writing it would still require actively modeling the biases of both parties, and mindfully routing around them – rather than pretending that they don’t exist.
I feel this is a significant problem with a lot of EA/R public messaging. The (correct) idea that we should be optimizing our communication for conveying the truth in an epistemically sound way gets (incorrectly) interpreted as a mindset where thinking about the optics and the framing at all is considered verboten. As if, by acting like we live in a world where Simulacrum Levels 3-4 don’t exist, we can actually step into that nice world – rather than getting torn apart by SL3-4 agents after we naively expose square miles of attack surfaces.
We should “declaw” ourselves: avoid using rhetorical tricks and other “dark arts”. But that doesn’t mean forgetting that everyone else still has claws they’re eager to use. Or, for that matter, that many messages you intend as tonally neutral and purely informative may have the effect of a rhetorical attack, when deployed in our sociocultural context.
Constantly keeping the political/cultural context in mind as you’re phrasing your messages is a vital part of engaging in high-epistemic-standards communication, rather than something that detracts from it.
Yeah, I agree with a lot of this in principle. But I think the specific case of avoiding saying anything that might have something to do with evolution is a pretty wrong take on this dimension of trying to communicate clearly.
Perhaps. Admittedly, I don’t have a solid model of whether the median American who claims to be a Creationist in surveys would instantly dismiss a message if it starts making arguments from evolution.
Still, I think the general point applies:
1. A convincing case for the AGI Omnicide Risk doesn’t have to include arguments from human evolution.
2. Arguments from human evolution may trigger some people to instinctively dismiss the entire message.
3. If the fraction of such people is large enough, it makes sense to have public AI-Risk messages that avoid evolution-based arguments when making their case.
No, I think this kind of very naive calculation predictably results in worse arguments propagating, people rightfully dismissing those bad arguments (because they aren’t entangled with the real reasons why the people who have actually thought about the problem came to their beliefs on the issue), and ultimately the comms problem getting much harder.
I am in favor of people thinking hard about these issues, but exactly this kind of naive argument sits in an uncanny valley where your comms get substantially worse.
I agree that inventing new arguments for X that sound kind-of plausible to you on the surface level, and which you imagine would work well on a given demographic, is not a recipe for good communication. Such arguments are “artificial”: they’re not native citizens of anyone’s internally consistent world-model, and that’s going to show, leading to unconvincing messages that fall apart under minimal scrutiny.
That’s not what I’m arguing for. The case for the AGI risk is overdetermined: there are enough true arguments for it that you can remove a subset of them and still end up with an internally consistent world-model in which the AGI risk is real. Arguably, there’s even a set of correct arguments that convinces a Creationist, without making them not-a-Creationist in the process.
Convincing messaging aimed at Creationists involves instantiating a world-model containing only the subset of arguments a Creationist would accept, and then (earnestly) arguing from within that world-model.
Edit: Like, here’s a sanity-check: suppose you must convince a specific Creationist that the AGI Risk is real. Do you need to argue them out of Creationism in order to do so?
My guess is no. But also, my guess is that we will probably still have better comms if I err on the side of explaining things the way they come naturally to me, entangled with the way I came to adopt a position, and then they can do a bunch of the work of generalizing. Of course, if something is deeply triggering or mindkilly to someone, then it’s worth routing around it; but it’s not like any analogy with evolution is invalid from the perspective of someone who believes in Creationism. Yes, some of the force of such an analogy would be lost, but most of it comes from the logical consistency, not the empirical evidence.
“…and then they can do a bunch of the work of generalizing”
This is the step which is best made unnecessary if you’re crafting a message for a broad audience, I feel.
Most people are not going to be motivated to put this work in. Why would they? They get bombarded with a hundred credible-ish messages claiming high-importance content on a weekly basis. They don’t have the time or the stamina to do a deep dive into each of them.
Which means any given subculture will generate its own “inferential bridge” between itself and your message: artefacts that do this work for the median member (reviews by prominent subculture members, the takes that go viral, the entire shape of the discourse around the topic, etc.). The more work is needed, the longer these inferential bridges get. The longer they are, the bigger the opportunity to willfully or accidentally mistranslate your message.
Like I said, it doesn’t seem wise, or even fair to your potential audience, to act as if those dynamics don’t take place. As if the only people who deserve consideration are those who would put in the work themselves (despite the fact that it may be a locally suboptimal way to distribute resources under their current world-model), and everyone else is a lost cause.