From a broad policy perspective, it can be tricky to know what to communicate. I think it helps if we think a bit more about the effects of our communication and a bit less about correctly conveying our level of credence in particular claims. Let me explain.
If we communicate the simple idea that AGI is near, it pushes people to work on safety projects that would be good to work on even if AGI is not near, while paying some costs in terms of reputation, mental health, and personal wealth.
If we communicate the simple idea that AGI is not near, people will feel less need to work on safety soon. This would let them avoid missing out on opportunities that would be good to take before they actually need to focus on AI safety.
We can only really communicate one thing at a time to people. Also, we should worry more about the tail risks of false positives (thinking we can build AGI safely when we cannot) than of false negatives (thinking we can’t build AGI safely when we can). Taking these two facts into consideration, I think the policy implication is clear: unless there is extremely strong evidence that AGI is not near, we must act and communicate as if AGI is near.
I reached this via Joachim pointing it out as an example of someone urging epistemic defection around AI alignment, and I have to agree with him there. I think the extra difficulty of communicating “we think there’s a substantial probability that AGI happens in the next 10 years” rather than “AGI is near” is worth it even from a PR perspective, because pretending you know the day and the hour smells like bullshit to the most important people who need convincing that AI alignment is nontrivial.
I left a comment over in the other thread, but I think Joachim misunderstands my position.
In the above comment I’ve taken for granted that there’s a non-trivial possibility that AGI is near, so I’m not arguing we should say “AGI is near” regardless of whether it is or not. We don’t know whether it is; we only have our guesses. But so long as there’s a non-trivial chance that AGI is near, I think that’s the more important message to communicate.
Overall it would be better if we could communicate something like “AGI is probably near”, but “probably” and similar terms tend to get rounded off. Even if you do literally say “AGI is probably near”, that’s not what people will hear. If you’re going to say “probably”, my argument is that it’s better if people round the “probably” off to “near” rather than “not near”.
I agree with “When you say ‘there’s a good chance AGI is near’, the general public will hear ‘AGI is near’”.
However, the general public isn’t everyone, and the people who can distinguish between the two claims are the most important to reach (per capita, and possibly in sum).
So we’ll do better by saying what we actually believe, while taking into account that some audiences will round probabilities off (and seeking ways to be rounded closer to the truth while still communicating accurately to anyone who does understand probabilistic claims). The marginal gain from rounding ourselves off at the start isn’t worth the marginal loss of looking transparently overconfident to those who can tell the difference.
I’m replying only here because spreading discussion over multiple threads makes it harder to follow.
You left a reply on a question asking how to communicate about reasons why AGI might not be near. The question cites the costs of “the community” thinking that AGI is closer than it really is as a reason to communicate about reasons it might not be so close.
So I understood the question as asking about communication within the community (my guess: people seriously working on and thinking about AI-safety-as-in-AI-not-killing-everyone), where it’s important to actually try to figure out the truth.
You replied (as I understand it) that when we communicate with the general public we can transmit only one idea, so we should communicate that AGI is near (if we assign a not-very-low probability to that).
I think the biggest problem is that posting your “general public communication” answer as a reply to a question about “community communication” pushes towards less clarity in the community, where I think clarity is important.
I’m also not sold on the “you can communicate only one idea” thing, but I mostly don’t care to talk about it right now (it would be nice if someone else worked it out for me, but I don’t currently have the capacity to do it myself).
Ah, I see. I have to admit, I write a lot of my comments between other things, and I missed that the context of the post could cause my words to be interpreted this way. These days I’m often in executive mode rather than scholar mode and miss nuance if it’s not clearly highlighted. Hence my misunderstanding, but it also reflects where I’m coming from with this answer!