I reached this via Joachim pointing it out as an example of someone urging epistemic defection around AI alignment, and I have to agree with him there. I think the higher difficulty posed by communicating “we think there’s a substantial probability that AGI happens in the next 10 years” vs “AGI is near” is worth it even from a PR perspective, because pretending you know the day and the hour smells like bullshit to the most important people who need convincing that AI alignment is nontrivial.
I left a comment over in the other thread, but I think Joachim misunderstands my position.
In the above comment I’ve taken for granted that there’s a non-trivial possibility that AGI is near. I’m not arguing we should say “AGI is near” regardless of whether it is or not; we don’t know whether it is, we only have our guesses. My claim is that, so long as there’s a non-trivial chance that AGI is near, that’s the more important message to communicate.
Overall it would be better if we could communicate something like “AGI is probably near”, but “probably” and similar qualifiers tend to get rounded off. Even if you do literally say “AGI is probably near” or similar, that’s not what people will hear, and if you’re going to say “probably”, my argument is that it’s better for them to round it off to “near” rather than “not near”.
I agree with “When you say ‘there’s a good chance AGI is near’, the general public will hear ‘AGI is near’”.
However, the general public isn’t everyone, and the people who can distinguish between the two claims are the most important to reach (per capita, and possibly in sum).
So we’ll do better by saying what we actually believe, while taking into account that some audiences will round probabilities off (and seeking ways to be rounded closer to the truth while still communicating accurately to anyone who does understand probabilistic claims). The marginal gain from rounding ourselves off at the start isn’t worth the marginal loss from looking transparently overconfident to those who can tell the difference.
I’m replying only here because spreading discussion over multiple threads makes it harder to follow.
You left a reply on a question asking how to communicate the reasons why AGI might not be near. The question cites the costs of “the community” thinking AI is closer than it really is as a reason to communicate the reasons it might not be so close.
So I understood the question as asking about communication within the community (my guess: people seriously working on and thinking about AI-safety-as-in-AI-not-killing-everyone), where it’s important to actually try to figure out the truth.
You replied (as I understand it) that when we communicate with the general public we can transmit only one idea, so we should communicate that AGI is near (if we assign a not-very-low probability to that).
The biggest problem I have is that posting “general public communication” as a reply to a question about “community communication” pushes towards less clarity in the community, where I think clarity is important.
I’m also not sold on the “you can communicate only one idea” claim, but I mostly don’t care to discuss it right now (it would be nice if someone else worked it out for me, but right now I don’t have the capacity to do it myself).
Ah, I see. I have to admit, I write a lot of my comments between other things, and I missed that the context of the post could cause my words to be interpreted this way. These days I’m often in executive mode rather than scholar mode and miss nuance if it’s not clearly highlighted; hence my misunderstanding, but it also reflects where I’m coming from with this answer!