I can’t tell if this post is trying to discuss communicating about anything related to AI or alignment or is trying to more specifically discuss communication aimed at general audiences. I’ll assume it’s discussing arbitrary communication on AI or alignment.
I feel like this post doesn’t engage sufficiently with the costs associated with high-effort writing and the alternatives to targeting arbitrary LessWrong users interested in alignment.
For instance, when communicating research it’s cheaper to just communicate with people who are operating within the same rough paradigm and ignore people who aren’t sold on its rough premises. If this results in other people having trouble engaging, that seems like a reasonable cost in many cases.
An even narrower approach is to only communicate beliefs in person to people who you frequently talk to.
I think I agree with this regarding inside-group communication, and have now edited the post to add something kind-of-to-this-effect at the top.
While writing well is one of the aspects the OP focuses on, your reply doesn’t address the broader point: EY (and those of similar repute/demeanor) pairs his catastrophic predictions with a stark lack of effective exposition and discussion of the issue and potential solutions for a broader audience. To add insult to injury, he seems to actively try to demoralize dissenters in a very conspicuous and perverse manner, which detracts from his credibility and subtly but surely nudges people further and further from taking his ideas (and similar ones) seriously. He gets frustrated by people not understanding him; hence the title of the OP, which implies the source of his frustration is his own murkiness rather than a lack of faculty in the people listening to him.

To me, the most obvious examples of this are his guest appearances on podcasts (namely Lex Fridman’s and Dwarkesh Patel’s, the only two I’ve listened to). Neither of these hosts is dumb, yet by the end of their respective episodes both were confused or otherwise put off, and there was palpable tension between the hosts and EY. Considering these are very popular podcasts, it is reasonable to assume that he agreed to appear on them to reach a wider audience. He does other things to reach wider audiences too, e.g. his Twitter account and the Time Magazine article he wrote. Other people like him do similar things to reach wider audiences.
Having laid this out, you can probably predict my thoughts regarding the cost-benefit analysis you did. Since EY and similar folk are predicting outcomes as unfavorable as human extinction and are actively trying to recruit people from a wider audience to work on these problems, is it really a reasonable cost to continue going about this as they have?
Considering the potential impact on the field of AI alignment and the recruitment of individuals who may contribute meaningfully to addressing the challenges currently faced, I would argue that the cost of improving communication is more than justified. EY and similar figures should strive to balance efficiency in communication with the need for clarity, especially when the stakes are so high.
I agree that EY is quite overconfident, and I think his arguments for doom are often sloppy and don’t hold up. (I think the risk is substantial, but often the exact arguments EY gives don’t work.) And his communication often fails to meet basic bars for clarity. I’d also probably agree with ‘if EY was able to do so, improving his communication and arguments in a variety of contexts would be extremely good’. And specifically not saying crazy-sounding shit which is easily misunderstood would probably be good (there are some real costs here too). But I’m not sure this is at the top of my asks list for EY.
Further, I agree with “when trying to argue nuanced complex arguments to general audiences/random people, doing extremely high effort communication is often essential”.
All this said, this post doesn’t differentiate between communication to general audiences and other communication about AI. I assumed it was talking about literally all alignment/AI communication and wanted to push back on that. There are real costs to better communication, and in many cases those costs aren’t worth it.
My comment was trying to make a relatively narrow and decoupled point (see decoupling norms etc.).
If you think that AI is going to kill everyone, sooner or later you are going to have to communicate that to everyone. That doesn’t mean every article has to be at the highest level of comprehensibility, but it does mean you shouldn’t end up with the in-group problem of being unable to communicate with outsiders at all.