I agree that EY is quite overconfident, and I think his arguments for doom are often sloppy and don't hold up. (I think the risk is substantial, but the exact arguments EY gives often don't work.) His communication also often fails to meet basic bars for clarity. I'd probably further agree that 'if EY were able to do so, improving his communication and arguments in a variety of contexts would be extremely good'. And specifically, not saying crazy-sounding shit that is easily misunderstood would probably be good (though there are some real costs here too). But I'm not sure this is at the top of my list of asks for EY.
Further, I agree that "when trying to convey nuanced, complex arguments to general audiences/random people, extremely high-effort communication is often essential".
All that said, this post doesn't differentiate between communication to general audiences and other communication about AI. I assumed it was talking about literally all alignment/AI communication and wanted to push back on that. There are real costs to better communication, and in many cases those costs aren't worth it.
My comment was trying to make a relatively narrow and decoupled point (see decoupling norms, etc.).