Thanks for the detailed response, I really appreciate it! For the future I'll see if I can link to more essays (over social media posts) when giving evidence about potentially important outside opinions. I'm going offline in a few minutes, but will try to add some links here as well when I get back on Sunday.
As for the importance of outside opinions that aren't in essay form, I fully agree with you that some amount of critique is inevitable if you are doing good, impactful work. I also agree we should not alter our semi-private conversations on LessWrong and elsewhere to accommodate (bad-faith) critics. Things are different, however, when you are releasing a public-facing product, and talking about questionably defined "AI ethics" in a literal press release. There, everything is about perception, and you should expect people to be influenced heavily by your wording (if your PR folks are doing their jobs right 🙂).
Why should we care about the non-essay-writing public? Well, one good reason is politics. I don't know what your take is on AI governance, but a significant (essay-writing) portion of this community believes it to be important. In order to do effective work there, we will need to be in a position where politicians and business leaders in tech can work with us with minimal friction. If there is one thing politicians (and to a lesser degree some corporations) care about, it is general public perception, and while they are generally fine with very small minority pushback, if the general vibe in Silicon Valley becomes "AI ethicists are mainly partisan, paternalistic censors," there will be a very strong incentive not to work with us.
Unfortunately, I believe that the above vibe has been growing both online and offline as a result of actions that members of this community have had some control over. We shouldn't bend over backwards to accommodate critics, but if we can make our own jobs easier by, say, better communicating our goals in our public-facing work, why not do that?
Things are different, however, when you are releasing a public-facing product, and talking about questionably defined "AI ethics" in a literal press release.
I didn't do this, and LessWrong didn't do this.
For the future I'll see if I can link to more essays (over social media posts) when giving evidence about potentially important outside opinions.
To be clear, as a rule I'm just not reading it if it's got social media screenshots about LW discussion, unless the social media author is someone who also writes good and original essays online.
I don't want LessWrong to be a cudgel in a popularity contest, and you responding to my comment by saying you'll aim to give higher-quality PR advice in the future is missing my point.
I don't know what your take is on AI governance, but a significant (essay-writing) portion of this community believes it to be important.
Citation needed? Anyway, my take is that using LW's reputation in a popularity tug-of-war is a waste of our reputation. Plus you'll lose.
In order to do effective work there, we will need to be in a position where politicians and business leaders in tech can work with us with minimal friction.
Just give up on that. You will not get far with that.
We shouldn't bend over backwards to accommodate critics, but if we can make our own jobs easier by, say, better communicating our goals in our public-facing work, why not do that?
I don't know why you are identifying "ML developers" with "LessWrong users"; the two groups do not overlap much.
This mistake is perhaps what leads you, in the OP, not only to give PR advice, but to give tactical advice on how to get censorship past people without them noticing, which seems unethical to me. In contrast, I would encourage making your censorship blatant, so that people know they can trust you not to be getting one over on them when you speak.
I'm not trying to be wholly critical; I do have admiration for many things in your artistic and written works. But reading this post, I suggest doing a halt, melt, catch fire, and finding a new way to try to help out with the civilizational ruin coming our way from AI. I want LessWrong to be a place of truth and wisdom; I never want LessWrong to be a place where you can go to get tactical advice on how to get censorship past people to comply with broad political pressures in the populace.