I like this because it makes it clear that legibility of results is the main concern. There are certain ways of writing and publishing information that communities 1) and 2) are accustomed to. Writing that way both makes your work more likely to be read and incentivizes you to state the key claims clearly (and, when possible, formally), which is generally good for making collaborative progress.
In addition, one good practice to adopt is comparing to prior and related work; the ML community is bad on this front, but some people genuinely do care. It also helps AI safety research stack, so new work can build on prior results.
To avoid this comment section being an echo chamber: you do not have to follow all academic customs. Here is how to avoid some of the harmful ones that are unfortunately present:
Do not compromise on the motivation or related work to make your work seem less weird to academics. If your work relies on some LW/AF posts, do cite them. If your work is intended to be relevant to x-risk, say so.
Avoid doing anything whose only purpose is to appease an anonymous reviewer.
Never compromise on the facts. If you have results showing that some famous prior paper is wrong or flawed, say so loud and clear, in papers and elsewhere. It doesn’t matter who you might offend.
AI x-risk research has its own perfectly usable risk sheet you can include in your papers.
And finally: do not publish potentially harmful things just because doing so benefits science. Science has no moral value. Society gives too much moral credit to scientists compared to other groups of people.