This seems like a case of making a rule to fix a problem that doesn’t exist.
Are people actually harassing individual AI labs or researchers? Reasonable people who are worried about AI safety should tend not to, since it predictably won’t help the cause and can hurt it. So far, no such harassment problem seems discernible above the background noise.
Naming individual labs and/or researchers is interesting, useful, and keeps things “real.”
So, to be clear, you don’t think that confidently naming people by first name as destroying the world registers emotionally with them?
Mentions of AI companies and AI personalities on LW will intrinsically tend to be adversarial, even if the author spares the polemics and avoids phrases like “so-and-so is working to destroy the world,” because misaligned AI destroying the world is clearly THE focus of this community. So it can be argued that, to be meaningful, a no-names policy would have to apply to practically any discussion of AI: even when AI content is framed positively by the author, the community at large will predictably read it in existential-risk terms.
That’s one issue. Personally, the calculus seems pretty simple: this well-behaved community and its concerns are largely not taken seriously by “the powers” who will predictably create AGI; there is little sign those concerns will be taken seriously before AGI arrives; and there is almost no reason to think humanity will pause and ask “maybe we should put this on hold, since we’ve made no discernible progress toward any alignment solution” before someone trains and runs an AGI. One conclusion that could be drawn: we might as well have nice, uncensored talks about AI, free of petty rules, until then.
Anyone can try, but this seems to sit way out in a practically invisible part of the tail of obstacles to not being destroyed by AGI, if it’s even an obstacle at all.