One example of (2) is disapproving of publishing AI alignment research that may advance AI capabilities. That’s because you’re criticizing the research not on the basis of “this is wrong” but on the basis of “it was bad to say this, even if it’s right”.