That’s all fair, and given what has been said, despite my initial impression, I don’t think this was “obviously wrong”—but I do have a hope that in this community, especially in acknowledged edge cases, people wait and check with others rather than going ahead.
Maybe. You can be too biased in either direction. In one direction, you violate privacy, which makes people reluctant to say things; in the other, people are so afraid of violating privacy that valuable information doesn’t get spread (because asking is effortful or highly impractical, e.g. Sam Altman isn’t that accessible). People should use their judgment, and sometimes they’ll get it wrong.
I agree that in general there is a tradeoff, and that there will always be edge cases. But in this case, I think judgment should be tilted strongly in favor of discretion. That’s because a high-trust environment is characterized by people being more cautious around public disclosure and openness. Conversely, low-trust environments impose higher costs on communication internal to the community, due to an unwillingness to interact or share information. Given the domain discussed, and the importance of collaboration between key actors in AI safety, I default to valuing higher trust and less disclosure over higher transparency and more sharing.