Sam has said he thought this was “off-the-record-ish” and it was clearly known that it not being recorded was a precondition for giving the talk. I don’t recall what terms were used, but I thought it was pretty obvious—and Sam’s later responses seem to agree—that he expected notes like this not to be made public.
Edit to add: I thought at the time that it was clear that this was off the record, despite that phrase likely not being used. If not, I would not have asked the question which I asked during the meetup.
Sam’s reply indicated that his preference was for it to be taken down. I wouldn’t interpret that as him expressing a violated expectation of what would happen given what was said.
Plus, “off-the-record-ish” would imply he doesn’t expect the information to be perfectly siloed. What does “ish” mean exactly? Unclear—not that that phrase was used at the event anyhow. Overall, it seems the situation was ambiguous and the notes were an edge case.
The thing about situations of ambiguity is that what feels obvious to some people doesn’t feel obvious to everyone. I think I personally would have erred more on the side of caution, like you, but I don’t think p.b. did something obviously wrong. I don’t think Sam’s preferences were well specified.
That’s all fair, and given what has been said, despite my initial impression, I don’t think this was “obviously wrong”—but I do have a hope that in this community, especially in acknowledged edge cases, people wait and check with others rather than going ahead.
Maybe. You can be too biased in either direction. Tilt one way and you violate privacy, which makes people not say things; tilt the other way and people become afraid of violating privacy, so valuable information doesn’t get spread (because asking is effortful or highly impractical, e.g. Sam Altman isn’t that accessible). People should use their judgment, and sometimes they’ll get it wrong.
I agree that in general there is a tradeoff, and that there will always be edge cases. But in this case, I think judgment should be tilted strongly in favor of discretion. A high-trust environment is characterized by people being more cautious around public disclosure and openness; conversely, low-trust environments have higher costs of communication within the community, due to an unwillingness to interact or share information. Given the domain discussed, and the importance of collaboration between key actors in AI safety, I’m by default in favor of placing more value on higher trust and less disclosure than on higher transparency and more sharing.