Ben Pace has said that perhaps he doesn’t disagree with you in particular about this, but I sure think I do.[1]
I think the amount of stress incurred when doing public communication is nearly orthogonal to these factors, and in particular is, when trying to be as careful about anything as Zac is trying to be about confidentiality, quite high at baseline.
I don’t see how the first half of this could be correct, and while the second half could be true, it doesn’t seem to me to offer meaningful support for the first half either (instead, it seems rather… off-topic).
As a general matter, even if it were true that, no matter what you say, at least one person would actively misinterpret your words, that fact would have little bearing on whether you can causally influence the proportion of readers/community members who end up with (what seem to you like) the correct takeaways from such a discussion.
Moreover, when you and your company have actually done something meaningful and responsible to deal with safety issues, the major concern on your mind when communicating publicly is figuring out how to make it clear to everyone that you are on top of things without revealing confidential information. That is certainly stressful, but much less so than the additional constraint you face in a world where you have nothing concrete to back your generic claims of responsibility with, since in that spot you can no longer fall back on (a partial version of) the truth as your defense. For the vast majority of human beings, lying and obfuscating with the intent to mislead are significantly more psychologically straining than telling the truth as-you-see-it.
Overall, I also think I disagree about the amount of stress that would be caused by conversations with AI safety community members. As I have said earlier:
AI safety community members are not actually arbitrarily intelligent Machiavellians with the ability to convincingly twist every (in-reality) success story into an (in-perception) irresponsible gaffe;[2] the extent to which they can do this depends very heavily on whether you have anything substantive to bring up in the first place.
In any case, I have already made all these points in a number of ways in my previous response to you (which you haven’t addressed, and which still seem to me to be entirely correct).

[1] He also said that he thinks your perspective makes sense, which… I’m not really sure about.

[2] Quite the opposite, actually, if the shift in wider society’s opinion of EA in the wake of the SBF scandal is any indication of how the rationalist/EA/AI safety cluster typically handles PR.