I actually ran it a couple of times (which was hard to keep track of due to the current tech issues). There were more complex versions (like versions that went over analogies involving specific climate change organizations), but I liked this version better. “bland RLHF’d PR suggestions” are useful when the problem involves PR and humans.
I would probably have gone into more detail about the “call these people science deniers” thing. It frustrates me that the public thinks that those denying capabilities are the experts on capabilities. But GPT-4’s suggestions are probably more actionable than mine, and they seemed to have a higher signal-to-noise ratio than anything I would write.
Hmm. To clarify, I mean that the suggestions from GPT-4 feel low on substance about how to clarify the situation while maintaining reputation, and are focused on PR instead.
I think capabilities denial is basically a PR problem. This is different from denying the importance of the alignment problem: capabilities deniers are peddling pseudo-scientific explanations of why the AIs merely “seem” capable.
By contrast, I think alignment is still fuzzy enough that there is no scientific consensus, so techniques for dealing with science denial are less applicable.
PR and communication are not the same thing. It seems to me to be a communication problem; maintaining positive affect for a brand is not the goal, which it would need to be for the term “PR” to be appropriate. The difference between reputation and PR is that if communicating well in order to better explain a situation also happens to reduce the positive affect toward the folks doing the communicating, that still counts as a success; honesty and accurate affect must be the goal for a communication to count as reputation maintenance rather than PR.
This is really just scientific communication anyhow: the variable we want people to have more accurate models of is “what can AI do now, and what might it be able to do soon?”, not anything about any human’s intent or honor.