How do you anticipate and strategize around dual-use concerns, particularly for basic / blue-sky interpretability-enabling research?
I think that my personal thoughts on capabilities externalities are reflected well in this post.
I’d also note that this concern isn’t unique to interpretability work but applies to alignment work in general. And in comparison to other alignment techniques, I think the downside risks of interpretability tools are most likely lower than those of stuff like RLHF. Most theories of change for interpretability helping with AI safety involve engineering work at some point, so I would expect that most interpretability researchers have similar attitudes toward dual-use concerns.
In general, a tool being engineering-relevant does not imply that it will be competitive for setting a new SOTA on something risky. So when I talk about engineering relevance in this sequence, I don’t have big advancements in mind so much as stuff like fairly simple debugging work.
Fwiw this does not seem to be in the Dan Hendrycks post you linked!
Correct. I intended the three paragraphs in that comment to be separate thoughts. Sorry.