The main thing I want to address with this research strategy is language models using reasoning that we would not approve of, which could run through convergent instrumental goals like self-preservation, goal-preservation, power-seeking, etc. It doesn’t seem to me that the failure mode you’ve described depends on the AI doing reasoning of which we wouldn’t approve. Even if this research direction were wildly successful, there would be many other failure modes for AI; I’m just trying to address this particularly pernicious one.
It’s possible that the transparency provided by authentic, externalized reasoning could also be useful for reducing other dangers associated with powerful AI, but that’s not the main thrust of the research direction I’m presenting here.