I don’t know what ‘we’ think, but as someone somewhat familiar with OpenAI’s employees and research output, I can say they are definitely willing to pursue safety and transparency research relevant to existential risk, and I don’t really see how one could do that without opening oneself up to producing research that provides evidence of AI danger.