How open do we think OpenAI would be to additional research showing the dangers of AGI? If OpenAI is pursuing a perilous course, perhaps this community should prioritize doing the kind of research that would persuade them to slow down. Sam Altman came across to me, at the two SSC talks he gave, as highly rational in the sense this community uses the term.
If this is the correct path, we would benefit from people who have worked at OpenAI explaining what kind of evidence would be needed to influence them towards Eliezer’s view of AGI.
I don’t know what ‘we’ think, but as someone somewhat familiar with OpenAI’s employees and research output, I can say they are definitely willing to pursue safety and transparency research relevant to existential risk. And I don’t really see how one could do that without opening oneself up to producing research that provides evidence of AI danger.