I just saw this, but this feels like a better-late-than-never situation. I think hard conversations about the possibilities of increasing existential risk should happen.
I work at OpenAI. I have worked at OpenAI for over five years.
I think we definitely should be willing and able to have these sorts of conversations in public, mostly for the reasons other people have listed. AnnaSalamon's answer is the one I agree with most.
I also want to add that this has made me deeply uncomfortable and anxious countless times over the past few years. It can be a difficult thing to navigate well or navigate efficiently. I feel like I've gotten better at it, and better at knowing/managing myself. I see newer colleagues also suffering from this, and I try to help them when I can.
I'm not sure this is the right answer for all contexts, but I am optimistic about this one. I've found the rationality community and the LessWrong community to be much better than average at dealing with bad-faith arguments and at cutting toward the truth. I think there are communities where it would go poorly enough that having the conversation could be net-negative.
Side note: I really don't have a lot of context on the Elon Musk connection, and he has not really been involved for years. I think "what things (including things OpenAI is doing) might increase existential risk" is an important conversation to have when analyzing forecasts, predictions, and research plans. I am less optimistic about "what tech executives think about other tech executives" going well.