This might result in a different stance toward OpenAI.
But part of the problem here is that the question “what’s the impact of our stance on OpenAI on existential risks?” is potentially very different from the question “is OpenAI’s current direction increasing or decreasing existential risks?” Because people outside of OpenAI have much more control over their own stance than over OpenAI’s direction, the first question is much more actionable. And so we run into the standard question-substitution problem, where we might be pretending to talk about a probabilistic assessment of the org’s impact while actually targeting the question “how do I think people should relate to OpenAI?”
[That said, I see the desire to have clear discussion of the current direction, and that’s why I wrote as much as I did, but I think it has prerequisites that aren’t quite achieved yet.]