I think it’s important to remind people that dramaposting about OpenAI leadership is still ultimately dramaposting. Make the update on OpenAI’s nonprofit leadership structure having an effect, etc., and keep looking at the news about once a day until the events stop being eventful. While you’re doing that, keep in mind that ultimately the laminated monkey hierarchy is not what’s important about OpenAI or any of these other firms, at least terminally.
This is important news. I personally desire to be kept updated on this, and LW is a convenient (and appropriate) place to get this information. And I expect other users feel similarly.
What’s different between this and e.g. the developments with Nonlinear, is that the developments here will have a big impact on how the AI field (and by one layer of indirection, the fate of the world) develops.
This is important news. I personally desire to be kept updated on this, and LW is a convenient (and appropriate) place to get this information. And I expect other users feel similarly.
I don’t disagree! Even if you’re not directly involved in the goings-on, it’s probably still important to tune in once a day or so.
Ummm, the laminated monkey hierarchy is going to determine exactly who launches the first AGI, and therefore who makes the most important call in humanity’s history.
Even if we provide them a solid alignment solution, that only makes their choice easier; it’s still going to be some particular person’s call.