The first one feels a bit too optimistic. It’s something more like: Are they able to be direct in their disagreement with one another? How much internal politicking is there? How much ability do individual leaders have to make unilateral decisions? Etc.
The second one is more about alignment, takeoff dynamics, and timelines. All the details, like the likelihood of mesa-optimisers. What are their thoughts on this, and how much do they think about it?
For the third, that one’s good. Also, how differently things would’ve gone at DeepMind, and how good/bad the world would be if Musk hadn’t shifted the Overton window so much (which I think is counterfactually linked with OpenAI existing; you get both or neither).