Greg Brockman and Sam Altman (cosigned):
[...]
First, we have raised awareness of the risks and opportunities of AGI so that the world can better prepare for it. We’ve repeatedly demonstrated the incredible possibilities from scaling up deep learning
chokes on coffee
I’m not sure how I feel about the whole idea of this endeavour in the abstract. But as someone who doesn’t know Ilya Sutskever and has only followed the public coverage, I’m pretty worried about him in particular running it, whether or not decision-making rests with a single individual. Running this safely will likely require a lot of moral integrity and courage, and the board drama made it look to me like Ilya disqualified himself from having enough of either.
This is lightly held, since I don’t know the details. But from the public record alone, I don’t see why I should believe that Ilya has sufficient moral integrity and courage for this project, even if he might “mean well” at the moment.