The original stated rationale behind OpenAI was https://www.wired.com/2015/12/how-elon-musk-and-y-combinator-plan-to-stop-computers-from-taking-over/. I gather that another big rationale behind OpenAI was ‘Elon was scared of Demis doing bad things with AGI’; and another big rationale was ‘Sam was already interested in doing cool stuff with AI regardless of the whole AI-risk thing’. (Let me know if you think any of this summary is misleading or wrong.)
Since then, Elon has left, and Sam and various other individuals at OpenAI seem to have improved their models of AGI risk a lot. But this does seem like a very different situation from ‘founding an org based on an at-all-good understanding of AI risk, filtering for staff based on such an understanding, etc.’
This link is dead for me. I found this link that points to the same article.
Thanks! Edited.