Also seems pretty significant:

As a part of this transition, Greg Brockman will be stepping down as chairman of the board and will remain in his role at the company, reporting to the CEO.

The remaining board members are: OpenAI chief scientist Ilya Sutskever, independent directors Quora CEO Adam D’Angelo, technology entrepreneur Tasha McCauley, and Georgetown Center for Security and Emerging Technology’s Helen Toner.

Has anyone collected their public statements on various AI x-risk topics anywhere?
Adam D’Angelo via X:
Oct 25: This should help access to AI diffuse throughout the world more quickly, and help those smaller researchers generate the large amounts of revenue that are needed to train bigger models and further fund their research.

Oct 25: We are especially excited about enabling a new class of smaller AI research groups or companies to reach a large audience, those who have unique talent or technology but don’t have the resources to build and market a consumer application to mainstream consumers.

Sep 17: This is a pretty good articulation of the unintended consequences of trying to pause AI research in the hope of reducing risk: [citing Nora Belrose’s tweet linking her article]

Aug 25: We (or our artificial descendants) will look back and divide history into pre-AGI and post-AGI eras, the way we look back at prehistoric vs “modern” times today.

Aug 20: It’s so incredible that we are going to live through the creation of AGI. It will probably be the most important event in the history of the world and it will happen in our lifetimes.
A bit, not shareable.
Helen is an AI safety person. Tasha is on the Effective Ventures board. Ilya leads superalignment. Adam signed the CAIS statement.
For completeness—in addition to Adam D’Angelo, Ilya Sutskever and Mira Murati signed the CAIS statement as well.
Didn’t Sam Altman also sign it?
Yes, Sam has also signed it.
Notably, of the people involved in this, Greg Brockman did not sign the CAIS statement, and I believe that was a purposeful choice.
Also, D’Angelo is on the board of Asana, Dustin Moskovitz’s company (the same Moskovitz who funds Open Phil).
Judging from his tweets, D’Angelo seems notably unconcerned about AI risk, so I was quite taken aback to find out he was on the OpenAI board. Though I might be misinterpreting his views based on vibes.
I couldn’t remember where from, but I know that Ilya Sutskever at least takes x-risk seriously. I remember him recently going public about how failing at alignment would essentially mean doom. I think it was published as an article on a news site rather than as an interview, which is his usual format. Someone with a better memory than me could probably find it.
EDIT: Nevermind, found them.
Thanks, edited.