Some historical context

Holden in 2013 on the GiveWell blog:

> We’re proud to be part of the nascent “effective altruist” movement. Effective altruism has been discussed elsewhere (see Peter Singer’s TED talk and Wikipedia); this post gives our take on what it is and isn’t.
Holden in 2015 on the EA Forum (talking about GiveWell Labs, which grew into OpenPhil):
> We’re excited about effective altruism, and we think of GiveWell as an effective altruist organization (while knowing that this term is subject to multiple interpretations, not all of which apply to us).
Holden in April 2016 about plans for working on AI:
> Potential risks from advanced artificial intelligence will be a major priority for 2016. Not only will Daniel Dewey be working on this cause full-time, but Nick Beckstead and I will both be putting significant time into it as well. Some other staff will be contributing smaller amounts of time as appropriate.
(Dewey, who IIRC had worked at FHI and CEA before this; Beckstead came from FHI.)
Holden in 2016 about why they’re making potential risks from advanced AI a priority:

> I believe the Open Philanthropy Project is unusually well-positioned from this perspective:
>
> We are well-connected in the effective altruism community, which includes many of the people and organizations that have been most active in analyzing and raising awareness of potential risks from advanced artificial intelligence. For example, Daniel Dewey has previously worked at the Future of Humanity Institute and the Future of Life Institute, and has been a research associate with the Machine Intelligence Research Institute.
Holden about the OpenAI grant in 2017:

> This grant initiates a partnership between the Open Philanthropy Project and OpenAI, in which Holden Karnofsky (Open Philanthropy’s Executive Director, “Holden” throughout this page) will join OpenAI’s Board of Directors and, jointly with one other Board member, oversee OpenAI’s safety and governance work.
>
> OpenAI initially approached Open Philanthropy about potential funding for safety research, and we responded with the proposal for this grant. Subsequent discussions included visits to OpenAI’s office, conversations with OpenAI’s leadership, and discussions with a number of other organizations (including safety-focused organizations and AI labs), as well as with our technical advisors.
As a negative datapoint: I looked through a bunch of the media articles linked at the bottom of this GiveWell page, and most of them do not mention effective altruism, only effective giving / cost-effectiveness. So awareness of Open Philanthropy’s Effective Altruist identity is likely lower among folks who primarily know of the organization through its media appearances.
I think this is accurately described as “an EA organization got a board seat at OpenAI”, and the actions of those board members reflect directly on EA (whether internally or externally).
Why did OpenAI come to trust Holden with this position of power? My guess is that Holden’s and Dustin’s personal reputations were substantial factors here, along with the major funding Open Philanthropy could provide, but also that many involved people’s excitement about and respect for the EA movement was a relevant factor in OpenAI wanting to partner with Open Philanthropy, and that Helen’s and Tasha’s actions have directly and negatively reflected on how the EA ecosystem is viewed by OpenAI leadership.
There’s a separate question about why Holden picked Helen Toner and Tasha McCauley, and to what extent they were given power in the world by the EA ecosystem. It seems clear that these people gained power through their participation in the EA ecosystem (as OpenPhil is an EA institution), and to the extent that the EA ecosystem advertises itself as more moral than other places, if they executed the same level of deceptive strategies that others in the tech industry would have in their shoes, then that advertising was false messaging.