I don’t normally just write up takes, especially about current events, but here’s something that I think is potentially crucially relevant to the dynamics involved in the recent actions of the OpenAI board, and that I haven’t seen anyone talk about:
The four members of the board who did the firing do not know each other very well.
Most boards meet a few times per year, for a couple of hours. Only Sutskever works at OpenAI. D’Angelo has held senior roles at tech companies like Facebook and Quora, Toner works in EA/policy, and MacAulay has worked at other tech companies (I’m not aware of any overlap with D’Angelo).
It’s plausible to me that MacAulay and Toner have spent more than 50 hours in each other’s company, but overall I’d probably be willing to bet at even odds that no other pair of them had spent more than 10 hours together before this crisis.
This is probably a key factor in why they haven’t written more publicly about their decision. Decision-by-committee is famously terrible, and in this high-tension scenario it seems pretty likely to me that each of them pushes back hard on anything unilateral from the others. So any writing that represents the group has to get consensus, and they’re too focused on firefighting and finding a new CEO to spend time iterating on an explanation of their reasoning that they can all get behind. That’s why Sutskever’s public writing speaks only for himself: he has said that he regrets the decision, but nothing about why, and nothing that purports to speak for the others.
I think this also predicts that Shear getting involved, as the one direct counterparty they must collectively and repeatedly work things out with, improved matters. (The accounts I’ve read suggest his involvement was a turning point in the negotiations.) He’s the first person they are all engaged with and need to make things work with, so they are forced to reach consensus in a timely fashion, and he can actually demand specific things of them. His presence was a forcing function on them making decisions and continuing to communicate with a single individual.
It’s standard to expect them to have prepared a proper explanation in advance, but from the information in this comment, I believe the firing decision was made within just a couple of days of the event. A fast decision may have been the wrong call, but once it happened, a team whose members don’t really know each other was thrust into an extremely high-stakes position and had to make decisions by consensus. My guess is that this was genuinely quite difficult and that it was very hard to get anything done at all.
This lens on the situation makes me update in the direction that they will eventually explain why, once they’ve had time to iterate on the text laying out their reasoning, now that the basic functioning of the company isn’t under fire.
My current guess is that, in many ways, a lot of the board’s decision-making since the firing has been worse than any individual board member’s would have been had they been acting alone.
In this mess, Altman and Helen should not be held to the same ethical standards, because I believe one of them was given a powerful career in substantial part based on her commitment to the higher ethical standards of a movement that prided itself on openness, transparency, and trying to do the most good.
If Altman played deceptive strategies, and insofar as Helen played back the same deceptive strategies as Altman, then she did not honor the EA name.
(The name has a lot of dirt on it these days already, but still. It is a name that used to mean something back when it gave her power.)
Insofar as you got a position specifically because you were affiliated with a movement claiming to be good and open and honest and to have unusually high moral standards, and then when you arrive you become a standard political player, that’s disingenuous.
because I believe [Helen] has been given a powerful career in substantial part based on her commitments to higher ethical standards [...] then she did not honor the EA name. [...] Insofar as you got a position specifically because you were affiliated with a movement claiming to be good and open and honest and to have unusually high moral standards, and then when you arrive you become a standard political player, that’s disingenuous.
I think Holden being added to the board shouldn’t be mostly attributed to his affiliation with EA. And the Helen board seat is originally from this.
(The relevant history here is that this is the OpenAI grant that resulted in a board seat, while here is a post from just earlier about Holden’s takes on EA.)
Some historical context
Holden in 2013 on the GiveWell blog:
We’re proud to be part of the nascent “effective altruist” movement. Effective altruism has been discussed elsewhere (see Peter Singer’s TED talk and Wikipedia); this post gives our take on what it is and isn’t.
Holden in 2015 on the EA Forum (talking about GiveWell Labs, which grew into OpenPhil):
We’re excited about effective altruism, and we think of GiveWell as an effective altruist organization (while knowing that this term is subject to multiple interpretations, not all of which apply to us).
Holden in April 2016 about plans for working on AI:
Potential risks from advanced artificial intelligence will be a major priority for 2016. Not only will Daniel Dewey be working on this cause full-time, but Nick Beckstead and I will both be putting significant time into it as well. Some other staff will be contributing smaller amounts of time as appropriate.
(Dewey, who IIRC had worked at FHI and CEA before this, and Beckstead, who came from FHI.)
Holden in 2016 about why they’re making potential risks from advanced AI a priority:
I believe the Open Philanthropy Project is unusually well-positioned from this perspective:
We are well-connected in the effective altruism community, which includes many of the people and organizations that have been most active in analyzing and raising awareness of potential risks from advanced artificial intelligence. For example, Daniel Dewey has previously worked at the Future of Humanity Institute and the Future of Life Institute, and has been a research associate with the Machine Intelligence Research Institute.
Holden about the OpenAI grant in 2017:
This grant initiates a partnership between the Open Philanthropy Project and OpenAI, in which Holden Karnofsky (Open Philanthropy’s Executive Director, “Holden” throughout this page) will join OpenAI’s Board of Directors and, jointly with one other Board member, oversee OpenAI’s safety and governance work.
OpenAI initially approached Open Philanthropy about potential funding for safety research, and we responded with the proposal for this grant. Subsequent discussions included visits to OpenAI’s office, conversations with OpenAI’s leadership, and discussions with a number of other organizations (including safety-focused organizations and AI labs), as well as with our technical advisors.
As a negative datapoint: I looked through a bunch of the media articles linked at the bottom of this GiveWell page, and most of them do not mention Effective Altruism, only effective giving / cost-effectiveness. So their Effective Altruist identity will have been less salient to folks who primarily know of Open Philanthropy through its media appearances.
I think this is accurately described as “an EA organization got a board seat at OpenAI”, and the actions of those board members reflect directly on EA (whether internally or externally).
Why did OpenAI come to trust Holden with this position of power? My guess is that Holden’s and Dustin’s personal reputations were substantial factors here, along with the major funding Open Philanthropy could provide, but also that many involved people’s excitement about and respect for the EA movement were a relevant factor in OpenAI wanting to partner with Open Philanthropy, and that Helen’s and Tasha’s actions have directly and negatively reflected on how the EA ecosystem is viewed by OpenAI leadership.
There’s a separate question about why Holden picked Helen Toner and Tasha MacAulay, and to what extent they were given power in the world by the EA ecosystem. It seems clear that these people have gotten power through their participation in the EA ecosystem (as OpenPhil is an EA institution), and to the extent that the EA ecosystem advertises itself as more moral than other places, if they executed the standard level of deceptive strategies that others in the tech industry would use in their shoes, then that advertising was false messaging.
I’m not quite sure how to balance, in the above comment, between “this seems to me like it could explain a lot” and “this might just be factually false”. So I guess I’m leaving this comment, and lampshading it.
The most important thing right now: I still don’t know why they chose to fire Altman, and especially why they chose to do it so quickly.
That’s an exceedingly costly choice to make (i.e. at that speed), and so when I start to speculate on why, I only come up with commensurately worrying states of affairs, e.g. he did something egregious enough to warrant it, or he didn’t and the board acted with great hostility.
Their going back on their decision is Bayesian evidence for the latter: if he’d done something egregious, they’d simply be able to tell the relevant folks, and Altman wouldn’t be getting his job back.
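To make the structure of that inference explicit (a minimal sketch in odds form; the event labels are my own shorthand, not anything the board has stated): let $E$ be the board reversing its decision, $H_1$ be “Altman did something egregious enough to warrant the firing”, and $H_2$ be “he didn’t, and the board acted with great hostility”. Then
$$\frac{P(H_2 \mid E)}{P(H_1 \mid E)} \;=\; \frac{P(E \mid H_2)}{P(E \mid H_1)} \cdot \frac{P(H_2)}{P(H_1)},$$
and since a board holding evidence of egregious wrongdoing could likely have ended the standoff by disclosing it, the likelihood ratio $P(E \mid H_2)/P(E \mid H_1)$ plausibly exceeds 1, so observing the reversal shifts the odds toward $H_2$.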
So many people are asking this (e.g. everyone at the company). I’ll be very worried if the reason doesn’t come out.
In brief: I’m saying that once you condition on:
1. The board decided the firing was urgent.
2. The board does not know each other very well and defaults to making decisions by consensus.
3. The board is immediately in a high-stakes, high-stress situation.
Then you naturally get:
4. The board fails to come to consensus on public comms about the decision.
Also, I don’t know that I’ve mentioned this before, but after reading enough of his public tweets, I blocked Sam Altman a long time ago. He seemed very political in how he used speech, and I didn’t want to include him in my direct memetic sphere.
As a small pointer to why: he would commonly choose not to share object-level information about something, but instead share how he thought social reality should change. I think I recall him saying that the social consensus on fusion energy was wrong and pushing for it to move in a specific direction, rather than just plainly saying what his object-level beliefs about fusion were or offering a particular counter-argument to an argument that was going around.
It’s been a year or two since I blocked him, so I don’t recall more specifics, but it seemed worth mentioning, as a datapoint for folks to include in their character assessments.
My current guess is that most of the variance in what happened is explained by a board where 3 out of 4 members don’t know the dynamics of upper management at a multi-billion-dollar company, where the members don’t know each other well, and where (for some reason) the decision was made very suddenly. I have pretty low expectations given that situation. Shear seems like a pretty great replacement to have landed given the hand they were dealt. Assuming they had a legitimate reason to fire the CEO, they probably failed primarily through lack of skill and competence, more so than as a result of Altman’s superior deal-making skill and leadership abilities (though those were what finished it off).