Bret Taylor and Larry Summers (members of the current OpenAI board) have responded to Helen Toner and Tasha McCauley in The Economist.
The key passages:
Helen Toner and Tasha McCauley, who left the board of OpenAI after its decision to reverse course on replacing Sam Altman, the CEO, last November, have offered comments on the regulation of artificial intelligence (AI) and events at OpenAI in a By Invitation piece in The Economist.
We do not accept the claims made by Ms Toner and Ms McCauley regarding events at OpenAI. Upon being asked by the former board (including Ms Toner and Ms McCauley) to serve on the new board, the first step we took was to commission an external review of events leading up to Mr Altman’s forced resignation. We chaired a special committee set up by the board, and WilmerHale, a prestigious law firm, led the review. It conducted dozens of interviews with members of OpenAI’s previous board (including Ms Toner and Ms McCauley), OpenAI executives, advisers to the previous board and other pertinent witnesses; reviewed more than 30,000 documents; and evaluated various corporate actions. Both Ms Toner and Ms McCauley provided ample input to the review, and this was carefully considered as we came to our judgments.
The review’s findings rejected the idea that any kind of AI safety concern necessitated Mr Altman’s replacement. In fact, WilmerHale found that “the prior board’s decision did not arise out of concerns regarding product safety or security, the pace of development, OpenAI’s finances, or its statements to investors, customers, or business partners.”
Furthermore, in six months of nearly daily contact with the company we have found Mr Altman highly forthcoming on all relevant issues and consistently collegial with his management team. We regret that Ms Toner continues to revisit issues that were thoroughly examined by the WilmerHale-led review rather than moving forward.
Ms Toner has continued to make claims in the press. Although perhaps difficult to remember now, OpenAI released ChatGPT in November 2022 as a research project to learn more about how useful its models are in conversational settings. It was built on GPT-3.5, an existing AI model which had already been available for more than eight months at the time.
He was caught lying about the non-disparagement agreements, but I guess lying to the public is fine as long as you don’t lie to the board?
Taylor’s and Summers’ comments here are pretty disappointing—it seems that they have no issue with, and maybe even endorse, Sam’s now-publicly-verified bad behavior.
That’s exactly the line that made my heart sink.
I find it a weird thing to choose to say/emphasize.
The issue under discussion isn’t whether Altman hid things from the new board; it’s whether he hid things from the old board a long while ago.
Of course he’s going to seem forthcoming towards the new board at first. So, the new board having the impression that he was forthcoming towards them? This isn’t information that helps us much in assessing whether to side with Altman vs. the old board. That makes me think: why report on it? It would be a more relevant update if Taylor or Summers were willing to stick their necks out a little further and say something stronger and more direct, something more in the direction of (hypothetically), “In all our by-now extensive interactions with Altman, we got the sense that he’s the sort of person you can trust; in fact, he had surprisingly circumspect and credible things to say about what happened, and he seems self-aware about things that he could’ve done better (and those things seem comparatively small or at least very understandable).” If they had added something like that, it would have been more interesting and surprising. (At least for those who are currently skeptical or outright negative towards Altman; but also “surprising” in terms of “nice, the new board is really invested in forming their own views here!”)
By contrast, this combination of basically defending Altman (and implying pretty negative things about Toner and McCauley’s objectivity and their judgment on things that they deem fair to tell the media), but doing so without sticking their necks out, makes me worried that the board is less invested in outcomes and more invested in playing their role. By “not sticking their necks out,” I mean the outsourcing of judgment-forming to the independent investigation and the mentioning of clearly unsurprising and not-very-relevant things like whether Altman has been forthcoming to them, so far. By “less invested in outcomes and more invested in playing their role,” I mean the possibility that the new board maybe doesn’t consider it important to form opinions at the object level (on Altman’s character and his suitability for OpenAI’s mission, and generally having a burning desire to make the best CEO-related decisions). Instead, the alternative mode they could be in would be having in mind a specific “role” that board members play, which includes things like, e.g., “check whether Altman ever gets caught doing something outrageous,” “check if he passes independent legal reviews,” or “check if Altman’s answers seem reassuring when we occasionally ask him critical questions.” And then, that’s it, job done. If that’s the case, I think that’d be super unfortunate. The more important the org, the more it matters to have an engaged/invested board that considers itself ultimately responsible for CEO-related outcomes (“will history look back favorably on their choices regarding the CEO”).
To sum up, I’d have much preferred it if their comments had either included them sticking their necks out a little more, or if I had gotten from them more of a sense of still withholding judgment. I think the latter would have been possible even in combination with still reminding the public that Altman (e.g.) passed that independent investigation, or that some of the old board members’ claims against him seem thinly supported, etc. (If that’s their impression, fair enough.) For instance, it’s perfectly possible to say something like, “In our duty as board members, we haven’t noticed anything unusual or worrisome, but we’ll continue to keep our eyes open.” That’s admittedly pretty similar, in substance, to what they actually said. Still, it would read as a lot more reassuring to me because of its different emphasis. My alternative phrasing would help convey that (1) they don’t naively believe that Altman – in worlds where he is dodgy – would have likely already given things away easily in interactions with them, and (2) that they consider themselves responsible for the outcome (and not just the following of common procedures) of whether OpenAI will be led well and in line with its mission.
(Maybe they do in fact have these views, 1 and 2, but didn’t do a good job here at reassuring me of that.)
The review’s findings rejected the idea that any kind of AI safety concern necessitated Mr Altman’s replacement. In fact, WilmerHale found that “the prior board’s decision did not arise out of concerns regarding product safety or security, the pace of development, OpenAI’s finances, or its statements to investors, customers, or business partners.”
Note that Toner did not make claims regarding product safety, security, the pace of development, OAI’s finances, or statements to investors (the board are not investors), customers, or business partners (the board are not business partners). She said he was not honest with the board.
I’m not sure what to make of this omission.
OpenAI’s March 2024 summary of the WilmerHale report included:
The firm conducted dozens of interviews with members of OpenAI’s prior Board, OpenAI executives, advisors to the prior Board, and other pertinent witnesses; reviewed more than 30,000 documents; and evaluated various corporate actions. Based on the record developed by WilmerHale and following the recommendation of the Special Committee, the Board expressed its full confidence in Mr. Sam Altman and Mr. Greg Brockman’s ongoing leadership of OpenAI.
[...]
WilmerHale found that the prior Board acted within its broad discretion to terminate Mr. Altman, but also found that his conduct did not mandate removal.
I’d guess that telling lies to the board would mandate removal. If that’s right, then the summary suggests that they didn’t find evidence of this.
It’s also notable that Toner and McCauley have not provided public evidence of “outright lies” to the board. We also know that whatever evidence they shared in private during that critical weekend did not convince key stakeholders that Sam should go.
The WSJ reported:
Some board members swapped notes on their individual discussions with Altman. The group concluded that in one discussion with a board member, Altman left a misleading perception that another member thought Toner should leave, the people said.
I really wish they’d publish these notes.