I was asked to clarify my position about why I voted ‘disagree’ with “I assign >50% to this claim: The board should be straightforward with its employees about why they fired the CEO.”
I’m putting a perhaps unjustifiably high amount of trust in all the people involved, and from that, my prior is very high on “for some reason, it would be really bad, inappropriate, or wrong to discuss this in a public way.” And given that OpenAI has ~800 employees, telling them would basically count as a ‘public’ announcement. (I would update significantly on the claim if it were only a select group of trusted employees, rather than all of them.)
To me, people seem too biased in the direction of “this info should be public”—maybe with the assumption that “well, I am personally trustworthy, and I want to know, and in fact, I should know in order to be able to assess the situation for myself.” Or maybe with the assumption that the ‘public’ is good for keeping people accountable and ethical, meaning that informing the public would be net helpful.
I am maybe biased in the direction of: The general public overestimates its own trustworthiness and ability to evaluate complex situations, especially without most of the relevant context.
My overall experience is that the involvement of the public makes situations worse, as a general rule.
And I think the public also overestimates its own helpfulness, post hoc. So when things are handled in a public way, the public assesses its role in a positive light, but it rarely has ANY way to judge the counterfactual. And in fact, I basically NEVER see it even ACKNOWLEDGE the counterfactual. Which makes sense, because that counterfactual is almost beyond imagining. The public doesn’t have ANY of the relevant information that would make it possible to evaluate the counterfactual.
So in the end, the public just defaults to believing that it had to play out in the way it did, and that its involvement was either inevitable or good. And I do not understand where this assessment comes from, other than availability bias?
The involvement of the public, in my view, incentivizes more dishonesty, hiding, and various forms of deception. Because the public is usually NOT in a position to judge complex situations and lacks much of the relevant context (and also isn’t particularly clear about ethics, often, IMO), people who ARE extremely thoughtful, ethically minded, high-integrity, etc. are often put in very awkward binds when it comes to trying to interface with the public. And so I believe it’s better for the public not to be involved if they don’t have to be.
I am a strong proponent of keeping things close to the chest and within more trusted, high-context, in-person circles, and of avoiding online involvement as much as possible for highly complex, high-touch situations. Does this mean OpenAI should keep it purely internal? No; they should have outside advisors, etc. Does this mean no employees should know what’s going on? No, some of them should—the ones who are high-level, responsible, and trustworthy, and they can then share what needs to be shared with the people under them.
Maybe some people believe that all ~800 employees deserve to know why their CEO was fired. Like, as a courtesy or general good policy or something. I think it depends on the actual reason. I can envision certain reasons that don’t need to be shared, and I can envision reasons that ought to be shared.
I can envision situations where sharing the reasons could potentially damage AI Safety efforts in the future. Or disable similar groups from being able to make really difficult but ethically sound choices—such as shutting down an entire company. I do not want to disable groups from being able to make extremely unpopular choices that ARE, in fact, the right thing to do.
“Well if it’s the right thing to do, we, the public, would understand and not retaliate against those decision-makers or generally cause havoc” is a terrible assumption, in my view.
I am interested in brainstorming, developing, and setting up really strong and effective accountability structures for orgs like OpenAI, and I do not believe most of those effective structures will include ‘keep the public informed’ as a policy. More often the opposite.
The board’s initial statement said:
Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.
That is already a public statement that they are firing Sam Altman for cause, and that the cause is specifically that he lied to the board about something material. That’s a perfectly fine public statement to make, if Sam Altman has in fact lied to the board about something material. Even a statement to the effect of “the board stands by its decision, but we are not at liberty to comment on the particulars of the reasons for Sam Altman’s departure at this time” would be better than what we’ve seen (because that would say “yes there was actual misconduct, no we’re not going to go into more detail”). The absence of such a statement implies that maybe there was no specific misconduct though.
I don’t interpret that statement in the same way.
You interpreted it as ‘lied to the board about something material’. But to me, it might also mean ‘wasn’t forthcoming enough for us to trust him’, or ‘speaks in misleading ways (but not necessarily on purpose)’, or it might even just be somewhat coded language for ‘difficult to work with + we’re tired of trying to work with him’.
I don’t know why you latch onto the interpretation that he definitely lied about something specific.
I’m interpreting this specifically through the lens of “this was a public statement”. The board definitely had the ability to execute steps like “ask ChatGPT for some examples of concrete scenarios that would lead a company to issue that statement”. The board probably had better options than “ask ChatGPT”, but that should still serve as a baseline for how informed one would expect them to be about the implications of their statement.
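As a rough, hypothetical illustration of that baseline step (not something the board is known to have actually run), here is a minimal sketch of what “ask ChatGPT for concrete scenarios” could look like programmatically, assuming the current OpenAI Python SDK and an OPENAI_API_KEY set in the environment. The model name and prompt wording are placeholders of mine, not anything anyone actually used.

```python
# Hypothetical sketch: asking a chat model what kinds of situations would
# lead a board to issue the quoted statement. Model choice and prompt are
# assumptions for illustration only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

board_statement = (
    "Mr. Altman's departure follows a deliberative review process by the board, "
    "which concluded that he was not consistently candid in his communications "
    "with the board, hindering its ability to exercise its responsibilities."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any capable chat model would do
    messages=[
        {
            "role": "user",
            "content": (
                "Give five concrete, fictional scenarios that might lead a "
                "company's board to issue the following statement about its "
                "CEO:\n\n" + board_statement
            ),
        }
    ],
)

print(response.choices[0].message.content)
```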
Here are some concrete example scenarios ChatGPT gives that might lead to that statement being given:
Financial Performance Misrepresentation: Four years into his tenure, CEO Mr. Ceoface of FooBarCorp, a leading ERP software company, had been painting an overly rosy picture of the company’s financial health in board meetings. He reported consistently high revenue projections and downplayed the mounting operational costs to keep investor confidence buoyant. However, an unexpected external audit revealed significant financial discrepancies. The actual figures showed that the company was on the brink of a financial crisis, far from the flourishing image Mr. Ceoface had portrayed. This breach of trust led to his immediate departure.
Undisclosed Risks in Business Strategy: Mr. Ceoface, the ambitious CEO of FooBarCorp, had spearheaded a series of high-profile acquisitions to dominate the ERP market. He assured the board of minimal risk and substantial rewards. However, he failed to disclose the full extent of the debt incurred and the operational challenges of integrating these acquisitions. When several of these acquisitions began underperforming, causing a strain on the company’s resources, the board realized they had not been fully informed about the potential pitfalls, leading to a loss of confidence in Mr. Ceoface’s leadership.
Compliance and Ethical Issues: Under Mr. Ceoface’s leadership, FooBarCorp had engaged in aggressive competitive practices that skirted the edges of legal and ethical norms. While these practices initially drove the company’s market share upwards, Mr. Ceoface kept the board in the dark about the potential legal and ethical ramifications. The situation came to a head when a whistleblower exposed these practices, leading to public outcry and regulatory scrutiny. The board, feeling blindsided and questioning Mr. Ceoface’s judgment, decided to part ways with him.
Personal Conduct and Conflict of Interest: Mr. Ceoface, CEO of FooBarCorp, had personal investments in several small tech startups, some of which became subcontractors and partners of FooBarCorp. He neglected to disclose these interests to the board, viewing them as harmless and separate from his role. However, when an investigative report revealed that these startups were receiving preferential treatment and contracts from FooBarCorp, the board was forced to confront Mr. Ceoface about these undisclosed conflicts of interest. His failure to maintain professional boundaries led to his immediate departure.
Technology and Product Missteps: Eager to place FooBarCorp at the forefront of innovation, Mr. Ceoface pushed for the development of a cutting-edge, AI-driven ERP system. Despite internal concerns about its feasibility and market readiness, he continuously assured the board of its progress and potential. However, when the product was finally launched, it was plagued with technical issues and received poor feedback from key clients. The board, having been assured of its success, felt misled by Mr. Ceoface’s optimistic but unrealistic assessments, leading to a decision to replace him.
What all of these things have in common is that they involve misleading the board about something material. “Not fully candid”, in the context of corporate communications, means “liar liar pants on fire”, not “sometimes they make statements and those statements, while true, vaguely imply something that isn’t accurate”.