Poll For Topics of Discussion and Disagreement
Use this thread to (a) upvote topics you’re interested in reading about, (b) agree/disagree with positions, and (c) add new positions for people to vote on.
Open-ended: A dialogue between an OpenAI employee who signed the open letter, and someone outside opposed to the open letter, about their reasoning and the options.
(Up/down-vote if you’re interested in reading discussion of this. React paperclip if you have an opinion and would be up for dialoguing)
We should expect something similar to this fiasco to happen if/when Anthropic’s oversight board tries to significantly exercise their powers.
Was Sam Altman acting consistently with the OpenAI charter prior to the board firing him?
We should consider other accountability structures than the one OpenAI tried (i.e. the non-profit / BoD). Also: What should they be?
Open-ended: If >90% of employees leave OpenAI: what plan should Emmett Shear set for OpenAI going forwards?
(Up/down-vote if you’re interested in reading discussion of this. React paperclip if you have an opinion and would be up for discussing)
We should expect something similar to this fiasco to happen if/when Anthropic’s responsible scaling policies tell them to stop scaling.
Open-ended: If >50% of employees end up staying at OpenAI: how, if at all, should OpenAI change its structure and direction going forwards?
(Up/down-vote if you’re interested in reading discussion of this. React paperclip if you have an opinion and would be up for discussing)
The way this firing has played out so far (to Monday, Nov 20th) is evidence that the non-profit board was effectively unable to fire the CEO.
The secrecy in which OpenAI’s board operated made it less trustworthy. Boards at places like Anthropic should update to be less secretive and more transparent.
There is a >80% chance that US-China affairs (including the AI race between the US and China) are an extremely valuable or crucial lens for understanding the current conflict over OpenAI (the conflict itself, not the downstream implications), as opposed to being a merely somewhat-helpful lens.
I assign >50% to this claim: The board should be straightforward with its employees about why they fired the CEO.
It would be a promising move, to reduce existential risk, for Anthropic to take over what will remain of OpenAI and consolidate efforts into a single project.
The mass of OpenAI employees rapidly performing identical rote token actions in public (identical tweets and open-letter swarming) is a poor indicator of their collective epistemics (e.g. manipulated, mind-killed, …).
OpenAI should have an internal-only directory allowing employees and leadership to write up and see each other’s beliefs about AI extinction risk and alignment approaches.
If the board neither abided by cooperative principles in the firing nor acted on substantial evidence warranting the firing in line with the charter, and was nonetheless largely EA-motivated, then EA should be disavowed and dismantled.
The events of the OpenAI board CEO-ousting on net reduced existential risk from AGI.
The partnership between Microsoft and OpenAI is a net negative for AI safety. And: What can we do about that?
Neither Microsoft nor OpenAI will have the best language model in a year.
Insofar as lawyers are recommending against the former OpenAI board speaking about what happened, the board should probably ignore them.
I assign >80% probability to this claim: the board should be straightforward with its employees about why they fired the CEO.
The OpenAI Charter, if fully & faithfully followed and effectively stood behind, including possibly shutting the whole project down if it came to that, would prevent OpenAI from being a major contributor to AI x-risk. In other words, as long as people actually followed this particular Charter to the letter, it would be sufficient for curtailing AI risk, at least from this one org.
Media & Twitter reactions to OpenAI developments were largely unhelpful, specious, or net-negative for overall discourse around AI and AI Safety. We should reflect on how we can do better in the future and possibly even consider how to restructure media/Twitter/etc to lessen the issues going forward.
Your impression is that Sam Altman is deceitful, manipulative, and often lies in his professional relationships.
(Edit: Do not use the current poll result in your evaluation here)
If a mass exodus happens, it will mostly be the fodder employees, and more than 30% of OpenAI’s talent will remain (e.g. if the minds of Ilya and two other people contain more than 30% of OpenAI’s talent, and they all stay).
The board’s behavior is non-trivial evidence against EA promoting willingness-to-cooperate and trustworthiness.
There is a person or entity deploying FUD tactics in a strategically relevant way that is driving those feelings on Twitter, as opposed to the situation just naturally or unintentionally provoking fear, uncertainty, and doubt. For example: Sam privately threatening to sue the board for libel & wrongful termination if they get more specific about why Sam was fired or don’t support the dissolution of the board, with the board caving or at least taking more time.
This poll has too many questions of fact, and therefore questions of fact should be downvoted, so that questions of policy can be upvoted in their stead. Discuss below.
If there was actually a spooky capabilities advance that convinced the board that drastic action was needed, then the board’s actions were on net justified, regardless of what other dynamics were at play and whether cooperative principles were followed.
It is important that the board release another public statement explaining their actions, and providing any key pieces of evidence.
If most of OpenAI is absorbed into Microsoft, they will ultimately remain sovereign and resist further capture, e.g. via rationality, FDT/UDT, coordination theory, CFAR training, or glowfic osmosis, or because Microsoft underestimates them enough to fail at further capture, overestimates their predictability, is deterred, or doesn’t have the appetite to make an attempt at all.
I assign more than 20% probability to this claim: the firing of Sam Altman was part of a plan to merge OpenAI with Anthropic.
Microsoft is not actively aggressive or trying to capture OpenAI, and is largely passive in the conflict, e.g. Sam or Greg or investors approached Microsoft with the exodus and letter idea, and Microsoft was either clueless or misled about the connection between the board and EA/AI safety.
If >80% of Microsoft employees were signed up for cryonics, as opposed to ~0% now, that would indicate that Microsoft is sufficiently future-conscious to make it probably net-positive for them to absorb OpenAI.
New leadership should shut down OpenAI.
Richard Ngo, the person, signed the letter (as opposed to his signature being fake).
The letter indicating that ~700 employees will leave if Altman and Brockman do not return contains >50 fake signatures.
The mass of OpenAI employees rapidly performing identical rote token actions in public (identical tweets and open-letter swarming) indicates a cult of personality around Sam.
The conflict was not started by the board; rather, the board was reacting to a move made by someone else, or to the discovery of a hostile plot previously initiated and advanced by someone else.
The main reason for Altman’s firing was a scandal of a more personal nature, mostly unrelated to the everyday or strategic operations of OpenAI.
EAs need to aggressively recruit and fund additional ambitious Sams, to ensure there’s one to sacrifice for Samsgiving, November 2024.