I worked at OpenAI for three years, from 2021 to 2024, on the Alignment team, which eventually became the Superalignment team. I worked on scalable oversight, as part of the team developing critiques as a technique for using language models to spot mistakes in other language models. I then worked to refine an idea from Nick Cammarata into a method for using language models to generate explanations for features in language models. I was then promoted to managing a team of 4 people that worked on trying to understand language model features in context, leading to the release of an open source “transformer debugger” tool.
I resigned from OpenAI on February 15, 2024.
Absent evidence to the contrary, for any organization one should assume board members were basically selected by the CEO. So it’s hard to get assurance about true independence, but it seems good to at least talk to someone who isn’t a family member/close friend.
Good that it’s clear who it goes to, though if I were at Anthropic I’d want an option to escalate to a board member who isn’t Dario or Daniela, in case I had concerns related to the CEO.
I do think 80k should have more context on OpenAI, but also on any other organization that seems bad but has maybe-useful roles. I think people can fail to realize the organizational context if it isn’t pointed out and they only read the company’s PR.
I agree that this kind of legal contract is bad, and Anthropic should do better. I think there are a number of aggravating factors which made the OpenAI situation extraordinarily bad, and I’m not sure how much these might obtain regarding Anthropic (at least one comment from another departing employee about not being offered this kind of contract suggests the practice is less widespread).
-amount of money at stake
-taking money, equity or other things the employee believed they already owned if the employee doesn’t sign the contract, vs. offering them something new (IANAL but in some cases, this could be a felony “grand theft wages” under California law if a threat to withhold wages for not signing a contract is actually carried out, what kinds of equity count as wages would be a complex legal question)
-is this offered to everyone, or only under circumstances where there’s a reasonable justification?
-is this only offered when someone is fired or also when someone resigns?
-to what degree are the policies of offering contracts concealed from employees?
-if someone asks to obtain legal advice and/or negotiate before signing, does the company allow this?
-if this becomes public, does the company try to deflect/minimize/only address issues that are made public, or do they fix the whole situation?
-is this close to “standard practice” (which doesn’t make it right, but makes it at least seem less deliberately malicious), or is it worse than standard practice?
-are there carveouts that reduce the scope of the non-disparagement clause (explicitly allow some kinds of speech, overriding the non-disparagement)?
-are there substantive concerns that the employee has at the time of signing the contract, that the agreement would prevent discussing?
-are there other ways the company could retaliate against an employee/departing employee who challenges the legality of contract?
I think with termination agreements on being fired there’s often 1. some amount of severance offered 2. a clause that says “the terms and monetary amounts of this agreement are confidential” or similar. I don’t know how often this also includes non-disparagement. I expect that most non-disparagement agreements don’t have a term or limits on what is covered.
I think a steelman of this kind of contract is: Suppose you fire someone, believe you have good reasons to fire them, and you think that them loudly talking about how it was unfair that you fired them would unfairly harm your company’s reputation. Then it seems somewhat reasonable to offer someone money in exchange for “don’t complain about being fired”. The person who was fired can then decide whether talking about it is worth more than the money being offered.
However, you could accomplish this with a much more limited contract, ideally one that lets you disclose “I signed a legal agreement in exchange for money to not complain about being fired”, and doesn’t cover cases where “years later, you decide the company is doing the wrong thing based on public information and want to talk about that publicly” or similar.
I think it is not in the nature of most corporate lawyers to think about “is this agreement giving me too much power?” and most employees facing such an agreement just sign it without considering negotiating or challenging the terms.
For any future employer, I will ask about their policies for termination contracts before I join (as this is when you have the most leverage: once they’ve given you an offer, they want to convince you to join).
Would be nice if it was based on “actual robot army was actually being built and you have multiple confirmatory sources and you’ve tried diplomacy and sabotage and they’ve both failed” instead of “my napkin math says they could totally build a robot army bro trust me bro” or “they totally have WMDs bro” or “we gotta blow up some Japanese civilians so that we don’t have to kill more Japanese civilians when we invade Japan bro” or “dude I’m seeing some missiles on our radar, gotta launch ours now bro”.
Relevant paper discussing the risks of risk assessments being wrong due to theory/model/calculation error: “Probing the Improbable: Methodological Challenges for Risks with Low Probabilities and High Stakes”.
Based on the current vibes, I think that suggests that methodological errors alone will lead to a significant chance of significant error for any safety case in AI.
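As a rough illustration of the paper’s core point (the numbers below are made up, not from the paper): even if the safety case itself argues for a tiny probability of disaster, the bottom line is floored by the chance that the argument is flawed times the risk conditional on it being flawed.

```python
# Toy arithmetic for the "Probing the Improbable" point, with made-up numbers.
p_disaster_given_sound = 1e-9    # what the safety case argues
p_argument_flawed = 1e-2         # chance the theory/model/calculation is wrong
p_disaster_given_flawed = 1e-3   # risk if the argument doesn't actually hold

p_disaster = (
    p_disaster_given_sound * (1 - p_argument_flawed)
    + p_disaster_given_flawed * p_argument_flawed
)
print(f"{p_disaster:.1e}")  # ~1.0e-05, dominated by the chance the argument is flawed
```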
IMO it’s unlikely that we’re ever going to have a safety case that’s as reliable as the nuclear physics calculations that showed that the Trinity Test was unlikely to ignite the atmosphere (where my impression is that the risk was mostly dominated by risk of getting the calculations wrong). If we have something that is less reliable, then will we ever be in a position where only considering the safety case gives a low enough probability of disaster for launching an AI system beyond the frontier where disastrous capabilities are demonstrated?
Thus, in practice, decisions will probably not be made on a safety case alone, but also based on some positive case for the benefits of deployment (e.g. estimated reduced x-risk, advancing the “good guys” in the race, CEO has positive vibes that enough risk mitigation has been done, etc.). It’s not clear what role governments should have in assessing this; maybe we can only get assessment of the safety case, but it’s useful to note that safety cases won’t be the only thing that informs these decisions. This situation is pretty disturbing, and I wish we had a better way, but it still seems useful to push the positive benefit case more towards “careful argument about reduced x-risk” and away from “CEO vibes about whether enough mitigation has been done”.
Imo I don’t know if we have evidence that Anthropic deliberately cultivated or significantly benefitted from the appearance of a commitment. However, if an investor or employee felt like they made substantial commitments based on this impression and then later felt betrayed, that would be more serious. (The story here is, I think, importantly different from other stories where there were substantial benefits from the appearance of commitment and then violation.)
Everyone is afraid of the AI race, and hopes that one of the labs will actually end up doing what they think is the most responsible thing to do. Hope and fear is one hell of a drug cocktail, makes you jump to the conclusions you want based on the flimsiest evidence. But the hangover is a bastard.
Really, the race started more when OpenAI released GPT-4; it’s been going on for a while, and this is just another event that makes it clear.
Would be an interesting philosophical experiment to have models trained on model spec v1 then try to improve their model spec for v2. Will this get better, or go off the rails?
You get more discrete transitions when one s-curve process takes the lead from another s-curve process, e.g. deep learning taking over from other AI methods.
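Toy sketch of that dynamic (my own illustration, with made-up parameters): overall progress as the max of two logistic curves looks like a fairly abrupt jump around the point where the newer process overtakes the older one.

```python
# Two s-curve processes; overall capability follows whichever is ahead.
# Parameters are made up for illustration.
import numpy as np

t = np.linspace(0, 100, 101)
old_method = 1.0 / (1 + np.exp(-0.15 * (t - 20)))   # mature, plateauing near 1.0
new_method = 3.0 / (1 + np.exp(-0.25 * (t - 60)))   # starts later, higher ceiling
overall = np.maximum(old_method, new_method)

crossover = t[np.argmax(new_method > old_method)]
print(f"new method takes the lead around t = {crossover:.0f}")
# Around the crossover, `overall` switches from a flat plateau to rapid growth,
# which reads as a discrete transition even though both components are smooth.
```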
Probably shouldn’t limit oneself to thinking only in terms of 3 game phases or fitting into one specific game; in general you can have n phases where different phases have different characteristics.
If anyone wants to work on this, there’s a contest with $50K and $20K prizes for creating safety relevant benchmarks. https://www.mlsafety.org/safebench
I think that’s how people should generally react in the absence of harder commitments and accountability measures.
I think the right way to think about verbal or written commitments is that they increase the costs of taking a certain course of action. A legal contract can mean that the price is civil lawsuits leading to paying a financial price. A non-legal commitment means that if you break it, the person you made the commitment to gets angry at you, and you gain a reputation for being the sort of person who breaks commitments. It’s always an option for someone to break the commitment and pay the price; even laws leading to criminal penalties can be broken if someone is willing to run the risk or pay the price.
In this framework, it’s reasonable to be somewhat angry at someone or some corporation who breaks a soft commitment to you, in order to increase the perceived cost of breaking soft commitments to you and people like you.
People on average maybe tend more towards keeping important commitments due to reputational and relationship cost, but maybe corporations as groups of people tend to think only in terms of financial and legal costs, so are maybe more willing to break soft commitments (especially, if it’s an organization where one person makes the commitment but then other people break it). So for relating to corporations, you should be more skeptical of non-legally binding commitments (and even for legally binding commitments, pay attention to the real price of breaking it).
Yeah, I think it’s good if labs are willing to make more “cheap talk” statements of vague intentions, so you can learn how they think. Everyone should understand that these aren’t real commitments, and not get annoyed if these don’t end up meaning anything. This is probably the best way to view “statements by random lab employees”.
Imo it would be good to have more “changeable commitments” in between too: statements like “we’ll do policy X until we change the policy, and when we do, we commit to clearly informing everyone about the change”, which is maybe more the current status of most RSPs.
Hypothesis: each of these vectors represents a single token that is usually associated with code; the vector says “I should output this token soon”, and the model then plans around that to produce code. But adding vectors representing code tokens doesn’t necessarily produce another vector representing a code token, which would be why you don’t see compositionality. It does seem somewhat plausible that there might be ~800 “code tokens” in the representation space.
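A rough sketch of how one might check this (the function and variable names here are my assumptions, not from the original experiment): look at which vocabulary tokens each steering vector is closest to in the unembedding space, and whether the sum of two “code token” vectors is still closest to a code-like token.

```python
# Hypothetical compositionality check: nearest vocab tokens to a vector and to
# a sum of vectors, by cosine similarity against the unembedding matrix.
import numpy as np

def nearest_tokens(vector, unembedding, vocab, k=5):
    """Return the k tokens whose unembedding rows are most cosine-similar to `vector`."""
    sims = unembedding @ vector
    sims = sims / (np.linalg.norm(unembedding, axis=1) * np.linalg.norm(vector) + 1e-8)
    top = np.argsort(-sims)[:k]
    return [(vocab[i], float(sims[i])) for i in top]

# Assumed usage: W_U is the model's unembedding matrix (vocab_size x d_model),
# vocab maps row index to token string, and vec_a, vec_b are two of the
# candidate "code token" vectors.
# nearest_tokens(vec_a, W_U, vocab)           # expect a single code-ish token
# nearest_tokens(vec_b, W_U, vocab)
# nearest_tokens(vec_a + vec_b, W_U, vocab)   # under the hypothesis, not a clean code token
```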