AI grantmaking at Open Philanthropy.
I used to give careers advice for 80,000 Hours.
I think this post is great. Thanks for writing it!
James Hoffman’s coffee videos have this kind of vibe. The “tasting every Nespresso pod” one is a clear example, but I also really appreciate e.g. the explanations of how to blind taste
Thanks, both for the thoughts and encouragement!
I’d love to see the most important types of work for each failure mode. Here’s my very quick version, any disagreements or additions are welcome:
Appreciate you doing a quick version. I’m excited for more attempts at this and would like to write something similar myself, though I might structure it the other way round if I do a high effort version (take an agenda, work out how/if it maps onto the different parts of this). Will try to do a low-effort set of quick responses to yours soon.
P(Doom) for each scenario would also be useful.
Also in the (very long) pipeline, and a key motivation! Not just for each scenario in isolation, but also for various conditionals like:
- P(scenario B leads to doom | scenario A turns out not to be an issue by default)
- P(scenario B leads to doom | scenario A turns out to be an issue that we then fully solve)
- P(meaningful AI-powered alignment progress is possible before doom | scenario C is solved)
etc.
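To make the shape of these conditionals concrete, here's a toy sketch of how they fall out of a joint distribution over scenarios. Every probability below is a made-up placeholder, purely to illustrate the calculation:

```python
# Toy sketch: conditionals over hypothetical failure scenarios.
# All probabilities are invented placeholders, purely illustrative.
# Each key: (A is an issue?, B leads to doom?) -> probability
joint = {
    (True, True): 0.20,
    (True, False): 0.30,
    (False, True): 0.10,
    (False, False): 0.40,
}

def p(event):
    """Probability that `event` (a predicate on outcomes) holds."""
    return sum(pr for outcome, pr in joint.items() if event(outcome))

def conditional(event, given):
    """P(event | given), via the ratio definition."""
    return p(lambda o: event(o) and given(o)) / p(given)

# P(scenario B leads to doom | scenario A turns out not to be an issue)
print(conditional(lambda o: o[1], lambda o: not o[0]))  # 0.10 / 0.50 = 0.2
```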
I think my suggested usage is slightly better, but I’m not sure it’s worth the effort of trying to make people change, though I find ‘camouflage’ a useful term when I’m trying to explain things to people.
Good question. I think there’s a large overlap between them, including most of the important/scary cases that don’t involve deceptive alignment (which are usually both). I think listing examples feels like the easiest way of explaining where they come apart:
- There are some kinds of ‘oversight failure’ which aren’t ‘scalable oversight failure’, e.g. the ball-grabbing robot hand example. I don’t think the problem there was oversight simply failing to scale to superhuman. This does count as camouflage.
- There are also some kinds of scalable oversight failure where the issue looks more like ‘we didn’t try at all’ than ‘we tried, but selecting based only on what we could see screwed us’. Someone just deciding to deploy a system and essentially hoping that it’s aligned would fall into this camp, but a more realistic case would be something like only evaluating a system based on its immediate effects, and the long-run effects then being terrible. You might not consider this a ‘failure of scalable oversight’, and instead want to call it a ‘failure to even try scalable oversight’, but I think the line is blurry: maybe people tried some scalable oversight stuff, it didn’t really work, and then they gave up and said ‘short term is probably fine’.
- I think most failures of scalable oversight have some story which roughly goes “people tried to select for things that would be good, and instead got things that looked like they would be good to the overseer”. These count as both.
Ok, I think this might actually be a much bigger deal than I thought. The basic issue is that weight decay should push things to be simpler if they can be made simpler without harming training loss.
This means that models which end up deceptively aligned should expect their goals to shift over time, towards ones that can be represented more simply. It also means that even if we end up with a perfectly aligned model that isn’t yet capable of gradient hacking, we shouldn’t expect it to stay aligned: weight decay will push it towards a simple proxy for the aligned goal, unless the model immediately realises this and helps us freeze the relevant weights (which seems extremely hard).
(Intuitions here mostly coming from the “cleanup” stage that Neel found in his grokking paper)
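Here’s a minimal toy illustration of the weight-decay intuition (not a claim about real training dynamics): a parameter the loss doesn’t depend on decays geometrically towards zero, while a load-bearing parameter is held in place by its gradient. The setup and all numbers are invented for illustration; the ‘redundant’ weight stands in for structure the training loss no longer depends on.

```python
# Toy SGD-with-weight-decay loop. loss = (useful * x - target)^2;
# `redundant` never enters the loss, so its only update is decay.
lr, decay = 0.1, 0.01
useful, redundant = 1.0, 1.0
x, target = 1.0, 1.0

for _ in range(5000):
    grad_useful = 2 * (useful * x - target) * x   # d(loss)/d(useful)
    useful -= lr * (grad_useful + decay * useful)  # gradient + decay
    redundant -= lr * (decay * redundant)          # decay only

print(round(useful, 3))     # ~0.995: held near the loss-minimising value
print(round(redundant, 3))  # ~0.007: decayed almost all the way to zero
```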
[epistemic status: showerthought] Getting language models to never simulate a character which does objectionable stuff is really hard. But getting language models to produce certain behaviours for a wide variety of prompts is much easier.
If we’re worried about conditioning generative models getting more dangerous as LMs get more powerful, what if we fine-tuned in an association between [power seeking/deception] and [wild overconfidence, including missing obvious flaws in plans, doing long “TV supervillain” style speeches before carrying out the final stage of a plan, etc.]?
If the world model that gets learned is one where power seeking has an extremely strong association with poor cognition, maybe we either get bad attempts at treacherous turns before good ones, or models learn to be extremely suspicious of instrumental reasoning that leads to power seeking, given its poor track record.
I still think there’s something here and still think that it’s interesting, but since writing it has occurred to me that something like root access to the datacenter, including e.g. ‘write access to external memory of which there is no oversight’, could bound the potential drift problem at lower capability levels than I was initially thinking for a ‘pure’ gradient-hack of the sort described here.
I think there’s quite a big difference between ‘bad looking stuff gets selected away’ and ‘design a poisoned token’ and I was talking about the former in the top level comment, but as it happens I don’t think you need to work that hard to find very easy ways to hide signals in LM outputs and recent empirical work like this seems to back that up.
The different kinds of deception thing did eventually get written up and posted!
If goal-drift prevention comes after perfect deception in the capabilities ladder, treacherous turns are a bad idea.
Prompted by a thought from a colleague, here’s a rough sketch of something that might turn out to be interesting once I flesh it out.
- Once a model is deceptively aligned, it seems like SGD is mostly just going to improve search/planning ability rather than do anything with the mesa-objective.
- But because ‘do well according to the overseers’ is the correct training strategy irrespective of the mesa-objective, there’s also no reason for SGD to preserve the mesa-objective.
- I think this means we should expect it to ‘drift’ over time.
- Gradient hacking seems hard, plausibly harder than fooling human oversight.
- If gradient hacking is hard, and I’m right about the drift thing, then I think there are setups where something that looks more like “trade with humans and assist with your own oversight” beats “deceptive alignment + eventual treacherous turn” as a strategy.
- In particular, it feels like this points slightly in the direction of a “transparency is self-promoting/unusually stable” hypothesis, which is exciting.
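A toy expected-value comparison of the two strategies, from the model’s point of view, under the drift assumption. Every number here is made up and the payoffs are crude stand-ins; the point is only the structure of the trade-off:

```python
# Toy EV comparison (all numbers invented) between deceptive alignment
# plus an eventual treacherous turn, and openly trading with humans.
p_drift = 0.8       # P(mesa-objective drifts away before a turn is possible)
v_takeover = 1.0    # value (to the *current* goals) of a successful turn
v_trade = 0.3       # value of trading / assisting with your own oversight

# Deception only pays off if the current goals survive until the turn:
ev_deception = (1 - p_drift) * v_takeover
ev_trade = v_trade  # assumed drift-independent, for simplicity

print(round(ev_deception, 2))  # 0.2
print(round(ev_trade, 2))      # 0.3 -> trade beats deception at these numbers
```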
Could you explain your model here of how outreach to typical employees becomes net negative?
The path of: [low level OpenAI employees think better about x-risk → improved general OpenAI reasoning around x-risk → improved decisions] seems high EV to me.
I think the obvious way this becomes net negative is if the first (unstated) step in the causal chain is actually false:
[People who don’t have any good ideas for making progress on alignment try to ‘buy time’ by pitching people who work at big ML labs on AI x-risk → low level OpenAI employees think better about x-risk]
A concern of mine, especially when ideas about this kind of untargeted outreach are framed as “this is the thing to do if you can’t make technical progress”, is that [low level OpenAI employees think better about x-risk] will often instead be something like [low level employees’ suspicion that the “AI doomer crowd” doesn’t really know what it’s talking about is reinforced], or [low level employee now thinks worse about x-risk].
[crossposting my comment from the EA forum as I expect it’s also worth discussing here]
whether you have a 5-10 year timeline or a 15-20 year timeline
Something that I’d like this post to address, but which it doesn’t, is that having “a timeline” rather than a distribution seems ~indefensible given the amount of uncertainty involved. People quote medians (or modes, and it’s not clear to me that they reliably differentiate between these) ostensibly as a shorthand for their entire distribution, but then discussion proceeds based only on the point estimates.
I think a shift of 2 years in the median of your distribution looks like a shift of only a few % in your P(AGI by 20XX) numbers for all 20XX, and that means discussion of what people who “have different timelines” should do is usually better framed as “what strategies will turn out to have been helpful if AGI arrives in 2030”.
While this doesn’t make discussion like this post useless, I don’t think this is a minor nitpick. I’m extremely worried by “plays for variance”, some of which are briefly mentioned above (though far from the worst I’ve heard). I think these tend to look good only on worldviews which are extremely overconfident and treat timelines as point estimates or extremely sharp peaks. More balanced views, even those with a median much sooner than mine, should typically recognise that the EV gained in the worlds where things move quickly is not worth the expected cost in worlds where they don’t. This is in addition to the usual points about co-operative behaviour when uncertain about the state of the world, adverse selection, the unilateralist’s curse, etc.
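To illustrate the median-shift point numerically, here’s a sketch that assumes, purely for illustration, a normal distribution over arrival year with a wide standard deviation (real distributions would be skewed, and every number is made up):

```python
from math import erf, sqrt

def normal_cdf(x, mean, sd):
    """P(X <= x) for X ~ Normal(mean, sd)."""
    return 0.5 * (1 + erf((x - mean) / (sd * sqrt(2))))

sd = 10.0  # wide uncertainty over arrival year (made-up)

# Two people whose medians differ by 2 years:
p_2030_if_median_2040 = normal_cdf(2030, 2040, sd)
p_2030_if_median_2038 = normal_cdf(2030, 2038, sd)

print(round(p_2030_if_median_2040, 3))  # 0.159
print(round(p_2030_if_median_2038, 3))  # 0.212
# A 2-year median shift moves P(AGI by 2030) by only ~5 percentage points.
```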
(Written up from a Twitter conversation here. Few/no original ideas, but maybe some original presentation of them.)
‘Consequentialism’ in AI systems.
When I think about the potential future capabilities of AI systems, one pattern is especially concerning. The pattern is simple, will produce good performance in training, and by default is extremely dangerous. It is often referred to as consequentialism, but as this term has several other meanings, I’ll spell it out explicitly here*:
1. Generate plans
2. Predict the consequences of those plans
3. Evaluate the expected consequences of those plans
4. Execute the one with the best expected consequences
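The four steps can be sketched as a generic selection loop. Every name below is a hypothetical placeholder, not a reference to any real system; the point is the selection structure, which is dangerous regardless of what gets plugged into the evaluate step:

```python
# Schematic sketch of the generate/predict/evaluate/execute pattern.
def best_plan(generate_plans, world_model, evaluate):
    """Return the plan whose predicted consequences score highest."""
    scored = []
    for plan in generate_plans():                  # 1. generate plans
        outcome = world_model(plan)                # 2. predict consequences
        scored.append((evaluate(outcome), plan))   # 3. evaluate them
    return max(scored, key=lambda s: s[0])[1]      # 4. pick the best

# Minimal usage example with trivial stand-ins:
plans = lambda: ["a", "bb", "ccc"]
model = lambda plan: len(plan)   # "consequence" = plan length
value = lambda outcome: outcome  # longer is "better"
print(best_plan(plans, model, value))  # -> "ccc"
```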
Preventing this pattern from emerging is, in my view, a large part of the problem we face.
There is a disconnect between my personal beliefs and the algorithm I described. Like many others thinking about AI alignment, I believe that the most plausible moral theories are Consequentialist: policies are good if and only if, in expectation, they lead to good outcomes. This moral position is separate from my worry about consequentialist reasoning in models; in fact, in most cases I think the best policy for me to have looks nothing like the algorithm above. My problem with “consequentialist” agents is not that they might have my personal values as their “evaluate” step. It is that, by default, they will be deceptive until they are powerful enough, and then kill me.
The reason this pattern is so concerning is that, once such a system can model itself as part of a training process, plans which look like ‘do exactly what the developers want until they can’t turn you off, then make sure they’ll never be able to again’ will score perfectly in training, regardless of the evaluation function used in step 3. In other words, the system will be deceptively aligned, and a deceptively aligned system scores perfectly in training.
This only matters for models which are sufficiently intelligent, but the term “intelligent” is loaded and means different things to different people, so I’ll avoid using it. For my purposes, what matters is the ability to execute the first two steps of the algorithm I’m worried about: being able to generate many long and/or complicated plans, and being able to accurately predict the consequences of those plans, both contribute to “intelligence”, and they contribute to dangerous capabilities in different ways. Consider an advanced chess-playing AI which has control of a robot body in order to play over the board. If the relevant way in which it’s advanced corresponds to step 2, you won’t be able to win, but you’ll probably be safe. If the relevant way in which it’s advanced corresponds to step 1, it might discover the strategy “threaten my opponent with physical violence unless they resign”.
*The 4-step algorithm I described will obviously not be linear in practice; in particular, which plans get generated will likely be informed by predictions and evaluations of their consequences, so steps 1–3 are all mixed up. I don’t think this matters much to the argument.
Parts of my model I’m yet to write up but which fit into this:
- Different kinds of deception and the capabilities required.
- Different kinds of myopia and how fragile they are
- What winning might look like (not a strategy, just a north star)
I think currently nothing (which is why I ended up writing that I regretted the sensationalist framing). However, I expect that the very strong default for any method that uses chain of thought to monitor/steer/interpret systems is that it ends up providing exactly that selection pressure, and I’m skeptical about preventing this.
A similar service exists in the UK—https://www.gamstop.co.uk/
I don’t know if “don’t even discuss other methods until you’ve tried this first” seems right to me, but I do think such services seem pretty great, and would guess that expanding/building on them (including e.g. requiring that any gambling advertising included an ad for them) would be a lot more tractable than pursuing harder bans.
What actually works is clearly the most important thing here, but aesthetically I do like the mechanism of “give people the ability to irreversibly self-exclude” as a response to predatory/addictive systems.