AlphaGo Zero and capability amplification
AlphaGo Zero is an impressive demonstration of AI capabilities. It also happens to be a nice proof-of-concept of a promising alignment strategy.
How AlphaGo Zero works
AlphaGo Zero learns two functions (which take as input the current board):
A prior over moves p is trained to predict what AlphaGo will eventually decide to do.
A value function v is trained to predict which player will win (if AlphaGo plays both sides).
Both are trained with supervised learning. Once we have these two functions, AlphaGo actually picks its moves by using 1600 steps of Monte Carlo tree search (MCTS), with p and v guiding the search. It trains p to bypass this expensive search process and directly pick good moves. As p improves, the expensive search becomes more powerful, and p chases this moving target.
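To make that loop concrete, here is a minimal sketch of the outer training procedure, assuming hypothetical stand-ins (`GoBoard`, `run_mcts`, and a `net` object exposing the (p, v) predictions and a `fit` method); it is illustrative only, not the actual AlphaGo Zero implementation, which batches self-play and training asynchronously.

```python
# Illustrative sketch of the AlphaGo Zero outer loop.
# GoBoard, run_mcts, and the `net` object are hypothetical stand-ins.

def self_play_game(net, num_simulations=1600):
    """Play one game in which every move is chosen by MCTS guided by (p, v)."""
    board, records = GoBoard(), []
    while not board.is_terminal():
        # The "expensive" policy: the search calls p and v many times per move.
        visit_counts = run_mcts(board, net, num_simulations)
        records.append((board.copy(), visit_counts))
        board.play(max(visit_counts, key=visit_counts.get))
    winner = board.winner()
    # Training targets: p is pulled toward the search's move distribution,
    # v toward the eventual game outcome.
    return [(b, counts, winner) for (b, counts) in records]

def train(net, num_iterations, games_per_iteration=100):
    for _ in range(num_iterations):
        examples = []
        for _ in range(games_per_iteration):
            examples.extend(self_play_game(net))
        net.fit(examples)  # supervised step: distill the search back into (p, v)
    return net
```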
Iterated capability amplification
In the simplest form of iterated capability amplification, we train one function:
A “weak” policy A, which is trained to predict what the agent will eventually decide to do in a given situation.
Just like AlphaGo doesn’t use the prior p directly to pick moves, we don’t use the weak policy A directly to pick actions. Instead, we use a capability amplification scheme: we call A many times in order to produce more intelligent judgments. We train A to bypass this expensive amplification process and directly make intelligent decisions. As A improves, the amplified policy becomes more powerful, and A chases this moving target.
In the case of AlphaGo Zero, A is the prior over moves, and the amplification scheme is MCTS. (More precisely: A is the pair (p, v), and the amplification scheme is MCTS + using a rollout to see who wins.)
Outside of Go, A might be a question-answering system, which can be applied several times in order to first break a question down into pieces and then separately answer each component. Or it might be a policy that updates a cognitive workspace, which can be applied many times in order to “think longer” about an issue.
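As a rough illustration of the question-answering variant, here is a sketch of the amplify/distill loop; `decompose`, `combine`, and the agent's `answer`/`fit` methods are hypothetical placeholders, not a specification of any existing system.

```python
# Illustrative amplify/distill loop for a question-answering agent.
# decompose, combine, and the agent's answer/fit methods are hypothetical.

def amplify(agent, question, depth=2):
    """The "expensive" policy: call the weak agent A many times by breaking
    the question into pieces, answering the pieces, and combining the answers."""
    if depth == 0:
        return agent.answer(question)
    subquestions = decompose(question)
    subanswers = [amplify(agent, sub, depth - 1) for sub in subquestions]
    return combine(question, subanswers)

def distill(agent, questions, num_rounds=10):
    """Train A to directly predict what the amplified policy would say,
    so that A chases the (improving) amplified target."""
    for _ in range(num_rounds):
        targets = [(q, amplify(agent, q)) for q in questions]
        agent.fit(targets)  # supervised learning toward the amplified answers
    return agent
```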
The significance
Reinforcement learners take a reward function and optimize it; unfortunately, it’s not clear where to get a reward function that faithfully tracks what we care about. That’s a key source of safety concerns.
By contrast, AlphaGo Zero takes a policy-improvement-operator (like MCTS) and converges towards a fixed point of that operator. If we can find a way to improve a policy while preserving its alignment, then we can apply the same algorithm in order to get very powerful but aligned strategies.
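Schematically (my notation, not the post's): writing Amplify for the policy-improvement operator and π_t for the learned policy after t rounds of training, the procedure is roughly

```latex
\pi_{t+1} \approx \mathrm{Amplify}(\pi_t), \qquad \text{converging toward a fixed point } \pi^\star = \mathrm{Amplify}(\pi^\star).
```

The hope, spelled out below, is that if the initial policy is close enough to aligned and Amplify preserves alignment, then the fixed point is both highly capable and aligned.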
Using MCTS to achieve a simple goal in the real world wouldn’t preserve alignment, so it doesn’t fit the bill. But “think longer” might. As long as we start with a policy that is close enough to being aligned — a policy that “wants” to be aligned, in some sense — allowing it to think longer may make it both smarter and more aligned.
I think designing alignment-preserving policy amplification is a tractable problem today, which can be studied either in the context of existing ML or human coordination. So I think it’s an exciting direction in AI alignment. A candidate solution could be incorporated directly into the AlphaGo Zero architecture, so we can already get empirical feedback on what works. If by good fortune powerful AI systems look like AlphaGo Zero, then that might get us much of the way to an aligned AI.
This was originally posted here on 19th October 2017.
Tomorrow’s AI Alignment Forum sequences will continue with a pair of posts, ‘What is narrow value learning’ by Rohin Shah and ‘Ambitious vs. narrow value learning’ by Paul Christiano, from the sequence on Value Learning.
The next post in this sequence will be ‘Directions for AI Alignment’ by Paul Christiano on Thursday.
Also, an arbitrary supervised learning step that updates p and v is not safe. Generally, making that Distill step safe seems to me like the hardest challenge of the iterated capability amplification approach. Are there already research directions for tackling that challenge? (if I understand correctly, your recent paper did not focus on it).
I think Techniques for optimizing worst-case performance may be what you’re looking for.
Thank you.
I see how the directions proposed there (adversarial training, verification, transparency) can be useful for creating aligned systems. But if we use a Distill step that can be trusted to be safe via one or more of those approaches, I find it implausible that Amplification would yield systems that are competitive with the most powerful ones created by other actors around the same time (i.e. actors that create AI systems without any safety-motivated restrictions on the model space and search algorithm).
Paul’s position in that post was:
I think this is meant to include the difficulty of making them competitive with unaligned ML, since that has been his stated goal. If you can argue that we should be even more pessimistic than this, I’m sure a lot of people would find that interesting.
In this 2017 post about Amplification (linked from OP) Paul wrote: “I think there is a very good chance, perhaps as high as 50%, that this basic strategy can eventually be used to train benign state-of-the-art model-free RL agents.”
The post you linked to is more recent, so either the quote in your comment reflects an update or Paul has other insights/estimates about safe Distill steps.
BTW, I think Amplification might currently be the most promising approach for creating aligned and powerful systems; what I argue is that in order to save the world it will probably need to be complemented with governance solutions.
How uncompetitive do you think aligned IDA agents will be relative to unaligned agents, and what kinds of governance solutions do you think that would call for? Also, I should have made this clearer last time, but I’d be interested to hear more about why you think Distill probably can’t be made both safe and competitive, regardless of whether you’re more or less optimistic than Paul.
For the sake of this estimate I’m using a definition of IDA that is probably narrower than what Paul has in mind: in the definition I use here, the Distill steps are carried out by nothing other than supervised learning + what it takes to make that supervised learning safe (but the implementation of the Distill steps may be improved during the Amplify steps).
This narrow definition might not include the most promising future directions of IDA (e.g. maybe the Distill steps should be carried out by some other process that involves humans). Without this simplifying assumption, one might define IDA as broadly as: “iteratively create stronger and stronger safe AI systems by using all the resources and tools that you currently have”. Carrying out that Broad IDA approach might include efforts like asking AI alignment researchers to get into a room with a whiteboard and come up with ideas for new approaches.
Therefore, this estimate uses my narrow definition of IDA. If you like, I can also answer the general question: “How uncompetitive do you think aligned agents will be relative to unaligned agents?”.
My estimate:
Suppose it is the case that if OpenAI decided to create an AGI agent as soon as they could, it would have taken them X years (assuming an annual budget of $10M and that the world around them stays the same, and OpenAI doesn’t do neuroscience, and no unintentional disasters happen).
Now suppose that OpenAI decided to create an aligned IDA agent with AGI capabilities as soon as they could (same conditions). How much time would it take them? My estimate follows; each entry is in the format:
[years]: [my credence that it would take them at most that many years]
(consider writing down your own credences before looking at mine)
1.0X: 0.1%
1.1X: 3%
1.2X: 3%
1.5X: 4%
2X: 5%
5X: 10%
10X: 30%
100X: 60%
Generally, I don’t see why we should expect that the most capable systems that can be created with supervised learning (e.g. by using RL to search over an arbitrary space of NN architectures) would perform similarly to the most capable systems that can be created, at around the same time, using some restricted supervised learning that humans must trust to be safe. My prior is that the former is very likely to outperform by a lot, and I’m not aware of strong evidence pointing one way or another.
So for example, I expect that an aligned IDA agent will be outperformed by an agent that was created by that same IDA framework when replacing the most capable safe supervised learning in the Distill steps with the most capable unrestricted supervised learning available at around the same time.
I think they will probably be uncompetitive enough to make some complementary governance solutions necessary (this line replaced an attempt at a quantitative answer that turned out too long; let me know if you want it).
I’m very uncertain. It might be the case that our world must stop being a place in which anyone with $10M can purchase millions of GPU hours. I’m aware that most people in the AI safety community are extremely skeptical about governments carrying out “stabilization” efforts etcetera. I suspect this common view fails to account for likely pivotal events (e.g. some advances in narrow AI that might suddenly allow anyone with sufficient computation power to carry out large scale terror attacks). I think Allan Dafoe’s research agenda for AI Governance is an extremely important and neglected landscape that we (the AI safety community) should be looking at to improve our predictions and strategies.
This seems similar to my view, which is that if you try to optimize for just one thing (efficiency) you’re probably going to end up with more of that thing than if you try to optimize for two things at the same time (efficiency and safety) or if you try to optimize for that thing under a heavy constraint (i.e., safety).
But there are people (like Paul) who seem to be more optimistic than this based on more detailed inside-view intuitions, which makes me wonder if I should defer to them. If the answer is no, there’s also the question of how we make policy makers take this problem seriously (i.e., that safe AI probably won’t be as efficient as unsafe AI), given the existence of more optimistic AI safety researchers, so that they’d be willing to undertake costly preparations for governance solutions ahead of time. By the time we get conclusive evidence one way or another, it may be too late to make such preparations.
I’m not aware of any AI safety researchers that are extremely optimistic about solving alignment competitively. I think most of them are just skeptical about the feasibility of governance solutions, or think governance related interventions might be necessary but shouldn’t be carried out yet.
In this 80,000 Hours podcast episode, Paul said the following:
I’m not sure what you’d consider “extremely” optimistic, but I gathered some quantitative estimates of AI risk here, and they all seem overly optimistic to me. Did you see that?
I agree with this motivation to do early work, but in a world where we do need drastic policy responses, I think it’s pretty likely that the early work won’t actually produce conclusive enough results to show that. For example, if a safety approach fails to make much progress, there’s not really a good way to tell if it’s because safe and competitive AI really is just too hard (and therefore we need a drastic policy response), or because the approach is wrong, or the people working on it aren’t smart enough, or they’re trying to do the work too early. People who are inclined to be optimistic will probably remain so until it’s too late.
I only now read that thread. I think it is extremely worthwhile to gather such estimates.
I think all the three estimates mentioned there correspond to marginal probabilities (rather than probabilities conditioned on “no governance interventions”). So those estimates already account for scenarios in which governance interventions save the world. Therefore, it seems we should not strongly update against the necessity of governance interventions due to those estimates being optimistic.
Maybe we should gather researchers’ credences for predictions like:
“If there are no governance interventions, competitive aligned AIs will exist 10 years from now.”
I suspect that gathering such estimates from publicly available information might expose us to a selection bias, because very pessimistic estimates might be outside the Overton window (even for the EA/AIS crowd). For example, if Robert Wiblin would have concluded that an AI existential catastrophe is 50% likely, I’m not sure that the 80,000 Hours website (which targets a large and motivationally diverse audience) would have published that estimate.
I strongly agree with all of this.
I normally give ~50% as my probability we’d be fine without any kind of coordination.
Upvoted for giving this number, but what does it mean exactly? You expect “50% fine” through all kinds of x-risk, assuming no coordination from now until the end of the universe? Or just assuming no coordination until AGI? Is it just AI risk instead of all x-risk, or just risk from narrow AI alignment? If “AI risk”, are you including risks from AI exacerbating human safety problems, or AI differentially accelerating dangerous technologies? Is it 50% probability that humanity survives (which might be “fine” to some people) or 50% that we end up with a nearly optimal universe? Do you have a document that gives all of your quantitative risk estimates with clear explanations of what they mean?
(Sorry to put you on the spot here when I haven’t produced anything like that myself, but I just want to convey how confusing all this is.)
MCTS works as amplification because you can evaluate future board positions to get a convergent estimate of how well you’re doing—and then eventually someone actually wins the game, which keeps p from departing reality entirely. Importantly, the single thing you’re learning can play the role of the environment, too, by picking the opponents’ moves.
In trying to train A to predict human actions given access to A, you’re almost doing something similar. You have a prediction that’s also supposed to be a prediction of the environment (the human), so you can use it for both sides of a tree search. But A isn’t actually searching through an interesting tree—it’s searching for cycles of length 1 in its own model of the environment, with no particular guarantee that any cycles of length 1 exist or are a good idea. “Tree search” in this context (I think) means spraying out a bunch of outputs and hoping at least one falls into a fixed point upon iteration.
EDIT: Big oops, I didn’t actually understand what was being talked about here.
I agree there is a real sense in which AGZ is “better-grounded” (and more likely to be stable) than iterated amplification in general. (This was some of the motivation for the experiments here.)
Oh, I’ve just realized that the “tree” was always intended to be something like task decomposition. Sorry about that—that makes the analogy a lot tighter.
Isn’t A also grounded in reality by eventually giving no A to consult with?
This is true when getting training data, but I think it’s a difference between A (or HCH) and AlphaGo Zero when doing simulation / amplification. Someone wins a simulated game of Go even if both players are making bad moves (or even random moves), which gives you a signal that A doesn’t have access to.
I don’t suppose you could explain how it uses p and v? Does it use p to decide which path to go down and v to avoid fully playing it out?
This is totally wild speculation, but the thought occurred to me whether the human brain might be doing something like this with identities and social roles:
If you squint, you could kind of interpret this kind of a dynamic to be a result of the human brain trying to predict what it expects itself to do next, using that prediction to guide the search of next actions, and then ending up with next actions that have a strong structural resemblance to its previous ones. (Though I can also think of maybe better-fitting models of this too; still, seemed worth throwing out.)
How do you know MCTS doesn’t preserve alignment?
As I understand it, MCTS is used to maximize a given computable utility function, and so it is not alignment-preserving, in the general sense that sufficiently strong optimization of an imperfect utility function does not preserve alignment.