Victoria Krakovna. Research scientist at DeepMind working on AI safety, and cofounder of the Future of Life Institute. Website and blog: vkrakovna.wordpress.com
Yeah, living in a group house was important for our mental well-being as well, especially during the pandemic and parental leaves. I think the benefits of the social environment decreased somewhat because we were often occupied with the kids and had less time to socialize. It was still pretty good though—if Deep End was close enough to schools we like, we would have probably stayed and tried to make it work (though this would likely involve taking over more of the house over time). Our new place contributes to mental well-being by being much closer to nature (while still a reasonable bike commute from the office).
I would potentially be interested, if we knew the other people well. I find that, as a parent, I’m less willing to take risks by moving in with people I don’t know that well, because the stress and uncertainty associated with things not working out are more costly.
Space requirements would likely be the biggest difficulty though, as you pointed out. A family with 2 kids probably needs at least 3 rooms, so two such families together would need a 6 bedroom house. This is hard to find, especially combined with other constraints like proximity to schools, commute distances, etc. It's a lot easier to live near other families than to share a living space.
Moving on from community living
I really enjoyed this sequence; it provides useful guidance on how to combine different sources of knowledge and intuitions to reason about future AI systems. It's a great resource on how to think about alignment for an ML audience.
I think this is still one of the most comprehensive and clear resources on counterpoints to x-risk arguments. I have referred to this post and pointed people to it a number of times. The most useful parts of the post for me were the outline of the basic x-risk case and section A on counterarguments to goal-directedness (this was particularly helpful for my thinking about threat models and understanding agency).
I still endorse the breakdown of “sharp left turn” claims in this post. Writing this helped me understand the threat model better (or at all) and make it a bit more concrete.
This post could be improved by explicitly relating the claims to the “consensus” threat model summarized in Clarifying AI X-risk. Overall, SLT seems like a special case of that threat model, which makes a subset of the SLT claims:
Claim 1 (capabilities generalize far) and Claim 3 (humans fail to intervene), but not Claims 1a/b (simultaneous / discontinuous generalization) or Claim 2 (alignment techniques stop working).
It probably relies on some weaker version of Claim 2 (alignment techniques failing to apply to more powerful systems in some way). This seems necessary for deceptive alignment to arise, e.g. if our interpretability techniques fail to detect deceptive reasoning. However, I expect that most ways this could happen would not be due to the alignment techniques being fundamentally inadequate for the capability transition to more powerful systems (the strong version of Claim 2 used in SLT).
I continue to endorse this categorization of threat models and the consensus threat model. I often refer people to this post and use the “SG + GMG → MAPS” framing in my alignment overview talks. I remain uncertain about the likelihood of the deceptive alignment part of the threat model (in particular the requisite level of goal-directedness) arising in the LLM paradigm, relative to other mechanisms for AI risk.
In terms of adding new threat models to the categorization, the main one that comes to mind is Deep Deceptiveness (let's call it Soares2), which I would summarize as "non-deceptiveness is anti-natural / hard to disentangle from general capabilities". I would probably put this under "SG → MAPS", assuming an irreducible kind of specification gaming where it's very difficult (or impossible) to distinguish deceptiveness from non-deceptiveness (including through feedback on the model's reasoning process). Though it could also be GMG, where the "non-deceptiveness" concept is incoherent and thus very difficult to generalize well.
I’m glad I ran this survey, and I expect the overall agreement distribution probably still holds for the current GDM alignment team (or may have shifted somewhat in the direction of disagreement), though I haven’t rerun the survey so I don’t really know. Looking back at the “possible implications for our work” section, we are working on basically all of these things.
Thoughts on some of the cruxes in the post based on last year’s developments:
Is global cooperation sufficiently difficult that AGI would need to deploy new powerful technology to make it work?
There has been a lot of progress on AGI governance and broad endorsement of the risks this year, so I feel somewhat more optimistic about global cooperation than a year ago.
Will we know how capable our models are?
The field has made some progress on designing concrete capability evaluations—how well they measure the properties we are interested in remains to be seen.
Will systems acquire the capability to be useful for alignment / cooperation before or after the capability to perform advanced deception?
At least so far, deception and manipulation capabilities seem to be lagging a bit behind usefulness for alignment (e.g. model-written evals / critiques, weak-to-strong generalization), but this could change in the future.
Is consequentialism a powerful attractor? How hard will it be to avoid arbitrarily consequentialist systems?
Current SOTA LLMs seem surprisingly non-consequentialist for their level of capability. I still expect LLMs to be one of the safest paths to AGI in terms of avoiding arbitrarily consequentialist systems.
I hoped to see other groups do the survey as well—looks like this didn’t happen, though a few people asked me to share the template at the time. It would be particularly interesting if someone ran a version of the survey with separate ratings for “agreement with the statement” and “agreement with the implications for risk”.
I agree that a possible downside of talking about capabilities is that people might assume they are uncorrelated and we can choose not to create them. It does seem relatively easy to argue that deception capabilities arise as a side effect of building language models that are useful to humans and good at modeling the world, as we are already seeing with examples of deception / manipulation by Bing etc.
I think the people who think we can avoid building systems that are good at deception often don’t buy the idea of instrumental convergence either (e.g. Yann LeCun), so I’m not sure that arguing for correlated capabilities in terms of intelligence would have an advantage.
When discussing AI risks, talk about capabilities, not intelligence
Re 4, we were just discussing this paper in a reading group at DeepMind, and people were confused why it’s not on arxiv.
The issue with being informal is that it's hard to tell whether you are right. You use words like "motivations" without defining what you mean, and this makes your statements vague enough that it's not clear whether or how they are in tension with other claims. (E.g. what I have read so far doesn't seem to rule out that shards can be modeled as contextually activated subagents with utility functions.)
An upside of formalism is that you can tell when it’s wrong, and thus it can help make our thinking more precise even if it makes assumptions that may not apply. I think defining your terms and making your arguments more formal should be a high priority. I’m not advocating spending hundreds of hours proving theorems, but moving in the direction of formalizing definitions and claims would be quite valuable.
It seems like a bad sign that the most clear and precise summary of shard theory claims was written by someone outside your team. I highly agree with this takeaway from that post: “Making a formalism for shard theory (even one that’s relatively toy) would probably help substantially with both communicating key ideas and also making research progress.” This work has a lot of research debt, and paying it off would really help clarify the disagreements around these topics.
Thanks Daniel, this is a great summary. I agree that internal representation of the reward function is not load-bearing for the claim. The weak form of representation that you mentioned is what I was trying to point at. I will rephrase the sentence to clarify this, e.g. something like “We assume that the agent learns a goal during the training process: some form of implicit internal representation of desired state features or concepts”.
Thanks Daniel for the detailed response (which I agree with), and thanks Alex for the helpful clarification.
I agree that the training-compatible set is not predictive for how the neural network generalizes (at least under the “strong distributional shift” assumption in this post where the test set is disjoint from the training set, which I think could be weakened in future work). The point of this post is that even though you can’t generally predict behavior in new situations based on the training-compatible set alone, you can still predict power-seeking tendencies. That’s why the title says “power-seeking can be predictive” not “training-compatible goals can be predictive”.
The hypothesis you mentioned seems compatible with the assumptions of this post. When you say “the policy develops motivations related to obvious correlates of its historical reinforcement signals”, these “motivations” seem like a kind of training-compatible goals (if defined more broadly than in this post). I would expect that a system that pursues these motivations in new situations would exhibit some power-seeking tendencies because those correlate with a lot of reinforcement signals.
I suspect a lot of the disagreement here comes from different interpretations of the “internal representations of goals” assumption, I will try to rephrase that part better.
The internal representations assumption was meant to be pretty broad. I didn't mean that the network is explicitly representing a scalar reward function over observations or anything like that—e.g. these can be implicit representations of state features. I think this would also include the kind of representations you are assuming in the maze-solving post, e.g. cheese shards / circuits.
Thanks Alex! Your original comment didn’t read as ill-intended to me, though I wish that you’d just messaged me directly. I could have easily missed your comment in this thread—I only saw it because you linked the thread in the comments on my post.
Your suggested rephrase helps to clarify how you think about the implications of the paper, but I’m looking for something shorter and more high-level to include in my talk. I’m thinking of using this summary, which is based on a sentence from the paper’s intro: “There are theoretical results showing that many decision-making algorithms have power-seeking tendencies.”
(Looking back, the sentence I used in the talk was a summary of the optimal policies paper, and then I updated the citation to point to the retargetability paper and forgot to update the summary...)
Sorry about the cite in my “paradigms of alignment” talk, I didn’t mean to misrepresent your work. I was going for a high-level one-sentence summary of the result and I did not phrase it carefully. I’m open to suggestions on how to phrase this differently when I next give this talk.
Similarly to Steven, I usually cite your power-seeking papers to support a high-level statement that “instrumental convergence is a thing” for ML audiences, and I find they are a valuable outreach tool. For example, last year I pointed David Silver to the optimal policies paper when he was proposing some alignment ideas to our team that we would expect don’t work because of instrumental convergence. (There’s a nonzero chance he would look at a NeurIPS paper and basically no chance that he would read a LW post.)
The subtleties that you discuss are important in general, but don’t seem relevant to making the basic case for instrumental convergence to ML researchers. Maybe you don’t care about optimal policies, but many RL people do, and I think these results can help them better understand why alignment is hard.
Here is my guess on how shard theory would affect the argument in this post:
In my understanding, shard theory would predict that the model learns multiple goals from the training-compatible (TC) set (e.g. including both the coin goal and the go-right goal in CoinRun), and may pursue different learned goals in different new situations. The simplifying assumption that the model pursues a randomly chosen goal from the TC set also covers this case, so this doesn’t affect the argument.
Shard theory might also imply that the training-compatible set should be larger, e.g. including goals for which the agent's behavior is not optimal. I don't think this affects the argument, since we just need the TC set to satisfy the condition that permuting the reward values will produce a reward vector that is still in the TC set.
So I think that assuming shard theory in this post would lead to the same conclusions—would be curious if you disagree.
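As a toy illustration of the permutation condition on the TC set (the setup, names, and numbers here are my own hypothetical example, not from the post): if the agent pursues a goal drawn from a set that is closed under permuting reward values, then for most draws the reward lands behind the "option-preserving" action, which is the core of the power-seeking prediction.

```python
import itertools

# Hypothetical toy environment: action A keeps terminal states s1-s3
# reachable, while action B commits to the single terminal state s4.
terminal_states = ["s1", "s2", "s3", "s4"]
reachable_via_A = {"s1", "s2", "s3"}
reachable_via_B = {"s4"}

# One candidate reward vector; the TC-set condition says its
# permutations are also training-compatible goals.
base_reward = [1.0, 0.0, 0.0, 0.0]

a_optimal = 0
total = 0
for perm in itertools.permutations(base_reward):
    reward = dict(zip(terminal_states, perm))
    # A goal-pursuing agent picks the action leading to the best
    # reachable terminal state.
    if max(reward[s] for s in reachable_via_A) >= reward["s4"]:
        a_optimal += 1
    total += 1

# 3 of the 4 terminal states are reached via the option-preserving
# action A, so 3/4 of reward permutations make A optimal.
print(a_optimal / total)  # → 0.75
```

The ratio tracks how many terminal states each action keeps reachable, so the action preserving more options is optimal for most goals in the permutation-closed set—regardless of which particular reward vector we started from.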
Great post! I especially enjoyed the intuitive visualizations for how the heavy-tailed distributions affect the degree of overoptimization of X.
As a possibly interesting connection, your set of criteria for an alignment plan can also be thought of as criteria for selecting a model specification that approximates the ideal specification well, especially trying to ensure that the approximation error is light-tailed.
Thanks Gunnar, those sound like reasonable guidelines!
The common space was still usable by other housemates, but it felt a bit cramped, and I felt more internal pressure to keep it tidy for others to use (while in my own space I feel more comfortable leaving it messy for longer). Our housemates were very tolerant of having kid stuff everywhere, but it still seemed suboptimal.
The fridge, laundry area and outdoor garbage bins were the most overloaded in our case, while the shed and attic were sufficiently spacious and less in demand that it wasn’t an issue. Gathering everyone for a decluttering spree is a noble effort but a bit hard to coordinate. I found it easier to declutter by putting away some type of object (e.g. shoes) and have people put theirs back (to identify things that didn’t belong to anyone). The fridge was often overfull despite regular decluttering—I think it was just too small for the number of people we had, and getting a second fridge would take up extra space.
I would add general disruption of child routines in addition to sleep (though sleep is the most important routine). Surprisingly, it was not as much of an issue the other way around, e.g. the baby was quiet enough not to bother the housemate next door at night. The 3 year old running around the living room in the morning was a bit noisy for the people downstairs though.