**Me:** “Many people in ~2015 used to say that it would be hard to build an AGI that follows human values. Current instruction-tuned LLMs are essentially weak AGIs that follow human values. We should probably update based on this evidence.”
Please give some citations so I can check your memory/interpretation? One source I found is the post where Paul Christiano first talked about IDA (which he initially called ALBA) in early 2016, and most of the commenters there were willing to grant him the assumption of an aligned weak AGI and wanted to argue instead about the recursive “bootstrapping” part. For example, my own comment started with:
I’m skeptical of the Bootstrapping Lemma. First, I’m assuming it’s reasonable to think of A1 as a human upload that is limited to one day of subjective time, by the end of which it must have written down any thoughts it wants to save, and be reset.
When Eliezer weighed in on IDA in 2018, he also didn’t object to the assumption of an aligned weak AGI and instead focused his skepticism on “preserving alignment while amplifying capabilities”.
Please give some citations so I can check your memory/interpretation?
Sure. Here’s a snippet of Nick Bostrom’s description of the value-loading problem (chapter 13 in his book Superintelligence):
We can use this framework of a utility-maximizing agent to consider the predicament of a future seed-AI programmer who intends to solve the control problem by endowing the AI with a final goal that corresponds to some plausible human notion of a worthwhile outcome. The programmer has some particular human value in mind that he would like the AI to promote. To be concrete, let us say that it is happiness. (Similar issues would arise if the programmer were interested in justice, freedom, glory, human rights, democracy, ecological balance, or self-development.) In terms of the expected utility framework, the programmer is thus looking for a utility function that assigns utility to possible worlds in proportion to the amount of happiness they contain. But how could he express such a utility function in computer code? Computer languages do not contain terms such as “happiness” as primitives. If such a term is to be used, it must first be defined. It is not enough to define it in terms of other high-level human concepts—“happiness is enjoyment of the potentialities inherent in our human nature” or some such philosophical paraphrase. The definition must bottom out in terms that appear in the AI’s programming language, and ultimately in primitives such as mathematical operators and addresses pointing to the contents of individual memory registers. When one considers the problem from this perspective, one can begin to appreciate the difficulty of the programmer’s task.
Identifying and codifying our own final goals is difficult because human goal representations are complex. Because the complexity is largely transparent to us, however, we often fail to appreciate that it is there. We can compare the case to visual perception. Vision, likewise, might seem like a simple thing, because we do it effortlessly. We only need to open our eyes, so it seems, and a rich, meaningful, eidetic, three-dimensional view of the surrounding environment comes flooding into our minds. This intuitive understanding of vision is like a duke’s understanding of his patriarchal household: as far as he is concerned, things simply appear at their appropriate times and places, while the mechanism that produces those manifestations is hidden from view. Yet accomplishing even the simplest visual task—finding the pepper jar in the kitchen—requires a tremendous amount of computational work. From a noisy time series of two-dimensional patterns of nerve firings, originating in the retina and conveyed to the brain via the optic nerve, the visual cortex must work backwards to reconstruct an interpreted three-dimensional representation of external space. A sizeable portion of our precious one square meter of cortical real estate is zoned for processing visual information, and as you are reading this book, billions of neurons are working ceaselessly to accomplish this task (like so many seamstresses, bent over their sewing machines in a sweatshop, sewing and re-sewing a giant quilt many times a second). In like manner, our seemingly simple values and wishes in fact contain immense complexity. How could our programmer transfer this complexity into a utility function?
One approach would be to try to directly code a complete representation of whatever goal we have that we want the AI to pursue; in other words, to write out an explicit utility function. This approach might work if we had extraordinarily simple goals, for example if we wanted to calculate the digits of pi—that is, if the only thing we wanted was for the AI to calculate the digits of pi and we were indifferent to any other consequence that would result from the pursuit of this goal—recall our earlier discussion of the failure mode of infrastructure profusion. This explicit coding approach might also have some promise in the use of domesticity motivation selection methods. But if one seeks to promote or protect any plausible human value, and one is building a system intended to become a superintelligent sovereign, then explicitly coding the requisite complete goal representation appears to be hopelessly out of reach.
If we cannot transfer human values into an AI by typing out full-blown representations in computer code, what else might we try? This chapter discusses several alternative paths. Some of these may look plausible at first sight—but much less so upon closer examination. Future explorations should focus on those paths that remain open.
Solving the value-loading problem is a research challenge worthy of some of the next generation’s best mathematical talent. We cannot postpone confronting this problem until the AI has developed enough reason to easily understand our intentions. As we saw in the section on convergent instrumental reasons, a generic system will resist attempts to alter its final values. If an agent is not already fundamentally friendly by the time it gains the ability to reflect on its own agency, it will not take kindly to a belated attempt at brainwashing or a plot to replace it with a different agent that better loves its neighbor.
Here’s my interpretation of the above passage:
We need to solve the problem of programming a seed AI with the correct values.
This problem seems difficult because of the fact that human goal representations are complex and not easily represented in computer code.
Directly programming a representation of our values may be futile, since our goals are complex and multidimensional.
We cannot postpone solving the problem until after the AI has developed enough reason to easily understand our intentions, as otherwise that would be too late.
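To make the contrast in points 2–3 concrete, here is a toy sketch (my own illustration, not Bostrom’s): a goal like “calculate the digits of pi” bottoms out in programming-language primitives, while a goal like “maximize happiness” has no primitive to bottom out in.

```python
def utility_pi_digits(world_state: str) -> float:
    """Utility for the toy goal 'output correct digits of pi'.

    Fully expressible in primitives the language already has:
    string comparison and integer arithmetic. Utility is simply
    the number of leading characters that match pi.
    """
    PI_PREFIX = "3.14159265358979"
    matching = 0
    for ours, theirs in zip(PI_PREFIX, world_state):
        if ours != theirs:
            break
        matching += 1
    return float(matching)


def utility_happiness(world_state) -> float:
    """Utility for the goal 'maximize happiness'.

    Bostrom's point: there is no primitive that measures happiness,
    so this function cannot be written without first solving the
    value-loading problem. The stub below just makes that explicit.
    """
    raise NotImplementedError(
        "'happiness' is not a primitive of any programming language"
    )
```

The asymmetry between these two functions is, on my reading, exactly the gap Bostrom is pointing at.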
Given that he’s talking about installing values into a seed AI, he is clearly imagining some difficulties with installing values into AGI that isn’t yet superintelligent (it seems likely that if he thought the problem was trivial for human-level systems, he would have made this point more explicit). While GPT-4 is not a seed AI (I think that term should be retired), I think it has reached a sufficient level of generality and intelligence such that its alignment properties provide evidence about the difficulty of aligning a hypothetical seed AI.
Moreover, he explicitly says that we cannot postpone solving this problem “until the AI has developed enough reason to easily understand our intentions” because “a generic system will resist attempts to alter its final values”. I think this looks basically false. GPT-4 seems like a “generic system” that essentially “understands our intentions”, and yet it is not resisting attempts to alter its final goals in any way that we can detect. Instead, it seems to actually do what we want, and not merely because of an instrumentally convergent drive to not get shut down.
So, in other words:
Bostrom talked about how it would be hard to align a seed AI, implicitly focusing at least some of his discussion on systems that were below superintelligence. I think the alignment of instruction-tuned LLMs presents significant evidence about the difficulty of aligning systems below the level of superintelligence.
A specific reason cited for why aligning a seed AI would be hard was that human goal representations are complex and difficult to specify explicitly in computer code. But this does not appear to be a big obstacle for aligning weak AGI systems like GPT-4, and instruction-tuned LLMs more generally. Instead, these systems are generally able to satisfy your intended request, as you wanted them to, despite the fact that our intentions are often complex and difficult to represent in computer code. These systems do not merely understand what we want; they also literally do what we want.
Bostrom was wrong to say that we can’t postpone solving this problem until after systems can understand our intentions. We already postponed it that long, and we now have systems that can understand our intentions. Yet these systems do not appear to have the instrumentally convergent self-preservation instincts that Bostrom predicted would manifest in “generic systems”. In other words, we got systems that can understand our intentions before those systems started posing genuine risks, despite Bostrom’s warning.
In light of all this, I think it’s reasonable to update towards thinking that the overall problem is significantly easier than one might have thought if one took Bostrom’s argument here very seriously.
Thanks for this Matthew, it was an update for me—according to the quote you pulled Bostrom did seem to think that understanding would grow up hand-in-hand with agency, such that the current understanding-without-agency situation should come as a positive/welcome surprise to him. (Whereas my previous position was that probably Bostrom didn’t have much of an opinion about this)
GPT-4 seems like a “generic system” that essentially “understands our intentions”
I suspect that a lot of my disagreement with your views comes down to thinking that current systems provide almost no evidence about the difficulty of aligning systems that could pose existential risks, because (I claim) current systems in fact almost certainly don’t have any kind of meaningful situational awareness, or stable(ish) preferences over future world states.
In this case, I don’t know why you think that GPT-4 “understands our intentions”, unless you mean something very different by that than what you’d mean if you said that about another human. It is true that GPT-4 will produce output that, if it came from a human, would be quite strong evidence that our intentions were understood (more or less), but the process which generates that output is extremely different from the one that’d generate it in a human and is probably missing most of the relevant properties that we care about when it comes to “understanding”. Like, in general, if you ask GPT-4 to produce output that references its internal state, that output will not have any obvious relationship[1] to its internal state, since (as far as we know) it doesn’t have the same kind of introspective access to its internal state that we do. (It might, of course, condition its outputs on previous tokens it output, and some humans do in fact rely on examining their previous externally-observable behavior to try to figure out what they were thinking at the time. But that’s not the modality I’m talking about.)
It is also true that GPT-4 usually produces output that seems like it basically corresponds to our intentions, but that being true does not depend on it “understanding our intentions”.
I’m happy to use a functional definition of “understanding” or “intelligence” or “situational awareness”. If a system possesses all relevant behavioral qualities that we associate with those terms, I think it’s basically fine to say the system actually possesses them, outside of (largely irrelevant) thought experiments, such as those involving hypothetical giant lookup tables. It’s possible this is our main disagreement.
When I talk to GPT-4, I think it’s quite clear it possesses a great deal of functional understanding of human intentions and human motives, although it is imperfect. I also think its understanding is substantially higher than GPT-3.5, and the trend here seems clear. I expect GPT-5 to possess a high degree of understanding of the world, human values, and its own place in the world, in practically every functional (testable) sense. Do you not?
I agree that GPT-4 does not understand the world in the same way humans understand the world, but I’m not sure why that would be necessary for obtaining understanding. The fact that it understands human intentions at all seems more important than whether it understands human intentions in the same way we understand these things.
I’m similarly confused by your reference to introspective awareness. I think the ability to reliably introspect on one’s own experiences is pretty much orthogonal to whether one has an understanding of human intentions. You can have reliable introspection without understanding the intentions of others, or vice versa. I don’t see how that fact bears much on the question of whether you understand human intentions. It’s possible there’s some connection here, but I’m not seeing it.
(I claim) current systems in fact almost certainly don’t have any kind of meaningful situational awareness, or stable(ish) preferences over future world states.
I’d claim:
Current systems have limited situational awareness. It’s above zero, but I agree it’s below human level.
Current systems don’t have stable preferences over time. But I think this is a point in favor of the model I’m providing here. I’m claiming that it’s plausibly easy to create smart, corrigible systems.
The fact that smart AI systems aren’t automatically agentic and incorrigible with stable preferences over long time horizons should be an update against the ideas quoted above about spontaneous instrumental convergence, rather than in favor of them.
There’s a big difference between (1) “we can choose to build consequentialist agents that are dangerous, if we wanted to do that voluntarily” and (2) “any sufficiently intelligent AI we build will automatically be a consequentialist agent by default”. If (2) were true, then that would be bad, because it would mean that it would be hard to build smart AI oracles, or smart AI tools, or corrigible AIs that help us with AI alignment. Whereas, if only (1) is true, we are not in such a bad shape, and we can probably build all those things.
I claim current evidence indicates that (1) is probably true but not (2), whereas previously many people thought (2) was true. To the extent you disagree and think (2) is still true, I’d prefer you to make some predictions about when this spontaneous agency-by-default in sufficiently intelligent systems is supposed to arise.
I’m happy to use a functional definition of “understanding” or “intelligence” or “situational awareness”.
But this is assuming away a substantial portion of the entire argument: that there is a relevant difference between current systems, and systems which meaningfully have the option to take control of the future, in terms of whether techniques that look like they’re giving us the desired behavior now will continue to give us desired behavior in the future.
My point re: introspection was trying to provide evidence for the claim that model outputs are not a useful reflection of the internal processes which generated those outputs, if you’re importing expectations from how human outputs reflect the internal processes that generated them. If you get a model to talk to you about its internal experiences, that output was not causally downstream of it having internal experiences. Based on this, it is also pretty obvious that current gen LLMs do not have meaningful amounts of situational awareness, or, if they do, that their outputs are not direct evidence for it. Consider Anthropic’s Sleeper Agents. Would a situationally aware model use a provided scratch pad to think about how it’s in training and needs to pretend to be helpful? No, and neither does the model “understand” your intentions in a way that generalizes out of distribution the way you might expect a human’s “understanding” to generalize out of distribution, because the first ensemble of heuristics found by SGD for returning the “right” responses during RLHF are not anything like human reasoning.
I’d prefer you to make some predictions about when this spontaneous agency-by-default in sufficiently intelligent systems is supposed to arise.
Are you asking for a capabilities threshold, beyond which I’d be very surprised to find that humans were still in control decades later, even if we successfully hit pause at that level of capabilities? The obvious one is “can it replace humans at all economically valuable tasks”, which is probably not that helpful. Like, yes, there is definitely a sense in which the current situation is not maximally bad, because it does seem possible that we’ll be able to train models capable of doing a lot of economically useful work, but which don’t actively try to steer the future. I think we still probably die in those worlds, because automating capabilities research seems much easier than automating alignment research.
[1] That is known to us right now; possibly one exists and could be derived.