There were an awful lot of early writings, and some of them did say that the difficulty of getting AGI to understand values is a big part of the alignment problem. The List of Lethalities does make that claim. The difficulty of getting the AGI to care even if it does understand has also been a big part of the public-facing debate. I look at some of the historical arguments in The (partial) fallacy of dumb superintelligence, written partly in response to Matthew’s post on this topic.
Obsessing over what happened in the past is probably a mistake. The better question is: can the strengths of LLMs (with respect to understanding values and following directions) be leveraged into working AGI alignment?

My answer is yes, and in a way that’s not too far from default AGI development trends, which makes it practically achievable even in a messy and self-interested world.
Naturally that answer is a bit complex, so it’s spread across a few posts. I should organize the set better and write an overview, but in brief: we can probably build and align language model agent AGI using a stacked suite of alignment methods that can mostly or entirely avoid using RL for alignment, and achieve corrigibility by making instruction-following the central goal. This still leaves the huge problem of a multipolar scenario with multiple humans in charge of ASIs, but those problems might be navigated too.
I don’t think this is true and can’t find anything in the post to that effect. Indeed, the post says things that would be quite incompatible with that claim, such as point 21.
In sum: the claim is there, roughly as I remembered it, but it’s probably not applicable to this particular discussion, since it addresses an entirely distinct route to AGI alignment. So I stand corrected, but in a subtle way that bears explication.
So I apologize for wasting your time. Debating who said what when is probably not the best use of our limited time to work on alignment. But since I made the claim, I went back and thought about it, and wrote about it, once again.
I was thinking of point 24.1:
The first thing generally, or CEV specifically, is unworkable because the complexity of what needs to be aligned or meta-aligned for our Real Actual Values is far out of reach for our FIRST TRY at AGI. Yes I mean specifically that the dataset, meta-learning algorithm, and what needs to be learned, is far out of reach for our first try. It’s not just non-hand-codable, it is unteachable on-the-first-try because the thing you are trying to teach is too weird and complicated.
BUT, point 24 as a whole says there are two approaches: 1) the sovereign route quoted above, and 2) a quite separate route, building a corrigible AI that doesn’t fully understand our values. That second route is probably the one Matthew has in mind in claiming that LLMs are good news. Yudkowsky is explicit that the difficulty of getting AGI to understand values doesn’t apply to that route, so that difficulty isn’t relevant here. That’s an important but subtle distinction.
Evidently I’m far from the only one to get confused about that issue; Yudkowsky says as much in point 24 itself. Disentangling those claims, and how they’re changed by slow takeoff, is the topic of my post cited above.
I personally think that sovereign AGI that gets our values right is out of reach, exactly as Yudkowsky describes in the quotation above. But his arguments against corrigible AGI are much weaker, and I think that route is very much achievable, since it demands only an approximate understanding of intent rather than a precise and stable understanding of our values. The post above and my recent one on instruction-following AGI make those arguments in detail. Max Harms’ recent series on corrigible AGI makes a similar point in a different way: he argues that Yudkowsky’s objections to corrigibility as unnatural don’t apply if corrigibility is the only or most important goal, and that it’s simple and coherent enough to be teachable.
That’s me switching back to the object-level issues. And again, apologies for wasting your time with poorly remembered claims about subtle historical statements.
There’s AGI that’s our first try, which should only use least dangerous cognition necessary for preventing immediately following AGIs from destroying the world six months later. There’s misaligned superintelligence that knows, but doesn’t care. Taken together, these points suggest that getting AGI to understand values is not an urgent part of the alignment problem in the sense of leveraging AI capabilities to get actually-good outcomes, whatever technical work that requires. Getting AGI to understand corrigibility for example might be more relevant, if we are running with the highly dangerous kinds of cognition implied by general intelligence of LLMs.
I agree with all of that. The post I mentioned, The (partial) fallacy of dumb superintelligence, deals with the genie that knows but doesn’t care, and with how we get one that cares in a slow takeoff. My other post, Instruction-following AGI is easier and more likely than value aligned AGI, makes this same argument: nobody is going to bother getting the first AGIs to fully understand human values, since that’s both harder and unnecessary. Max Harms makes a similar argument (and in many ways makes it better), with a slightly different proposed path to corrigibility.

As you say, these things have been understood for a long time. I’m a bit disturbed that more serious alignment people don’t talk about them more. The difficulty of value alignment makes it largely irrelevant to the current discussion, since we are very likely going to rush ahead into, as you put it (and I agree),

the highly dangerous kinds of cognition implied by general intelligence of LLMs.

The perfect is the enemy of the good. We should mostly quit worrying about the very difficult problem of full value alignment, and start thinking more about how to get good results with the much more achievable corrigible or instruction-following AGI.