the AGI can be corrected and can act as a collaborator in improving its alignment as we collaborate to improve its intelligence.
Why do you think you can get to a state where the AGI is materially helping to solve extremely difficult problems (not extremely difficult like chess, extremely difficult like inventing language before you have language), and also the AGI got there due to some process that doesn’t also immediately cause there to be a much smarter AGI? https://tsvibt.blogspot.com/2023/01/a-strong-mind-continues-its-trajectory.html
I talk about how this might work in the post linked just before the text you quoted:
Instruction-following AGI is easier and more likely than value aligned AGI
I’m not sure I understand your question. I think maybe the answer is roughly that you do it gradually and carefully, in a slow takeoff scenario where you’re able to shut down and adjust the AGI at least while it passes through roughly the level of human intelligence.
It’s a process of aligning it to follow instructions, then using its desire to follow instructions to get honesty, helpfulness, and corrigibility from it. Of course it won’t be much help before it’s human-level, but it can at least tell you what it thinks it would do in different circumstances. That would let you adjust its alignment. It’s hopefully something like a human therapist with a cooperative patient, except that the therapist can also tinker with the patient’s brain function.
But I’m not sure I understand your question. The example of inventing language confuses me, because I tend to assume the AGI would probably understand language (the way LLMs loosely understand language) from inception, through pretraining. And even failing that, it wouldn’t have to invent language, just learn human language. I’m mostly thinking of language model cognitive architecture AGI, but it seems like anything based on neural networks could learn language before being smarter than a human. You’d stop the training process to give it instructions. For instance, humans are still not “human-level” by the time they understand a good bit of language.
I’m also thinking that a network-based AGI pretty much guarantees a slow takeoff, if that addresses what you mean by “immediately cause there to be a smarter AI”. The AGI will keep developing, as your linked post argues (I think that’s what you meant to reference about that post), but I am assuming it will allow itself to be shut down if it’s following instructions. That’s the way instruction-following (IF) overlaps with corrigibility. Once it’s shut down, you can alter its alignment by altering or re-doing the relevant pretraining or goal descriptions.
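To make that loop a bit more concrete, here’s a toy sketch in Python. It is purely illustrative: every class and function name here is made up, and real probing of a system’s dispositions would be far harder than a numeric check. It just shows the shape of the “train a little, pause, probe, adjust, resume” cycle I have in mind.

```python
# Toy illustration of the "train a little, pause, probe, adjust, resume" loop
# described above. All names are invented; this is not a real training stack.

from dataclasses import dataclass, field


@dataclass
class ToyAGI:
    capability: float = 0.0
    # Crude stand-in for "how reliably instruction-following it currently is".
    if_score: float = 1.0


@dataclass
class ToyTrainer:
    adjustments: list = field(default_factory=list)

    def train_step(self, agi: ToyAGI, target_capability: float) -> None:
        agi.capability = target_capability   # one small capability increment
        agi.if_score -= 0.15                 # pretend training can drift alignment

    def probe(self, agi: ToyAGI) -> float:
        # Stand-in for asking the (sub/near-human) system what it would do in
        # hypothetical situations, including whether it would accept shutdown.
        return agi.if_score

    def adjust(self, agi: ToyAGI) -> None:
        # Stand-in for re-doing the relevant training or revising the goal
        # descriptions while the system is paused.
        agi.if_score = 1.0
        self.adjustments.append(round(agi.capability, 2))


def gated_training(agi: ToyAGI, trainer: ToyTrainer, checkpoints: list) -> None:
    """Pause at every capability checkpoint and check alignment before resuming."""
    for checkpoint in checkpoints:
        trainer.train_step(agi, checkpoint)
        if trainer.probe(agi) < 0.9:
            trainer.adjust(agi)


if __name__ == "__main__":
    trainer = ToyTrainer()
    gated_training(ToyAGI(), trainer, [0.2, 0.4, 0.6, 0.8, 1.0])
    print("adjusted at capabilities:", trainer.adjustments)
```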
Or maybe I’m misunderstanding your question entirely, in which case, sorry about that.
Anyway, I did try to explain the scheme in that link if you’re interested. I am claiming this is very likely how people will try to align the first AGIs, if they’re anything like what we can anticipate from current efforts; when you’re actually deciding what to get your AGI to do, following instructions is the obvious first thing to try.
Yeah I think there’s a miscommunication. We could try having a phone call.
A guess at the situation is that I’m responding to two separate things. One is the story here:
One mainstay of claiming alignment is near-impossible is the difficulty of “solving ethics”—identifying and specifying the values of all of humanity. I have come to think that this is obviously (in retrospect—this took me a long time) irrelevant for early attempts at alignment: people will want to make AGIs that follow their instructions, not try to do what all of humanity wants for all of time. This also massively simplifies the problem; not only do we not have to solve ethics, but the AGI can be corrected and can act as a collaborator in improving its alignment as we collaborate to improve its intelligence.
It does simplify the problem, but not massively relative to the whole problem. A harder part shows up in the task of having a thing that
is capable enough to do things that would help humans a lot, like a lot a lot, whether or not it actually does those things, and
doesn’t kill everyone, or rather, doesn’t destroy approximately all human value.
And I’m not pulling a trick on you where I say that X is the hard part, and then you realize that actually we don’t have to do X, and then I say “Oh wait actually Y is the hard part”. Here is a quote from “Coherent Extrapolated Volition”, Yudkowsky 2004 https://intelligence.org/files/CEV.pdf:
1. Solving the technical problems required to maintain a well-specified abstract invariant in a self-modifying goal system. (Interestingly, this problem is relatively straightforward from a theoretical standpoint.)
2. Choosing something nice to do with the AI. This is about midway in theoretical hairiness between problems 1 and 3.
3. Designing a framework for an abstract invariant that doesn’t automatically wipe out the human species. This is the hard part.
I realize now that I don’t know whether or not you view IF as trying to address this problem.
The other thing I’m responding to is:
the AGI can be corrected and can act as a collaborator in improving its alignment as we collaborate to improve its intelligence.
If the AGI can (relevantly) act as a collaborator in improving its alignment, it’s already a creative intelligence on par with humanity. Which means there was already something that made a creative intelligence on par with humanity. Which is probably fast, ongoing, and nearly inextricable from the mere operation of the AGI.
I also now realize that I don’t know how much of a crux for you the claim that you made is.
I’m familiar with the arguments you mention for the other hard part, and I think instruction-following helps make that part (or parts, depending on how you divvy it up) substantially easier. I do view it as addressing all of your points (there’s a lot of overlap amongst them).
And yes, that is separate from avoiding the problem of solving ethics.
So it’s a pretty big crux; I think instruction-following helps a lot. I’d love to have a phone call; I’d like it if you’d read that post first, because I do go into detail on the scheme and many objections there. LW puts it at a 15-minute read, I think.
But I’ll try to summarize a little more, since re-explaining your thinking is always a good exercise.
Making instruction-following the AGI’s central goal means you don’t have to solve the remainder of the problems you list all at once. You get to keep changing your mind about what to do with the AI (your point 4). Instead of choosing an invariant goal that has to work for all time, your invariant is a pointer to the human’s preferences, which can change as they like (your point 5). It helps with point 3, stability, by letting you ask the AGI whether its goal will remain stable and keep functioning as you want in new contexts and in the face of the learning it’s doing.
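To put the “pointer” idea in very crude terms, here’s a toy contrast in Python. All names are made up, and this is obviously not a proposal for how the goal representation would actually be built; it just shows the difference between a fixed, for-all-time goal and an invariant that is only a pointer to the principal’s current instructions.

```python
# Toy illustration: a fixed goal versus an invariant that is just a pointer
# to the principal's current instructions. All names are invented.

from dataclasses import dataclass


@dataclass
class Principal:
    current_instructions: str


@dataclass
class FixedGoalAgent:
    goal: str  # baked in at design time; has to be right for all time

    def objective(self) -> str:
        return self.goal


@dataclass
class InstructionFollowingAgent:
    principal: Principal  # the invariant is the pointer, not its contents

    def objective(self) -> str:
        # Re-read the principal's (possibly revised) instructions each time.
        return self.principal.current_instructions


if __name__ == "__main__":
    boss = Principal("draft a plan and check it with me before acting")
    agent = InstructionFollowingAgent(boss)
    print(agent.objective())
    boss.current_instructions = "pause and explain what you were about to do"
    print(agent.objective())  # same invariant, new content
```

The only point of the sketch is that what stays fixed is the deference to the principal, not any particular object-level goal.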
The key here is not thinking of the AGI as an omniscient genie. This wouldn’t work at all in a fast foom. But if the AGI gets smarter slowly, as a network-based AGI will, you get to use its intelligence to help align its next level of capabilities, at every level.
Ultimately, this should culminate in getting superhuman help to achieve full value alignment, a truly friendly and truly sovereign AGI. But there’s no rush to get there.
Naturally, this scheme working would be good if the humans in charge are good and wise, and not good if they’re not.