I’m familiar with the arguments you mention for the other hard part, and I think instruction-following makes that part (or parts, depending on how you divvy it up) substantially easier. I do view it as addressing all of your points (there’s a lot of overlap amongst them).
And yes, that is separate from avoiding the problem of solving ethics.
So it’s a pretty big crux; I think instruction-following helps a lot. I’d love to have a phone call, but I’d like it if you’d read that post first, since I go into detail on the scheme and many objections there. LW puts it at about a 15-minute read, I think.
But I’ll try to summarize a little more, since re-explaining your thinking is always a good exercise.
Making instruction-following the AGI’s central goal means you don’t have to solve the remainder of the problems you list all at once. You get to keep changing your mind about what to do with the AI (your point 4). Instead of choosing an invariant goal that has to work for all time, your invariant is a pointer to the human’s preferences, which can change as they like (your point 5). It helps with point 3, stability, by letting you ask the AGI whether its goal will remain stable and keep functioning as you want in new contexts and in the face of the learning it’s doing.
The key here is not thinking of the AGI as an omniscient genie. This wouldn’t work at all in a fast foom. But if the AGI gets smarter slowly, as a network-based AGI will, you get to use its intelligence to help align its next level of capabilities, at every level.
Ultimately, this should culminate in getting superhuman help to achieve full value alignment, a truly friendly and truly sovereign AGI. But there’s no rush to get there.
Naturally, this scheme working out would be good if the humans in charge are good and wise, and bad if they’re not.