There’s no way to raise a human such that their value system cleanly revolves around the one single goal of duplicating a strawberry, and nothing else. By asking for a method of forming values which would permit such a narrow specification of end goals, you’re asking for a value formation process that’s fundamentally different from the one humans use. There’s no guarantee that such a thing even exists, and implicitly aiming to avoid the one value formation process we know is compatible with our own values seems like a terrible idea.
I narrowly agree with most of this, but I tend to say the same thing with a very different attitude:
I would say: “Gee it would be super cool if we could decide a priori what we want the AGI to be trying to do, WITH SURGICAL PRECISION. But alas, that doesn’t seem possible, at least not according to any method I know of.”
I disagree with your apparent suggestion that the above paragraph is obvious or uninteresting, and I also disagree with your apparent suggestion that “setting an AGI’s motivations with surgical precision” is such a dumb idea that we shouldn’t even spend one minute of our time thinking about whether it might be possible to do that.
For example, people who are used to programming almost any other type of software have presumably internalized the idea that the programmer can decide what the software will do with surgical precision. So it’s important to spread the idea that, on current trends, AGI software will be very different from that.
BTW I do agree with you that Eliezer’s interview response seems to suggest that he thinks aligning an AGI to “basic notions of morality” is harder and aligning an AGI to the “strawberry problem” is easier. If that’s what he thinks, it’s at least not obvious to me. (see follow-up)
My sense (which I expect Eliezer would agree with) is that it’s relatively easy to get an AI system to imitate the true underlying ‘basic notions of morality’, to the extent humans agree on that, but that this doesn’t protect you at all as soon as you want to start making large changes, or as soon as you start trying to replace specialist sectors of the economy. (A lot of ethics for doctors has to do with the challenges of simultaneously being a doctor and a human; those ethics will not necessarily be relevant for docbots, and the question of what they should be instead is potentially hard to figure out.)
So if you’re mostly interested in getting out of the acute risk period, you probably need to aim for a harder target.
Hmm, on further reflection, I was mixing up:
Strawberry Alignment (defined as: make an AGI that is specifically & exclusively motivated to duplicate a strawberry without destroying the world), versus
“Strawberry Problem” (make an AGI that in fact duplicates a strawberry without destroying the world, using whatever methods / motivations you like).
Eliezer definitely talks about the latter. I’m not sure Eliezer has ever brought up the former? I think I was getting that from the OP (Quintin), but maybe Quintin was just confused (and/or Eliezer misspoke).
Anyway, making an AGI that can solve the strawberry problem is tautologically no harder than making an AGI that can do advanced technological development and is motivated by human norms / morals / whatever, because the latter set of AGIs is a subset of the former.
Sorry. I crossed out that paragraph. :)
One distinction I think is important to keep in mind here is between precision with respect to what software will do and precision with respect to the effects it will have. While traditional software engineering often (though not always) involves knowing exactly what the software will do, the effects of deploying that software in a real-world environment are very often impossible to predict with perfect accuracy. This reduces the perceived novelty of unintended consequences (though obviously, a fully-fledged AGI would lead to significantly more novelty than anything that preceded it).
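As a concrete (and entirely hypothetical) sketch of that distinction, not something from the thread itself: the little repricing rule below is specified with complete precision, in the sense that we can say exactly what it returns for any input, yet the effect of deploying it depends on how the rest of the environment responds, which the code never mentions.

```python
# Illustrative sketch only: a deterministic repricing rule whose line-by-line
# behavior is fully known, but whose deployed effect depends on the environment.

COST_FLOOR = 10.0  # hypothetical minimum price we are willing to charge


def reprice(competitor_price: float) -> float:
    """Undercut the competitor by 1%, but never go below our cost floor."""
    return max(COST_FLOOR, competitor_price * 0.99)


if __name__ == "__main__":
    # Locally, the behavior is perfectly predictable:
    print(reprice(11.0))  # ~10.89, exactly as the code says

    # Globally, the outcome depends on the environment. If a competitor runs
    # the same deterministic rule, the two systems undercut each other until
    # both prices are pinned at the floor -- an effect you only see by
    # simulating the joint system, not by reading either program alone.
    ours, theirs = 20.0, 19.0
    for _ in range(100):
        ours = reprice(theirs)
        theirs = reprice(ours)
    print(ours, theirs)  # both end up at 10.0
```

Each program in isolation holds no surprises; the surprise lives entirely in the interaction with the environment, which is the sense in which deployment effects can be unpredictable even when the code itself is fully understood.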