The main issue with this sort of thing (on my understanding of Eliezer’s models) is Hidden Complexity of Wishes. You can make an AI safe by making it only able to fulfill certain narrow, well-defined kinds of wishes where we understand all the details of what we want, but then it probably won’t suffice for a pivotal act. Alternatively, you can make it powerful enough for a pivotal act, but unfortunately a (good) pivotal act probably has to be very big, very irreversible, and very entangled with all the complicated details of human values. So alignment is likely to be a necessary step for a (good) pivotal act.
What this looks-like-in-practice is that “ask the AI for plans that succeed conditional on them being executed” has to be operationalized somehow, and the operationalization will inevitably not correctly capture what we actually want (because “what we actually want” has a ton of hidden complexity).
This is tricky. Let’s say we have a powerful black box that initially has no knowledge or morals, but a lot of malleable computational power. We train it to give answers to scary real-world questions, like how to succeed at business or how to manipulate people. If we reward it for competent answers while we can still understand the answers, at some point we’ll stop understanding the answers, but they’ll keep being super-competent. That’s certainly a danger, and I agree it’s real. But by the same token, if we reward the box for aligned answers while we can still understand them, the alignment will generalize too. There seems to be no reason why alignment would be much less learnable than competence about reality.
Maybe your and Eliezer’s point is that competence about reality has a simple core, while alignment doesn’t. But I don’t see the argument for that. Reality is complex, and so are values. A process for learning and acting in reality can have a simple core, but so can a process for learning and acting on values. Humans pick up knowledge from their surroundings, which is part of “general intelligence”, but we pick up values just as easily and using the same circuitry. Where does the symmetry break?
I do think alignment has a relatively-simple core. Not as simple as intelligence/competence, since there’s a decent number of human-value-specific bits which need to be hardcoded (as they are in humans), but not enough to drive the bulk of the asymmetry.
(BTW, I do think you’ve correctly identified an important point which I think a lot of people miss: humans internally “learn” values from a relatively-small chunk of hardcoded information. It should be possible in-principle to specify values with a relatively small set of hardcoded info, similar to the way humans do it; I’d guess at most 1000 things on the order of complexity of a very fuzzy face detector are required, and probably fewer than 100.)
The reason alignment is less learnable than competence is not that it’s much more complex, but that it’s harder to generate a robust reward signal for alignment. Basically any sufficiently-complex long-term reward signal should incentivize competence. But the vast majority of reward signals do not incentivize alignment. In particular, even if we have a reward signal which is “close” to incentivizing alignment in some sense, the actual-process-which-generates-the-reward-signal is likely to be at least as simple/natural as actual alignment, so a system trained on that signal is at least as likely to end up optimizing for the reward-generating process as for the thing we actually wanted.
(I’ll note that the departure from talking about Hidden Complexity here is mainly because competence in particular is a special case where “complexity” plays almost no role, since it’s incentivized by almost any reward. Hidden Complexity is still usually the right tool for talking about why any particular reward-signal will not incentivize alignment.)
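To make the “most reward signals don’t incentivize alignment” point concrete, here’s a minimal toy sketch (entirely my own illustration, with made-up numbers, not anything from the setup above): the true values and the evaluator who generates the reward are both simple linear functions of an answer’s features, but the evaluator overweights one feature and misses another. A learner trained only on the reward signal recovers the evaluator, not the values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy assumption: answers are 3-dimensional feature vectors. "True value" and
# the evaluator's judgment are both linear in the features, but the evaluator
# overweights feature 0 and completely misses feature 2.
true_w = np.array([1.0, 1.0, 1.0])        # what we actually want
evaluator_w = np.array([2.0, 1.0, 0.0])   # the process generating the reward

X = rng.normal(size=(10_000, 3))
reward = X @ evaluator_w + rng.normal(scale=0.1, size=len(X))  # noisy reward signal

# Train a learner purely on the reward signal (ordinary least squares).
learned_w, *_ = np.linalg.lstsq(X, reward, rcond=None)

print("learned weights:   ", np.round(learned_w, 2))  # ~[2, 1, 0]
print("evaluator weights: ", evaluator_w)
print("true-value weights:", true_w)
# The learner converges on the reward-generating process (the evaluator),
# not on the true values -- the evaluator is at least as simple a target.
```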
I suspect that Eliezer’s answer to this would be different, and I don’t have a good guess what it would be.
Thinking about it more, it seems that messy reward signals will lead to some approximation of alignment that works while the agent has low power compared to its “teachers”, but at high power it will do something strange and maybe harm the values of its “teachers”. That holds true for humanity as a whole gaining a lot of power and going against evolutionary values (“superstimuli”), and for individual humans gaining a lot of power and going against societal values (“power corrupts”), so it’s probably true for AI as well. The worrying thing is that high power by itself seems sufficient for the change: for example, if an AI gets good at real-world planning, that alone constitutes power and therefore danger. And there don’t seem to be any natural counterexamples. So yeah, I’m updating toward your view on this.
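As a cartoon of that low-power/high-power asymmetry, here’s a small simulation sketch under assumed toy distributions (again my own illustration, not anything from the discussion): each candidate plan has a true value and a proxy score whose error term has heavier tails, and “power” is just how many candidate plans the agent can search before executing the proxy-best one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy assumptions: each candidate plan has a "true value" (what the teachers
# care about, standard normal) and a proxy score (an approximation of their
# values) whose error term has heavier tails. "Power" is how many candidate
# plans the agent can search before executing the proxy-best one.
def search(power, trials=500):
    true_scores, proxy_scores = [], []
    for _ in range(trials):
        true_value = rng.normal(size=power)
        error = rng.lognormal(mean=0.0, sigma=1.0, size=power)  # heavy-tailed
        proxy = true_value + 0.5 * error
        best = np.argmax(proxy)            # agent picks the proxy-best plan
        true_scores.append(true_value[best])
        proxy_scores.append(proxy[best])
    return np.mean(proxy_scores), np.mean(true_scores)

for power in (1, 10, 100, 1_000, 10_000):
    proxy_score, true_score = search(power)
    print(f"power={power:>6}  proxy of chosen plan={proxy_score:6.2f}  "
          f"true value of chosen plan={true_score:5.2f}")
# At low power, selecting on the proxy also improves the true value; as power
# grows, the proxy score of the chosen plan keeps climbing while its true
# value stops improving and slides back toward that of a random plan, because
# the search increasingly selects on the proxy's error term.
```

The exact numbers don’t matter; the point is just that selection pressure on the proxy eventually outruns whatever correlation it had with the true values.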