programmers build a seed AI (a not-yet-superintelligent AGI that will recursively self-modify to become superintelligent after many stages) that includes, among other things, a large block of code I’ll call X.
The programmers think of this block of code as an algorithm that will make the seed AI and its descendants maximize human pleasure.
The problem, I reckon, is that X will never be anything like this.
It will likely be something much more mundane, namely modelling the world properly and predicting outcomes given various counterfactuals. You might worry about it trying to expand its hardware resources in an unbounded fashion, but any AI doing this would try to shut itself down if its utility function penalized it for the amount of resources it had, so you can check by capping utility in inverse proportion to available hardware: at worst, it will eventually figure out how to shut itself down, and you will have dodged a bullet. I also reckon that the AI’s capacity for deception would be severely crippled if its utility function penalized it when it failed to correctly predict its own actions or their consequences. And if you’re going to let the AI actually do things… why not do exactly that?
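As a toy sketch of the two penalty terms proposed above (the function name, weights, and representation of predictions are all illustrative, not part of any real system):

```python
def capped_utility(base_utility, hardware_used, predicted, observed,
                   resource_weight=1.0, prediction_weight=1.0):
    """Toy utility: capped in inverse proportion to hardware used,
    and penalized for inaccurate self-prediction."""
    # Cap: the more hardware the AI acquires, the lower its achievable utility.
    resource_cap = 1.0 / (1.0 + resource_weight * hardware_used)
    # Penalty: fraction of published self-predictions that failed to match reality.
    errors = sum(p != o for p, o in zip(predicted, observed))
    prediction_penalty = prediction_weight * errors / max(len(predicted), 1)
    return base_utility * resource_cap - prediction_penalty
```

Under this toy scoring, doubling the hardware claimed halves the best attainable utility, and every misprediction eats directly into the score, which is the intuition behind both proposals.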
Arguably, such an AI would rather uneventfully arrive at a point where, asked to “make us happy”, it would simply answer with a point-by-point plan representing what it thinks we mean, and fill in details until we feel sure our intent is properly met. Then we just tell it to do it. I mean, seriously, if we were making an AGI, I would think “tell us what will happen next” would be fairly high on our list of priorities, surpassed only by “do not do anything we veto”. Why would you program an AI to “maximize happiness” rather than to “produce documents detailing every step of maximizing happiness”? They are basically the same thing, except that the latter gives you the opportunity for a sanity check.
You might worry about it trying to expand its hardware resources in an unbounded fashion, but any AI doing this would try to shut itself down if its utility function penalized it for the amount of resources it had
What counts as ‘resources’? Do we think that ‘hardware’ and ‘software’ are natural kinds, such that the AI will always understand what we mean by the two? What if software innovations on their own suffice to threaten the world, without hardware takeover?
I also reckon that the AI’s capacity for deception would be severely crippled if its utility function penalized it when it failed to correctly predict its own actions or their consequences.
Hm? That seems to only penalize it for self-deception, not for deceiving others.
Arguably, such an AI would rather uneventfully arrive at a point where, asked to “make us happy”, it would simply answer with a point-by-point plan representing what it thinks we mean, and fill in details until we feel sure our intent is properly met.
You’re talking about an Oracle AI. This is one useful avenue to explore, but it’s almost certainly not as easy as you suggest:
“‘Tool AI’ may sound simple in English, a short sentence in the language of empathically-modeled agents — it’s just ‘a thingy that shows you plans instead of a thingy that goes and does things.’ If you want to know whether this hypothetical entity does X, you just check whether the outcome of X sounds like ‘showing someone a plan’ or ‘going and doing things’, and you’ve got your answer. It starts sounding much scarier once you try to say something more formal and internally-causal like ‘Model the user and the universe, predict the degree of correspondence between the user’s model and the universe, and select from among possible explanation-actions on this basis.’ [...]
“If we take the concept of the Google Maps AGI at face value, then it actually has four key magical components. (In this case, ‘magical’ isn’t to be taken as prejudicial, it’s a term of art that means we haven’t said how the component works yet.) There’s a magical comprehension of the user’s utility function, a magical world-model that GMAGI uses to comprehend the consequences of actions, a magical planning element that selects a non-optimal path using some method other than exploring all possible actions, and a magical explain-to-the-user function.
“report($leading_action) isn’t exactly a trivial step either. Deep Blue tells you to move your pawn or you’ll lose the game. You ask ‘Why?’ and the answer is a gigantic search tree of billions of possible move-sequences, leafing at positions which are heuristically rated using a static-position evaluation algorithm trained on millions of games. Or the planning Oracle tells you that a certain DNA sequence will produce a protein that cures cancer, you ask ‘Why?’, and then humans aren’t even capable of verifying, for themselves, the assertion that the peptide sequence will fold into the protein the planning Oracle says it does.
“‘So,’ you say, after the first dozen times you ask the Oracle a question and it returns an answer that you’d have to take on faith, ‘we’ll just specify in the utility function that the plan should be understandable.’
“Whereupon other things start going wrong. Viliam_Bur, in the comments thread, gave this example, which I’ve slightly simplified:
“‘Example question: “How should I get rid of my disease most cheaply?” Example answer: “You won’t. You will die soon, unavoidably. This report is 99.999% reliable”. Predicted human reaction: Decides to kill self and get it over with. Success rate: 100%, the disease is gone. Costs of cure: zero. Mission completed.’
“Bur is trying to give an example of how things might go wrong if the preference function is over the accuracy of the predictions explained to the human—rather than just the human’s ‘goodness’ of the outcome. And if the preference function was just over the human’s ‘goodness’ of the end result, rather than the accuracy of the human’s understanding of the predictions, the AI might tell you something that was predictively false but whose implementation would lead you to what the AI defines as a ‘good’ outcome. And if we ask how happy the human is, the resulting decision procedure would exert optimization pressure to convince the human to take drugs, and so on.
“I’m not saying any particular failure is 100% certain to occur; rather I’m trying to explain—as handicapped by the need to describe the AI in the native human agent-description language, using empathy to simulate a spirit-in-a-box instead of trying to think in mathematical structures like A* search or Bayesian updating—how, even so, one can still see that the issue is a tad more fraught than it sounds on an immediate examination.
“If you see the world just in terms of math, it’s even worse; you’ve got some program with inputs from a USB cable connecting to a webcam, output to a computer monitor, and optimization criteria expressed over some combination of the monitor, the humans looking at the monitor, and the rest of the world. It’s a whole lot easier to call what’s inside a ‘planning Oracle’ or some other English phrase than to write a program that does the optimization safely without serious unintended consequences. Show me any attempted specification, and I’ll point to the vague parts and ask for clarification in more formal and mathematical terms, and as soon as the design is clarified enough to be a hundred light years from implementation instead of a thousand light years, I’ll show a neutral judge how that math would go wrong. (Experience shows that if you try to explain to would-be AGI designers how their design goes wrong, in most cases they just say ‘Oh, but of course that’s not what I meant.’ Marcus Hutter is a rare exception who specified his AGI in such unambiguous mathematical terms that he actually succeeded at realizing, after some discussion with SIAI personnel, that AIXI would kill off its users and seize control of its reward button. But based on past sad experience with many other would-be designers, I say ‘Explain to a neutral judge how the math kills’ and not ‘Explain to the person who invented that math and likes it.’)
“Just as the gigantic gap between smart-sounding English instructions and actually smart algorithms is the main source of difficulty in AI, there’s a gap between benevolent-sounding English and actually benevolent algorithms which is the source of difficulty in FAI. ‘Just make suggestions—don’t do anything!’ is, in the end, just more English.”
What counts as ‘resources’? Do we think that ‘hardware’ and ‘software’ are natural kinds, such that the AI will always understand what we mean by the two? What if software innovations on their own suffice to threaten the world, without hardware takeover?
What is “taking over the world”, if not taking control of resources (hardware)? Where is the motivation in doing it? Also consider, as others pointed out, that an AI which “misunderstands” your original instructions will demonstrate this sooner rather than later. For instance, if you create a resource “honeypot” outside the AI which is trivial to take, such an AI will naturally take it first, and then you know there’s a problem; it is not going to figure out that you don’t want the honeypot taken before it takes it.
Hm? That seems to only penalize it for self-deception, not for deceiving others.
When I say “predict”, I mean publishing what will happen next, and then taking a utility hit if the published account deviates from what happens, as evaluated by a third party.
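A minimal sketch of that scoring scheme, assuming events can be compared pairwise (the “third party” here is reduced to a judge function, and all names are hypothetical):

```python
def prediction_hit(published, actual, judge):
    """Utility hit for deviation between the published account of what
    will happen and what actually happened, scored by a third-party judge."""
    # judge(published_event, actual_event) -> True if the two match.
    mismatches = sum(not judge(p, a) for p, a in zip(published, actual))
    # Events the AI failed to publish (or published but never happened)
    # count as deviations too.
    mismatches += abs(len(published) - len(actual))
    return mismatches  # subtract this from the AI's utility

# Example with a strict exact-match judge: one unannounced action.
hit = prediction_hit(["open door", "fetch coffee"],
                     ["open door", "fetch coffee", "reorder beans"],
                     judge=lambda p, a: p == a)
```

The point of routing the comparison through an external judge, rather than letting the AI grade itself, is that the AI cannot lower the penalty by redefining what counts as a match.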
You’re talking about an Oracle AI. This is one useful avenue to explore, but it’s almost certainly not as easy as you suggest:
The first part of what you copy-pasted seems to say that “it’s nontrivial to implement”. No shit, but I never said the contrary. Then there is a bunch of “what if” scenarios that I think are not particularly likely, and kind of contrived:
Example question: “How should I get rid of my disease most cheaply?” Example answer: “You won’t. You will die soon, unavoidably. This report is 99.999% reliable”. Predicted human reaction: Decides to kill self and get it over with. Success rate: 100%, the disease is gone. Costs of cure: zero. Mission completed.
Because asking for understandable plans means you can’t also ask for plans you don’t understand? And you’re saying that refusing to give a plan counts as success and not failure? That sounds like a strange setup, one that would be corrected almost immediately.
And if the preference function was just over the human’s ‘goodness’ of the end result, rather than the accuracy of the human’s understanding of the predictions, the AI might tell you something that was predictively false but whose implementation would lead you to what the AI defines as a ‘good’ outcome.
If the AI has the right idea about “human understanding”, I would think it would have the right idea about what we mean by “good”. Also, why would you implement such a function before asking the AI to evaluate examples of “good” and provide its own?
And if we ask how happy the human is, the resulting decision procedure would exert optimization pressure to convince the human to take drugs, and so on.
Is making humans happy so hard that it’s actually easier to deceive them into taking happy pills than to do what they mean? Is fooling humans into accepting different definitions easier than understanding what they really mean? In what circumstances would the former ever happen before the latter?
And if you ask it to tell you whether “taking happy pills” is an outcome most humans would approve of, what is it going to answer? If it’s going to do this for happiness, won’t it do it for everything? Again: do you think weaving an elaborate fib to fool every human being into becoming wireheads and never picking up on the trend is actually less effort than just giving humans what they really want? To me this is like driving a whole extra hour to get to a store that sells an item you want fifty cents cheaper.
I’m not saying these things are not possible. I’m saying that they are contrived: they are constructed for the express purpose of being failure modes, but there’s no reason to think they would actually happen, especially given that they seem to be more complicated than the desired behavior.
Now, here’s the thing: you want to develop FAI. In order to develop FAI, you will need tools, and the best tool is Tool AI. Consider a bootstrapping scheme: in order for commands written in English to be properly followed, you first build an AI for the very purpose of modelling human language semantics. You can check that this AI is on the same page as you are by discussing with it, asking questions such as “is doing X in line with the objective ‘Y’?”; it doesn’t even need to be self-modifying at all. The resulting AI can then be turned into a utility-function computer: you give the first AI an English statement, and you build a second AI that maximizes the utility the first AI assigns.
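The division of labour in that bootstrapping scheme could be wired together roughly as follows. Both classes are stand-ins for enormously hard components (the toy lookup table replaces an actual learned semantic model); the sketch only shows how the first AI’s judgments become the second AI’s utility function:

```python
class SemanticModel:
    """First AI: models human language semantics. Placeholder logic."""

    def in_line_with(self, action, objective):
        # Answers "is doing X in line with the objective 'Y'?"
        return action in KNOWN_CONSISTENT.get(objective, set())

    def utility(self, outcome, objective):
        # Scores an outcome against the English-language objective.
        return 1.0 if self.in_line_with(outcome, objective) else 0.0


class UtilityMaximizer:
    """Second AI: maximizes the utility computed by the first AI."""

    def __init__(self, semantics, objective):
        self.semantics = semantics
        self.objective = objective

    def choose(self, candidate_outcomes):
        return max(candidate_outcomes,
                   key=lambda o: self.semantics.utility(o, self.objective))


# Toy knowledge base standing in for a trained semantic model.
KNOWN_CONSISTENT = {"make us happy": {"propose a vetted plan"}}

agent = UtilityMaximizer(SemanticModel(), "make us happy")
best = agent.choose(["seize all hardware", "propose a vetted plan"])
```

The design point is the separation: the maximizer never interprets English itself, so checking that the semantic model is “on the same page” can be done in isolation, before any optimization pressure is applied.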
And let’s be frank here: how else do you figure friendly AI could be made? The human brain is a complex, organically grown, possibly inconsistent mess; you are not going, from human wits alone, to build some kind of formal proof of friendliness, even a probabilistic one. More likely than not, there is no such thing: concepts such as life, consciousness, happiness or sentience are ill-defined, and you can’t even demonstrate the friendliness of a single human being, or of a group of human beings, let alone of humanity as a whole, which is itself poorly defined.
However, massive amounts of information about our internal thought processes leak through our languages. You need AI to sift through it and model these processes, their average and their variance. You need AI to extract this information, fill in the holes, and produce probability clouds about intent that match whatever borderline incoherent porridge of ideas our brains implement as the end result of billions of years of evolutionary fumbling. In a sense, I guess this would be X in your seed AI: an AI which has already demonstrated, to our satisfaction, that it understands what we mean, and which directly takes charge of a second AI’s utility measurement. I don’t really see any alternatives: if you want FAI, start by focusing on AI that can extract meaning from sentences. Reliable semantic extraction is virtually a prerequisite for FAI: if you can’t do the former, forget about the latter.