This entire debate is supposed to be about my argument, as presented in the original article I published on the IEET.org website (“The Fallacy of Dumb Superintelligence”).
But in that case, what should I do when Rob insists on talking about something that I did not say in that article?
My strategy was to explain his mistake, but not engage in a debate about his red herring. Sensible people of all stripes would consider that a mature response.
But over and over again Rob avoided the actual argument and insisted on talking about his red herring.
And then FINALLY I realized that I could write down my original claim in such a way that it is IMPOSSIBLE for Rob to misinterpret it.
(That was easy, in retrospect: all I had to do was remove the language that he was using as the jumping-off point for his red herring).
That final, succinct statement of my argument is sitting there at the end of his blog … so far ignored by you, and by him. Perhaps he will be able to respond, I don’t know; but you say you have read it, so you have had a chance to understand why he has been talking about something of no relevance to my original argument.
But you, in your wisdom, chose to (a) completely ignore that statement of my argument, and (b) give me a patronizing rebuke for not being able to understand Rob’s red herring argument.
Further to RobbBB’s highly useful reply (you should read it), I would like to point out that the phenomenon of optimization processes having unexpected or unusual behaviour when placed in new situations or given new capabilities is quite general. It applies to pretty much any strong optimization process working toward any goal.
For example, suppose you program an AI/robot with no resources to fetch a banana as quickly as possible in some sandbox environment, where the banana is guarded by a few people. The AI will describe its strategy as talking to the guards and convincing them to give it the banana, or perhaps buying the banana off them if you give it some money. But there are many ways to change the situation:
1. Make the AI much smarter—it will realise that by taking a certain sequence of steps it can get from here to the banana without ever moving within someone’s field of view. No need to pay for it. (1.5—Make the AI even smarter, and it will realise that by painting a Basilisk on the robot’s front it can disable all the guards effortlessly and take a straight line to the banana.)
2. Give the AI a teleportation device—it teleports straight to the banana.
3. Give the AI a plasma cannon—it shoots all the guards dead, runs up, and takes the banana.
The point is… it’s not at all weird for AI behaviour to be “inconsistent”. It isn’t a sign of anything being broken; in fact, the goal is being achieved. The AI is just able to think of more effective ways to do it than you are. That is, after all, the point of superintelligence. And an AI that does this is not broken or stupid, and is certainly capable of being dangerous.
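To make this concrete, here is a minimal sketch of the situation (purely illustrative; the action names, costs and the planner itself are invented for this example, not anyone’s actual proposal). The objective never changes; only the set of available actions grows, and the cheapest plan changes with it:

```python
# Toy planner that minimizes time-to-banana. The objective is fixed;
# only the available action set differs between the two calls below.

BASE_ACTIONS = {
    # hypothetical action: (time cost, would humans approve?)
    "persuade_guards":    (60, True),
    "buy_banana":         (30, True),
}

EXTRA_ACTIONS = {
    "sneak_past_guards":  (10, False),  # added capability: smarter pathfinding
    "shoot_guards":       (5,  False),  # added capability: plasma cannon
    "teleport_to_banana": (1,  False),  # added capability: teleportation device
}

def best_plan(actions):
    """Pick the fastest action. Note that 'would humans approve?' is not part
    of the objective, so it never influences the choice."""
    return min(actions, key=lambda name: actions[name][0])

print(best_plan(BASE_ACTIONS))                       # buy_banana
print(best_plan({**BASE_ACTIONS, **EXTRA_ACTIONS}))  # teleport_to_banana
```

The “inconsistency” between the two outputs is not a malfunction: the same objective is being optimized both times, just over a larger set of options.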
By the way, you can try to do something like this:
[ And by the way: one important feature that is OBVIOUSLY going to be in the goalX code is this: that the outcome of any actions that the goalX code prescribes, should always be checked to see if they are as consistent as possible with the verbal description of the class of results X, and if any inconsistency occurs the goalX code should be deemed defective, and be shut down for adjustment.]
To start with, I have no idea how you would program this or what it means formally. But even if you could, it takes human judgement to identify “inconsistencies” that would matter to humans. Without embedding human values in there, you’ll either have the AI shut down every time it tries to do anything new, or use a stronger criterion of “inconsistency” and miss a few cases where the AI does something you actually don’t want.
Or, you know, the AI will deduce that the full “verbal description of the class of results X” (which is an infinite list) is of course defined by its goal (i.e. the goalX code), and will therefore reason that nothing the goalX code can do will be inconsistent with it.
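Since the bracketed proposal above describes a concrete mechanism, here is a deliberately naive sketch of it, just to make the failure modes visible (every name is hypothetical, and the consistency test is reduced to a crude string match precisely because nobody knows how to write the real one):

```python
# Naive sketch of "check the outcome against the verbal description of X".

VERBAL_DESCRIPTION_OF_X = "humans report feeling happy"

PROPOSED_ACTIONS = {
    # hypothetical action: (outcome as the AI describes it, would humans actually want it?)
    "administer_therapy": ("humans report feeling happy", True),
    "open_new_clinic":    ("humans end up happy via an unfamiliar new treatment", True),
    "dopamine_drip":      ("humans report feeling happy", False),
}

def passes_check(outcome, strict):
    """Two crude stand-ins for 'consistent with the verbal description of X'.
    Strict: require a literal match, so every genuinely novel action is flagged
    as inconsistent and triggers a shutdown, even when humans would approve.
    Loose: accept anything mentioning happiness, which waves through outcomes
    that satisfy the letter of X but not what humans meant by it."""
    if strict:
        return outcome == VERBAL_DESCRIPTION_OF_X
    return "happy" in outcome

for name, (outcome, wanted) in PROPOSED_ACTIONS.items():
    print(name, "| strict:", passes_check(outcome, True),
          "| loose:", passes_check(outcome, False),
          "| humans actually want it:", wanted)

# And the further problem raised above: if the AI takes the goalX code itself to be
# the authoritative definition of the "class of results X", then whatever the code
# prescribes is "consistent" by construction, and the check never fires at all.
```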
I didn’t mean to ignore your argument; I just didn’t get around to it. As I said, there were a lot of things I wanted to respond to. (In fact, this post was going to be longer, but I decided to focus on your primary argument.)
Your story:
This hypothetical AI will say “I have a goal, and my goal is to get a certain class of results, X, in the real world.” [...] And we say “Hey, no problem: looks like your goal code is totally consistent with that verbal description of the desired class of results.” Everything is swell up to this point.
My version:
The AI is lying. Or possibly it isn’t very smart yet, so it’s bad at describing its goal. Or it’s oversimplifying, because the programmers told it to, because otherwise the goal description would take days. And the goal code itself is too complicated for the programmers to fully understand. In any case, everything is not swell.
Your story:
Then one day the AI says “Okay now, today my goalX code says I should do this…” and it describes an action that is VIOLENTLY inconsistent with the previously described class of results, X. This action violates every one of the features of the class that were previously given.
My version:
The AI’s goal was never really X. It was actually Z. The AI’s actions perfectly coincide with Z.
In the rest of the scenario you described, I agree that the AI’s behavior is pretty incoherent, if its goal is X. But if it’s really aiming for Z, then its behavior is perfectly, terrifyingly coherent.
And your “obvious” fail-safe isn’t going to help. The AI is smarter than us. If it wants Z, and a fail-safe prevents it from getting Z, it will find a way around that fail-safe.
I know, your premise is that X really is the AI’s true goal. But that’s my sticking point.
Making it actually have the goal X, before it starts self-modifying, is far from easy. You can’t just skip over that step and assume it as your premise.
What you say makes sense … except that you and I are both bound by the terms of a scenario that someone else has set here.
So the terms of reference (as I say, this is not my doing!) are that an AI might sincerely believe that it is pursuing its original goal of making humans happy (whatever that means … the ambiguity is in the original), but in the course of sincerely and genuinely pursuing that goal, it might get into a state where it believes that the best way to achieve the goal is to do something that we humans would consider to be NOT achieving the goal.
What you did was consider some other possibilities, such as those in which the AI is actually not being sincere. Nothing wrong with considering those, but that would be a story for another day.
Oh, and one other thing that arises from your above remark: remember that what you have called the “fail-safe” is not actually a fail-safe; it is an integral part of the original goal code (X). So there is no question of this being a situation where “… it wants Z, and a fail-safe prevents it from getting Z, [so] it will find a way around that fail-safe.” In fact, the check is just part of X, so it WANTS to check as much as it wants anything else involved in the goal.
I am not sure that self-modification is part of the original terms of reference here, either. When Muehlhauser (for example) went on a radio show and explained to the audience that a superintelligence might be programmed to make humans happy, but might then SINCERELY think it was making us happy when it put us on a Dopamine Drip, I think he was clearly not talking about a free-wheeling AI that can modify its goal code. Surely, if he wanted to imply that, the whole scenario goes out the window. The AI could have any motivation whatsoever.
Hope that clarifies rather than obscures.
You and I are both bound by the terms of a scenario that someone else has set here.
Ok, if you want to pass the buck, I won’t stop you. But this other person’s scenario still has a faulty premise. I’ll take it up with them if you like; just point out where they state that the goal code starts out working correctly.
To summarize my complaint, it’s not very useful to discuss an AI with a “sincere” goal of X, because the difficulty comes from giving the AI that goal in the first place.
What you did was consider some other possibilities, such as those in which the AI is actually not being sincere. Nothing wrong with considering those, but that would be a story for another day.
As I see it, your (adopted) scenario is far less likely than other scenario(s), so in a sense that one is the “story for another day.” Specifically, a day when we’ve solved the “sincere goal” issue.