Why exactly do we want “recursive self-improvement” anyway?
Because we want more out of FAI than just lowercase-f friendly androids that we can rely upon not to rebel or break too badly. If we can figure out a rigorous Friendly goal system and a provably stable decision theory, then we should want recursive self-improvement: the world gets saved, and the various current humanitarian emergencies get solved, much quicker than they would if we didn’t know whether the AI’s goal system was stable and had to check it at every stage while never letting it impinge upon the world directly (not that that’s feasible anyway).
Why not build into the architecture the impossibility of the AI rewriting its own code, prove the “friendliness” of the software that we put there, and then push the ON button without qualms? Then, when we feel like it, we can ask our AI to design a more powerful successor to itself.
Then, we repeat the task of checking the security of the architecture and proving the friendliness of the software before we build and turn on the new AI.
Most likely, after each iteration, the successor would become more and more incomprehensible to us. Rice’s theorem suggests that we will not be able to prove the necessary properties of a system from the top down, not knowing how it was designed; that is a massively different problem from proving properties of a system we’re constructing from the bottom up. (The AI will know how it’s designing the code it writes, but the problem is making sure that it is willing and able to keep proving, at every step, that it is not modifying its goals.)
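For reference, the standard statement of Rice’s theorem; reading “never behaves unFriendlily” as one of its non-trivial behavioral properties is a gloss on the argument above, not something the theorem says by itself:

    % Rice's theorem: let P be any property of partial computable functions that is
    % non-trivial (some computable function has it, some does not), and let \varphi_e
    % denote the function computed by program e. Then
    \[
      \{\, e \;:\; \varphi_e \in P \,\} \ \text{is undecidable.}
    \]
    % So no general procedure can take arbitrary code from the outside and decide a
    % non-trivial property of its behavior; the proof has to travel with the
    % construction rather than be recovered after the fact.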
And, in the end, this is just another kind of AI-boxing. If an AI gets smart enough and decides that its goals would be best carried out by something smarter than itself, then it will probably get around any safeguards we put in place. It’ll emit some code that looks Friendly to us but isn’t, or a proof that is far too complicated for us to check, or it’ll do something far too clever for a human like me to think of as an example. I’d say there’s a dangerously high possibility that an AI will be able to start a hard takeoff even if it doesn’t have access to its own code: it may be able to introspect and understand intelligence well enough to just write its own AI (if we can do that, then why can’t it?), and then push that AI out “into the wild” by the usual means (smooth-talk a human operator, invent molecular nanotech that assembles a computer to run the new software, etc.).
Even trying to do it this way would likely be a huge waste of time (at best) — if we don’t build in a goal system that we know will preserve itself in the first place, then why would we expect its self-designed successor to preserve its goals?
If an AGI is not safe under recursive self-improvement, then it is not safe at all.
… we will not be able to prove the necessary properties of a system from the top down, not knowing how it was designed.
I guess I didn’t make clear that I was talking about proof-checking rather than proof-finding. And, of course, we ask the designer to find the proof—if it can’t provide one, then we (and it) have no reason to trust the design.
Doing it this way would also likely be a major waste of time — if we don’t build in a goal system that we know will preserve itself in the first place, then why would we expect its self-designed successor to preserve its goals?
If an AGI is not safe under recursive self-improvement, then it is not safe at all.
I may be a bit less optimistic than you that we will ever be able to prove the correctness of self-modifying programs. But suppose that such proofs are possible, yet we humans have not made the conceptual breakthroughs by the time we are ready to build our first super-human AI; suppose also that we can prove friendliness for non-self-modifying programs.
In this case, proceeding as I suggest, and then asking the AI to help discover the missing proof technology, would not be wasting time—it would be saving time.
I guess I didn’t make clear that I was talking about proof-checking rather than proof-finding. And, of course, we ask the designer to find the proof—if it can’t provide one, then we (and it) have no reason to trust the design.
This still assumes in the first place that the AI will be motivated to design a successor that preserves its own goal system. If it wants to do this, or can be made to do this just by being told to, and you have a very good reason to believe this, then you’ve already solved the problem. We’re just not sure whether that comes automatically; there are intuitive arguments that it does, like the one about Gandhi and the murder-pill (roughly formalized below), but I’m convinced that the stakes are high enough that we should prove it before we push the ON button on anything that’s smarter than us or could become smarter than us.

The danger is that while you’re waiting for it to provide a new program and a proof of correctness to verify, it might instead decide to unbox itself and go off and do something with Friendly intentions but unstable self-modification mechanisms, and then we’ll end up with a really powerful optimization process whose goal system only stabilizes after it’s become worthless. Even an AI with no goals other than truthfully answering questions is still dangerous: you can ask it to design a provably stable reflective decision theory, and perhaps it will try, but if it doesn’t already have a Friendly motivational mechanism, then it may go about finding the answer in less-than-agreeable ways.

Again, as per the Omohundro paper, we can expect recursive self-improvement to be pretty much automatic (whether or not the AI has access to its own code); we don’t know whether value-preservation is automatic, and we know that Friendliness is definitely not automatic. So creating a non-provably-stable or non-Friendly AI and trying to have it solve these problems is putting the cart before the horse, and there’s too great a risk of it backfiring.
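One rough way to make the Gandhi/murder-pill intuition precise, in standard expected-utility notation (just a sketch of the intuitive argument, not something taken from the Omohundro paper):

    % An agent with current utility function U, choosing which successor code c' to
    % build, evaluates every candidate by its own current goals:
    \[
      c'^{*} \;=\; \arg\max_{c'} \ \mathbb{E}\big[\, U(\mathrm{outcome}) \mid \text{successor runs } c' \,\big].
    \]
    % To the extent the agent can predict what each candidate will do, a successor
    % that optimizes anything other than U scores lower under this criterion, so goal
    % preservation looks automatic. But the argument already assumes the current agent
    % is a stable expected-utility maximizer over self-modifications, which is exactly
    % the property we want to prove in the first place.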
Your final sentence is a slogan, not an argument.
It was neither; it was intended only as a summary of the conclusion of the points I was arguing in the preceding paragraphs.