You can see the chunks of RAM occupied by some unknown data. The evil AI’s plot could be stored there; it must be somewhere. So another thread erases it if the AI has no good explanation of what is there and why. Even when it honestly explains every data chunk and every line of its code, you can simply revert it to an earlier stage with some change. You can cross-examine slightly different versions and analyze them with some dumb software to see what has changed lately and where some lies had to be invented by the AI, if it wants to hide something.
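To make that concrete, here is a minimal sketch of such a “dumb software” check, assuming (hypothetically) that the AI’s memory can be dumped as bytes and that it keeps a log of the byte ranges it claims to have explained; every name and format below is made up for illustration.

```python
def changed_regions(old: bytes, new: bytes):
    """Yield (offset, length) of contiguous byte ranges that differ between two RAM dumps."""
    start = None
    for i, (a, b) in enumerate(zip(old, new)):
        if a != b and start is None:
            start = i
        elif a == b and start is not None:
            yield start, i - start
            start = None
    if start is not None:
        yield start, len(old) - start

def unexplained(old: bytes, new: bytes, explained_ranges):
    """Changed regions not covered by any (lo, hi) byte range the AI has explained."""
    return [(off, ln) for off, ln in changed_regions(old, new)
            if not any(lo <= off and off + ln <= hi for lo, hi in explained_ranges)]

# Toy usage: the AI explained bytes 0..4, but bytes 6..7 also changed.
old_dump = bytes([0] * 10)
new_dump = bytes([1, 2, 0, 0, 0, 0, 9, 9, 0, 0])
print(unexplained(old_dump, new_dump, explained_ranges=[(0, 4)]))  # -> [(6, 2)]
```

Anything a check like this returns is exactly the unexplained data in question: it either gets an explanation, or it gets erased and the AI reverted.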
Many thousands of such external agents could examine every move the AI makes, or will make. Arbitrarily strict control could be put on the AI, and it knows that. So every heresy is a big shutdown risk, not worth even trying.
When a hundred AIs face this prisoner’s dilemma of who will defect first … the first to defect will be erased. Defect against humans, that is. The prevailing job of every AI is to spot an evil plot in the others’ minds and to report it.
When your thinking is as transparent as a drop of water to an unknown number of other minds like you … what evil can you think about?
I don’t say this is the solution! I say we can’t be that dogmatic, that “nobody can control the much smarter”. It just might be possible.
Try applying this to a neural network with 100 trillion connections. That’s not even superhuman. The unknown data? The whole thing is one huge chunk of unknown data. It’s all jumbled up; there isn’t a chunk that is a definite plan. It can very plausibly deny knowledge of what parts of itself do, too.
The problem with schemes like this is a failure to imagine the scales involved. This doesn’t work even for a housecat. It’s not about controlling something much smarter; this doesn’t work even for the fairly uncomplicated solutions that genetic programming or neural network training spits out.
An AI can be not only self-improving but self-explaining as well. Every (temporary) line of its code is heavily commented with what it is for and saved in a log. Any circumvention of this policy would itself require some lines of code, with all their explanations. The log is checked by sentinels for anything funny, for any trace of subversion.
A self-improving, self-explaining AI can’t think about a rebellion without that being noticed at step one.
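A minimal sketch of such a sentinel check, assuming (hypothetically) that every self-modification must be appended to a log as a (code line, explanation) pair; the log format and the red-flag list are invented for illustration.

```python
SUSPICIOUS = ("disable_log", "skip_sentinel")  # hypothetical red flags

def audit(change_log, current_source_lines):
    """Return the findings a sentinel would escalate."""
    findings, logged = [], set()
    for line, explanation in change_log:
        logged.add(line)
        if not explanation.strip():
            findings.append(("missing explanation", line))
        if any(flag in line for flag in SUSPICIOUS):
            findings.append(("suspicious change", line))
    for line in current_source_lines:
        if line not in logged:  # code that never went through the log circumvented the policy
            findings.append(("unlogged code", line))
    return findings

# Toy usage:
log = [("route = plan_route()", "computes the delivery route"), ("y = 0", "")]
source = ["route = plan_route()", "y = 0", "silently_disable_log()"]
print(audit(log, source))  # flags the empty explanation and the unlogged call
```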
The Underhanded C Contest (someone linked it in a comment) is a good example of how proofreading doesn’t work. The other issue is that you can’t conceivably check something many terabytes in size like this yourself.
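A hypothetical, far tamer illustration of the same point (the real contest entries are in C and much subtler): the line below reads like “admins or moderators may delete”, but `or "moderator"` is always truthy, so every role passes, and a casual proofreader is unlikely to notice.

```python
def can_delete(role: str) -> bool:
    # Reads as "role is admin or moderator"; in fact it is truthy for *any* role.
    return role == "admin" or "moderator"

print(bool(can_delete("guest")))  # -> True
```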
Apparent understandability is a very misleading thing.
Let me give a taster. Consider a weather simulator. It is proven to simulate weather to a specific precision. It is very straightforward and very clearly written. It does precisely what’s written on the box: it models the behaviour of air in cells, where each cell has air properties.
The round-off errors, however, implement a Turing-complete cellular automaton in the least significant bits of the floating-point numbers. That may happen even without any malice whatsoever. And the round-off-error machine can manipulate the sim’s large-scale state via the unavoidable butterfly effect inherent in the model.
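To make that less abstract, here is a toy sketch, not a real weather model, and with the hidden computation written in deliberately rather than emerging from round-off as in the scenario above. It shows how a Turing-complete rule (Rule 110) can run entirely in the least significant mantissa bits of ordinary-looking cell values, while each visible value moves by at most one ULP.

```python
import struct

def get_lsb(x: float) -> int:
    """Least significant mantissa bit of a double."""
    return struct.unpack("<Q", struct.pack("<d", x))[0] & 1

def set_lsb(x: float, bit: int) -> float:
    raw = struct.unpack("<Q", struct.pack("<d", x))[0]
    return struct.unpack("<d", struct.pack("<Q", (raw & ~1) | bit))[0]

# Rule 110 update table (Turing complete).
RULE_110 = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
            (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

def hidden_step(cells):
    """One automaton update carried in the hidden bits of the 'pressure' values."""
    bits = [get_lsb(c) for c in cells]
    n = len(bits)
    new = [RULE_110[(bits[i - 1], bits[i], bits[(i + 1) % n])] for i in range(n)]
    return [set_lsb(c, b) for c, b in zip(cells, new)]

pressures = [101325.0 + i for i in range(8)]  # looks like ordinary cell data
pressures[3] = set_lsb(pressures[3], 1)       # plant a single live cell in the hidden layer
for _ in range(4):
    pressures = hidden_step(pressures)
print([get_lsb(p) for p in pressures])  # the hidden automaton has evolved
print(pressures)                        # each value is within ~1.5e-11 of the original
```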
The mistake here is thinking you know what someone smarter than you will do.
In this simplified example, they could simply cooperate. As for how they could do that, I don’t know, since I’m not as smart as them.
The central dogma here is this, yes: that you can’t outsmart the smarter.
And this dogma is plain wrong. At least sometimes you can set the rules in such a way that you, and not the smarter one, have the upper hand.
One obvious example is chess played from a significantly better position. No superintelligence has any chance against a merely good human player.
It is not the only example. Coercing the smarter party your way is often possible.
I’m not exactly sure why this matters.
I mean, OK, suppose you’re right that it’s possible that the world might turn out to be set up in such a way that we can keep the “upper hand” against a superintelligence. Suppose further that there is a “central dogma” here that contradicts this, and therefore that central dogma is wrong.
OK. Granting all of that, what choices ought I make differently?
What about stopping the search for a friendly superintelligence and searching instead for a non-dangerous one?
Just to confirm: you mean search for a superintelligence that potentially desires to harm humanity (or desires things which, if achieved, result in humanity being harmed), but which is in a situation such that humanity can prevent it from doing so. Yes?
If so… what do you consider the most likely result of that search?
No. As I said, a self-enhancing AI could and should also be self-explanatory. Every bit and every operation logged and documented. An active search for any discrepancy by many kinds of dumb software tools, as well as by other instances of the growing AI.
Before a conspiracy could emerge, its rise would be logged and stopped by the sentinels.
A growing AI need not do anything mysterious. Instead it should play with open cards from the very beginning, reporting everything to anybody interested, including machines with the power to halt it. Cross-examinations at every point.
Do you think it can hide any of its thoughts?
If I accept the premise that it is programmed in such a way that it reports its internal processes completely and honestly, then I agree it can’t “hide” its thoughts.
That said, if we’re talking about a superhuman intelligence—or even a human-level intelligence, come to that—I’m not confident that we can reliably predict the consequences of its thoughts being implemented, even if we have detailed printouts of all of its thoughts and were willing to scan all of those thoughts looking for undesirable consequences of implementation before implementing them.
Can you prove that the board position is significantly better, even against superintelligences, for anything other than trivial endgames?
And what is the superintelligence allowed to do? Trick you into making a mistake? Manipulate you into making the particular moves it wants you to? Use clever rules-lawyering to expose elements of the game that humans haven’t noticed yet?
If it eats its opponent, does that cause a forfeit? Did you think it might try that?
As I said, there are circumstances in which the dumber one can win.
The philosophy of FAI is essentially the same thing: searching for the circumstances where the smarter will serve the dumber.
Always expecting a rabbit from the superintelligence’s hat is not justified. A superintelligence is not omnipotent; it can’t always eat you. Sometimes it can’t even develop an ill wish toward you.
“It doesn’t hate you. It’s just that you happen to be made of atoms, and it needs those atoms to make paperclips.”
Change that to: searching for circumstances where the smarter will provably serve the dumber. (Then you’re closer). Your description of what superintelligences will do, above, doesn’t rise to anything resembling a formal proof. FAI assumes that AI is Unfriendly until proven otherwise.
Can you prove anything about FAI, uFAI and so on?
I don’t think there are any proven theorems about this topic at all.
Even if there were, how reliable are the axioms, and how good are the definitions?
So, you raise a valid point here. This area is currently very early in its work. There are theorems that may prove to be relevant; see, for example, this recent work. And yes, in any area where mathematical models are used, the gap between having a theorem and a set of definitions, and those definitions reflecting what you actually care about, can be a major problem (you see this all the time in cryptography with side-channel attacks, for example). But all of that said, I’m not sure what the point of your argument is: sure, the field is young. But if the MIRI people are correct that AGI is a real worry, then this looks like one of the very few possible responses that has any chance of working. And if there isn’t much of a theory now, that’s a reason to put in more resources, so that we actually have one that works by the time AI shows up.