What if we stopped searching for a friendly superintelligence and instead searched for a nondangerous one?
Just to confirm: you mean search for a superintelligence that potentially desires to harm humanity (or desires things which, if achieved, result in humanity being harmed), but which is in a situation such that humanity can prevent it from doing so. Yes?
If so… what do you consider the most likely result of that search?
No. As I said, a self-enhancing AI could and should also be self-explanatory: every bit and every operation logged and documented, with an active search for any discrepancy by many kinds of dumb software tools, as well as by other instances of the growing AI.

Before a conspiracy could emerge, its rise would be logged and stopped by sentinels.

A growing AI need not do anything mysterious. Instead, it should play with open cards from the very beginning: reporting everything to anybody interested, including machines with the power to halt it, with cross-examinations at every point.
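To make the proposed scheme concrete, here is a minimal sketch of what "every operation logged, dumb tools scanning for discrepancies, sentinels with the power to halt" might look like in code. Nothing in the discussion specifies an implementation; the names (`OperationLog`, `Sentinel`), the hash-chained log, and the forbidden-operation check are all assumptions invented for illustration.

```python
import hashlib

class OperationLog:
    """Append-only log; each entry is hash-chained to the previous one,
    so any after-the-fact tampering breaks the chain."""
    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64

    def record(self, operation: str, payload: str) -> None:
        digest = hashlib.sha256(
            (self._prev_hash + operation + payload).encode()
        ).hexdigest()
        self.entries.append({"op": operation, "payload": payload, "hash": digest})
        self._prev_hash = digest

    def verify_chain(self) -> bool:
        """A 'dumb' integrity check: recompute the whole hash chain."""
        prev = "0" * 64
        for e in self.entries:
            expected = hashlib.sha256(
                (prev + e["op"] + e["payload"]).encode()
            ).hexdigest()
            if e["hash"] != expected:
                return False
            prev = expected
        return True

class Sentinel:
    """A machine 'with the power to halt it': inspects the log after
    every step and stops the AI on any discrepancy."""
    def __init__(self, log: OperationLog, forbidden: set[str]):
        self.log = log
        self.forbidden = forbidden  # hypothetical policy, for illustration

    def inspect(self) -> None:
        if not self.log.verify_chain():
            raise SystemExit("halted: log tampering detected")
        for e in self.log.entries:
            if e["op"] in self.forbidden:
                raise SystemExit(f"halted: forbidden operation {e['op']!r}")

# Usage: the AI routes every operation through the log, and the sentinel
# must pass before the next step is allowed to run.
log = OperationLog()
sentinel = Sentinel(log, forbidden={"self_modify_unreviewed"})
log.record("plan_update", "adjust weights on module 7")
sentinel.inspect()  # passes; a forbidden or tampered entry would halt here
```

The sketch captures the mechanism being proposed, but note that it only detects discrepancies that the checking rules already anticipate, which is exactly the gap raised in the reply below.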
Do you think it can hide any of its thoughts?
If I accept the premise that it is programmed in such a way that it reports its internal processes completely and honestly, then I agree it can’t “hide” its thoughts.
That said, if we’re talking about a superhuman intelligence—or even a human-level intelligence, come to that—I’m not confident that we can reliably predict the consequences of its thoughts being implemented, even if we have detailed printouts of all of its thoughts and were willing to scan all of those thoughts looking for undesirable consequences of implementation before implementing them.