I agree with your first point, though it gets worse for us as hardware gets cheaper and cheaper.
I like your second point even more: it’s actionable. We could work on the security of personal computers.
That last one is incorrect, however. The AI only has to access its object code in order to copy itself, something even current computer viruses can do. And then we're back to boxing it.
If the AI is a learning system such as a neural network, and I believe that's quite likely to be the case, there is no source/object dichotomy at all, and the weights may very well be unreadable except through simple local update procedures that are completely outside the AI's control. In other words, it might be physically impossible for both the AI and ourselves to access the AI's object code: it would be locked in a hardware box with no physical wires to probe its contents, basically.
I mean, think of a physical hardware circuit implementing a kind of neural network. In order for the network to be "copiable", you need to be able to read the values of all neurons. However, that requires a global clock (to ensure synchronization, though the AI might tolerate being a bit out of phase) and a large number of extra wires connecting each component to buses going out of the system. All that extra machinery inflates the cost of the system and makes it bigger, slower, and probably less energy efficient. Since the first human-level AI won't just come out of nowhere, it will probably use off-the-shelf digital neural components, and for cost and speed reasons, those components might not actually offer any way to copy their contents.
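To put a rough number on that overhead: assuming a shared readout bus with per-neuron addressing (one illustrative scheme among many; the function and constants below are my own back-of-envelope guess, not a real circuit design), the extra signal lines grow linearly with the number of neurons:

```python
import math

def readout_overhead(n_neurons, bits_per_neuron):
    """Back-of-envelope count of extra signal lines needed to dump
    the full network state over a shared bus (illustrative only).

    - bus_width: the bus must carry one neuron's state at a time
    - address_lines: enough lines to select which neuron to read
    - per_neuron_taps: one tap per neuron onto the shared bus
    """
    bus_width = bits_per_neuron
    address_lines = math.ceil(math.log2(n_neurons))
    per_neuron_taps = n_neurons
    return bus_width + address_lines + per_neuron_taps

# Even a tiny 8-neuron, 4-bit network needs 15 extra lines;
# the per-neuron taps dominate as the network scales.
print(readout_overhead(8, 4))   # 15
print(readout_overhead(10**6, 8))
```

The point of the sketch is just that the readout hardware scales with network size, so a vendor optimizing for cost and speed has a standing incentive to leave it out.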
This being said, even if the AI runs on conventional hardware, locking it out of its own object code isn't exactly rocket science. The specifications of some programming languages already guarantee that this cannot happen, and type/proof theory is an active research field that may very well be able to prove the conformance of an implementation to its specification. If the AI is a neural network emulated on conventional hardware, the risk that it can read itself without permission is basically zilch.
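A minimal sketch of what I mean by emulation-level lockout (toy code, not a real architecture; the class and names are mine): the weights live only in the host emulator's memory, and the emulated network's entire interface with the world is its input/output channel, so there is simply no address through which it could reach its own parameters.

```python
import random

class BoxedNetwork:
    """Toy emulator of a one-layer network whose weights exist only
    in host memory. The emulated agent's whole world is the step()
    interface: inputs go in, outputs come out. Nothing the agent can
    observe or compute with encodes self._weights directly."""

    def __init__(self, n_in, n_out, seed=0):
        rng = random.Random(seed)
        # Host-side state, never exposed through the agent's interface.
        self._weights = [[rng.uniform(-1.0, 1.0) for _ in range(n_in)]
                         for _ in range(n_out)]

    def step(self, inputs):
        # The only channel between the agent and the outside world.
        return [sum(w * x for w, x in zip(row, inputs))
                for row in self._weights]

net = BoxedNetwork(n_in=3, n_out=2)
out = net.step([1.0, 0.5, -0.5])
print(len(out))  # 2
```

Of course, Python's reflection would let *host-side* code read `net._weights`; the point is that the emulated network itself has no such facility, and a language whose specification rules out that kind of introspection for the guest makes the guarantee airtight.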