It’s easy to make code immutable. It’s pretty common for a given production system to run unchanging code, with separate storage for the changing data (generally; otherwise, what’s the point?).
It’s trickier with AI, because a lot of it has a weaker barrier between code and data, but it’s still possible. Harder to deal with is the fact that the transition from limited tool-AI to fully general, FOOM-capable AI requires a change in code, which implies some process for changing code. This reduces to the box problem: the AI just needs to convince the humans who control changes that they should let it out.
A traditional Turing machine doesn’t make a distinction between program and data. That distinction is really a hardware efficiency optimization that came from the Harvard architecture. Since many systems are Turing complete, creating a truly immutable program seems impossible to me: even if the code is frozen, the data it operates on can itself encode a program.
For example, a system capable of speech could exploit the Turing completeness of formal grammars to execute de novo subroutines.
A second example: hackers were able to exploit the surprising Turing completeness of an image compression standard to embed a virtual machine in a GIF.
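The underlying point — that freezing a program’s code constrains nothing once the system is Turing complete, because the data it consumes is itself a program — can be sketched with a toy interpreter. The mini stack language here is invented purely for illustration:

```python
# A fixed, never-modified interpreter for a tiny stack language.
# The interpreter's code is "immutable", yet the data fed to it
# is itself a program, so immutability buys no behavioral limit.
def interpret(program, stack=None):
    stack = [] if stack is None else stack
    for op in program:
        if isinstance(op, int):
            stack.append(op)              # literal: push onto the stack
        elif op == "+":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)           # add top two values
        elif op == "*":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)           # multiply top two values
        elif op == "dup":
            stack.append(stack[-1])       # duplicate top of stack
    return stack

# (2 + 3) squared, computed entirely by "data":
print(interpret([2, 3, "+", "dup", "*"]))  # [25]
```

Everything interesting happens in the input list; the interpreter itself could be burned into ROM and the situation wouldn’t change.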
A traditional Turing machine doesn’t make a distinction between program and data.
Well, a regular Turing machine does: it has a tape and a finite state machine, and the two are totally different.
I guess you mean a traditional universal Turing machine doesn’t distinguish between “Turing machine I’m simulating” and “data I’m simulating as input to that Turing machine”.
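A minimal sketch of that distinction, with the simulated machine’s transition table and its tape both living in the simulator’s input — to the simulator, both are just data. The encoding here is an invented toy, not any standard one:

```python
# Toy universal-style simulator: `table` describes a Turing machine,
# `tape` is that machine's input. Both arrive as plain data.
def simulate(table, tape, state="A", head=0, steps=100):
    tape = dict(enumerate(tape))          # sparse tape; unwritten cells are 0
    for _ in range(steps):
        if state == "HALT":
            break
        symbol = tape.get(head, 0)
        write, move, state = table[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return [tape[i] for i in sorted(tape)]

# A one-state machine that flips 1s to 0s moving right,
# then writes a 1 and halts when it reads a blank (0).
flipper = {
    ("A", 0): (1, "R", "HALT"),
    ("A", 1): (0, "R", "A"),
}
print(simulate(flipper, [1, 1, 0]))  # [0, 0, 1]
```

The simulator never distinguishes “machine description” from “machine input”; that split exists only in how we, the callers, interpret the two arguments.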
https://googleprojectzero.blogspot.com/2021/12/a-deep-dive-into-nso-zero-click.html