A traditional Turing machine doesn’t make a distinction between program and data. The distinction between program and data is really a hardware efficiency optimization that came from the Harvard architecture. Since so many systems turn out to be Turing complete, creating a truly immutable program seems impossible to me.
For example, a system capable of speech could exploit the Turing completeness of formal grammars to execute de novo subroutines.
A second example: hackers were able to exploit the surprising Turing completeness of an image compression standard (JBIG2) to embed a virtual machine in what looked like a GIF.
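To make the "immutable program seems impossible" point concrete, here is a toy sketch (an invented instruction set, nothing standard) of what happens when code and data share one address space: the program below overwrites one of its own instructions at runtime, and the interpreter has no way to forbid it without a hardware-style code/data split.

```python
def run(mem):
    """Execute instructions stored in the same list the program can write to.

    Each cell is ("print", value), ("store", addr, new_cell), or ("halt",).
    """
    pc, out = 0, []
    while True:
        op = mem[pc]
        if op[0] == "halt":
            return out
        if op[0] == "print":
            out.append(op[1])
        elif op[0] == "store":  # a data write that lands in the code region
            mem[op[1]] = op[2]
        pc += 1

program = [
    ("print", "first pass"),
    ("store", 3, ("print", "patched!")),  # rewrite the instruction at index 3
    ("print", "still original"),
    ("halt",),                            # about to be replaced
    ("halt",),
]
print(run(program))  # prints ['first pass', 'still original', 'patched!']
```

The ("halt",) at index 3 never executes; by the time the program counter reaches it, the earlier store has replaced it, which is exactly the self-modification a von Neumann design permits and a Harvard split prevents.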
> A traditional Turing machine doesn’t make a distinction between program and data.
Well, a regular Turing machine does: it has a tape and a finite-state control, and the two are totally different.
I guess you mean a traditional universal Turing machine doesn’t distinguish between “Turing machine I’m simulating” and “data I’m simulating as input to that Turing machine”.
https://googleprojectzero.blogspot.com/2021/12/a-deep-dive-into-nso-zero-click.html
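The universal-machine reading above can be sketched in a few lines. This is my own toy encoding, not a standard one: the "machine" being run is just a dict of transitions, plain data on exactly the same footing as the input tape it consumes.

```python
def run_tm(transitions, tape, state="start", accept="accept", steps=10_000):
    """Simulate a one-tape Turing machine described entirely by data.

    transitions: {(state, symbol): (new_state, written_symbol, move)}
    move is -1 (left) or +1 (right); blank cells read as "_".
    """
    tape = dict(enumerate(tape))  # sparse tape: index -> symbol
    head = 0
    for _ in range(steps):
        if state == accept:
            return True, "".join(tape.get(i, "_") for i in range(min(tape), max(tape) + 1))
        key = (state, tape.get(head, "_"))
        if key not in transitions:
            return False, ""  # no rule: reject
        state, tape[head], move = transitions[key]
        head += move
    raise RuntimeError("step limit exceeded")

# A machine that flips every bit, then accepts at the first blank:
flipper = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("accept", "_", +1),
}

ok, result = run_tm(flipper, "1011")
print(ok, result)  # prints: True 0100_
```

Nothing stops a caller from building `transitions` out of symbols read off a tape, which is the universal-machine move: the simulated machine and its input are both just strings handed to the simulator.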