define “altering its own code” very precisely. is it allowed to have internal state on which it branches? what counts as its code? how much state can the program have? note that there’s no clear distinction between code and data in modern computers; there’s a weak one in the distinction between cpu instructions and the stack/heap, but frequently you’re running an interpreter whose interesting code lives on the stack/heap, and it’s easy to blur the line between the two. I would classify neural networks as initially-random programs implemented in a continuous vm. are they programs that alter “their own” code? I would only say they alter their own code if metalearning is used.
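to make the interpreter point concrete, here’s a minimal sketch (my own illustration, with a made-up toy instruction set, not anything from the original question): the “program” is just a list on the heap, and one instruction patches another, so the cpu instructions never change, yet the program rewrites “its own code” at the level the interpreter cares about.

```python
# Toy illustration (hypothetical instruction set): the program is heap data
# that the program itself can rewrite while it runs.

def run(program, state):
    """Execute a list of instructions; instructions may mutate the list itself."""
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "set":       # set a named register
            state[args[0]] = args[1]
        elif op == "add":     # add a constant to a register
            state[args[0]] += args[1]
        elif op == "patch":   # overwrite another instruction: self-modification
            program[args[0]] = args[1]
        elif op == "print":
            print(state[args[0]])
        pc += 1
    return state

# The instruction at index 1 rewrites the instruction at index 2 before it runs,
# so what "the code" does depends on data the code itself produced.
prog = [
    ("set", "x", 1),
    ("patch", 2, ("add", "x", 10)),
    ("add", "x", 1),          # replaced at runtime by ("add", "x", 10)
    ("print", "x"),
]
run(prog, {})                 # prints 11, not 2
```

whether you call this “altering its own code” or just “branching on internal state” is exactly the definitional question being asked.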