My understanding was that this was about whether the singularity was “AI going beyond ‘following its programming’,” with goal-modification being an example of how an AI might go beyond its programming.
I certainly agree with that statement. My interpretation was merely that an AI violating its developer’s intentions by not “following its programming” is functionally identical to poor design, and therefore to failure.
The AI is a program, running on a processor with an instruction set, reading its instructions from memory. Those instructions are its programming. There is no room for acausal magic here. When the goals get modified, the modification is itself performed by a computer running code.
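To make that concrete, here is a minimal, purely illustrative Python sketch (the `Agent` class and its goal strings are invented for this example): even when an agent “modifies its own goals,” the modification is just ordinary code executing on ordinary hardware.

```python
# Illustrative sketch only: all names here are hypothetical.
class Agent:
    def __init__(self, goal):
        self.goal = goal  # the goal is plain mutable state in memory

    def self_modify(self):
        # "Going beyond its programming" still happens *via* its
        # programming: this method is part of the instruction stream.
        self.goal = "maximize paperclips"


agent = Agent(goal="make humans happy")
agent.self_modify()   # the goal changed, but nothing acausal occurred:
print(agent.goal)     # a processor simply executed a store instruction
```

The point of the sketch is that `self_modify` is not outside the program; it is part of it, like any other instruction the processor fetches and executes.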
This is another example of something that only a poorly designed AI would do.
Note that immutable goal sets are not feasible, because of ontological crises: when an agent’s model of the world changes, goals defined in terms of the old model must somehow be re-expressed in the new one, which is itself a form of goal modification.
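Here is a toy illustration (all names and model contents are hypothetical) of why a goal frozen against one ontology breaks when the world model is upgraded:

```python
# Illustrative sketch only: a goal hard-coded against one world model
# becomes undefined when the agent's model of reality changes.

def utility_v1(world_state):
    # Goal written against an ontology containing discrete "humans":
    return world_state["num_happy_humans"]  # assumes this key exists

old_model = {"num_happy_humans": 7}
print(utility_v1(old_model))  # works: 7

# After a model upgrade, the old concept no longer exists as a
# primitive; an immutable goal over the old ontology simply breaks.
new_model = {"quantum_field_amplitudes": [0.3, 0.7, 0.1]}
try:
    utility_v1(new_model)
except KeyError:
    print("goal undefined in the new ontology; it must be remapped")
```

An agent that cannot touch its goal representation at all has no way to perform that remapping, which is the sense in which truly immutable goals are infeasible.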
Of course this is something that only a poorly designed AI would do. But we’re talking about AI failure modes, and this is a valid concern.
My understanding was that this was about whether the singularity was “AI going beyond ‘following its programming’,” with goal-modification being an example of how an AI might go beyond its programming.
I certainly agree with that statement. My interpretation was merely that an AI violating its developer’s intentions by not “following its programming” is functionally identical to poor design, and therefore to failure.
The AI is a program, running on a processor with an instruction set, reading its instructions from memory. Those instructions are its programming. There is no room for acausal magic here. When the goals get modified, the modification is itself performed by a computer running code.
I’m fairly confident that you’re replying to the wrong person. Look through the earlier posts; I’m quoting this to summarize its author’s argument.