Yes, the AI knows full well, and won’t mind. What I mean by “act as if it didn’t believe it would get blown up” is this: before we adjust its utility, it has a particular behaviour B that it would follow if it believed that the detonator would never trigger. Then, after we adjust its utility to make it indifferent, it will follow B.
In terms of behaviour, this utility adjustment has the same effect as if we convinced it that the detonator could never trigger—but without it having false beliefs.
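To make that concrete, here is a minimal toy sketch of the indifference adjustment, assuming a simple expected-utility agent with two branches per action (“detonated” / “survives”). All names and numbers are illustrative, not from any real proposal or codebase:

```python
def expected_utility(action, utility, p_detonate):
    """Expected utility of an action, mixing the 'detonated' and
    'not detonated' branches by the agent's credence in detonation."""
    u_live = utility[action]["survives"]
    u_boom = utility[action]["detonated"]
    return (1 - p_detonate) * u_live + p_detonate * u_boom

def make_indifferent(utility):
    """Adjust utilities so each action scores the same whether or not
    the detonator fires: copy the 'survives' payoff into the
    'detonated' branch.  The agent then ranks actions exactly as it
    would if it were certain the detonator could never trigger."""
    return {
        action: {
            "survives": branches["survives"],
            "detonated": branches["survives"],  # indifference correction
        }
        for action, branches in utility.items()
    }

# Toy payoffs: "comply" is behaviour B, what the agent would do if
# detonation were impossible; "resist" dodges the detonator.
utility = {
    "comply": {"survives": 10, "detonated": 0},
    "resist": {"survives": 8,  "detonated": 8},
}

p = 0.5  # agent's credence that the detonator will actually fire

best_before = max(utility, key=lambda a: expected_utility(a, utility, p))
adjusted = make_indifferent(utility)
best_after = max(adjusted, key=lambda a: expected_utility(a, adjusted, p))

print(best_before)  # "resist" -- the detonation risk distorts its choice
print(best_after)   # "comply" -- behaviour B, as if p_detonate were 0
```

Note the agent’s beliefs (`p`) are never touched; only the payoffs change, which is the point of the scheme.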
In computer science terms, this is going to result in an untidy shutdown of the machine. If the AI is actually doing something potentially dangerous at the time, then this algorithm will terminate the AI in the middle of doing it. It may even decide it’s entirely appropriate to start flying aircraft or operating nuclear plants after it already knows you’re going to blow it up.
Still better than letting it take over.
No doubt about that....