The human-level intelligence version of the robot will notice its vision has been inverted. It will know it is shooting yellow objects. It will know it is failing at its original goal of blue-minimization.
I don’t see that. Why would it “care” if its goal isn’t complex enough to allow it to care about the subversion of its sensors? The level of intelligence seems irrelevant here. Intelligence isn’t even instrumental to such a simple goal, because all it “wants” is to fire a laser at blue objects. Its utility function says nothing about maximizing its efficiency or anything like that.
Unfortunately, your question is unanswerable, at least until Yvain invents the rest of the story. We haven’t been told what goals, if any, are embodied in the intelligent part of the code. Strike that “if any” part, though—I think we can infer that it has goals from the specification that it has human-level intelligence. And even infer something about what some of the goals are like (truth-seeking, for example).
We also haven’t been told the relationship between blue-zapping code and intelligence—whether it is physically possible, for example, for the intelligent processes to modify the blue-zapping code modules.
Edit: Psy-Kosh raised similar questions.