A really smart ‘shoot lasers at “blue” things’ robot will shoot at blue things if there are any, and will move in a programmed way if there aren’t. All its actions are triggered by the situation it is in; if you want to make it smarter by giving it an ability to better distinguish actually-blue things from blue-looking things, then any such activity must be triggered as well. If you program it to shoot at projectors that project blue things, it won’t become smarter; it will just shoot at some non-blue things. If you paint it blue and put a mirror in front of it, it will shoot at itself, and if you program it not to shoot at blue things that look like itself, it won’t become smarter; it will just shoot at fewer blue things. If the things it shoots at don’t cease to be blue, or if you give it a blue laser or a blue camera lens, it will just continue shooting, because it doesn’t care about blue things or about shooting; it just shoots when it sees blue. It certainly won’t create blue things to shoot at.
A really dumb ‘minimize blue’ robot with a laser will shoot at anything blue it sees, but if shooting at something doesn’t make it stop being blue, it will stop shooting at it. If there’s nothing blue around, it will search for blue things. If you paint it blue and put a mirror in front of it, it will shoot at itself. If you give it a blue camera lens, it will shoot at something, stop shooting, shoot at something different, stop shooting, move around, shoot at something, stop shooting, etc., and eventually stop moving and shooting altogether and weep. If instead of the camera lens you give it a blue laser, it will become terribly confused.
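For concreteness, here is a minimal Python sketch of the contrast being drawn (every name in it — `percept`, `sees_blue`, `predict_blue_count`, and so on — is a hypothetical stand-in, not anything from the original discussion): the ‘shoot lasers at blue things’ robot is a pure stimulus-response policy, while the ‘minimize blue’ robot, however weak, compares the expected outcomes of its available actions.

```python
# Illustrative sketch only; all names are hypothetical stand-ins.

def reactive_robot(percept):
    """'Shoot lasers at blue things': behaviour is triggered entirely by the
    current percept. A better blue-detector changes what it reacts to, not
    the stimulus-response structure itself."""
    if percept.sees_blue():
        return ("fire_laser_at", percept.blue_target())
    return ("follow_programmed_path", None)

def minimizing_robot(percept, actions, predict_blue_count):
    """'Minimize blue': pick whichever available action is predicted to leave
    the fewest blue things. Even a weak (dumb) optimizer has this
    outcome-comparing structure."""
    return min(actions, key=lambda action: predict_blue_count(percept, action))
```

The disagreement in the rest of the exchange is essentially about what `predict_blue_count` ranges over for a ‘dumb’ robot: blue things in the world, or only blue things its camera currently reports.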
Why would a dumb ‘minimize blue’ robot search for blue things?
Current condition: I see no blue things.
Expected result of searching for blue things: There are more blue things.
Expected result of not searching for blue things: There are no blue things.
Where ‘blue’ is defined as ‘eliciting a defined response from the camera’. Nothing outside the view of the camera is blue by that definition.
...and a sufficiently non-dumb ‘minimize blue’ robot using that definition would disconnect its own camera.
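A toy version of that comparison, assuming the camera-based definition of ‘blue’ (the action names and predicted counts below are invented purely for illustration): an optimizer that scores only what the camera reports has no reason to search, and once it can model its own sensor, disconnecting the camera scores at least as well as anything else.

```python
# Toy numbers for illustration, assuming 'blue' means
# 'currently eliciting the defined response from the camera'.

def predicted_detected_blue(action):
    predictions = {
        "search_for_blue": 3,      # searching brings blue things into view
        "stay_put": 0,             # nothing in view, so nothing counts as 'blue'
        "disconnect_camera": 0,    # no camera response at all, ever again
    }
    return predictions[action]

best = min(["search_for_blue", "stay_put", "disconnect_camera"],
           key=predicted_detected_blue)
print(best)  # -> 'stay_put' (tied with 'disconnect_camera'); it never searches
```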
Right. If we want anthropomorphic behavior, we need to have multiple motivations.
A “dumb robot” is assumed to be a weak optimizer. It isn’t defined to be actively biased and defective in ways that mirror the human experience of ‘denial’. Sure, you can come up with specific ways that an optimizer could be broken and draw conclusions about what that particular defective robot will do. But those conclusions don’t make it sensible to rhetorically imply that a robot without that particular bug is inconceivable.
I’m putting ‘dumb’ at roughly the level of cognition of a human infant, lacking object permanence. Human toddler intelligence counts as ‘smart’. I’m considering a strict reward system: negatrons added in proportion to the number of blue objects detected.
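As a sketch, that reward scheme might look like the following (the constant of proportionality and the function name are assumptions; the comment only specifies that negatrons are added in proportion to the blue objects detected):

```python
NEGATRONS_PER_BLUE_OBJECT = 1.0  # assumed constant of proportionality

def reward(num_blue_objects_detected):
    """Strict 'minimize blue' reward: the penalty scales with how many blue
    objects the camera currently detects, so zero detections is optimal."""
    return -NEGATRONS_PER_BLUE_OBJECT * num_blue_objects_detected
```

Because the penalty is tied to current detections rather than to blue objects that exist somewhere unseen, an infant-level robot without object permanence has nothing to gain by going looking for more of them.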
Then, as per the grandparent, the answer to the rhetorical question “Why would a dumb ‘minimize blue’ robot search for blue things?” is that it doesn’t happen to be a robot designed with the exact same peculiarities and weaknesses as a human infant.
Lack of object permanence isn’t a peculiar weakness. The ability to spontaneously leave Plato’s cave is one of the things that I reserve for ‘smart’ actors as opposed to ‘dumb’ ones.