What is the difference between a smart ‘shoot lasers at “blue” things’ robot and a really dumb ‘minimize blue’ robot with a laser?
A really smart ‘shoot lasers at “blue” things’ robot will shoot at blue things if there are any, and will move in a programmed way if there aren’t. All its actions are triggered by the situation it is in; and if you want to make it smarter by giving it the ability to better distinguish actually-blue things from blue-looking things, then any such activity must itself be triggered. If you program it to shoot at projectors that project blue things, it won’t become smarter; it will just shoot at some non-blue things. If you paint it blue and put a mirror in front of it, it will shoot at itself, and if you program it not to shoot at blue things that look like itself, it won’t become smarter; it will just shoot at fewer blue things. If something it shoots at doesn’t cease to be blue, or if you give it a blue laser or a blue camera lens, it will just keep shooting, because it doesn’t care about blue things or about shooting; it just shoots when it sees blue. It certainly won’t create blue things to shoot at.
A really dumb ‘minimize blue’ robot with a laser will shoot at anything blue it sees, but if shooting at something doesn’t make it stop being blue, it will stop shooting at it. If there’s nothing blue around, it will search for blue things. If you paint it blue and put a mirror in front of it, it will shoot at itself. If you give it a blue camera lens it will shoot at something, stop shooting, shoot at something different, stop shooting, move around, shoot at something, stop shooting, etc., and eventually stop moving and shooting altogether and weep. If instead of the camera lens you give it a blue laser, it will become terribly confused.
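Stripped down to control loops, the contrast looks something like this (a minimal sketch, assuming a toy `world` interface; `see_blue`, `shoot`, `patrol`, `actions`, `predict_blue_count`, and `do` are all hypothetical stand-ins, not anything from the thread):

```python
# Minimal sketch of the two control loops. The `world` interface
# (see_blue, shoot, patrol, actions, predict_blue_count, do) is a
# hypothetical stand-in, not a real API.

def reactive_step(world):
    """'Shoot lasers at blue things': pure stimulus-response.

    There is no objective anywhere in this loop, only triggers.
    Adding rules (ignore mirrors, shoot projectors) just adds
    triggers; it never gives the robot something to care about.
    """
    target = world.see_blue()
    if target is not None:
        world.shoot(target)   # seeing blue triggers shooting
    else:
        world.patrol()        # seeing nothing triggers the programmed move

def minimizer_step(world):
    """'Minimize blue': a weak optimizer.

    It scores each available action by how much blue it expects to
    remain afterwards, and does whichever scores lowest. Searching,
    stopping when shooting has no effect, and giving up all fall out
    of this loop rather than being programmed in.
    """
    best = min(world.actions(), key=world.predict_blue_count)
    world.do(best)
```

The first loop never evaluates an outcome at all; the second does nothing but evaluate outcomes, however badly it predicts them.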
Why would a dumb ‘minimize blue’ robot search for blue things?
Current condition: I see no blue things.
Expected result of searching for blue things: There are more blue things.
Expected result of not searching for blue things: There are no blue things.
Where ‘blue’ is defined as ‘eliciting a defined response from the camera’. Nothing outside the view of the camera is blue by that definition.
...and a sufficiently non-dumb ‘minimize blue’ robot using that definition would disconnect its own camera.
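Under that definition, the minimizer’s one-step lookahead makes both points concrete. A toy sketch (the actions and expected counts are invented for illustration):

```python
# Toy lookahead for a minimizer whose objective is literally
# "blue detections reported by the camera". The actions and
# numbers are invented for illustration.

expected_blue_detections = {
    "stay_put": 0.0,           # nothing blue in view right now
    "search_for_blue": 3.0,    # by this definition, searching *creates* blue
    "disconnect_camera": 0.0,  # no camera, no detections, ever again
}

# A one-step minimizer refuses to search, since searching makes the
# world 'worse'; over any longer horizon, disconnecting the camera
# dominates, because it guarantees zero detections forever.
best = min(expected_blue_detections, key=expected_blue_detections.get)
print(best)  # -> 'stay_put' (tied with 'disconnect_camera')
```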
Right. If we want anthropomorphic behavior, we need to have multiple motivations.
A “dumb robot” is assumed to be a weak optimizer. It isn’t defined to be actively biased and defective in ways that mirror the human experience of ‘denial’. Sure, you can come up with specific ways an optimizer could be broken and draw conclusions about what that particular defective robot would do. But none of that supports rhetorically implying that a robot without that particular bug is inconceivable.
I’m putting ‘dumb’ at roughly the level of cognition of a human infant, lacking object permanence. Human-toddler intelligence counts as ‘smart’. I’m considering a strict reward system: negatrons added in proportion to the number of blue objects detected.
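Read as code, that reward system is just a flat per-step penalty (a sketch; the proportionality constant `k` is a hypothetical free parameter):

```python
def negatrons(detected_blue_objects: int, k: float = 1.0) -> float:
    """Strict reward: penalty proportional to the number of blue
    objects detected this step. k is a hypothetical constant."""
    return -k * detected_blue_objects
```

Summed over time, every step a blue object spends in view costs the agent, which is exactly why, to an agent that can’t model unseen objects, searching looks like a losing move.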
Then, as per the grandparent, the answer to the rhetorical question “Why would a dumb ‘minimize blue’ robot search for blue things?” is: because it doesn’t happen to be a robot designed with the exact same peculiarities and weaknesses as a human infant.
Lack of object permanence isn’t a peculiar weakness. The ability to spontaneously leave Plato’s cave is one of the things that I reserve for ‘smart’ actors as opposed to ‘dumb’ ones.
A smart robot that shoots lasers at blue things will shoot at blue things it models as being there even if it can’t see them.
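That is the object-permanence upgrade in miniature: the smart robot targets its model of the world rather than the current camera frame. A sketch, with the `world` interface and all names invented:

```python
# Sketch of acting on a world model instead of the camera feed.
# The `world` interface and all names here are invented.

from dataclasses import dataclass, field

@dataclass
class Beliefs:
    # positions where a blue object is believed to exist, whether
    # or not the camera can currently see them
    blue_at: set = field(default_factory=set)

    def update(self, seen_blue: set, seen_area: set) -> None:
        self.blue_at |= seen_blue               # seen blue is believed blue
        self.blue_at -= seen_area - seen_blue   # looked there, saw no blue

def smart_step(world, beliefs: Beliefs) -> None:
    beliefs.update(world.seen_blue(), world.seen_area())
    for pos in beliefs.blue_at:
        world.shoot(pos)  # fires at modeled targets, visible or not
```

The dumb robot’s beliefs are its camera frame; the smart robot’s camera frame merely updates its beliefs.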
A smart ‘shoot lasers at “blue” things’ robot may tile the cosmic commons with blue things to be shot (exactly what it tiles the cosmic commons with depends on the details of implementation). A really dumb ‘minimize blue’ robot… um… it’ll shoot blue things. Probably. And sometimes miss.
I didn’t mean a smart “maximize blue things shot with lasers” robot. Although I suppose creating blue things to shoot is a reasonable action to take once all the easily accessible blue things have been destroyed.
Oddly enough, a similar behavior has been noted in AA and other rehab support groups; when there are no more easily accessible addicts to cure, someone will relapse. That’s perfectly rational behavior for a group that wants to rehabilitate people, even if it isn’t conscious.