Why would a dumb ‘minimize blue’ robot search for blue things?
Current condition: I see no blue things.
Expected result of searching for blue things: There are more blue things.
Expected result of not searching for blue things: There are no blue things.
Where ‘blue’ is defined as ‘eliciting a defined response from the camera’. Nothing outside the view of the camera is blue by that definition.
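To make that comparison concrete, here is a minimal toy sketch (every name and number in it is an invented assumption, not anyone’s actual design): reward is defined only over what the camera currently detects, so hidden blue objects contribute nothing until a search brings them into view.

```python
# Toy model: 'blue' means 'eliciting a defined response from the camera',
# so only objects currently in view count toward the reward.

def detected_blue(world):
    return sum(1 for obj in world["in_view"] if obj == "blue")

def reward(world):
    # Strict penalty: negative reward proportional to detected blue objects.
    return -detected_blue(world)

def expected_reward(action, world):
    if action == "search":
        # Searching brings hidden objects into the camera's view.
        found = {"in_view": world["in_view"] + world["hidden"], "hidden": []}
        return reward(found)
    return reward(world)  # not searching: nothing new gets detected

# Current condition: I see no blue things (two blue objects sit off-camera).
world = {"in_view": [], "hidden": ["blue", "blue"]}
print(expected_reward("search", world))      # -2: there are more blue things
print(expected_reward("not-search", world))  #  0: there are no blue things
```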
...and a sufficiently non-dumb ‘minimize blue’ robot using that definition would disconnect its own camera.
Right. If we want anthropomorphic behavior, we need to have multiple motivations.
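In the same toy terms (and with the same caveat that this is an illustrative sketch, not a real design), the camera-cutting move is easy to exhibit: because ‘blue’ is grounded in the camera’s response, disconnecting the camera zeroes the penalty in every possible world.

```python
# Same illustrative toy model as above, with the camera made switchable.
# A dead camera elicits no 'blue' response, so nothing counts as blue.

def reward(world, camera_on=True):
    if not camera_on:
        return 0  # wireheaded optimum: no detections, no penalty
    return -sum(1 for obj in world["in_view"] if obj == "blue")

for in_view in ([], ["blue"], ["blue", "blue"]):
    world = {"in_view": in_view}
    # Disconnecting is never worse, and strictly better when blue is in view:
    assert reward(world, camera_on=False) >= reward(world, camera_on=True)
```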
A “dumb robot” is assumed to be a weak optimizer. It isn’t defined to be actively biased and defective in ways that mirror the human experience of ‘denial’. Sure, you can come up with specific ways an optimizer could be broken and draw conclusions about what that particular defective robot will do. But none of that makes it sensible to rhetorically imply that a robot without that particular bug is inconceivable.
I’m putting ‘dumb’ at roughly the level of cognition of a human infant, lacking object permanence. Human toddler intelligence counts as ‘smart’. I’m considering a strict reward system: negatrons (negative reward) added in proportion to the number of blue objects detected.
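As a hedged sketch of what that assumption does to the calculation (again purely illustrative, not a claim about any real architecture): if the robot’s world model is only the current camera frame, then no action is predicted to reveal anything, and searching never looks better than staying put.

```python
# Toy illustration of 'dumb' as lacking object permanence: the world
# model is just the current frame, so off-frame objects do not exist
# in the robot's predictions. All of this is assumption for the sketch.

def reward(frame):
    return -frame.count("blue")  # negatrons per detected blue object

def infant_predict(frame, action):
    # No object permanence: predictions contain only what is already
    # in view, regardless of the action taken.
    return list(frame)

frame = []  # current condition: I see no blue things
for action in ("search", "stay"):
    print(action, reward(infant_predict(frame, action)))  # both score 0
```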
Then, as per the grandparent, the answer to the rhetorical question “Why would a dumb ‘minimize blue’ robot search for blue things?” is that it doesn’t happen to be a robot designed with the exact same peculiarities and weaknesses as a human infant.
Lack of object permanence isn’t a peculiar weakness. The ability to spontaneously leave Plato’s cave is one of the things that I reserve for ‘smart’ actors as opposed to ‘dumb’ ones.