The entire example is deeply misleading. We model the robot as a fairly stupid blue minimizer because this seems to be a good, succinct description of the robot's entire externally observable behavior, and we would cease to do so if the robot also had a speaker or display window with which it communicated its internal reflections.
So to retain the intuitive appeal of describing the robot as a blue minimizer, the robot's human-level intelligence must be walled off inside it, unable to effectively signal to the outside world. But so long as that intelligence is irrelevant to predicting the robot's exterior behavior, the blue-minimizing model is an appropriate one to keep in mind when guiding our interactions with the robot. That is, like any good scientific model, it provides good predictive power relative to its cost in mental (or computational) effort and memory.
It's pretty obvious why it's useful to describe things in ways that let us feasibly predict or approximate the behavior of the external entities and effects we encounter. Perhaps, though, you are puzzled by, or arguing against, the idea that belief-desire style models are often a good tradeoff between accuracy and ease of use. But this too is easily explained: evolutionary hard-wiring effectively functions as a hardware accelerator for belief-desire models (so we didn't get eaten), and objects designed by other humans are highly salient in our lives. The acceleration means that even in domains where the fit is poor ("like charges want to get away from each other") the ease of application still makes such models a useful heuristic. And since human-made objects are usually built to achieve a particular goal that had to be represented usefully in the builder's mind, these objects usually offer the most effective behavior for accomplishing that goal relative to a given level of computational complexity.
In other words, the guy who builds the robot to lase the blue-dyed cancer cells does so by coming up with a goal he wants the robot to achieve (discriminating between blue cells and other blue things is hard, so we'll just build a robot to fry all the blue things it sees) and then offering up the best implementation he can come up with given the constraints, so the resulting behavior can be well modeled as the object desiring some end but being stupid in various ways. If you want to zap blue cells, you don't add extra code to zap yellow one time in a million, nor do you tack on an AI whose expertise isn't needed to implement the desired behavior, so the resulting behavior looks like a stupid creature trying to achieve the inventor's chosen goal.
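To make the point concrete, here is a minimal sketch, assuming a toy pixel-frame interface and an invented fire_laser actuator (none of these names come from the original post), of the kind of implementation the builder would plausibly offer up. Nothing in it represents a goal, yet "wants to destroy blue things" predicts its behavior perfectly.

```python
# Hypothetical sketch of the builder's implementation. The camera/laser
# interfaces and the threshold are invented for illustration; the point is
# that the simplest design meeting the goal *is* the "fry everything blue"
# rule -- no hidden clause that occasionally zaps yellow, no reflective AI
# sitting in the control loop.

BLUE_THRESHOLD = 0.8  # assumed tuning constant, not from the source

def looks_blue(pixel):
    """Crude blueness test: blue channel is high and dominates red and green."""
    r, g, b = pixel
    return b > BLUE_THRESHOLD and b > r and b > g

def control_step(frame, fire_laser):
    """One pass of the robot's externally observable behavior."""
    for (x, y), pixel in frame.items():
        if looks_blue(pixel):
            fire_laser(x, y)  # the only action the design ever needs

# Toy usage: a 2-pixel "frame" and a stand-in for the laser actuator.
if __name__ == "__main__":
    frame = {(0, 0): (0.1, 0.1, 0.9), (0, 1): (0.9, 0.9, 0.1)}
    control_step(frame, fire_laser=lambda x, y: print(f"zap ({x}, {y})"))
```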
Interestingly, I suspect that being well described by a belief-desire model probably just corresponds to being in the set of non-dominated ways of achieving a goal that people can reasonably conceptualize. Thus we see it all the time in evolution: we can easily understand both the species-level goal of survival and the individual-level goals of avoiding suffering and satisfying some basic wants, and natural selection ensures that the implementations we usually see are at least locally non-dominated (if you want to make a better hunter on the savannah than the lion, you have to either jump to a whole new basic design or use a bigger computational/energy budget).
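One quick way to read "non-dominated" here is as a Pareto condition over (goal performance, resource budget): a design survives only if no alternative does at least as well for no more cost. The sketch below uses made-up designs and numbers purely to illustrate that condition.

```python
# Illustrative Pareto-dominance check: a design is dominated if some other
# design performs at least as well for no more cost and is strictly better
# on one of the two axes. All names and numbers are invented for the example.

def dominates(a, b):
    """True if design a is at least as good and as cheap as b, and strictly better somewhere."""
    perf_a, cost_a = a
    perf_b, cost_b = b
    return (perf_a >= perf_b and cost_a <= cost_b
            and (perf_a > perf_b or cost_a < cost_b))

def non_dominated(designs):
    """Keep only the designs no other design dominates (the Pareto frontier)."""
    return {name: d for name, d in designs.items()
            if not any(dominates(other, d)
                       for other_name, other in designs.items() if other_name != name)}

# Toy usage: (hunting success, energy budget) for imaginary savannah hunters.
if __name__ == "__main__":
    designs = {"lion": (0.7, 50), "slower_lion": (0.5, 50), "cyborg_lion": (0.9, 200)}
    print(non_dominated(designs))  # "slower_lion" drops out; it is dominated by "lion"
```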