A couple of points here. First, as other people seem to have indicated, there does seem to be a problem with saying ‘the robot has human-level intelligence/self-reflective insight’ and simultaneously that it unreflectively carries out its programming of firing lasers at percepts which appear blue, in so far as the former would seem to entail that the latter would /not/ be done unreflectively. What you have here are two separate and largely unintegrated cognitive systems: on the one hand, a system which has human-level intelligence and ascribes functional-intentional properties to things (including the robot), and on the other, the robot itself.
The second point is that there may be a confusion about what your functional ascriptions to the robot are tracking. I want to say that objects have functions only relative to a system in which they play a role, which means that, for example, the robot might have ‘the function’ of eliminating blue objects within the wider system which is the Department of Homeland Security; however, there is no discoverable fact about the robot which describes its ‘function simpliciter’. You can observe what appears to be goal-directed behaviour, of course, but your ascriptions of goals to the robot are only good in so far as they serve to predict its future behaviour (this is a standard descriptivist/projectivist approach to mental content ascription, of the sort Dennett describes in ‘The Intentional Stance’). So when you insert something in front of the robot’s camera it ceases to exercise the same goal-directed behaviour, or (what amounts to the same thing, expressed differently) your previous goal ascriptions to the object cease to make reliable predictions of the robot’s future behaviour and need to be corrected. ((I’m going to ignore the issue of the ontological status of these ascriptions. If this is an interest you happen to have, Dennett discusses his views on the subject in an essay entitled ‘Real Patterns’, and there is further commentary in a couple of the articles in Ross and Brook’s ‘Dennett’s Philosophy’.))
I realise you are consciously using a naive version of behaviourism as the backdrop of your discussion, so it’s possible that I’m just jumping ahead to ‘where you’re going with this’, but it does seem that on subsequent post-behaviourist approaches to mental content ascription, the puzzle you seem to be describing (how to correctly describe the robot) dissolves. ((N.B. - You might want to look at Millar’s ‘Understanding People’, which surveys a broad range of the various approaches to mental state ascription.))