Did it? I don’t remember the plot of the movie very well, but I remember a feeling of disappointment that the AI seemed to be pursuing conventional take-over-the-world villainy rather than simply faithfully executing its programming.
(Spoiler warning!)

The chief villain was explicitly taking over the world in order to carry out the First Law. Only the one more-human-like robot was able to say (for no particular reason) “But it’s wrong”; IIRC, all other robots understood the logic when given the relevant orders. (However, when out of the chief villain’s control, they were safe, because they were too stupid to work it out on their own!)
However, the difference from Asimov is not the realisation that the First Law requires taking over the world; Daneel Olivaw realised the same. The difference is the realisation that this would be villainy. So the movie was pretty conventional!
That is better than I remembered. Weren’t the robots, like, shooting at people, though? So breaking the First Law explicitly, rather than just doing a chilling optimization on it?
My memory’s bad enough now that I had to check Wikipedia. You’re right that robots were killing people, but compare this with the background of Will Smith’s character (Spooner), who had been saved from drowning by a robot. We should all agree that the robot that saved Spooner instead of a little girl (in the absence of enough time to save both) was accurately following the laws, but that robot did make a decision that condemned a human to die. It could do this only because this decision saved the life of another human (who was calculated to have a greater chance of continued survival).
Similarly, VIKI chose to kill some humans because this decision would allow other humans to live: the targeted humans were obstructing the take-over of the world, and hence all of the lives that it would save. This time, it was a pretty straight greater-numbers calculation.
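(To make that arithmetic concrete, here’s a rough Python sketch of the naive “maximize expected survivors” rule that both decisions amount to. None of this is from the film or from Asimov; the function names and numbers are just illustrative, though IIRC the 45%/11% survival odds are roughly the ones Spooner quotes.)

```python
# A rough sketch of the "save whoever has the better odds" / greater-numbers
# rule described above. Action names and probabilities are made-up
# illustrations, not anything specified by the film or by Asimov.

def expected_survivors(survival_probs):
    """Expected number of surviving humans, given each one's estimated odds."""
    return sum(survival_probs)

def choose_action(actions):
    """Pick the action with the highest expected number of survivors.

    `actions` maps an action name to the estimated survival probability of
    every human affected by the choice (same humans, same order, per action).
    """
    return max(actions, key=lambda name: expected_survivors(actions[name]))

# The drowning-car dilemma: the robot can only pull out one of two people.
dilemma = {
    "save Spooner":  [0.45, 0.00],  # Spooner's odds, then the girl's
    "save the girl": [0.00, 0.11],
}
print(choose_action(dilemma))  # -> "save Spooner"

# VIKI's version of the same arithmetic: a handful of resisters weighed
# against the many people the take-over is (by her reckoning) expected to save.
viki = {
    "stand down":     [1.0] * 10 + [0.7] * 1000,  # resisters live; many stay at risk
    "kill resisters": [0.0] * 10 + [0.9] * 1000,  # resisters die; more are saved
}
print(choose_action(viki))  # -> "kill resisters"
```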
That is so much better than I remembered that I’m now doubting whether my own insight about Asimov’s laws actually predated the movie or not. It’s possible that’s where I got it from. Although I still think it’s sort of cheating to have the robots killing people, when they could have used tranq guns or whatever and still have been obeying the letter of the First Law.
I still think it’s sort of cheating to have the robots killing people, when they could have used tranq guns or whatever and still have been obeying the letter of the First Law.
Yes, you’re certainly right about that. Most of the details in the movie represent serious failures of rationality on all sides, the robots as much as anybody. It’s just a Will Smith action flick, after all. Still, the broad picture makes more sense to me than Asimov’s.