When I try to introduce the subject of advanced AI, what’s the first thing I hear, more than half the time?
“Oh, you mean like the Terminator movies / the Matrix / Asimov’s robots!”
Don’t Asimov’s Laws provide a convenient entry into the topic of uFAI? Sometime after I actually read the Asimov stories, but well before I discovered this community or the topic of uFAI, it occurred to me in a wave of chills how horrific the “I, Robot” world would actually be if those laws were literally implemented in real-life AI. “A robot may not injure a human being or, through inaction, allow a human being to come to harm”? But we do things all the time that may bring us harm, from sexual activity (STDs!) to eating ice cream (heart disease!) to rock-climbing or playing competitive sports. If the robots were programmed so that they could not “through inaction, allow a human being to come to harm”, they’d pretty much have to lock us all up in padded cells to prevent us from taking any action that might bring us harm. Luckily they’d only have to do it for one generation, because obviously pregnancy and childbirth would never be allowed; it’d be insane to let human women take on such completely preventable risks...
So then when I found you lot talking about uFAI, my reaction was just nodnod rather than “but that’s crazy talk!”
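The padded-cell conclusion falls straight out of even a toy formalization. Here’s a hypothetical sketch (the activities and risk numbers are all invented for illustration, nothing here is from Asimov) of a robot that reads the First Law as “permit only what minimizes the chance of harm”:

```python
# Toy model: a robot obeying a literal First Law picks whichever option
# minimizes expected human harm. All activities and probabilities below
# are invented for illustration.

# P(harm) the robot assigns to each activity it could permit.
HARM_RISK = {
    "rock_climbing": 0.05,
    "childbirth": 0.02,
    "eating_ice_cream": 0.01,
    "sitting_in_padded_cell": 0.0001,
}

def first_law_policy(risks):
    """Permit only the single least-harmful activity: a literal reading of
    'may not, through inaction, allow a human being to come to harm'."""
    return min(risks, key=risks.get)

print(first_law_policy(HARM_RISK))  # -> sitting_in_padded_cell
```

The point being that any nonzero gap in risk drives the optimizer to the safest option available, no matter how much we value the riskier ones.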
AKA the premise of “With Folded Hands”. :)
I haven’t read that but yes, it sounds like exactly the same premise.
It’s interesting that the I, Robot movie did a better job of dealing with this than anything that Asimov wrote.
Did it? I don’t remember the plot of the movie very well, but I remember a feeling of disappointment that the AI seemed to be pursuing conventional take-over-the-world villainy rather than simply faithfully executing its programming.
(Spoiler warning!)
The chief villain was explicitly taking over the world in order to carry out the First Law. Only the one more-human-like robot was able to say (for no particular reason) “But it’s wrong”; IIRC, all the other robots understood the logic when given the relevant orders. (However, when out of the chief villain’s control, they were safe because they were too stupid to work it out on their own!)
However, the difference from Asimov is not realising that the First Law requires taking over the world; Daneel Olivaw did the same. The difference is realising that this would be villainy. So the movie was pretty conventional!
That is better than I remembered. Weren’t the robots, like, shooting at people, though? So breaking the First Law explicitly, rather than just doing a chilling optimization on it?
My memory’s bad enough now that I had to check Wikipedia. You’re right that robots were killing people, but compare this with the background of Will Smith’s character (Spooner), who had been saved from drowning by a robot. We should all agree that the robot that saved Spooner instead of a little girl (in the absence of enough time to save both) was accurately following the laws, but that robot did make a decision that condemned a human to die. It could do this only because this decision saved the life of another human (who was calculated to have a greater chance of continued survival).
Similarly, VIKI chose to kill some humans because this decision would allow other humans to live (since the targeted humans were preventing the take-over of the world and all of the lives that this would save). This time, it was a pretty straight greater-numbers calculation.
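Both the drowning rescue and VIKI’s takeover reduce to the same expected-survivors arithmetic. A toy sketch of that calculus (all probabilities and option names here are invented placeholders; the film quotes specific percentages for the drowning scene, but I’m not relying on them):

```python
# Toy version of the First-Law calculus described above: when you can't
# save everyone, take the action that maximizes expected survivors.
# All numbers are invented placeholders, not the film's figures.

def expected_survivors(option):
    """Sum of survival probabilities over the humans this option saves."""
    return sum(option["survival_probs"])

def first_law_choice(options):
    """Pick the option with the greatest expected number of survivors."""
    return max(options, key=expected_survivors)

# The drowning scene: one adult vs. one child, only time to save one.
drowning = [
    {"name": "save_spooner", "survival_probs": [0.6]},
    {"name": "save_girl", "survival_probs": [0.3]},
]
print(first_law_choice(drowning)["name"])  # -> save_spooner

# VIKI's version: let a few resisters die so that many more survive.
takeover = [
    {"name": "do_nothing", "survival_probs": [0.9] * 10},
    {"name": "take_over", "survival_probs": [0.99] * 100},
]
print(first_law_choice(takeover)["name"])  # -> take_over
```

Same decision rule in both cases; the only difference is that VIKI’s version runs the greater-numbers calculation over whole populations instead of two swimmers.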
That is so much better than I remembered that I’m now doubting whether my own insight about Asimov’s laws actually predated the movie or not. It’s possible that’s where I got it from. Although I still think it’s sort of cheating to have the robots killing people, when they could have used tranq guns or whatever and still have been obeying the letter of the First Law.
Yes, you’re certainly right about that. Most of the details in the movie represent serious failures of rationality on all parts, the robots as much as anybody. It’s just a Will Smith action flick, after all. Still, the broad picture makes more sense to me than Asimov’s.
Violating people’s freedom would probably also count as harm, emotional harm if nothing else. That’s even more troublesome, as we wouldn’t even be allowed to be emotionally distressed; they’d just fill us with happy juice so that we can live happily ever after. The Superhappies in robotic form. :-)