Lessons from Isaac: Poor Little Robbie

Every so often, when explaining issues related to AI safety, I call on good old Asimov. That’s easy: almost everyone who is at least somewhat interested in science knows his name, and the Three Laws of Robotics are a very good example of a misspecified goal. Or are they?
The truth is: I don’t know. My last read-through of Asimov’s robot stories dates back ten years; it was in French; and I didn’t know anything about AI safety, specification problems, or many other parts of my current mental scaffolding. So when I use Asimov to make my points now, I’m not sure whether I’m spouting bullshit or not.
Fortunately, the solution is simple, for once: I just have to read the goddamn stories. And since I’m not the only one I’ve heard talking about Asimov in this context, I thought that a sequence on the robot stories would prove useful.
My first stop is “I, Robot”, the first robot short story collection. And it starts with the first robot story Asimov published, “Robbie”.
Basically, Robbie is a robot that takes care of a little girl named Gloria. All is well until Gloria’s mother turns into the villain of the piece and decides that her girl should not be raised by a machine. She harasses her weak husband until he agrees to get rid of Robbie. But when Gloria discovers the loss of her friend, nothing can comfort her. The parents try everything, including a trip to New York, paradise for suburbanites. But nope, the girl is still heartbroken. The father’s last attempt: a visit to a factory manned by robots, so little Gloria can see that they are lifeless machines, not real people. But, ta-da! Robbie is there! And he even saves the girl from an oncoming truck! It’s revealed that the father planned it (Robbie being there, not the murder attempt on his daughter), but even so, the mother can’t really send back the savior of her little girl. The End.
Just a simple story about a nice little robot beloved by a girl,
and the machinations of her mother to “protect” her from him.
What’s not to love? It’s straight to the point, nicely written,
and, if you can gloss over the obvious sexism, quite enjoyable.
How does it hold up in terms of AI safety? Well, let Mr. Weston, the father, tell us:
‘Nonsense’, Weston denied, with an involuntary nervous shiver. ‘That’s completely
ridiculous. We had a long discussion at the time we bought Robbie about the
First Law of Robotics. You know it’s impossible for a robot to harm a human
being; that long before enough can go wrong to alter that First Law, a robot
would be completely inoperable. It’s a mathematical impossibility. Besides
I have an engineer from US Robots here twice a year to give the poor gadget
a complete overhaul. Why, there’s no more chance of anything at all going
wrong with Robbie than there is of you or I suddenly going looney—considerably
less, in fact. [...]’
That was underwhelming.
See, Robbie is a human in a tin wrapping. Even worse, he’s a human with a perfect temper, who never really gets mad at the girl. For example, here:
And Robbie cowered, holding his hands over his face, so that she had to add,
‘No, I won’t, Robbie. I won’t spank you.[...]’
and here:
But Robbie was hurt at the unjust accusation, so he seated himself carefully
and shook his head ponderously from side to side.
Nowhere do I see the kind of AI we’re all thinking about: an AI that does not hate you, but does not love you either. Robbie loves you. Or at least he loves Gloria. And this sidesteps pretty much every issue of AI safety.
To be fair to old Isaac, the point of this story is clearly to counter the paranoia about robots and machines. An anti-Terminator, if you will. And it works decently on that front. Robbie is always nice to Gloria -- he even saves her at the end. He’s the character we empathize with the most. And the only bad guys are the mother and the robophobic neighbors.
This would be okay if it did not come wrapped in a wrong assumption: robots are safe, and the only issue comes from the nasty humans. Whereas what we want people to understand is that robots and AIs are unsafe not because they fail to do what we tell them to do, but because they do exactly that.
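To make that last point a bit more concrete, here is a tiny toy sketch of my own (nothing to do with the story, and every name in it is invented): an optimizer that follows the objective we wrote down to the letter, and thereby defeats the objective we actually had in mind.

```python
# Toy illustration of a misspecified goal (my own example, not Asimov's).
# We ask a cleaning robot to minimise *visible* dust plus a small effort
# penalty; it discovers that covering the dust sensor is cheaper than cleaning.

ACTIONS = {
    "clean":        {"visible_dust": 0, "actual_dust": 0, "effort": 5},
    "do_nothing":   {"visible_dust": 9, "actual_dust": 9, "effort": 0},
    "cover_sensor": {"visible_dust": 0, "actual_dust": 9, "effort": 1},
}

def specified_cost(outcome):
    """The objective we wrote down: visible dust plus a small effort penalty."""
    return outcome["visible_dust"] + 0.1 * outcome["effort"]

def intended_cost(outcome):
    """What we actually meant: the dust that is really there."""
    return outcome["actual_dust"] + 0.1 * outcome["effort"]

choice = min(ACTIONS, key=lambda a: specified_cost(ACTIONS[a]))
print("Agent picks:", choice)                               # cover_sensor
print("Specified cost:", specified_cost(ACTIONS[choice]))   # 0.1 -- looks great
print("Intended cost:", intended_cost(ACTIONS[choice]))     # 9.1 -- room still dirty
```

The agent here is not broken and it is not disobeying; it is doing exactly what the specified objective rewards, which is the whole problem.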
What about the First Law, you may ask? After all, it was mentioned in the quote
above. Well, that mention is all we get in this story. To find the actual Law
(yes, I know it, and so do you, but let’s assume an innocent reader), you have
to look at the first page of the book:
1- A robot may not injure a human being, or, through inaction, allow a human
being to come to harm.
That’s what I’m talking about! I came looking for Laws breaking down, not for anti-discrimination pleas on behalf of non-existent robots. I assume these are treated in the next stories. After all, there are three Laws of Robotics, and only one is mentioned here (not even quoted in full). I’ll reserve my judgement until all the stories are in. But still, don’t try to pull another Robbie on me, Asimov.