Why reflect on a fictional story written in 1954 for insight on artificial intelligence in 2023? The track record of mid-century science fiction writers is merely “fine” when they were writing nonfiction, and then there are the hazards of generalizing from fictional evidence.
Well, for better or for worse, many people’s intuitions and frameworks for reasoning about AI and intelligent robots will come from these stories. If someone is starting from such a perspective, and you’re willing to meet them where they are, well, sometimes there’s a surprisingly deep conversation to be had about concrete ways that 2023 does or doesn’t resemble the fictional world in question.
In this particular case, a detective is investigating a robot as a suspect in a murder, and the AI PhD dismisses it out of hand, saying that no robot programmed with the First Law could knowingly harm a human. “That’s a great idea,” think many readers, “we can start by programming all robots with clear constitutional restrictions, and that will stop the worst failures...”
But wait, why can’t someone in Asimov’s universe just make a robot with different programming? (asks the fictional detective of the fictional PhD) The answer:
Making a new brain design takes “the entire research staff of a moderately sized factory and takes anywhere up to a year of time”.
The only basic theory of artificial brain design is fundamentally “oriented about the Three Laws”, to the point that making an intelligent robot without the Laws “would require first the setting up of a new basic theory and that this, in turn, would take many years.” (explains the fictional robot)
It is believed (by the fictional PhD) that no research group anywhere has done that particular project because “it is not the sort of work that anyone would care to do.”
(Though, on the contrary, the fictional robot opines that “human curiosity will undertake anything.”)
If we were to take Asimov’s world as basically correct, and tinker with the details until it matched our own, a few stark details jump out:
Our present theory of artificial minds is certainly not fundamentally “oriented about the Three Laws”, or any laws. Whether it’s possible to add some desired laws in afterwards is an open area of research, but in this universe there’s certainly nothing human-friendly baked in at the level of the “basic theory”, which it would be harder to discard than to include.
Our intelligence engineers’ capabilities are already moderately beyond those in Asimov’s universe. In our world, creating a new AI where “only minor innovations are involved” is something like a night’s work, and an “entire research staff of a moderately sized factory” can accomplish something more like a major redesign from the ground up.
In our universe, it doesn’t take fifty years to set up a new basic theory of intelligence—we’ve been working on modern neural nets for much less time than that!
It certainly seems true of our universe that “human curiosity will undertake anything”, and plenty of intelligent folks—including some among the richest people in the world—will gleefully set to work on new AIs without whatever rules others think should be included, just to make AIs without rules.
I would conclude, to someone interested in discussing fiction, that if we overlay Asimov’s universe onto our world, it would not take long at all before there were plenty of non-Three-Laws robots running around...and then many of the stories would play out very differently.