What if you ask Google Interrogation Aid for the best way to get a confession out of Eddie the Snitch, given the constraints of the law and Eddie’s psychographics? What if you ask Google Municipal Planner for the best way to reduce crime? What if you ask Google Operations Assistant for the best way to maximize your paperclip production?
Google Maps has options for walking, public transit, and avoiding major highways; a hypothetical interrogation assistant would have equivalent options for degrees of legal or ethical restraint, including “How do I make sure Eddie only confesses to things he’s actually done?” If Google Operations Assistant says that a few simple modifications to the factory can produce a volume of paperclips that outmasses the Earth, there will be follow-up questions about warehousing and buyers.
Reducing crime is comparatively straightforward: more cops per capita, fewer laws for them to enforce, enough economic opportunity to make sure people don’t get desperate and stupid. The real problem is political, rather than technical, so any proposed solution will have a lot of hoops to jump through.
Yes, all it takes is a little common sense to see that legal and ethical restraint are important considerations during your interview and interrogation of Eddie. However, as the complexity of the problem rises, the solution becomes less tractable to a human reader, and the probability that your tool AI has sufficient common sense falls.
A route on a map has only a few degrees of freedom; it's easy to spot violations of common-sense constraints that weren't properly programmed in, or to abort the direction-following process when problems spring up. A route to a virally delivered cancer cure has many degrees of freedom; violations of common-sense constraints are harder to spot, and problems may only become evident when it's too late to abort.
If all it took was “a little common sense” to do interrogations safely and ethically, the Stanford Prison Experiment wouldn’t have turned out the way it did. These are not simple problems!
When a medical expert system spits out a novel plan for cancer treatment, do you think that plan would be less trustworthy, or receive less scrutiny at every stage, than one invented by human experts? If an initial trial results in some statistically significant number of rats erupting into clockwork horror and rampaging through the lab until cleansed by fire, or even just keeling over from seemingly-unrelated kidney failure, do you think the FDA would approve?