This is a very timely question for me. I asked something very similar of Michael Vassar last week. He pointed me to Eliezer’s “Creating Friendly AI 1.0” paper and, like you, I didn’t find the answer there.
I’ve wondered if the Field of Law has been considered as a template for a solution to FAI—something along the lines of maintaining a constantly-updating body of law/ethics on a chip. I’ve started calling it “Asimov’s Laws++.” Here’s a proposal I made on the AGI discussion list in December 2009:
“We all agree that a few simple laws (à la Asimov) are inadequate for guiding AGI behavior. Why not require that all AGIs be linked to a SINGLE large database of law—legislation, orders, case law, pending decisions—to account for the constant shifts [in what’s prohibited and what’s allowed]? Such a corpus would be ever-changing, reflecting up-to-the-minute legislation and decisions on all matters man and machine. Presumably there would be some high-level guiding laws, like the US Constitution and Bill of Rights, to inform sub-nanosecond decisions. And when an AGI has milliseconds to act, it can inform its action using analysis of the deeper corpus. Surely a 200-volume set of international law would be a cakewalk for an AGI. The latest version of the corpus could be stored locally in most AGIs, with just key parts stored locally in low-end models, and all copies promptly and wirelessly updated as appropriate.
This seems like a reasonable solution given the need to navigate in a complex, ever changing, context-dependent universe.”
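To make the "locally cached, wirelessly updated" part of the proposal concrete, here is a rough sketch in Python. Everything here is hypothetical illustration, not a real system: `LegalCorpus`, the rule-status vocabulary, and the update format are all my inventions. One deliberate design choice: an action the local cache has never heard of defaults to "prohibited" (fail closed), since a stale or incomplete cache shouldn't license behavior the central corpus would forbid.

```python
# Hypothetical sketch of a locally cached legal corpus with pushed updates.
# All names (LegalCorpus, apply_update, etc.) are illustrative inventions.

import time


class LegalCorpus:
    """Local cache of the shared legal corpus, refreshed by pushed updates."""

    def __init__(self, refresh_interval_s=60.0):
        self.rules = {}                      # action id -> "allowed" | "prohibited"
        self.version = 0                     # version of the central corpus we mirror
        self.refresh_interval_s = refresh_interval_s
        self.last_refresh = 0.0

    def apply_update(self, update):
        """Merge a batch of rule changes pushed from the central database."""
        self.rules.update(update["rules"])
        self.version = update["version"]
        self.last_refresh = time.monotonic()

    def is_stale(self):
        """True if no update has arrived within the refresh interval."""
        return time.monotonic() - self.last_refresh > self.refresh_interval_s

    def status(self, action_id):
        # Unknown actions default to "prohibited": fail closed, not open.
        return self.rules.get(action_id, "prohibited")


corpus = LegalCorpus()
corpus.apply_update({
    "version": 1,
    "rules": {
        "convert_matter_to_paperclips": "prohibited",
        "buy_steel_on_open_market": "allowed",
    },
})
print(corpus.status("buy_steel_on_open_market"))       # allowed
print(corpus.status("convert_matter_to_paperclips"))   # prohibited
print(corpus.status("never_seen_before_action"))       # prohibited (fail closed)
```

A low-end model would simply carry a smaller `rules` dict (the "key parts"), while the version number lets any device tell at a glance whether its mirror of the central corpus is current.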
Given this approach, AIs’ goals and motivations might be mostly decoupled from an ethics module. An AI could make plans and set goals using any cognitive processes it deems fit. Before taking action, however, the AI must check the corpus to make sure its desired actions are legal. If they are not, the AI must consider other actions or suffer the wrath of law enforcement (from fines to rehabilitation). This legal system of the future would be similar to what we’re familiar with today, including being managed as a collaborative process among many agents (human and machine citizens, legislators, judges, and enforcers). Unlike current legal systems, however, it could hopefully be more nimble, fair, and effective given emerging computer-related technologies and methods (e.g., AI, WiFi, ubiquitous sensors, cheap/powerful processors, decision theory, Computational Law, …).
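The planning/legality decoupling described above can be sketched as a gate that sits between the planner and the actuators. This is only an illustration of the architecture, not a proposed implementation; the function names and the corpus representation (a plain dict) are assumptions of mine. The planner proposes whatever it likes; the gate executes only actions the corpus marks as allowed and hands the rest back for replanning.

```python
# Hypothetical sketch: a legality gate between an AI's planner and its actuators.
# Names and the dict-based corpus are illustrative assumptions.

def legal_status(action, corpus):
    """Look up an action in the corpus; unknown actions count as prohibited."""
    return corpus.get(action, "prohibited")


def act(planned_actions, corpus, execute):
    """Execute only corpus-approved actions; return the rest for replanning."""
    executed, blocked = [], []
    for action in planned_actions:
        if legal_status(action, corpus) == "allowed":
            execute(action)
            executed.append(action)
        else:
            blocked.append(action)   # AI must choose another action here
    return executed, blocked


corpus = {"file_patent": "allowed", "seize_factory": "prohibited"}
done, blocked = act(["file_patent", "seize_factory"], corpus,
                    execute=lambda a: None)
print(done)     # ['file_patent']
print(blocked)  # ['seize_factory']
```

The point of the sketch is that nothing in the planner changes when the law changes; only the corpus behind `legal_status` does.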
This seems like a potentially practical, flexible, and effective approach given its long history of human precedent. AIs could even refer to the appropriate corpus when traveling in different jurisdictions (e.g., Western Law, Islamic Law, Chinese Law) in advance of more universal laws/ethics that might emerge in the future.
This approach should put most runaway paper clip production scenarios off limits. Such behavior would seem to violate myriad laws (human welfare, property rights, speeding (?)) and would be dealt with harshly.
Perhaps this might be seen as a kind of practical implementation of CEV?
Complex problems require complex solutions.
Comments? Pointers?