Nanny AI is a form of Artificial General Intelligence proposed by Ben Goertzel to delay the Singularity while protecting and nurturing humanity. It was proposed as a means of reducing the risks associated with the Singularity by delaying it until a predetermined time has passed, until predetermined conditions are met, or indefinitely. Delaying the Singularity would allow time for further research and reflection on our values, and time to build a Friendly Artificial Intelligence.
Ben Goertzel has suggested a number of preliminary components for building a Nanny AI:
A mildly superhuman Artificial General Intelligence
A global surveillance network tied to the Nanny AI
Final control over all robots given to the Nanny AI
Reluctance to change its goals, increase its intelligence, or act against humanity's extrapolated desires
The ability to reinterpret its goals at human prompting
The prevention of any technological development that would hinder it
Yielding control to another AI at a predetermined time
In a paper by Luke Muehlhauser and Anna Salamon published in The Singularity Hypothesis, it is argued that programming a safe Nanny AI would require solving most, if not all, of the problems that must be solved in programming a Friendly Artificial Intelligence. Ben Goertzel suggests that a Nanny AI may be a necessary evil to prevent disasters arising from the development of technology, though he acknowledges that it poses risks of its own.
References
Should humanity build a global AI nanny to delay the singularity until it’s better understood? by Ben Goertzel
Does Humanity Need an AI Nanny? by Ben Goertzel
Muehlhauser, Luke; Salamon, Anna (2012). "Intelligence Explosion: Evidence and Import". In Eden, Amnon; Søraker, Johnny; Moor, James H. et al. (eds.). The Singularity Hypothesis: A Scientific and Philosophical Assessment. Berlin: Springer.