Probably the closest thing I have seen to a definition of “friendly” from E.Y. is:
“The term “Friendly AI” refers to the production of human-benefiting, non-human-harming actions in Artificial Intelligence systems that have advanced to the point of making real-world plans in pursuit of goals.”
http://singinst.org/ourresearch/publications/CFAI/challenge.html
That appears to make Deep Blue “friendly”. It hasn’t harmed too many people so far—though maybe Kasparov’s ego got a little bruised.
Another rather different attempt:
“I use the term “Friendly AI” to refer to this whole challenge. Creating a mind that doesn’t kill people but does cure cancer …which is a rather limited way of putting it. More generally, the problem of pulling a mind out of mind design space, such that afterwards that you are glad you did it.”
here, 29 minutes in
...that one has some pretty obvious problems, as I describe here.
These are not operational definitions. For example, both rely on some kind of unspecified definition of what a “person” is. That may be obvious today—but human nature will probably be putty in the hands of an intelligent machine—and such a machine may well start wondering about the best way to gently transform a person into a non-person.