Only when we have reduced ethics to an algorithmic level can we define friendly AI...
What does it mean to reduce ethics to an algorithmic level, and where do you draw the line? Does it require an algorithmic description of what constitutes a human being, of pain and consciousness? If not, then to whom will the AI be friendly? Are good, bad, right and wrong universally applicable, e.g. to trees and stones? If not, then which matters more in designing friendly AI: figuring out the meaning of moral propositions, i.e. designating their referents, or mathematically defining the objects that utter moral propositions, e.g. humans?