If a friendly AI is one that protects and cultivates human values, how does ethics help to achieve this?
We don’t know what a friendly AI is. Ethics is supposed to tell us. “Protect and cultivate human values” might or might not be it. Only once we have reduced ethics to an algorithmic level can we define a friendly AI as any AI that implements this algorithm (or, rather, some algorithm with these properties).
Only once we have reduced ethics to an algorithmic level can we define a friendly AI...
What does it mean to reduce ethics to an algorithmic level, and where do you draw the line? Does it involve an algorithmic description of what constitutes a human being, of pain and consciousness? If not, to whom will the AI be friendly? Are good, bad, right and wrong universally applicable, e.g. to trees and stones? If not, which is more important in designing friendly AI: figuring out the meaning of moral propositions, i.e. designating their referents, or mathematically defining the objects that utter moral propositions, e.g. humans?
I don’t get your point.
Do you mean that by solving ethics we might figure out that what we actually value is to abandon what we value?