Yet we keep having long discussions about what kind of morality to give the smarter-than-human AI. What am I missing?
T,mJ: for some time now, Eliezer has been arguing from a position of moral relativism, implicitly adopting the stance that increased intelligence has no implications for the sorts of moral or ethical systems an entity will possess.
He has essentially been saying that we need to program into the AI a moral system we judge appropriate, and constrain it so that it cannot operate outside that system. Its greater intelligence would then let it understand the implications of actions better than we can, and it would act in ways aligned with our chosen morality while having a greater ability to plan and anticipate.