I think it is a very big mistake to create a utility-maximizing rational economic agent a la Steve Omohundro, because such an agent is maximally ethically constrained: it cannot change its mind about any ethical question whatsoever, since a utility-maximizing agent never changes its utility function.
That argument assumes that all ethical values are terminal values: that no ethical values are instrumental values. I assume I don’t need to explain how unlikely it is that anyone will ever build an AI with terminal values which provide environment-independent solutions to all the ethical conundrums which an AI might face.
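To make the terminal/instrumental distinction concrete, here is a minimal toy sketch (not anyone's actual proposal, and all the names and numbers in it are invented for illustration): an agent whose terminal utility function is fixed, but whose "ethical" rule is an instrumental conclusion re-derived from that utility plus the current environment. The same unchanging utility function yields different ethical conclusions in different environments.

```python
# Toy illustration only: a fixed terminal utility with instrumental "ethical"
# rules derived per environment. Names (Environment, derive_policy, the welfare
# numbers) are made up for this sketch, not drawn from any real proposal.

from dataclasses import dataclass


@dataclass
class Environment:
    # How much lying erodes the trust this agent depends on, in this world.
    trust_penalty_for_lying: float


def terminal_utility(welfare: float) -> float:
    """The agent's fixed terminal value: total welfare produced. Never changes."""
    return welfare


def derive_policy(env: Environment) -> str:
    """Instrumental ethics: pick the rule that maximizes terminal utility here.

    The chosen rule is not a terminal value; it is recomputed whenever the
    environment changes, even though terminal_utility stays the same.
    """
    welfare_if_honest = 10.0                          # cooperation sustained by trust
    welfare_if_lying = 12.0 - env.trust_penalty_for_lying
    if terminal_utility(welfare_if_honest) >= terminal_utility(welfare_if_lying):
        return "always tell the truth"
    return "lie when convenient"


# Same utility function, different environments, different "ethical" conclusions:
print(derive_policy(Environment(trust_penalty_for_lying=5.0)))   # -> always tell the truth
print(derive_policy(Environment(trust_penalty_for_lying=0.5)))   # -> lie when convenient
```

The point of the toy is only that "never changes its utility function" does not imply "never changes its mind about ethics": the ethical conclusion can shift with the environment while the terminal value stays fixed.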