We act not for the sake of pleasure alone. We cannot solve the Friendly AI problem just by programming an AI to maximize pleasure.
IAWYC, but would like to hear more about why you think the last sentence is supported by the previous sentence. I don’t see an easy argument from “X is a terminal value for many people” to “X should be promoted by the FAI.” Are you supposing a sort of idealized desire fulfilment view about value? That’s fine—it’s a sensible enough view. I just wouldn’t have thought it so obvious that it would be a good idea to go around invisibly assuming it.