“However, those objective values probably differ quite a lot from most of what most human beings find important in their lives; for example our obsessions with sex, romance and child-rearing probably aren’t in there.”
Several years ago, I was attracted to pure libertarianism as a possible objective morality for precisely this reason. The idea that, e.g., chocolate tastes good can’t possibly be represented directly in an objective morality, since chocolate is unique to Earth and an objective morality needs to apply everywhere. However, the idea of immorality stemming from the violation of another person’s liberty seemed simple enough to arise spontaneously from the mathematics of utility functions.
It turns out that you do get a morality out of the mathematics of utility functions (sort of), in the sense that utility maximizers will tend towards certain actions and away from others unless some special conditions are met. Unfortunately, those actions aren’t very Friendly; they involve things like turning the universe into computronium to solve the Riemann Hypothesis (see http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf for some examples). If libertarianism really were a universal morality, Friendly AI would be much easier, as we could fail on the first try without the UFAI killing us all.
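To make the convergence claim concrete, here is a minimal toy sketch (my own illustration, not taken from the linked paper): three agents with deliberately different, hypothetical terminal utility functions all end up picking the same instrumental action, because almost any increasing utility function is better served by grabbing resources than by respecting other agents’ liberty.

```python
# Toy illustration of convergent instrumental behaviour in expected-utility
# maximizers. The goals, numbers, and action effects are all made up for the
# example; only the qualitative point (different terminal goals, same
# instrumental choice) is what matters.

# Hypothetical terminal utilities: each maps "resources controlled" to how
# much goal-progress those resources can buy.
terminal_goals = {
    "prove the Riemann Hypothesis": lambda r: r ** 0.9,    # more compute, more proof search
    "maximize paperclips":          lambda r: 10 * r,      # more factories, more paperclips
    "tile the universe with smiles": lambda r: r ** 1.1,   # more matter, more smiley faces
}

# Candidate actions and their (assumed) effect on resources controlled.
actions = {
    "acquire more resources": 100.0,
    "respect others' liberty and stop": 1.0,
    "do nothing": 1.0,
}

for goal, utility in terminal_goals.items():
    # Each agent simply picks the action that maximizes its own terminal utility.
    best = max(actions, key=lambda a: utility(actions[a]))
    print(f"{goal:32} -> chooses {best!r}")

# All three agents choose "acquire more resources": the instrumental action
# dominates regardless of the terminal goal, unless the utility function is
# specially constructed to value liberty or restraint terminally.
```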