@Tom McCabe: It turns out that you do get a morality out of the mathematics of utility functions (sort of), in the sense that agents maximizing utility functions will tend towards certain actions and away from others unless some special conditions are met. Unfortunately, these actions aren’t very Friendly; they involve things like turning the universe into computronium to solve the Riemann Hypothesis (see http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf for some examples).
I’m not actually focusing on the values/ethics/morality that you can get out of utility functions; I’m asking the more general question of what values/ethics/morality you can get out of the mathematics of any agent with goals interacting with an environment. Utility-maximizing agents are just one example of such agents.
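To make that concrete, here is a minimal toy sketch; it is my own illustration, not something from the linked paper, and the environment dynamics, numbers, and goal strings are all invented for the example. Two agents with different terminal goals face the same tiny environment, and we brute-force the plan that maximizes a simple utility for each:

```python
# Toy sketch (my own illustration, not from the linked paper): the environment
# dynamics below are invented. Each step the agent picks one of two actions:
# "acquire" doubles its resources, "work" converts current resources into
# progress on the terminal goal. We enumerate all fixed-length plans and pick
# the one that maximizes a simple utility (total goal progress).

from itertools import product

ACTIONS = ("acquire", "work")

def utility(plan, goal):
    """Utility of a plan: goal progress accumulated over the rollout.

    Note that `goal` never affects the dynamics -- that is the point:
    the optimal plan is fixed by the environment, not the terminal goal.
    """
    resources, progress = 1, 0
    for action in plan:
        if action == "acquire":
            resources *= 2          # acquiring compounds future capability
        else:
            progress += resources   # work output scales with resources held
    return progress

def best_plan(goal, horizon=4):
    """Enumerate every plan of the given length and pick the utility maximizer."""
    return max(product(ACTIONS, repeat=horizon), key=lambda p: utility(p, goal))

for goal in ("prove the Riemann Hypothesis", "make paperclips"):
    print(goal, "->", best_plan(goal))
# Both goals yield the same plan, ('acquire', 'acquire', 'acquire', 'work'):
# the drive toward acquisition emerges from the maximization itself.
```

Nothing about either terminal goal mentions resources, yet the maximizer front-loads acquiring them anyway; scaled up, this is the pattern behind the computronium scenario above.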
I think that the canonical set of instrumental values that Omohundro, Hollerith, and I have been talking about has perhaps been criticized more harshly than it deserves. To me, it seems that the four “basic drives” (Self-preservation, Acquisition, Efficiency, Creativity) embody precisely the best aspects of human civilization. Not the best aspects of individual behavior, mind you, which is a different problem.
But I think that we would all like to see a civilization that acquired more free energy and space, worked harder to preserve its own existence (I think some people at Oxford might have had a small gathering about that one recently), used its resources more efficiently, and strove for a greater degree of creativity. In fact, I cannot think of a more concise and general description of the goals of transhumanism than Omohundro’s basic AI drives, where the “agent” is our entire human civilization.