First of all, let me say that I’ve been busy today and thus apologize for the sporadic character of my replies. Now, to begin with the most shocking and blunt statements...
Fourth, there is probably no way to “extrapolate” values away from the organism. Your list of “terminal human values” would be full of statements like “Humans value sweet and salty tastes” and “Males value having their penises stroked.”
What’s the problem? Were you expecting something other than humanity to come through in your model of humanity? Your phrasing signals that you are looking down on both sex and the enjoyment of food, and that you view them as aesthetically and/or morally inferior to… what? To “nonhuman bodies”? To intellectual pursuits?
Do you think intellectual pursuits will not also have their place in a well-learned model of human preferences? Are you trying to signal some attachment to the Spiral instinct/will-to-power/tsuyoku naritai principle? But even if you terminally value the expansion of your own causal or optimization power, there are other things you terminally value as well; it is unwise to throw away the rest of your humanity for power. You’ll be missing out.
To repeat one of my overly-repeated catch phrases: cynicism and detachment are not innately virtuous or wise. If what real, live human beings actually want, in the limit of increasing information and reflection, is to spend existence indulging tastes you happen to find gauche or déclassé, from where are you deriving some kind of divine-command-style moral authority to tell everyone, including yourself, to want things other than what we actually want?
What rational grounds can you have to say that a universe of pleasures—high and low—and ongoing personal development, and ongoing social development, and creativity, and emotionally significant choices to make, and genuine, engaging challenges to meet, and other people to do it all with (yes, I am just listing Fun Theory Sequence entries because I can’t be bothered to be original at midnight)… is just not good enough for you, if it requires learning to conceptualize it all in a way that turns out to correspond to your original psychological structure rather than to a realm of Platonic Forms, since there turned out not to be any Platonic Forms?
Why do you feel guilty for not getting the approval of deities who don’t exist?
Any attempt by an AI to enforce these values would seem to require keeping the standard human body for the rest of the life of the Universe.
Or, and this is the neat bit, to create new kinds of nonhuman bodies, or nonbodily existence, that are more suited to what we value than our evolved human ones.
This is not, I think, what is most important for us to pass on to the Universe a billion years from now.
Simply put: why not?
Try enumerating examples of terminal values. You’ll find they are contradictory, that they change rapidly within individuals and across societies, that they are not constants of human history, and that they are very often things one would think we would rather eliminate from society than build a big AI to guarantee we keep forever.
Again: this is why we are trying to reduce the problem to cognitive algorithms, about which facts clearly exist, rather than leaving it at the level of “a theory is a collection of sentences written in first-order logic augmented with some primitive predicates”. The former is a scientific reality we can model and compute with; the latter is a cancerous bunch of Platonist nonsense slowly killing philosophy by metastasizing into whole subfields and replacing actual reductionist rigor with the illusion of mathematical formalism.
(The above is, of course, a personal opinion, which you can tell because of the extreme vehemence. But holy shit do I hate Platonism and all its attendant fake rigor.)
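If you want that contrast made concrete, here is a deliberately silly toy sketch of my own (purely illustrative; every name and number in it is made up, and nobody is proposing this as an actual model of human preferences): the first framing gives you an inert bag of sentences to quantify over, while the second gives you something you can actually run and gather facts about.

```python
# Toy contrast only; not a real proposal. All names and weights are invented.

# Framing 1: a "theory" as a pile of first-order-logic-ish sentences.
# Nothing here computes; at best you can prove theorems *about* the pile.
VALUE_SENTENCES = [
    "forall x. Human(x) -> Values(x, SweetTaste)",
    "forall x. Human(x) -> Values(x, Challenge)",
]

# Framing 2: a (cartoonishly simplified) preference model as an algorithm.
# Facts about it clearly exist: feed it inputs, measure its outputs.
def toy_preference_score(outcome: dict) -> float:
    """Score an outcome by summing weighted features the agent cares about."""
    weights = {"sweet_taste": 0.3, "challenge": 0.8, "social_contact": 0.9}
    return sum(weights.get(feature, 0.0) * degree
               for feature, degree in outcome.items())

# The second framing is something you can actually compute with:
print(toy_preference_score({"challenge": 1.0, "social_contact": 0.5}))  # 1.25
```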
Anyway, the rest I’ll have to answer in the morning, after a night’s sleep.