While the article shows with neat scientific references that it is possible to want something that we don’t end up liking, this is irrelevant to the problem of value in ethics, or in AI. One could just as well say, without any scientific studies, that a child may want to put their hand in the fire and end up not liking the experience. It is quite possible to want something by mistake. But it is not possible to like something by mistake, as far as I know. Unlike wanting, “liking” is valuable in itself.
Wanting is a bad thing according to Epicurus, for example; consider the Greek concepts of akrasia and ataraxia. Wanting does have instrumental value in motivating us, though it may itself feel bad.
Examine Yudkowsky, with his theory of Coherent Extrapolated Volition. He saw only the variance and not what is common to it: his stated goal has no abstract constant such as feeling good, but is instead “fulfilling every person’s extrapolated volition”, that is, what they would wish for if they had unlimited intelligence. This is a smarter version of preference utilitarianism. However, since people’s basic condition is essentially the same, there needn’t be this variance. It doesn’t matter that people like different flavors of ice cream; they all want it to taste good.
On the other hand, standard utilitarianism seems to see only the constancy and not take account of the variance, and it is criticized for this. It is like giving strawberry ice cream to everyone because Bentham thought it was the best flavor. Some people hate strawberry ice cream and want chocolate instead, and they criticize standard utilitarianism, and may even drift into ethical nihilism, over what are really flavor disputes. What does this translate into in terms of feelings? One could prefer love, rough sex, solitude, company, insight, meaningfulness, flow, pleasure, etc. to different extents, and value different sensory inputs differently, especially if one is an alien or a member of another species.
Ethics is real and objective in abstraction, and subjective in the mental interpretation of content. In other words, it is like an equation or algorithm with a free variable, which is the subject’s interpretation of feelings (which is just noise in the data), and an objective evaluation of it on an axis of good or bad, which corresponds to real moral value.
The free variable doesn’t mean that ethics is not objective. It is really noise in the data, caused by a causal chain longer than the one we should be considering. If we looked only at the hardware algorithm (or “molecular signature”, as David Pearce calls it) of good and bad feelings, we might see it as completely objective; but in humans there is a complex labyrinth between a given sensory stimulus and the output of this hardware algorithm of good and bad, such that the same stimulus may produce a different result in different organisms, because it first has to pass through a different labyrinth.
This is the reason for some variance in the preference of feelings (affective preference? experiential preference?), or, as one could also say, preferences in taste. Some people like strawberry and some prefer chocolate, but the end result in terms of good feelings is similarly valuable.
Since sentient experience seems to be all that matters (as opposed to, say, rocks), and within sentience the quality of the experience seems to be what matters, then in achieving value (quality of experience) there is still a variable: the variation in people’s tastes. This variation is not in the value itself (that is, feeling better) but in the particular tastes that are linked to it for each person. The value is still constant despite this variance (people may have different taste buds, but presented with the right stimuli they all end up feeling good or feeling bad).
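To make the equation-with-a-free-variable analogy concrete, here is a minimal sketch in Python (with hypothetical subjects and made-up numbers, not a claim about any real neuroscience): each subject’s “labyrinth” from stimulus to feeling is the free variable, while the evaluation of the resulting feeling on the good/bad axis is the same function for every subject.

```python
# Minimal sketch of the analogy, using hypothetical subjects and numbers.
# Each "labyrinth" maps the same stimuli to different felt qualities
# (the free variable); moral_value evaluates any felt quality on the
# same good/bad axis (the objective constant).

labyrinths = {
    "person_A": {"strawberry": +0.9, "chocolate": -0.2},
    "person_B": {"strawberry": -0.5, "chocolate": +0.8},
}

def moral_value(felt_quality: float) -> float:
    """Identical for every subject; only the feeling fed into it differs."""
    return felt_quality

for subject, labyrinth in labyrinths.items():
    for stimulus, felt in labyrinth.items():
        print(f"{subject}: {stimulus} -> value {moral_value(felt):+.1f}")
```

On this picture, the disagreement between the two subjects lives entirely inside their labyrinths; the value function itself never varies.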