The idea of “human values” is rather recent, taking shape in the middle of the 20th century. Before that, people used other models to predict the behaviour of those around them, e.g. the Freudian model of Id, Ego and Superego, or the Christian model of a soul choosing between rules and desires.
When you say the idea of human values is new, do you mean the idea that humans have values with regard to a utilitarian-ish ethics is new? Or do you mean the concept of humans rationally maximizing things (or some equivalent concept) is new? If it’s the latter I’d be surprised (but maybe I shouldn’t be?).
The father of utilitarianism, Bentham, who worked around 1800, calculated utility as a balance between pleasures and pains, without mentioning “human values”. He wrote: “Nature has placed mankind under the governance of two sovereign masters, pain and pleasure.”
The idea of “maximising things” seems to come even later. Wikipedia: “In Ethics (1912), Moore rejects a purely hedonistic utilitarianism and argues that there is a range of values that might be maximized.” But the “values” he wrote about are abstract ideals like love, not “human values”.
The next step came around 1977, when the idea of preference utilitarianism was formulated.
Another important piece here is the von Neumann–Morgenstern theorem, which connects an ordered set of preferences with a utility function.
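As a toy illustration of that connection (not the theorem itself): the theorem says that an agent whose preferences over lotteries satisfy the VNM axioms behaves as if it ranks lotteries by expected utility. The outcomes and utility numbers below are invented for the example.

```python
# Invented utility function over outcomes, just for illustration.
utility = {"apple": 1.0, "cake": 3.0, "nothing": 0.0}

def expected_utility(lottery):
    """lottery: dict mapping outcome -> probability."""
    return sum(p * utility[outcome] for outcome, p in lottery.items())

sure_apple = {"apple": 1.0}              # get an apple for certain
gamble = {"cake": 0.4, "nothing": 0.6}   # 40% cake, 60% nothing

# Expected utilities: 1.0 for the sure apple vs 0.4 * 3.0 = 1.2 for the
# gamble, so a VNM-rational agent with these utilities prefers the gamble.
prefers_gamble = expected_utility(gamble) > expected_utility(sure_apple)
```

The point is only that one numerical function can stand in for an entire preference ordering over risky choices, which is what makes “humans maximise a utility function” an expressible claim in the first place.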
So the idea that “humans have values which they maximise in a utilitarian way” formed rather slowly.
I was referring to “values” more like the second case. Consider the choice blindness experiments (which are well replicated). People think they value certain things in a partner, or in politics, but really it’s just a bias to model themselves as more agentic than they actually are.
The answer here is obvious, but let’s look at another example: should I eat an apple? The apple promises pleasure and I want it, but after I have eaten it, I don’t want to eat anything, as I am full. So the expected source of pleasure has shifted.
In other words, we have in some sense a bicameral mind: a conscious part which always follows pleasure, and an unconscious part which constantly changes the rewards depending on the person’s needs. If we want to learn a person’s preferences, we want to learn the rules for why rewards are given to some things and not to others. One person likes reading and another likes skiing.
And this is not a complete model of the mind, just an illustration of why reward alone is not enough to represent human values.
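The apple example can be sketched in a few lines of code. This is only a cartoon of the two-part picture above, with invented numbers: a “conscious” part that always picks the highest-reward action, and an “unconscious” part that re-weights rewards based on internal state (here, satiety).

```python
class Agent:
    def __init__(self):
        self.satiety = 0.0  # internal state the conscious part doesn't model

    def reward(self, action):
        # "Unconscious" part: eating is rewarding only while hungry.
        if action == "eat_apple":
            return max(0.0, 1.0 - self.satiety)
        return 0.1  # small baseline reward for resting

    def act(self):
        # "Conscious" part: a simple reward maximizer.
        action = max(["eat_apple", "rest"], key=self.reward)
        if action == "eat_apple":
            self.satiety = 1.0  # eating changes the internal state
        return action

agent = Agent()
first = agent.act()   # hungry: eating yields reward 1.0 > 0.1
second = agent.act()  # full: eating now yields 0.0 < 0.1, so the agent rests
```

The observed reward signal flips between the two steps even though nothing about the agent’s underlying preferences changed, which is the sense in which learning the momentary rewards is not the same as learning the rules that generate them.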