My possibly stupid question is: “Are some/all of LessWrong’s values manufactured?”
Robin Hanson brings up the plasticity of values. Humans exposed to spicy food and social conformity pressures rewire their brains to make the pain pleasurable. The jump from plastic qualia to plastic values is a big one, but it seems plausible. It seems likely that cultural prestige causes people to rewire activities like research and studying as interesting and pleasurable. Perhaps intellectual values and highbrow culture are entirely manufactured values. This seems mildly troubling to me, but it would explain why rationality and logic are so hard to come by. Perhaps the geek-to-nerd metamorphosis involves a more substantial utility function modification than merely acquiring a taste for something new.
Define manufactured? There isn’t really any “default” culture to compare current human values to, for which you could say that “these values are manufactured because they don’t manifest in the default culture”.
By “manufactured values” I meant artificial values coming from nurture rather than innate human nature. Obviously there are things we give terminal value, and things we give instrumental value. I meant to refer to a subset of our terminal values which we were not born with. That may be an empty set, if it is impossible to manufacture artificial values from scratch or from acquired tastes. Even if this is the case, that wouldn’t imply that instrumental values could not be constructed from terminal values as we learn about the world. There are four possible categories, and I meant only to refer to the last one:
Innate terminal values: “Being generous is innately good, and those who share are good people.” (Note: “generosity admired” is on the list of human universals, so it’s likely to be an innate value we are born with.)
Innate instrumental values: N/A (I don’t think there is anything in this category, because innate human values in babies precede the capacity to reason and develop instrumental values. Maybe certain aesthetic values don’t express themselves until a baby first opens its eyes, and so there could be reasoned instrumental values which are more “innate” than aesthetic values.)
Learned instrumental values: “Eating spicy food is good to do because it clears your sinuses.”
Learned terminal values (that is, “manufactured” values): “Bacteria suffering matters, even though I have no emotional connection to them, because of these abstract notions of fairness.” Or, alternatively: “Eating spicy food is a pure, virtuous activity in its own right rather than for some other reason. Those who partake are thus good people, and those who don’t are unclean and subhuman.” The former is merely extrapolated from existing values and dubbed a terminal value, while the latter arises from an artificially conditioned aesthetic.
To use a more LW-central example, those of us who favor epistemic rationality over instrumental rationality do so because true knowledge is a terminal value for us. If this value is a human universal, that would be strong evidence that every neurotypical baby is born valuing truth, and therefore that truth-seeking is an innate terminal value. If only a few cultures value truth, then it would seem more plausible that truth-seeking is either a manufactured terminal value or an instrumental value.
To test ideas like this, we can look at the terms on the list related to epistemic rationality: abstraction in speech & thought, classification, conjectural reasoning, interpolation, logical notions [there are several examples on the list], measuring, numerals (counting), overestimating objectivity of thought, semantics [several semantic categories are also on the list], true and false distinguished. So, either all cultures get a lot of value out of instrumental truth-seeking, or truth-seeking is an innate human value. Judging by the curiosity of children, I’m strongly inclined toward the latter. Perhaps LW users have refined and accentuated their innate human curiosity, but it certainly doesn’t seem like a manufactured value.
But it looks like you guys forced me to make my question specific enough that I could answer it empirically. I could just take each item on the list of the twelve virtues of rationality, or any other list I thought gave a good representation of LW values or intellectual values. Just cross-reference them against a couple lists of human universals and lists of traits of small children. If very small children display a value, it’s probably innate, but may be learned very early. If no infants have it but some/all adults do, it’s probably a learned value developed later in life. If it seems like it is probably a learned value, and seems subjectively to be a terminal value, then it is manufactured.
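A minimal sketch of what that cross-referencing might look like, assuming toy stand-ins for the real lists (the sets and example values below are illustrative placeholders of my own, not actual anthropological data):

```python
# Toy stand-ins for a list of human universals and a list of traits of small
# children; real data would come from the actual lists mentioned above.
HUMAN_UNIVERSALS = {"curiosity", "generosity admired", "true and false distinguished"}
SMALL_CHILD_TRAITS = {"curiosity"}

def classify_value(value: str, seems_terminal: bool) -> str:
    """Apply the heuristic from the comment above: innate vs. learned vs. manufactured."""
    if value in SMALL_CHILD_TRAITS:
        return "probably innate (though possibly learned very early)"
    if value in HUMAN_UNIVERSALS:
        return "universal, but needs child data to call it innate"
    if seems_terminal:
        return "learned terminal value, i.e. manufactured"
    return "learned instrumental value"

# Example: a few of the twelve virtues of rationality (terminal/instrumental
# judgments here are hypothetical inputs, supplied subjectively).
for value, terminal in [("curiosity", True), ("precision", True), ("scholarship", False)]:
    print(value, "->", classify_value(value, seems_terminal=terminal))
```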
Also, to be clear, just because something is manufactured doesn’t make it a bad thing. To say so is to commit the naturalistic fallacy. However, altering one’s utility function is scary. If we are going to replace our natural impulses with more refined values, we should do so carefully. Things like the trolley problem arguably segregate people who have replaced their default values with more abstract utilitarian notions (value all lives equally, regardless of in-group or a sense of duty). Extrapolating new values from existing ones doesn’t seem as dangerous as deriving them from acquired tastes.
By “manufactured values” I meant artificial values coming from nurture rather than innate human nature.
I don’t think that this distinction really cuts reality at the joints. In general, it’s my impression that researchers have been moving towards rejecting the whole nature/nurture distinction, as e.g. hinted at in the last paragraph of the Wikipedia article that you linked.
More specifically, as the Hanson article you linked to notes, the human mind seems pretty much built for a very large degree of value plasticity, and for being capable of adopting a wide range of values depending on its environment. That by itself starts to make the distinction suspect—if it’s easy for us to acquire new terminal values via nurture because our nature is one that easily adopts new kinds of values that come from nurture… then how do you tell whether some value came more from nurture or nature? If both were integral in the acquisition of this value, then it’s unclear whether the distinction makes any sense.
One way of looking at it: an artificial neural network can in principle learn to approximate an enormous range of functions. So you take an untrained network and teach it to classify points based on which side of the line y = 2x + 6 they fall on. Does the property of classifying things according to 2x + 6 come from nature or nurture? Arguably from nurture, since without that particular training data, the network would never have learned that specific function. But on the other hand, the capacity to learn it was in the untrained network’s nature all along, and the training didn’t shift the network away from some other function it would have computed by default: in the absence of any intervention from nurture, the network wouldn’t have learned to discriminate anything at all.
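To make the analogy concrete, here is a minimal runnable sketch, assuming a plain perceptron as a stand-in for the neural network (all names and data are my own illustration, not something from the comment): the untrained classifier is “nature,” the labeled points are “nurture,” and after training the learned boundary approximates y = 2x + 6.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Nurture": training data, labeled by which side of y = 2x + 6 each point is on.
X = rng.uniform(-10, 10, size=(500, 2))           # random (x, y) points
labels = (X[:, 1] > 2 * X[:, 0] + 6).astype(int)  # 1 if above the line

# "Nature": a generic linear classifier, capable of learning many boundaries.
w = np.zeros(2)
b = 0.0
lr = 0.01

# Standard perceptron updates: nudge the weights whenever a point is misclassified.
for _ in range(200):
    for point, label in zip(X, labels):
        pred = int(w @ point + b > 0)
        err = label - pred            # -1, 0, or +1
        w += lr * err * point
        b += lr * err

# The learned boundary w[0]*x + w[1]*y + b = 0 now approximates y = 2x + 6.
print("learned slope:", -w[0] / w[1], "learned intercept:", -b / w[1])
```

Nothing about the final boundary was “in” the untrained weights, but the training didn’t override any pre-existing boundary either: without the labeled data, the classifier never learns to separate anything.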
Similarly, without a culture surrounding us we’ll just end up as feral children (though arguably even feral children grow up in some culture, like an animal one). We’re clearly born with tendencies to manifest some values more readily than others, but in order for those tendencies to manifest, we also need a culture that manufactures things on top of them. Similarly, different neural net architectures make a net more predisposed to learn certain functions easily, but it still needs the environmental training data to determine which function is actually learned.
Similar to the neural net analogy—where the NN has the potential to learn an infinite number of different functions, and training data selects some part of that potential to teach it specific functions—Jonathan Haidt has argued that different cultures take part of the pre-existing potential for morality and then select parts of it, so that the latent “potential morality” becomes an actual concrete morality:
The acquisition of phonology provides a useful analogy for the acquisition of morality. Children are born with the ability to distinguish among hundreds of phonemes, but after a few years of exposure to a specific language they lose the ability to make some unexercised phoneme contrasts (Werker & Tees, 1984). Likewise, Ruth Benedict (1934/1959) suggested, we can imagine a great “arc of culture” on which are arrayed all the possible aspects of human functioning. “A culture that capitalized even a considerable proportion of these would be as unintelligible as a language that used all the clicks, all the glottal stops, all the labials” (Benedict, 1934/1959, p. 24).

Similarly, a culture that emphasized all of the moral intuitions that the human mind is prepared to experience would risk paralysis as every action triggered multiple conflicting intuitions. Cultures seem instead to specialize in a subset of human moral potential. For example, Shweder’s theory of the “big three” moral ethics (Shweder, Much, Mahapatra, & Park, 1997; see also Jensen, 1997) proposes that moral “goods” (i.e., culturally shared beliefs about what is morally admirable and valuable) generally cluster into three complexes, or ethics, which cultures embrace to varying degrees: the ethic of autonomy (focusing on goods that protect the autonomous individual, such as rights, freedom of choice, and personal welfare), the ethic of community (focusing on goods that protect families, nations, and other collectivities, such as loyalty, duty, honor, respectfulness, modesty, and self-control), and the ethic of divinity (focusing on goods that protect the spiritual self, such as piety and physical and mental purity). A child is born prepared to develop moral intuitions in all three ethics, but her local cultural environment generally stresses only one or two of the ethics. Intuitions within culturally supported ethics become sharper and more chronically accessible (Higgins, 1996), whereas intuitions within unsupported ethics become weaker and less accessible.

Such “maintenance-loss” models have been documented in other areas of human higher cognition. It seems to be a design feature of mammalian brains that much of neural development is “experience expectant” (Black, Jones, Nelson, & Greenough, 1998). That is, there are developmentally timed periods of high neural plasticity, as though the brain “expected” certain types of experience to be present at a certain time to guide its final wiring.
To apply your proposed test of taking a value and finding out how cross-cultural it is: consider appreciation of novels, movies, and video games. On one hand, you could argue that an appreciation of these things is clearly not a human universal, because cultures that haven’t yet invented them don’t value them. And there are cultures, such as the Amish, that reject at least some of these values. On the other hand, you could argue that an appreciation of these things comes naturally to humans, because they are all art forms that tap into our pre-existing value of appreciating stories and storytelling. But then, that still doesn’t prevent some cultures from rejecting these things...
First example that comes to my mind: Most cultures value killing their enemies (or something like that). However, LW culture prefers to find a way to make everyone happy (by inventing Friendly AI, donating to effective charity, etc.).
An uncharitable explanation would be that nerds are usually physically weak, and even if they happen to be strong individually, they would still be weak as a group (because most of them are weak as individuals, most people are not nerds, you cannot easily “convert” people into nerds, etc.)… so we have this “learned helplessness” about the basic human value of exterminating your enemies, and we deny having this value.
But if you changed the laws of the universe so that understanding equations let you shoot fireballs directly from your fingers (and Bayes’ rule were the most powerful fireball), LessWrong local groups would quickly turn into some kind of mage-Nazi militant groups, and we would all laugh diabolically at the pain of our enemies.
I think many of LW’s values are manufactured. I think you detect which are most manufactured by looking at the ones not widely held by other humans. Values like “you should get your head frozen when you die” are probably at the most manufactured end, as they are nearly unique to LW and fellow travelers. Values like polyamory are pretty manufactured but do show up in a larger minority of non-LW types than head-freezing does. Values like “a world with 3^^^3 created AIs in it that are a little happy is better than a world with 1 billion humans in it who are all living quite well” are manufactured: certainly held beyond LW, but plenty of people hold the opposite value, that a better world would have a sustainable, biologically human population.
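(For readers who haven’t seen the notation: 3^^^3 is Knuth’s up-arrow notation for a number far too large to write down. A small illustrative sketch of the notation, assuming the standard recursive definition, not part of the original comment:)

```python
def up_arrow(a: int, n: int, b: int) -> int:
    """Knuth's up-arrow: a followed by n arrows, then b."""
    if n == 1:
        return a ** b            # one arrow is ordinary exponentiation
    if b == 0:
        return 1                 # base case of the recursion
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))  # 3^3 = 27
print(up_arrow(3, 2, 3))  # 3^^3 = 3^(3^3) = 3^27 = 7625597484987
# 3^^^3 = 3^^(3^^3): a power tower of 7625597484987 threes -- far too
# large to actually compute.
```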
In my opinion, the human values that are not manufactured are the ones you are born with. They feel more like moral sentiments than coherently stated values because, I think, you aren’t born with ideas; you are born with tendencies to feel certain ways. From your moral sentiments, and discussions with other people, you build ideas that you think explain why you have those sentiments. But in my opinion, thinking your value-ideas account for your sentiments is like being attracted to another person and “thinking” that means they are attracted to you: it is a form of projection, a human bias, very helpful in propagating the species but not particularly well suited to accurately explaining how the world works.
It isn’t just LW values that are manufactured; in my opinion, all values expressed as ideas are manufactured. This is why they have to be taught in order to propagate: no particular set of values expressed as ideas arises spontaneously in a large number of humans.
I don’t think that “manufactured” is a useful word here. If I were to try to use it, I would say that any LessWrong value that you gained from LessWrong was “manufactured in you”. I would also say that any value commonly expressed on LessWrong has been shaped beyond the form in which it was originally conceived, and “manufactured” in this sense.
There is no real sense in which you can say that any value you hold is not manufactured, unless you are talking about values like eating and breathing.
P.S. As far as a universal human culture goes, we can say with some certainty that religion, for example, is part of human nature—but no specific god, church, or belief is. So any religious/spiritual views you hold are clearly manufactured; the extent to which you hold them or do not was “shaped” (which you may call manufactured or not).