It seems to me that all the more basic desires (“values”), e.g. the lower layers of Maslow’s hierarchy of needs, are mainly determined by heritable factors, since they are relatively stable across cultures. So presumably you are talking about “higher” values being a function of “data sources in the world”, i.e. of nurture rather than nature?
Another point I’d like to raise is that values (in the sense of desires/goals) are arguably quite different from morals. First, morals are more general than desires. Extraterrestrials could come up with a recognizable theory of, say, preference utilitarianism while not sharing many of our desires, e.g. for eating chocolate or for having social contact. Indeed, ethical theories like utilitarianism or Kantian deontology “abstract away” from specific desires by formulating more general principles that are independent of the concrete things individuals may want. Second, it is clearly possible and consistent for someone (e.g. a psychopath) to want X without believing that X is morally right. Conversely, philosophers arguing for some theory of ethics don’t necessarily adhere perfectly to its principles, just as a philosopher arguing for a theory of rationality isn’t necessarily perfectly rational himself.
I agree there are probably some heritable values, though my main disagreement here is that I think the set of primitive values is quite a bit smaller than you might expect.
Be warned, though: heritability doesn’t actually answer our question here, because the way laymen usually interpret it is pretty wrong:
https://www.lesswrong.com/posts/YpsGjsfT93aCkRHPh/what-does-knowing-the-heritability-of-a-trait-tell-me-in
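For concreteness, here is a minimal sketch (in Python, with made-up numbers) of the standard statistical point in this vicinity: heritability is a ratio of variances within a population, so a trait can be highly heritable and still shift wholesale under an environmental intervention.

```python
# A minimal sketch of why heritability is a population statistic, not a
# measure of how fixed a trait is. All numbers are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

genes = rng.normal(0.0, 1.0, n)   # genetic contribution to the trait
env = rng.normal(0.0, 0.5, n)     # environmental contribution
trait = genes + env               # phenotype

# Heritability: the share of phenotypic variance explained by genetic variance.
h2 = np.var(genes) / np.var(trait)
print(f"h^2 = {h2:.2f}")          # ~0.80, i.e. "highly heritable"

# A uniform environmental intervention shifts the whole population...
trait_after = genes + env + 2.0
print(f"mean before: {trait.mean():.2f}, after: {trait_after.mean():.2f}")

# ...while leaving heritability untouched, since no variance has changed.
h2_after = np.var(genes) / np.var(trait_after)
print(f"h^2 after intervention = {h2_after:.2f}")
```

So a high heritability estimate for some value or desire tells you how variance is apportioned in the current environment, not how malleable that value would be under a different upbringing.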
I probably should have separated more clearly between the formal ethical theories that people describe (which you call morals) and what their values actually are.
I was always referring to values when I was talking about morals.
You are correct that someone’s describing a moral theory doesn’t mean that they actually endorse or implement it.
I still think that if you had the degree of control over a human that an ML practitioner has over an AI today, you could brainwash them into holding ~arbitrary values, and that kind of control would become the central technology of political and social life, which is saying a lot.