But values would change with higher intelligence, wouldn't they? One's perspective on the world changes dramatically!
Well, yes and no. Perhaps it would be better if you looked into the relevant Sequences, so I don't have to reinvent the wheel here, but essentially: some things we value only as means to getting something else (and this is the part that may change dramatically when we gain more knowledge), but the chain cannot be infinite; it has to end somewhere.
For example, good food is a means to being healthy, and health is a means to living longer, feeling better, and being more attractive. With more knowledge, my opinion about which food is good or bad might change dramatically, but I would probably still value health, and I would certainly value feeling good.
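To make the means-ends chain concrete, here is a toy Python sketch (my own illustration; all the value names are just hypothetical examples, not anything from the Sequences). It follows an instrumental value down the chain until it bottoms out in terminal values:

    # Toy sketch (hypothetical value names): an instrumental value points to the
    # values it serves; terminal values point to nothing further.
    VALUE_GRAPH = {
        "good food": ["health"],
        "health": ["living longer", "feeling good", "being attractive"],
        "living longer": [],      # terminal
        "feeling good": [],       # terminal
        "being attractive": [],   # terminal
    }

    def terminal_values(value, graph=VALUE_GRAPH):
        """Follow the means-ends chain until it ends somewhere."""
        ends = graph.get(value, [])
        if not ends:              # nothing further: a terminal value
            return {value}
        result = set()
        for served in ends:
            result |= terminal_values(served, graph)
        return result

    print(terminal_values("good food"))
    # -> {'living longer', 'feeling good', 'being attractive'} (order may vary)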
So I would like the AI to recommend me the best food according to the best scientific knowledge (and in a Singularity scenario I assume the AI's knowledge is a thousand times better than mine), not based on what food I like now, because that is what I would do if I had the AI's intelligence and knowledge. However, I would appreciate it if the AI also cared about my other values, for example wanting to eat tasty food, so it would find the best way to make me enjoy the diet. What exactly would be the best way? There are many possibilities: for example, artificial food flavors, or hypnotizing me to like the new taste. Again, I would like the AI to pick the solution I would prefer if I were intelligent enough to understand the consequences of each choice.
There can be many steps of iteration, but they must be grounded in what I value now. Otherwise the AI could simply make me happy by stimulating the pleasure and desire centers of my brain, and afterwards I would indeed be happy with that treatment; the only argument against such a solution is that it strongly conflicts with my current values and probably cannot be derived from them merely by giving me more knowledge.
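If it helps, here is a toy sketch of that grounding step (my own construction, with hypothetical value and option names, not anything from the CEV write-up): candidate interventions get re-evaluated with better knowledge, but each evaluation is scored against the values I hold now, so the wireheading option loses even though the modified person would endorse it afterwards.

    # Toy sketch (hypothetical names throughout): score options against the
    # values I hold *now*, rejecting anything that directly conflicts with them.
    CURRENT_VALUES = {"health", "tasty food", "feeling good", "staying myself"}

    candidates = [
        # (option, values it serves, values it violates)
        ("best diet plus artificial flavors", {"health", "tasty food", "feeling good"}, set()),
        ("hypnosis to like the new taste", {"health", "feeling good"}, set()),
        ("stimulate pleasure centers directly", {"feeling good"}, {"staying myself"}),
    ]

    def grounded_choice(options, current_values):
        """Drop options that conflict with current values, then pick the one
        serving the most of them. A real extrapolation would iterate this with
        an ever-better model of consequences; the grounding step stays the same."""
        acceptable = [o for o in options if not (o[2] & current_values)]
        return max(acceptable, key=lambda o: len(o[1] & current_values))

    print(grounded_choice(candidates, CURRENT_VALUES)[0])
    # -> best diet plus artificial flavors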
Of course this whole concept has some unclear parts and has drawn criticism; both are discussed in separate articles on this site.
Oh, I'd love it if you were so kind as to link me there. Although the issues you pointed out weren't at all what I had in mind. What I wanted to convey is that I understand that the more intelligent one is, the more one values using one's intelligence, and the pleasures, achievements, and sense of personal importance one can derive from it. One can also grow uninterested in, if not outright contemptuous of, pursuits that are less intellectual in nature. Also, one grows more tolerant of difference, and more individualistic, as one needs less and less to trust ad-hoc rules and can actually rely on one's own judgement. Relatively unintelligent people reciprocate the feeling, mistrust the intelligent, and place more value on what they themselves can achieve. It's a very self-serving form of bias, but not one that can be resolved with more intelligence, I think.
Oops, I just realized that CEV is not a sequence.
So, here is the definition… and the follow-up discussions are probably scattered across the comments of many posts on this site. I remember reading more about it, but unfortunately I don't remember where.
Generally, I think it is difficult to predict what we would value if we were more intelligent. Sure, there seems to be a trend towards more intellectual pursuits. But many highly educated people also enjoy sex or chocolate. So maybe we are not moving away from bodily pleasures, just expanding the range.