But the values would change with a higher intelligence, wouldn’t they? The perspective on the world changes dramatically!
Well, yes and no. Perhaps it would be better if you looked into the relevant Sequences, so I don’t have to reinvent the wheel here, but essentially: some things we value only as means to get something else (and this is the part which may change dramatically when we gain more knowledge), but the chain cannot be infinite; it has to end somewhere, in things we value for their own sake.
For example, good food is a means to health, and health is a means to living longer, feeling better, and being more attractive. With more knowledge, my opinion about which foods are good or bad might change dramatically, but I would probably still value health, and I would certainly still value feeling good.
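To make the chain structure concrete, here is a toy sketch (my own illustration, with made-up values and links, not anything from the CEV write-up): following the "is a means to" links must eventually bottom out in a terminal value, something valued for its own sake.

```python
# Toy illustration: an instrumental-value chain must terminate in a
# terminal value. All values and links here are made up.
INSTRUMENTAL = {             # "X is a means to Y"
    "good food": "health",
    "health": "feeling good",
}
TERMINAL = {"feeling good"}  # valued for its own sake

def terminal_value_of(value):
    """Follow the means-end chain until it bottoms out."""
    seen = set()
    while value not in TERMINAL:
        if value in seen or value not in INSTRUMENTAL:
            raise ValueError("chain from %r never terminates" % value)
        seen.add(value)
        # more knowledge may rewrite these links, but not remove the endpoint
        value = INSTRUMENTAL[value]
    return value

print(terminal_value_of("good food"))  # -> feeling good
```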
So I would like the AI to recommend me the best food according to the best scientific knowledge (and in a Singularity scenario I assume the AI’s knowledge is a thousand times better than mine), not based on what food I like now, because this is what I would do if I had the AI’s intelligence and knowledge. However, I would appreciate it if the AI also cared about my other values, for example wanting to eat tasty food, so it would find the best way to make me enjoy the diet. What exactly would be the best way? There are many possibilities: for example, artificial food flavors, or hypnotizing me to like the new taste. Again, I would like the AI to pick the solution that I would prefer, if I were intelligent enough to understand the consequences of each choice.
There can be many steps of iteration, but they must be grounded in what I value now. Otherwise the AI could simply make me happy by stimulating the pleasure and desire centers of my brain, and I would indeed be happy with that treatment; the only argument against such a solution is that it is in strong conflict with my current values and probably cannot be derived from them merely by giving me more knowledge.
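Here is a minimal sketch of what I mean by grounding in current values; every name and number in it is invented for illustration. Better knowledge is used only to predict the consequences of each option, while the options are always scored against the values I hold now, which is why the wireheading option loses even though it maximizes pleasure.

```python
# Toy sketch, all numbers invented: better knowledge reveals the real
# consequences of each option, but options are always scored against the
# values we start with, so wireheading loses despite maximizing pleasure.

CURRENT_VALUES = {"health": 1.0, "tasty food": 0.5, "autonomy": 1.5}

# What each option would actually deliver on each value, as a much more
# knowledgeable AI would predict it.
CONSEQUENCES = {
    "science-based diet, made tasty": {
        "health": 0.9, "tasty food": 0.8, "autonomy": 1.0},
    "keep my current diet": {
        "health": 0.3, "tasty food": 0.9, "autonomy": 1.0},
    "stimulate my pleasure centers": {
        "health": 0.1, "tasty food": 1.0, "autonomy": 0.0},
}

def endorsed_choice():
    """Score each option's predicted consequences against the values I
    hold now, and pick the option my current values endorse."""
    def score(option):
        outcome = CONSEQUENCES[option]
        return sum(w * outcome.get(v, 0.0)
                   for v, w in CURRENT_VALUES.items())
    return max(CONSEQUENCES, key=score)

print(endorsed_choice())  # -> science-based diet, made tasty
```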
Of course this whole concept has some unclear parts and has drawn criticism; both are discussed in separate articles on this site.
Oh, I’d love it if you were so kind as to link me there. Although the issues you pointed out weren’t at all what I had in mind. What I wanted to convey is that I understand that the more intelligent one is, the more one values using one’s intelligence and the pleasures, achievements, and sense of personal importance that one can derive from it. One can also grow uninterested in, if not outright contemptuous of, pursuits that are not as intellectual in nature. Also, one grows more tolerant of difference, and more individualistic, as one needs less and less to trust ad-hoc rules and can actually rely on one’s own judgement. Relatively unintelligent people reciprocate the feeling: they show mistrust towards the intelligent and place more value on what they themselves can achieve. It’s a very self-serving form of bias, but not one that can be resolved with more intelligence, I think.
Oops, I just realized that CEV is not a Sequence.
So, here is the definition… and the follow-up discussions are probably scattered across the comments of many posts on this site. I remember reading more about it, but unfortunately I don’t remember where.
Generally, I think it is difficult to predict what we would value if we were more intelligent. Sure, there seems to be a trend towards more intellectual pursuits. But many highly educated people also enjoy sex or chocolate. So maybe we are not moving away from bodily pleasures, just expanding the range.
Yes, which is precisely why CEV proponents think a constrained structure of this form is necessary: they are trying to solve the problem of getting the benefits of superintelligence while keeping current values fixed, rather than trusting their future to whatever values a superintelligence (an AI, an intelligence-augmented human being, or whatever) might end up with on its own.
So it’s kind of like the American Constitution?
Well, it shares with the U.S. Constitution (and many other constitutions) the property of being intended to keep certain values fixed over time, I suppose. Is that what you meant? I don’t consider that a terribly strong similarity, but, sure.
I find the US Constitution remarkable in its sheer longevity, and in how well designed it was that it can still be used at this point in time. Compare and contrast with the French and Spanish constitutions throughout the 19th and 20th centuries, which changed with every new regime, sometimes with every new party. Those constitutions tended to be fairly detailed and restrictive, and not written with eternity in mind. I still used to prefer the latest versions of those, because they tended to be explicitly Human Rights Compliant (TM), and I found the Bill of Rights and the Amendments fairly incomplete and outdated in that regard. But the US Constitution has been growing on me as of late.
Anyway, yes, the similarity I draw is that both are protocols and guidelines that are intended to outlast their creators far, far into the future, and still be useful to people much more intelligent and knowledgeable than the creators, to be applied to much more complex problems than the creators ever faced.
The U.S. Constitution still has its problems (the Electoral College turned out to be a stupid idea, and the requirement that each state have equal representation in the Senate is also problematic), but it seems to have worked well enough...
You’d expect CEV’s performance to be within those parameters. But I have one question: when can one decide to abolish either of those and replace it with a new system entirely? Sometimes it is better to restart from scratch.
This certainly isn’t the time. The two problems CronoDAS mentioned are at most mildly annoying; it isn’t worth destroying a powerful and useful Schelling point merely to fix them.
Another wonderful line I’ve got to use someday.