The part about non-voters was not supposed to be about facts, but about rationalizations. Whenever someone loses an election, they can imagine that they would have won if all the people had voted. This is how one keeps their faith in democracy despite seeing their ideas lose in democratic elections.
I guess the typical mind fallacy strongly contributes to democracy worship. If I believe that most people have the same opinions as me, then a majority vote should bring victory to my opinions. When that does not happen, then unless I want to give up the fallacy, I have to come up with an explanation for why the experimental data don’t match my theory: for example, most people had the same opinion as me, but some of them were too lazy to vote, so that is why we lost. Or they were manipulated, but next time they will see the truth just as clearly as I do. And then, sometimes, like when looking at the voting for Islamist parties, it’s just: WTF, I can’t even find a plausible rationalization for this!
Human minds are prone to dividing all humans into two basic categories: us and them. If someone is in the “us” category, we assume they are exactly like us. If someone is in the “them” category, then they are evil, they hate us, and that’s why we (despite being good and peaceful people) should destroy them before they destroy us. Whatever education we get, these two extremes still attract our thinking. In recent decades we have learned that other humans are humans too, but this causes us to underestimate the differences, and it always brings a big surprise when those other humans, despite being humans like us, decide on something different than we would.
Apparently the Human Rights Declaration of 1948 is the be-all and end-all of governmental morality. Except in the USA, “because they are weird like that” (and that’s the charitable memetic explanation).
In the USA they already have the Bill of Rights. Despite the differences, it seems to me that both documents inhabit the same memetic niche (that is: an officially recognized and worshiped document which you can quote against your government and against the majority vote).
What’s the CEV concept, again?
Here. In short: “our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together”, extrapolated by a super-humanly intelligent machine. It is proposed as a solution to the problem of what we should ask such a machine to do, assuming that the machine is smarter than us and we don’t want to get burned by our own stupidity. Something like: my true wish is what I would have wished if I had my values and your superior intelligence; plus the assumption that sufficiently intelligent humans could together agree on a mutually satisfying solution, and that a super-human intelligence should be able to find this solution.
The part about non-voters was not supposed to be about facts, but about rationalizations. Whenever someone loses an election, they can imagine that they would have won if all the people had voted.
Another popular rationalization is that my side would have won if it wasn’t for the biased media misinforming the public. I suppose that’s also similar to CEV.
Another wonderful line I’ve got to use someday.
But the values would change with higher intelligence, wouldn’t they? One’s perspective on the world changes dramatically!
Well, yes and no. Perhaps it would be better if you looked into the relevant Sequences, so I don’t have to reinvent the wheel here, but essentially: some things we value only as means to get something else, and this is the part which may change dramatically when we get more knowledge; but it cannot be an infinite chain, it has to end somewhere.
For example, good food is a tool for being healthy, and health is a tool for living longer, feeling better, and being more attractive. With more knowledge, my opinion about good and bad food might change dramatically, but I would probably still value health, and I would certainly value feeling good.
So I would like the AI to recommend the best food according to the best scientific knowledge (and in a Singularity scenario I assume the AI has a thousand times better knowledge than me), not based on what food I like now, because this is what I would do if I had the AI’s intelligence and knowledge. However, I would appreciate it if the AI also cared about my other values, for example wanting to eat tasty food, so it would find the best way to make me enjoy the diet. What exactly would be the best way? There are many possibilities: for example, artificial food flavors, or hypnotizing me to like the new taste. Again, I would like the AI to pick the solution that I would prefer if I were intelligent enough to understand the consequences of each choice.
There can be many steps of iteration, but they must be grounded in what I value now. Otherwise the AI could simply make me happy by stimulating the pleasure and desire centers of my brain, and I would indeed be happy with that treatment; the only argument against such a solution is that it is in strong conflict with my current values and probably cannot be derived from them by merely giving me more knowledge.
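To make that last point a bit more concrete, here is a minimal toy sketch in Python. This is only my own illustration, not the actual CEV proposal; the value names, plan names, and numbers are all made up. The idea it tries to show is that the agent predicts consequences with its superior knowledge, but scores them against the values I hold now, which is why the wireheading option loses even though it would maximize my happiness afterwards.

```python
# Toy sketch only: pick the plan that best satisfies my CURRENT terminal values,
# with the consequences of each plan predicted by the AI's (much better) model.
# All names and numbers below are hypothetical, invented for illustration.

# My current terminal values and how much I weight them.
current_values = {"health": 0.4, "enjoyment": 0.3, "autonomy": 0.3}

# The AI's predicted outcome of each candidate plan, scored 0..1 on each value.
predicted_outcomes = {
    "optimal_diet_with_better_flavors": {"health": 0.9, "enjoyment": 0.8, "autonomy": 0.9},
    "hypnotize_me_to_like_the_diet":    {"health": 0.9, "enjoyment": 0.9, "autonomy": 0.4},
    "stimulate_pleasure_centers":       {"health": 0.2, "enjoyment": 1.0, "autonomy": 0.1},
}

def score(outcome: dict, values: dict) -> float:
    """Weight a predicted outcome by the values I hold *now*, not the values
    I would hold after the plan has been carried out."""
    return sum(weight * outcome[name] for name, weight in values.items())

best_plan = max(predicted_outcomes,
                key=lambda plan: score(predicted_outcomes[plan], current_values))
print(best_plan)  # -> "optimal_diet_with_better_flavors"; wireheading scores worst
```

The real proposal of course involves iterating this (“if I knew more, thought faster”), but the grounding in present values is the part that rules out simply rewiring my preferences.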
Of course this whole concept has some unclear parts and has received criticism; these are discussed in separate articles on this site.
Oh, I’d love it if you were so kind as to link me there. Although the issues you pointed out weren’t at all what I had in mind. What I wanted to convey is that I understand that the more intelligent one is, the more one values using one’s intelligence, and the pleasures and achievements and sense of personal importance that one can derive from it. One can also grow uninterested in, if not outright contemptuous of, pursuits that are not as intellectual in nature. Also, one grows more tolerant of difference, and more individualistic, as one needs less and less to trust ad-hoc rules and can actually rely on one’s own judgement. Relatively unintelligent people reciprocate the feeling, show mistrust towards the intelligent, and place more value on what they can achieve. It’s a very self-serving form of bias, but not one that can be resolved with more intelligence, I think.
Oops, now I realize that CEV is not a sequence.
So, here is the definition… and the following discussions are probably scattered in the comments of many posts on this site. I remember reading more about it, but unfortunately I don’t remember where.
Generally, I think it is difficult to predict what we would value if we were more intelligent. Sure, there seems to be a trend towards more intellectual pursuits. But many highly educated people also enjoy sex or chocolate. So maybe we are not moving away from bodily pleasures, just expanding the range.
Yes, which is precisely why CEV proponents think a constrained structure of this form is necessary… they are trying to solve the problem of getting the benefits of superintelligence while keeping current values fixed, rather than trusting their future to whatever values a superintelligence (e.g., an AI or an intelligence-augmented human being or whatever) might end up with on its own.
So it’s kind of like the American Constitution?
Well, it shares with the U.S. Constitution (and many other constitutions) the property of being intended to keep certain values fixed over time, I suppose. Is that what you meant? I don’t consider that a terribly strong similarity, but, sure.
I find the US Constitution remarkable in its sheer longevity, and in how well it was designed, such that it can still be used at this point in time. Compare and contrast with the French and Spanish constitutions throughout the 19th and 20th centuries, which changed with every new regime, sometimes with every new party. Those constitutions tended to be fairly detailed and restrictive, and not written with eternity in mind. I used to prefer the latest versions of those because they tended to be explicitly Human Rights Compliant (TM), and I found the Bill of Rights and the Amendments fairly incomplete and outdated in that regard. But it’s been growing on me as of late.
Anyway, yes, the similarity I draw is that both are protocols and guidelines that are intended to outlast their creators far, far into the future, and still be useful to people much more intelligent and knowledgeable than the creators, to be applied to much more complex problems than the creators ever faced.
The U.S. Constitution still has its problems (the Electoral College turned out to be a stupid idea, and the requirement that each state have equal representation in the Senate is also problematic), but it seems to have worked well enough...
You’d expect the CEV’s performance to be within those parameters. But I have one question: when can one decide to abolish either of those, and replace it with a new system entirely? Sometimes it is better to restart from scratch.
This certainly isn’t the time. The two problems CronoDAS mentioned are at most mildly annoying; it isn’t worth destroying a powerful and useful Schelling point merely to fix them.