Are you saying that a lot of people would hit on “Pope and Dalai Lama” for the initial extrapolation, or that that would be a good idea?
A Schelling point is a solution that people can agree on without any prior communication, based only on their general knowledge.
Imagine that at this very moment there is an alien invasion of Earth. The aliens are generally benevolent; being incredibly advanced, they want to give us technology and whatever else. They want to speak with the “morality spokesman” of humankind… and are deeply horrified to learn that we have no such official function. For a short moment they contemplate exterminating this immoral species, but then they decide to give us the benefit of the doubt: even if we have no official global morality and no official global moral leaders, we are obviously capable of moral thinking, and our individual moralities are significantly correlated, so perhaps they could help humanity make the next necessary step.
All humans are put into bubbles so they cannot communicate with each other. Then everyone must say who, in their opinion, is the “morality spokesman” of humankind (it can be an individual or a group of fewer than a thousand members). The majority vote wins. Then everyone who did not vote for the winner is killed. At that moment, the “morality spokesman” receives the technology from the aliens and decides how humanity should use it.
Perhaps you, as a rational person, have serious doubts about this whole process. You don’t understand why humanity should have one or a few “morality spokesmen”, why they should be selected by this kind of voting, or why everyone else should die. I agree with you; but we don’t make the rules, the aliens do. So you try to survive the election. Who would you vote for?
(Part of this story is a metaphor for super-human AI. The part about killing those who disagree with the majority vote serves to illustrate the Schelling point: you are supposed to answer the question not as you want, but as you guess other people will. Some of them will give an honest answer, but many will try to survive just as you do.)
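The strategic-voting mechanism above can be sketched as a toy simulation. Everything here is made up for illustration: the candidate list, the preference weights, the fraction of honest voters, and the assumption that all strategic voters share a common guess at the focal candidate.

```python
import random
from collections import Counter

def run_election(n_voters=10_000, honest_fraction=0.2, seed=0):
    """Toy model of the alien election: honest voters name their personal
    favorite, strategic voters name whoever they believe is the focal
    (Schelling) candidate. Majority wins; everyone who voted otherwise dies."""
    rng = random.Random(seed)
    # Hypothetical shares of honest first preferences (invented for this sketch).
    honest_prefs = ["Pope", "Dalai Lama", "UN Secretary-General", "My grandmother"]
    weights = [0.4, 0.2, 0.2, 0.2]
    focal = "Pope"  # common-knowledge guess at the Schelling point

    votes = []
    for _ in range(n_voters):
        if rng.random() < honest_fraction:
            votes.append(rng.choices(honest_prefs, weights)[0])  # honest answer
        else:
            votes.append(focal)  # strategic answer: vote for the expected winner
    winner, count = Counter(votes).most_common(1)[0]
    return winner, count / n_voters

winner, survivors = run_election()
print(winner, f"{survivors:.0%} of voters survive")
```

Even with only a modest plurality of honest support, the common-knowledge focal candidate wins overwhelmingly, because strategic voters coordinate on him; that is the Schelling-point dynamic the thought experiment is pointing at.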
I suspect that (at least in the Western world) “Pope” and “Dalai Lama” would be the most frequent answers. If you disagree, name your candidates. (Note: You cannot vote for Mother Teresa; she is dead.)
I agree with what you’re saying here: if my goal were to survive, I would pick the Pope. Though I’m not sure how much I’d want to live in a world based on the Pope’s EV. Also, I think the whole point is moot, because the FAI programmers don’t have to pick a Schelling point. They can go universal, or pick a random sample, or call for volunteers, or call for volunteers with some screening test to weed out sociopaths.
I think we can agree on what I said in the grandparent: the Pope would be the biggest one-person Schelling point, and it’s not a good choice for the initial dynamic.
I think we can agree on what I said in the grandparent: the Pope would be the biggest one-person Schelling point, and it’s not a good choice for the initial dynamic.
Actually, it might not be that bad. The “theology of the body” thing they have going might, however, mess up the transhumanist aspirations I have, and that would suck. But otherwise I expect a world mostly free of disease and poverty, with much longer (but perhaps finite) lifespans, where Western traditional values are given a boost.
That’s pretty close to utopia compared with most other outcomes. It would certainly be a more pleasant place to live than Robin Hanson’s em world.
the Pope would be the biggest one-person Schelling point
Yes, exactly.
and it’s not a good choice for the initial dynamic
And therefore, choosing a Schelling point for morality as the base of CEV is probably not as good an idea as it may seem. Unless one believes that ten-person or hundred-person Schelling points for morality would bring dramatically different results.
(And this is basically what I was trying to express in the comment that got so many negative points. The Pope could be a Schelling point, the Dalai Lama could be a Schelling point… Eliezer Yudkowsky would be a Schelling point inside the LW community, but not outside it.)
I suspect that (at least in the Western world) “Pope” and “Dalai Lama” would be the most frequent answers.
The “Western world” is a small portion of mankind, and in this scenario all of mankind counts. I cannot see even one Western person in a hundred remembering the Dalai Lama when facing death; and as for the rest of the world, the few who have heard of him (Tibetan Buddhists excepted) would not appreciate his morality in the slightest.
My vote goes to the Pope: Roman Catholics are the largest religious group worldwide. The result of your gedankenexperiment is a fully Catholic world and a Crusade declared against the alien scum.
The result of your gedankenexperiment is a fully Catholic world and a Crusade declared against the alien scum.
It’s extrapolated volition that matters, not current volition. If the Pope had the same beliefs about facts that we do, his most important difference from most of us might well be something like old age.
In the thought experiment I would also likely vote for the Pope, since he seems by far the most likely candidate to win, and also would not be a moral leader so bad that I wouldn’t want to live in that world.
The result of your gedankenexperiment is a fully Catholic world
Not actually true; I’m sure lots of educated people would guess that the Pope is likely to win the election and vote accordingly. I’m also pretty sure many non-Catholic Christians would decide he is the best pick likely to win.
I’m also pretty sure that, almost instantly after the calamity, lots of humans would start worshipping the aliens.
and a Crusade declared against the alien scum.
Unlikely to happen, because of how suicidal that would be; most Popes, being intelligent people, would realize this and would, I think, encourage the “turn the other cheek” memes to deal with the grief and outrage. However, a few billion deaths might animate a mankind aware of the cost in powerful and difficult-to-control ways.
How would malevolent aliens behave? :-P
They would kill us, or worse, without giving us any chance.
Just like a super-human AI not designed for friendliness will probably kill us, or worse. An AI designed for friendliness will still need some choices from us (for example, whether to use the CEV of humankind, and how to approximate it if we can’t measure literally every person on the planet), and a bad choice could have horrible consequences.