This sounds to me like an irresistible-force/immovable-object problem: two people focused on different (large or intense) aspects of a problem disagree. The real solution is to reframe the problem as a balance of considerations.
As I understand it, on the one hand, there are the arguments (e.g. Eliezer Yudkowsky’s document “Creating Friendly AI”) that technological progress is mostly not stoppable, and that enthusiasts and tinkerers are accidentally going to build recursively self-improving entities that probably do not share your values. On the other hand, striving to conquer the world and impose one’s values by force is (I hope we agree) a reprehensible thing to do.
If, for example, there was a natural law that all superintelligences must necessarily have essentially the same ethical system, then that would tip the balance against striving to conquer the world. In this hypothetical world, enthusiasts and tinkerers may succeed, but they wouldn’t do any harm. John C. Wright posits this in his Golden Transcendence books and EY thought this was the case earlier in his life.
If there was a natural law that there’s some sort of upper bound on the rate of recursive self-improvement, and the world as a whole (and the world economy in particular) is already at the maximum rate, then that would also tip the balance against striving to conquer the world. In this hypothetical world, the world as a whole will continue to be more powerful than the tinkerers, the enthusiasts, and you. Robin Hanson might believe some variant of this scenario.
On the other hand, striving to conquer the world and impose one’s values by force is (I hope we agree) a reprehensible thing to do.
Not at all. It’s the only truly valuable thing to do. If I thought I had even a tiny chance of succeeding, or if I had any concrete plan, I would definitely try to build a singleton that would conquer the world.
I hope that the values I would impose in such a case are sufficiently similar to yours, and to (almost) every other human’s, that the disadvantage of being ruled by someone else would be balanced for you by the safety from ever being ruled by someone you really wouldn’t like.
A significant part of the past discussion here and in other singularity-related forums has been about verifying that our values are in fact compatible in this way. This is a necessary condition for community efforts.
If, for example, there was a natural law that all superintelligences must necessarily have essentially the same ethical system, then that would tip the balance against striving to conquer the world.
But there isn’t, so why bring it up? Unless you have a reason to think some other condition holds that changes the balance in some way. Saying some condition might hold isn’t enough. And if some such condition does hold, we’ll encounter it anyway while trying to conquer the world, so no harm done :-)
If there was a natural law that there’s some sort of upper bound on the rate of recursive self-improvement, and the world as a whole (and the world economy in particular) is already at the maximum rate, then that would also tip the balance against striving to conquer the world. In this hypothetical world, the world as a whole will continue to be more powerful than the tinkerers, the enthusiasts, and you. Robin Hanson might believe some variant of this scenario.
I’m quite certain that we’re nowhere near such a hypothetical limit. Even if we are, this limit would have to be more or less exponential, and exponential curves with the right coefficients have a way of fooming that tends to surprise people. Where does Robin talk about this?
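As a rough numeric aside on that last point (a minimal sketch; the growth rates and helper below are invented for illustration, not taken from anyone in this thread): even if self-improvement were capped at “merely exponential” growth, a larger exponent still leaves a baseline exponential behind surprisingly fast.

```python
import math

def years_to_multiply(rate_per_year: float, factor: float) -> float:
    """Years for something growing at `rate_per_year` to grow by `factor`."""
    return math.log(factor) / math.log(1.0 + rate_per_year)

# Hypothetical rates, chosen only to show the shape of the comparison.
world_economy_rate = 0.04  # ~4% per year baseline growth
self_improver_rate = 1.00  # doubling every year

for label, rate in [("world economy", world_economy_rate),
                    ("self-improver", self_improver_rate)]:
    print(f"{label}: 1000x in about {years_to_multiply(rate, 1000):.0f} years")

# Prints roughly:
#   world economy: 1000x in about 176 years
#   self-improver: 1000x in about 10 years
```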
A significant part of the past discussion here and in other singularity-related forums has been about verifying that our values are in fact compatible in this way. This is a necessary condition for community efforts.
Not so much. Multiple FAIs of different values (cooperating in one world) are equivalent to one FAI of amalgamated values, so a community effort can be predicated on everyone getting their share (and, of course, that includes altruistic aspects of each person’s preference). See also Bayesians vs. Barbarians for an idea of when it would make sense to do something CEV-ish without an explicitly enforced contract.
You describe one form of compatibility.
How so? I don’t place restrictions on values, more than what’s obvious in normal human interaction.
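A toy sketch of the amalgamation claim above (the options, utilities, and equal weights here are invented purely for illustration): two agents with different values, each given a share, behave like a single agent optimizing the weighted sum of their utilities, which can select a compromise neither agent ranks first.

```python
options = ["A", "B", "C"]

def utility_1(option):  # hypothetical agent 1's values
    return {"A": 10, "B": 6, "C": 0}[option]

def utility_2(option):  # hypothetical agent 2's values
    return {"A": 0, "B": 7, "C": 10}[option]

def amalgam(option, w1=0.5, w2=0.5):  # equal "shares" assumed for illustration
    return w1 * utility_1(option) + w2 * utility_2(option)

print(max(options, key=amalgam))  # -> "B": the compromise maximizing the amalgam
```

Whether cooperating FAIs would in fact be equivalent to such a weighted amalgam is the claim under discussion here, not something this toy establishes.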
On the other hand, striving to conquer the world and impose one’s values by force is (I hope we agree) a reprehensible thing to do.
No it’s not. We are talking about “my values”: if I believe it’s improper to impose them using procedure X, then part of “my values” is that procedure X shouldn’t be performed, and so using procedure X to impose my values is unacceptable (not a valid subgoal of “imposing my values”). Whatever means are used to “impose my values” must themselves be good according to my values. Thus, not implementing the dark aspects of “conquering the world”, such as doing it “by force”, is itself part of “conquering the world” as an instrumental action for achieving one’s goals. You create a singleton that chooses to be nice to the conquered.
There is also a perhaps much more important aspect, protection from mistakes: even if I were the only person in the world, and not in immediate danger from anything, it would still make sense to create a “singleton” that governs my own actions. Thus the intuition for CEV, where you specify a singleton for everyone, not particularly preferring any given people.
Possibly you’re using technical jargon here. When non-LessWrong-reading humans talk about one person imposing their values on everyone else, they would generally consider it immoral. Are we in agreement here?
Now, I could understand your statement (“No it’s not”) in either of two ways: either you believe they’re mistaken about whether the action is immoral, or you are using a different (technical jargon) sense of the words involved. Which is it?
My guess is that you’re using a technical sense of “values”, which includes something like the various clauses enumerated in EY’s description of CEV: “volition is what we wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together, …”.
If by “values” you include those things that you don’t think you value now but you would value if you had more knowledge of them, or would be persuaded to value by a peer if you hadn’t conquered the world and therefore eliminated all of your peers, then perhaps I can see what you’re trying to say.
By talking about “imposing your own values” without all of the additional extrapolated volition clauses, you’re committing an error of moral overconfidence, something which has caused vast amounts of unpleasantness throughout human history.
http://www.overcomingbias.com/2009/01/moral-uncertainty-towards-a-solution.html
When non-LessWrong-reading humans talk about one person imposing their values on everyone else, they would generally consider it immoral. Are we in agreement here?
Not at all. The morality of imposing my values on you depends entirely on what you were doing, or were going to do, before I forced you to behave nicely.
You may have misread that and answered a different question, something like “Is it moral?”. The quote is actually asking “Do non-LessWrong-reading humans generally consider it moral?”.
I answered the right quote.
Random examples: was the U.S. acting morally when it entered WW2 against the Nazis and imposed its values across Western Europe and in Japan? Is the average government acting morally when it forcefully collects taxes, enforcing its wealth-redistribution values? Or when it enforces most kinds of laws?
I think most people by far (to answer your question about non-LW-readers) support some value-imposing policies. Very few people are really pure personal-liberty non-interventionists. The morality of the act depends on the behavior being imposed, and on the default behavior that exists without such imposition.
It remains to stipulate that the government has a single person at its head who imposes his or her values on everyone else. Some governments do run this way; others approximate it.
Edit: What you may have meant to say is that the average non-LW-reading person, when hearing the phrase “one human imposing their values on everyone else”, will imagine some very evil and undesirable values, and conclude that the action is immoral. I agree with that; it’s all a matter of framing.
Of course, I’m talking about values as they should be, with moral mistakes filtered out, not as humans realistically enact them, especially when the situation creates systematic distortions, as is the case with granting absolute power.
Posts referring to necessary background for this discussion:
Ends Don’t Justify Means (Among Humans)
Not Taking Over the World