On the other hand, striving to conquer the world and impose one’s values by force is (I hope we agree) a reprehensible thing to do.
Not at all. It’s the only truly valuable thing to do. If I thought I had even a tiny chance of succeeding, or if I had any concrete plan, I would definitely try to build a singleton that would conquer the world.
I hope that the values I would impose in such a case are sufficiently similar to yours, and to (almost) every other human’s, that the disadvantage of being ruled by someone else would be balanced for you by the safety from ever being ruled by someone you really wouldn’t like.
A significant part of the past discussion here and in other singularity-related forums has been about verifying that our values are in fact compatible in this way. This is a necessary condition for community efforts.
If, for example, there were a natural law that all superintelligences must necessarily have essentially the same ethical system, then that would tip the balance against striving to conquer the world.
But there isn’t, so why bring it up? Unless you have a reason to think some other condition holds that changes the balance in some way. Saying some condition might hold isn’t enough. And if some such condition does hold, we’ll encounter it anyway while trying to conquer the world, so no harm done :-)
If there were a natural law imposing some sort of upper bound on the rate of recursive self-improvement, and the world as a whole (and the world economy in particular) were already at that maximum rate, then that would also tip the balance against striving to conquer the world. In this hypothetical world, the world as a whole would continue to be more powerful than the tinkerers, the enthusiasts, and you. Robin Hanson might believe some variant of this scenario.
I’m quite certain that we’re nowhere near such a hypothetical limit. Even if we are, this limit would have to be more or less exponential, and exponential curves with the right coefficients have a way of fooming that tends to surprise people. Where does Robin talk about this?
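The point about coefficients can be made concrete with a toy calculation (my own illustrative numbers, not anything from the thread): a process with a small base but a higher per-step growth rate overtakes a process with an enormous head start surprisingly fast.

```python
# Toy sketch: two exponential processes. The "world" starts a million
# times larger but grows 5% per step; the "upstart" grows 50% per step.
# The faster exponent wins despite the head start.

def size_after(initial, rate, steps):
    """Size of a quantity that grows by `rate` per step for `steps` steps."""
    return initial * (1 + rate) ** steps

world = size_after(1_000_000, 0.05, 100)  # large base, slow growth
upstart = size_after(1, 0.50, 100)        # tiny base, fast growth

print(upstart > world)  # True: the faster-growing process has overtaken
```

By step 100 the upstart is several orders of magnitude ahead, which is the sense in which exponentials with the right coefficients "foom" past intuition calibrated on the early part of the curve.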
A significant part of the past discussion here and in other singularity-related forums has been about verifying that our values are in fact compatible in this way. This is a necessary condition for community efforts.
Not so much. Multiple FAIs of different values (cooperating in one world) are equivalent to one FAI of amalgamated values, so a community effort can be predicated on everyone getting their share (and, of course, that includes altruistic aspects of each person’s preference). See also Bayesians vs. Barbarians for an idea of when it would make sense to do something CEV-ish without an explicitly enforced contract.
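The equivalence claimed above can be sketched as a weighted combination of utility functions (the agent names, weights, and outcome encoding below are my own illustration, not anything specified in the thread):

```python
# Toy sketch of "amalgamated values": several agents' utility functions
# are folded into a single utility function by weighting each agent's
# share. Cooperation among the separate agents then looks, from the
# outside, like optimization of the one combined function.

def amalgamate(utilities, weights):
    """Return one utility function: the weighted sum of the inputs."""
    def combined(outcome):
        return sum(w * u(outcome) for u, w in zip(utilities, weights))
    return combined

# Two agents who value different resources in an outcome.
u_alice = lambda o: o["paperclips"]
u_bob = lambda o: o["staples"]

u_joint = amalgamate([u_alice, u_bob], [0.5, 0.5])
print(u_joint({"paperclips": 10, "staples": 4}))  # 7.0
```

The weights stand in for each party's negotiated share; altruistic preferences are just terms in an individual's utility function that happen to reference other people's outcomes.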
You describe one form of compatibility.
How so? I don’t place restrictions on values beyond what’s obvious in normal human interaction.