Why? What about non-technological solutions?
Moloch appears wherever multiple agents have similar levels of power and different goals: any time you have agents of comparable capability and different utility functions, some form of Moloch appears.
With current tech, it would be very hard to give total power to one human. The power would have to be borrowed, in the sense that the dictator's real power is in setting a Nash equilibrium as a Schelling point. "Everyone do X, and kill anyone who breaks this rule" is a Nash equilibrium: if everyone else is enforcing it, you had better follow it too. The dictator sets the Schelling point by their choice of X, and is forced to quash any rebels or lose power. Another Moloch.
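To make the equilibrium claim concrete, here is a toy payoff check (the numbers are mine and purely illustrative, not anything from the discussion): given that everyone else follows X and punishes rule-breakers, a lone defector does strictly worse, so complying is a best response.

```python
# Toy check (illustrative numbers) that "everyone follows rule X and punishes
# defectors" is a Nash equilibrium: given that everyone else plays it,
# no single agent gains by deviating.

COMPLY, DEFY = "comply", "defy"

def payoff(my_action, others_action):
    """Payoff to one agent, assuming all other agents play `others_action`."""
    cost_of_rule = -1        # following X is mildly inconvenient
    gain_from_defying = +2   # breaking X would be locally profitable...
    punished = -100          # ...except everyone else kills rule-breakers
    if my_action == COMPLY:
        return cost_of_rule
    return gain_from_defying + (punished if others_action == COMPLY else 0)

# Given that the others comply and punish, complying beats defying:
assert payoff(COMPLY, COMPLY) > payoff(DEFY, COMPLY)
print(payoff(COMPLY, COMPLY), payoff(DEFY, COMPLY))  # -1  -98
```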
Given that we have limited control over the preferences of new humans, there are likely to be some differences in utility functions between humans. Humans can die, go mad, etc., so you need to be able to transfer power to a new human without any adverse selection pressure in the choice of successor.
One face of Moloch is evolution. To stop it, you need to keep resetting the gene pool with fresh DNA from long-term storage; otherwise, over time, the population genome might drift in a direction you don't like.
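As a rough illustration of the drift worry (a toy Wright-Fisher-style simulation with parameters I made up, not anything from the comment above): an allele frequency left to itself random-walks away from its starting value, while periodically re-seeding from stored DNA keeps it anchored.

```python
# Toy drift sketch (illustrative parameters): without intervention an allele
# frequency wanders under random sampling; periodic resets from long-term
# storage pull it back toward the stored value.
import random

def drift(freq, pop_size, generations, reset_every=None, stored_freq=None):
    history = [freq]
    for gen in range(1, generations + 1):
        if reset_every and gen % reset_every == 0:
            freq = stored_freq          # reset the gene pool from frozen DNA
        # each of pop_size offspring inherits the allele with probability freq
        carriers = sum(random.random() < freq for _ in range(pop_size))
        freq = carriers / pop_size
        history.append(freq)
    return history

random.seed(0)
free_run   = drift(0.5, pop_size=200, generations=500)
with_reset = drift(0.5, pop_size=200, generations=500, reset_every=25, stored_freq=0.5)
print(f"no reset:   final frequency {free_run[-1]:.2f}")
print(f"with reset: final frequency {with_reset[-1]:.2f}")
```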
We might be able to keep Moloch at a reasonably low damage level: just a sliver of Moloch making things not quite as nice as they could be. At least, that might be achievable if people who know about Moloch go out of their way to destroy it.
Maybe the one-shot Prisoner's Dilemma is rare and Moloch doesn't turn out to be a big issue after all.
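For reference, here is the standard one-shot Prisoner's Dilemma structure being pointed at (conventional textbook-style payoffs, numbers chosen by me for illustration): defection dominates when the game is played exactly once, yet mutual defection is worse for both players than mutual cooperation.

```python
# One-shot Prisoner's Dilemma with conventional illustrative payoffs.
# payoffs[(my_move, their_move)] = my payoff
payoffs = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"):    0,
    ("defect",    "cooperate"): 5,
    ("defect",    "defect"):    1,
}

# Whatever the other player does, defecting pays strictly more...
for their_move in ("cooperate", "defect"):
    assert payoffs[("defect", their_move)] > payoffs[("cooperate", their_move)]

# ...yet mutual defection leaves both worse off than mutual cooperation.
assert payoffs[("defect", "defect")] < payoffs[("cooperate", "cooperate")]
print("defection dominates, but mutual defection is worse for both than mutual cooperation")
```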
On the other hand, perhaps the FAI solution is just sweeping all the hard problems under the AI-alignment rug and isn't any more viable than engineering a global social system that is stable over millions of years (possibly using human genetic engineering).
If we assume that super-intelligent AI is a thing, then a non-AI solution requires engineering a global social system that's stable over millions of years and in which no one builds ASI in all that time.
Well, this requirement doesn't appear to be particularly stringent compared to the ability to suppress overpopulation and other dysgenic pressures that would be necessary for such a global social system. It would have to be totalitarian anyway (though not necessarily centralized).
It is also a useful question to ask whether there are alternative existential opportunities if super-intelligent AI doesn’t turn out to be a thing. What makes the FAI problem unique isn’t that it’s an existential threat—there are plenty of those to go around—but that it’s also an existential opportunity. The only one we know of thus far.