For Marx, capitalism was Moloch, and communism was a solution.
For the Unabomber, the method to stop Moloch was the destruction of complex technological society and all complex coordination problems.
Maybe, let’s generalize this a bit… let’s call these types of solutions:
Singleton solutions—there will be no coordination problems if everything is ruled by one royal dynasty / one political party / one recursively self-improving artificial intelligence.
Typical problems:
Requires absolute power; not sure if we can get there without sacrificing everything to Moloch during the wars between competing royal dynasties / political systems / artificial intelligences.
Does not answer how the singleton makes decisions internally: royal succession problems / infighting in the political party / interaction between individual modules of the AI.
Fragility of outcome; there is a risk of huge disutility if we happen to get an insane king / a political party with an inhumane ideology / an unfriendly artificial intelligence.
Primitivism solutions—all problems will be simple if we make our lifestyle simple.
Typical problems:
Avoiding Moloch is an instrumental goal; the terminal goal is to promote human well-being. But in primitive societies people starve, get sick, most of their children die, etc.
Doesn’t work in the long term; even if you reduced the entire planet to the Stone Age, there would be a competition over who gets out of the Stone Age first.
In a primitive society, some formerly easy coordination problems may become harder to solve when you don’t have the internet or phones.
Singleton solutions—there will be no coordination problems if everything is ruled by one royal dynasty / one political party / one recursively self-improving artificial intelligence.
Royal dynasties and political parties are not singletons by any stretch of the imagination. Infighting is Moloch. But even if we assume an immortal benevolent human dictator, a dictator exercises power only through keys to power and still has to constantly fight off competition for that power. Stalin didn’t start the Great Purge for shits and giggles; it’s a tried-and-true strategy used by rulers throughout history.
The hope with artificial superintelligence is that, thanks to the wide design space of possible AIs, we can perhaps pick one that is sub-agent stable and free of mesa-optimization, and also more powerful, by a huge margin, than all other agents in the universe combined. If no AI can satisfy these conditions, we are just as doomed.
Primitivism solutions—all problems will be simple if we make our lifestyle simple.
That’s not defeating Moloch, that’s surrendering completely and unconditionally to Moloch in its original form of natural selection.
An immortal benevolent human dictator isn’t a singleton either. Human cells tend to cooperate to make humans because that tends to be their most effective competitive strategy. The cells of an immortal, all-powerful human dictator would face a different payoff matrix and would likely start defecting over time.
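To make the payoff-matrix point a bit more concrete, here is a minimal sketch in Python (the payoff numbers and the `dominant_strategy` helper are made up for illustration, not taken from anywhere): under one set of payoffs cooperation is the strictly dominant strategy for a cell, and under a shifted set defection is.

```python
# Illustrative only: two hypothetical payoff matrices for a "cell" deciding
# whether to keep cooperating (C) with the body or to defect (D).
# Entries are (my payoff, other's payoff); the numbers are invented for the example.

def dominant_strategy(payoffs):
    """Return 'C' or 'D' if that strategy is strictly better regardless of what the other does."""
    coop_better = all(payoffs[("C", other)][0] > payoffs[("D", other)][0] for other in "CD")
    defect_better = all(payoffs[("D", other)][0] > payoffs[("C", other)][0] for other in "CD")
    return "C" if coop_better else "D" if defect_better else None

# Ordinary mortal body: a defecting cell mostly just dies with the body,
# so cooperation pays more whatever the other cells do.
mortal_body = {
    ("C", "C"): (3, 3), ("C", "D"): (2, 1),
    ("D", "C"): (1, 2), ("D", "D"): (0, 0),
}

# Hypothetical immortal dictator: the long-run cost of defecting drops,
# so defection now pays more whatever the other cells do.
immortal_dictator = {
    ("C", "C"): (3, 3), ("C", "D"): (1, 4),
    ("D", "C"): (4, 1), ("D", "D"): (2, 2),
}

print(dominant_strategy(mortal_body))        # C: cooperation dominates
print(dominant_strategy(immortal_dictator))  # D: defection dominates (a prisoner's dilemma)
```

The point is only that "cooperate" is not intrinsically stable; it stays stable only as long as the payoff matrix keeps rewarding it.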
These are interesting parallels (maybe? The Unabomber parallel seems odd, but I don’t actually know enough about him to critique it properly). But they don’t seem to answer my question. If there is an answer being implied, please spell it out more explicitly. Otherwise, maybe this belongs as a comment, not an answer?