Why do so many technophiles dislike the idea of world government?
I rarely see the concept of “world government”, or world governance, or a world court, or any such thing spoken of positively by anyone. That includes technophiles and futurists who are fully aware of, and believe in, the concept of a technological singularity that needs to be controlled, “aligned”, made safe, etc.
Solutions to AI safety usually focus on how the AI itself should be coded. It seems to me that the idea of “cancelling war / merely human economics” (in a sense, dropping our tools wherever humanity is not focused entirely on making a safe FAI) is a little neglected.
Of course, some of the people who focus on the mathematical/logical/code aspects of safe AI are doing a great job, and I don’t mean to disparage their work. But I am nonetheless posing this question.
I also do not (necessarily) mean to conflate world government with a communist system that ignores Hayek’s fatal conceit and thereby renders humanity less capable of building AIs, computers, etc. I mean just some type of governance singleton that ensures, for example, that all nukes are in safe hands.

(crosspost from Hacker News)
The principle of subsidiarity is valued in a lot of political frameworks.
A world government likely means that decisions are made by bureaucrats who are further out of touch with realities on the ground, and by lobbyists who fight for the interests of their companies.
[ epistemic status: a small slice of my model, likely misleading because it’s not part of a much larger discussion. It’s a mistake to engage with most political/philosophical discussions from Hacker News, but that won’t stop me! ]
Technophiles (and really, most groups who want status to track intellectual prowess) have a weird and inconsistent relationship with governments. They desperately seek government as an entity that can solve the hard or impossible problems posed by massive populations of humans who want things that are not consistent with what the technophiles (or other intellectuals) want for them. They often call this “coordination problems”, rather than the more accurate “conflicting values and desires problem”.
At the same time, they see the clear costs, limits, and inefficiencies of government action in the real world, where government decisions are NOT made by the preferred elite (technophiles themselves), but by the masses, or by a different profile of elites. This obviously gets worse as the government gets bigger and more distant, in part because bigger means “less capturable by my preferred mechanisms”.
This makes it obvious that the best government is a loose federation of smaller, local (or even domain-specific) governments, which can be controlled easily by the “correct” elite. Ideally, the federation does minimally intrusive enforcement of exactly the correct property rights in order to prevent violence that threatens the privilege of the controllers of the smaller governments. It should “maintain order” in both the “prevent violence” and “prevent significant change to the order” senses.