I’ve held this view for years and am even more pessimistic than you :-/
In healthy democracies, the ballot box could beat the intelligence curse. People could vote their way out.
Unfortunately, democracy itself depends on the economic and military relevance of masses of people. If that goes away, the iceberg will flip and the equilibrium system of government won’t be democracy.
Tech that increases human agency, fosters human ownership of AI systems or clusters of agents, or otherwise allows humans to remain economically relevant
It seems really hard to think of any examples of such tech.
Unfortunately, democracy itself depends on the economic and military relevance of masses of people. If that goes away, the iceberg will flip and the equilibrium system of government won’t be democracy.
Agreed. The rich and powerful could pick off more and more economically irrelevant classes while promising the remaining ones the same won’t happen to them, until eventually they can get everything they need from AI and live in enclaves protected by vast drone armies. Pretty bleak, but seems like the default scenario given the current incentives.
It seems really hard to think of any examples of such tech.
I think you would effectively have to build extensions to people’s neocortexes in such a way that those extensions cannot ever function on their own. Building AI agents is clearly not that.