Good AI tools could help people to make better sense of the world, and make more rational decisions.
I have a feeling this could go both ways. If AI development is nation-led (which may become true at some point somewhere), the nation’s leaders would presumably want the AI aligned with their own values. There is a risk that this would solidify biases rather than overcome them, and that the AI would recommend even more irrational decisions (in terms of the common good), or rather, rational decisions built on irrational premises. This could increase the risk of conflict, and seems especially likely in authoritarian countries.
AI could potentially give new powerful tools for democratic accountability, holding individual decisions to higher standards of scrutiny (without creating undue overhead or privacy issues)
The way I understand it, this could work as follows: democratic leaders equipped with a “democracy-aligned AI” would gain more effective influence over nondemocratic figures (through fine-tuned persuasion, some kind of AI-designed political zugzwang, and so on), thus reducing totalitarian risks. Is my understanding correct?
(I also considered that you might mean a not-yet-aligned leader agreeing to cooperate with an aligned AI, but that sounds unlikely: such a leader would probably refuse, since their values would differ from the AI’s.)
Not what I’d meant—rather, that democracies could demand better oversight of their leaders, and so reduce the risk of democracies slipping into various traps (corruption, authoritarianism).
The idea sounds nice, but in practice it may turn out to be a double-edged sword. If there is an AI that could significantly help with oversight of decision-makers, then there is almost surely an AI that could help those same decision-makers steer public opinion in their preferred direction. And since leaders usually have more resources (networks, money) than the public, I would expect that scenario to be more likely than the successful-oversight scenario. Intuitively, far more likely.
I wonder how we could achieve oversight without getting controlled back in the process. Seems like a tough problem.
Thanks!