Wouldn’t this multiple-evaluation/regulatory-bodies solution just lead to the sort of balkanized internet described in this story? I guess multiple internet censorship-and-propaganda regimes are better than one, but ideally we’d have none.
One alternative might be to ban or regulate persuasion tools, i.e. any AI system optimized for an objective/reward function that involves persuading people of things, especially politicized or controversial ones.
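To make that regulatory target a bit more concrete, here is a minimal sketch (in Python, with entirely hypothetical names like `persuasion_reward` and `TARGET_CLAIM`) of what an objective “that involves persuading people of things” could look like; this is an illustrative assumption, not a description of any real system:

```python
# Hypothetical illustration of a persuasion-optimized objective.
# All names and the belief-survey setup are assumptions for exposition.

TARGET_CLAIM = "Policy X is good"  # the proposition the operator wants pushed


def persuasion_reward(belief_before: float, belief_after: float) -> float:
    """Reward a model for shifting the user's stated agreement with
    TARGET_CLAIM, where agreement is measured (say) by a 0-1 survey
    score before and after the conversation.

    A fine-tuning loop that maximizes this quantity is the kind of
    "persuasion tool" the comment suggests banning or regulating:
    the reward depends on belief change, not on truth or task success.
    """
    return belief_after - belief_before


# Example: a conversation that moves a user from 0.3 to 0.8 agreement
# earns reward 0.5, regardless of whether TARGET_CLAIM is true.
print(persuasion_reward(0.3, 0.8))
```

On this framing, a ban or regulation could be stated as: training objectives may not be a function of the listener’s belief state (at least on politicized claims), only of task success.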
Standards for truthful AI could be “opt-in”: humans might (a) choose to opt into truthfulness standards for their AI systems, and (b) choose among multiple competing evaluation bodies. Standards need not be mandated by governments to apply to all systems. (I’m not sure how much of your balkanized internet is mandated by governments rather than arising from individuals opting into different web stacks.)
We also discuss having different standards for different applications. For example, you might want stricter and more conservative standards for AI that helps assess nuclear weapon safety than for AI that teaches foreign languages to children or assists philosophers with thought experiments.
In my story it’s partly the result of individual choice and partly the result of government action, but I think even if governments stay out of it, individual choice will be enough to get us there. There won’t be a complete stack for every niche combination of views; instead, the major ideologies will each have their own stack. People who don’t agree 100% with any major ideology (which is most people) will have to put up with some amount of propaganda/censorship they don’t agree with.