I think this is an important issue because it's not obvious to everyone that speech needs to be as free as possible; it's easy to imagine situations where free speech would seemingly be a bad thing. For example, suppose agent A knows that agent B will act well or badly, as judged by A's utility function, depending on B's belief in some statement S. Then A is motivated to show B only the information that pushes B's probability estimate for S toward whatever value maximizes A's utility. And if B's goals actively oppose A's, A may be incentivized to manipulate B outright, feeding it as much false information as possible.
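To make the selective-disclosure point concrete, here is a minimal toy sketch in Python. Everything specific here is an assumption invented for illustration: the evidence pool, the likelihood ratios, and A's utility being simply B's belief in S. Note that A holds only *true* evidence and B updates as a textbook Bayesian, yet A still steers B's belief just by choosing which subset of evidence to reveal.

```python
import itertools

# Toy model of the selective-disclosure argument above. All numbers
# and names are hypothetical; this is a sketch, not a claim about
# how real agents work.

PRIOR_S = 0.5  # B's prior probability that statement S is true

# A's pool of (true) evidence items: each carries a likelihood ratio
# P(evidence | S) / P(evidence | not S) that B applies on seeing it.
EVIDENCE = {"e1": 3.0, "e2": 0.4, "e3": 1.8, "e4": 0.2}

def posterior(prior, likelihood_ratios):
    """B's Bayesian posterior on S after seeing a subset of evidence."""
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

def utility_A(belief_B):
    """A's payoff as a function of B's belief in S.
    Assumed here: A simply wants B as confident in S as possible."""
    return belief_B

# A searches over every subset of its evidence and discloses only the
# subset that drives B's belief to the value A likes best.
best_subset, best_belief = None, None
for r in range(len(EVIDENCE) + 1):
    for subset in itertools.combinations(EVIDENCE, r):
        belief = posterior(PRIOR_S, [EVIDENCE[e] for e in subset])
        if best_belief is None or utility_A(belief) > utility_A(best_belief):
            best_subset, best_belief = subset, belief

print(f"A discloses {best_subset}; B's belief in S becomes {best_belief:.3f}")
```

In this toy run, A reveals only the two items favoring S, and B ends up around 0.84 confident in S, whereas seeing all four items would leave B near 0.30. Nothing A said was false; the manipulation lives entirely in the selection.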
More generally, if you know how an agent processes information, you may be motivated to control the flow of information to that agent. This becomes a more viable and worthwhile option the more powerful the agent is and the more information it has access to (and control over). And I don't think this applies only to superintelligent agents; it applies just as well to human groups and organizations with competing incentives.
Of course, within a group of agents that share the same goals, the agents should be incentivized to share information accurately and truthfully (although which information gets shared with which agents would presumably still be controlled).
But as society becomes increasingly polarized, with people clustering around heavily opposed ideologies, we should expect to see much more speech blocking and many more attempts to restrict or control access to information in general. The question is: if we value free access to high-quality information, how do we act against this trend?