I find scenarios in which a single agent poses a significant global threat very implausible: even for humans with very high IQs (200+), crossing a large inference gap alone seems very difficult.
Moreover, such an agent would have to iterate on empirical data that it would somehow need to gather, and that data gathering becomes more noticeable as its efforts scale up.
If it employs other agents, such as copies of itself, this only exacerbates the problem: how would the original agent control its copies well enough to keep them from going rogue or being noticed?
The most likely scenario, to me, is one in which over a number of years we willingly give these agents more and more economic power, which they then leverage to gain more and more political power; that is, they pull the same levers of power that humans pull, and they do so collectively, not through a single agent.