Could this translate to agents having difficulty predicting other agents' values and reactions, leading to a lesser likelihood of multi-agent systems acting as one?
If you have multiple AI agents in mind here, then yes, though that isn't the focus. Otherwise, also yes, although it depends on what you mean by "multi-agent systems acting as one": one could also see that condition as fulfilled by some agents dominating others so that they (have to) act as one. I'd put it rather as: the difficulty of predicting other agents' actions and goals leads to the risks from AI and to the difficulty of the alignment and control problem.