P2B seems related to the planning step in the Active Inference loop.
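To make that connection concrete, here is a minimal, schematic sketch of the Active Inference loop with the planning step marked where P2B would plug in. This is my own illustration, not something from the post; all function names (infer_states, expected_free_energy, etc.) are hypothetical stubs.

```python
# A schematic Active Inference loop (illustrative sketch only; the function
# names and stub implementations are hypothetical, not from any library).
import random

def infer_states(observation, beliefs):
    """Perception: update beliefs about hidden states given the observation (stub)."""
    return beliefs  # placeholder for approximate Bayesian inference

def expected_free_energy(policy, beliefs):
    """Score a candidate policy; lower means more preferred and more predictable (stub)."""
    return random.random()  # placeholder for the risk + ambiguity terms

def act(policy):
    """Execute the first action of the chosen policy (stub)."""
    return policy[0]

def active_inference_step(observation, beliefs, policies):
    beliefs = infer_states(observation, beliefs)                            # 1. perception
    best = min(policies, key=lambda p: expected_free_energy(p, beliefs))    # 2. planning -- where P2B would apply
    return act(best), beliefs                                               # 3. action
```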
> I mean that power-seeking and cooperation are mutually exclusive, and if the world selects for cooperation more strongly than for agency, the instrumental convergence arguments for power-seeking may not go through.
Power-seeking, cooperation, and agency are all such vague behaviour patterns that I think it makes little sense to talk about which of them “the world selects for”.
I think cooperation should be considered as the construction of a higher-level system (whether this system is an “agent” or not is an unrelated question, if that question is scientifically meaningful at all, which I doubt). For example, cells in the human body cooperate to create a human. Likewise, using the examples from the post, humans form communities (as well as companies and societies) and ants form ant colonies in this way; all of these are higher-level systems relative to individual people or ants.
Power-seeking is similar, in fact. A power-seeking agent can either be conceived of as positioning itself as a higher-level agent, or as forming a higher-level system together with the other systems it dominates. So power-seeking also leads to the creation of a higher-level system, just with a different communication/control/governance structure than in the case of cooperation.
Then, which type of system (grassroots-cooperative or centrally controlled) is “[morally] better in the long term”, or “outcompetes” the other, or “emerges from the current AI development trajectory, coupled with the economic, cultural, and political trajectories of our civilisation”, is a totally separate question, or, rather, multiple different questions with possibly different answers. The answers depend on the features of our world that are available for inquiry today, and on the emergent properties of these systems: agility/adaptivity, raw information-processing power, etc.
> But at least based on simple trend extrapolation and the biological evidence, we should bet that the future belongs to entities that feature unusually high levels of cooperation, not unusually high levels of power-seeking.
From what I wrote above, I would say this bet doesn’t make much sense, or at least isn’t properly sharpened. You should focus on the properties of the emergent systems.
Active Inference tells us that instrumental convergence is not about power per se; it’s about predictability, of both oneself and one’s environment. Power is just one good precursor of predictability, but not the only one: balanced systems with many feedback loops (see John Doyle’s work on “diversity-enabled sweet spots”, e.g. https://ieeexplore.ieee.org/abstract/document/9867859) should expect to be predictable, including to themselves.
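For reference, the standard decomposition of expected free energy in Active Inference (my gloss of the usual formulation, not something stated in the post) makes the “predictability” point explicit:

$$
G(\pi, \tau) \;=\; \underbrace{D_{\mathrm{KL}}\!\left[\, q(o_\tau \mid \pi) \,\|\, p(o_\tau) \,\right]}_{\text{risk: expected deviation from preferred outcomes}} \;+\; \underbrace{\mathbb{E}_{q(s_\tau \mid \pi)}\!\left[ \mathrm{H}\!\left[ p(o_\tau \mid s_\tau) \right] \right]}_{\text{ambiguity: expected unpredictability of observations}}
$$

The ambiguity term directly penalises policies that make observations hard to predict. A system can keep that term low by dominating its environment, but also by being embedded in well-balanced feedback loops of the kind Doyle describes.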
Thinking about the trajectories that could lead to the selection of a cooperative system, I think we should revisit Drexler’s Comprehensive AI Services.