Reasons to think Löbian cooperation is important
Modal Löbian cooperation is usually dismissed as irrelevant to real situations, but it is plausible that Löbian cooperation extends far more broadly than what has been proved so far.
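For concreteness, the proved core is the following modal argument, in the style of the FairBot result from "Robust Cooperation in the Prisoner's Dilemma" (Barasz et al.), with $\Box$ read as "is provable":

```latex
\textbf{L\"ob's theorem (rule form).} If $\vdash \Box P \to P$, then $\vdash P$.

Two FairBots each cooperate iff they can prove the other cooperates:
\[ C_1 \leftrightarrow \Box C_2, \qquad C_2 \leftrightarrow \Box C_1. \]
Writing $C := C_1 \land C_2$ and using that $\Box$ distributes over $\land$:
\[ C \;\leftrightarrow\; \Box C_2 \land \Box C_1 \;\leftrightarrow\; \Box(C_1 \land C_2) \;=\; \Box C, \]
so $\vdash \Box C \to C$, and L\"ob's theorem yields $\vdash C$: mutual cooperation is provable.
```

The mechanism requires each agent to reason about the other's exact decision procedure, which is why the resemblance and blueprint access discussed next matter so much.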
It is plausible that much of the cooperation we see in the real world is actually approximate Löbian cooperation rather than cooperation driven purely by traditional game-theoretic incentives.
Löbian cooperation is far stronger in cases where the players resemble each other and/or have access to one another's blueprint. This is arguably only very approximately the case between different humans, but it is much closer to being the case when we consider different versions of the same human through time, as well as subminds of that human.
In the future we may very well see probabilistically checkable proof protocols, generalized notions of proof like heuristic arguments, magical cryptographic trust protocols and formal computer-checked contracts widely deployed.
Together, these considerations could make it possible for future AI societies to exhibit vastly more cooperative behaviour.
Artificial minds also have several features that make them intrinsically likely to engage in Löbian cooperation: they are easily copied (which might lead to giant 'spur' clans), their source code and weights may be shared, and the widespread use of simulations may become feasible. All of this points towards the importance of Löbian cooperation, and of Open-Source Game Theory more generally. A toy version of the source-sharing mechanism is sketched below.
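As an illustration of the source-sharing point, here is a minimal Python sketch (the function names are mine, not from any library) of the agent often called 'CliqueBot' in the program-equilibrium literature: cooperate exactly when the opponent's shared source code is identical to your own. Löbian FairBot generalizes this brittle syntactic test to semantic equivalence via proof search, which is why exact copies are the easiest case:

```python
import inspect

def clique_bot(opponent_source: str) -> str:
    """Cooperate iff the opponent's source is byte-for-byte identical to ours.

    Exact copies therefore cooperate with each other, while anything else
    (including a pure defector) gets defected against.
    """
    my_source = inspect.getsource(clique_bot)
    return "C" if opponent_source == my_source else "D"

def defect_bot(opponent_source: str) -> str:
    """Always defect, regardless of the opponent's source."""
    return "D"

def play(bot_a, bot_b):
    """One round of the program game: each bot is shown the other's source."""
    return (bot_a(inspect.getsource(bot_b)),
            bot_b(inspect.getsource(bot_a)))

if __name__ == "__main__":
    # inspect.getsource needs the functions to live in a file, so run as a script.
    print(play(clique_bot, clique_bot))  # ('C', 'C') -- copies cooperate
    print(play(clique_bot, defect_bot))  # ('D', 'D') -- safe against defectors
```

Copy-clans of artificial minds get this behaviour essentially for free: verifying "is this agent an exact copy of me?" is trivial once blueprints are shared.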
[With the benefits also come drawbacks, like an increased capacity for surveillance and torture. Hopefully, future societies will develop sophisticated norms and technology to avoid these outcomes.]
The galaxy-brain take is the trans-multi-galactic brain of an Acausal Society.
I definitely agree that cooperation can be far better in the future, and Löbian cooperation, especially with Payor's Lemma, might well be enough to get coordination across an entire solar system.
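For reference, the lemma in question (from Payor's "Modal Fixpoint Cooperation without Löb's Theorem") fits, proof included, in five lines:

```latex
\textbf{Payor's Lemma.} If $\vdash \Box(\Box x \to x) \to x$, then $\vdash x$.

\emph{Proof.}
\begin{align*}
1.&\ \vdash x \to (\Box x \to x)           && \text{tautology} \\
2.&\ \vdash \Box x \to \Box(\Box x \to x)  && \text{necessitation + distribution on 1} \\
3.&\ \vdash \Box x \to x                   && \text{2 chained with the hypothesis} \\
4.&\ \vdash \Box(\Box x \to x)             && \text{necessitation on 3} \\
5.&\ \vdash x                              && \text{hypothesis applied to 4} \qquad\square
\end{align*}
```

Reading $x$ as "all parties cooperate", agents whose decision rule validates the hypothesis end up provably cooperating, and since the proof never invokes Löb's theorem itself it seems friendlier to bounded and probabilistic generalizations, which is what makes solar-system-scale coordination look plausible.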
That said, it's much trickier to extend this strategy to galactic scales, assuming our physical models aren't wrong: light speed becomes a very taut constraint for a galaxy-wide brain, and acausal strategies will require a lot of compute to simulate entire civilizations. Even worse, they depend on some common structure of values, and I suspect that's impossible in the fully general case.