Not sure to what extent this falls under “coordination tech”, but are you familiar with work in collective intelligence? This article has some examples of existing work and future directions: https://www.wired.com/story/collective-intelligence-democracy/. Notably, it covers enhancements in expressing preferences (quadratic voting), making predictions (prediction markets), delegating representation (liquid democracy), finding consensus in groups (Polis), and aggregating knowledge (Wikipedia).
As you reference above, there’s non-AI collective action tech: https://foresight.org/a-simple-secure-coordination-platform-for-collective-action/
In the area of cognitive architectures, the open agency proposals contain governance tech, like Drexler’s original Open Agency model (https://www.lesswrong.com/posts/5hApNw5f7uG8RXxGS/the-open-agency-model), Davidad’s dramatically more complex Open Agency Architecture (https://www.lesswrong.com/posts/jRf4WENQnhssCb6mJ/davidad-s-bold-plan-for-alignment-an-in-depth-explanation), and the recently proposed Gaia Network (https://www.lesswrong.com/posts/AKBkDNeFLZxaMqjQG/gaia-network-a-practical-incremental-pathway-to-open-agency).
The main way I look at this is that software can greatly boost collective intelligence (CI), and coordination is one part of collective intelligence. Collective intelligence seems really underexplored, and I think there are very promising ways to improve it. More on my plan for CI + AGI here, if of interest: https://www.web10.ai/p/web-10-in-under-10-minutes
While I think CI can be useful for things like AI governance, I think collective intelligence is actually closely related to AI safety in the context of a cognitive architecture (CA). CI can be used to federate responsibilities in a cognitive architecture, including AI systems reviewing other AI systems, as you mention. It can also be used to enhance human control and participation in a CA: allowing humans to set the goals of a cognitive architecture–based system, to perform the thinking and acting in a CA, and to participate in the oversight and evaluation of both the granular and the high-level operation of a CA. I write more on the safety aspects here if you’re interested: https://www.lesswrong.com/posts/caeXurgTwKDpSG4Nh/safety-first-agents-architectures-are-a-promising-path-to
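To make that concrete, here is a minimal sketch (the function and interface names are my own, not from any existing system) of what federating those responsibilities might look like: goal-setting, planning, and oversight are separate, pluggable roles, and any of them can be filled by humans, AI systems, or groups of either.

```python
# Toy sketch of federated responsibilities in a cognitive architecture (CA).
# Each role is just a callable, so it can be backed by a human process,
# an AI system, or a committee of both.
from typing import Callable, List, Optional

def run_step(
    set_goal: Callable[[], str],                  # e.g. a human deliberation / CI process
    propose_plan: Callable[[str], str],           # e.g. an LM agent or a human working group
    reviewers: List[Callable[[str, str], bool]],  # oversight: humans and/or other AIs
) -> Optional[str]:
    goal = set_goal()
    plan = propose_plan(goal)
    # Every reviewer must approve before the plan is passed on for execution.
    if all(review(goal, plan) for review in reviewers):
        return plan
    return None  # rejected: escalate back to the goal-setters / planners

# Example wiring with stand-in components:
approved = run_step(
    set_goal=lambda: "reduce model deployment risk",
    propose_plan=lambda goal: f"draft an evaluation checklist for: {goal}",
    reviewers=[lambda g, p: "evaluation" in p, lambda g, p: len(p) < 200],
)
print(approved)
```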
In my view, the best approach is to integrate CI and AI together in the same federated cognitive architecture. But CI systems can themselves be superintelligent, and that could be useful for developing and working with safe artificial superintelligence (including AI helping with primarily human-orchestrated CI, which blurs the line between CI and a combined human-AI cognitive architecture).
I see certain AI developments as boosting the same underlying tech required for next-level collective intelligence (modeling reasoning, for example, which would fall under symbolic AI) and as augmenting collective intelligence (e.g. helping to identify areas of consensus in a more automated manner, as Talk to the City does: https://ai.objectives.institute/talk-to-the-city).
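As a toy illustration of the consensus-finding piece (this is not how Polis or Talk to the City are actually implemented; the data and threshold below are made up), one simple approach is to cluster participants by their voting patterns and then surface statements that every cluster largely agrees with:

```python
# Cluster participants by their votes, then flag statements that every
# opinion cluster largely agrees with -- candidate "areas of consensus".
import numpy as np
from sklearn.cluster import KMeans

# Rows = participants, columns = statements; 1 = agree, -1 = disagree, 0 = pass.
votes = np.array([
    [1, 1, -1, 1],
    [1, 1, -1, 0],
    [1, -1, 1, 1],
    [0, -1, 1, 1],
    [1, 1, 0, 1],
])

# Group participants into opinion clusters based on voting patterns.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(votes)

# A statement is a consensus candidate if mean agreement is high in every cluster.
for s in range(votes.shape[1]):
    per_cluster = [votes[clusters == c, s].mean() for c in np.unique(clusters)]
    if min(per_cluster) > 0.5:
        print(f"Statement {s} looks like an area of consensus: {per_cluster}")
```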
I think many examples of AI engagement in CI and CA boil down to translating information from humans into various forms of unstructured, semi-structured, and structured data (my preference is for the latter, which I see as pretty crucial for next-gen cognitive architecture and CI systems), which is then used to perform many functions, from identifying each person’s preferences and existing beliefs, to planning, to conducting evaluations.
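As a rough sketch of what “structured” could mean here (the field names are my own invention, not drawn from any particular system): a free-text contribution gets translated into an explicit, inspectable record that downstream planning and evaluation steps can operate on.

```python
# Minimal sketch of a structured record extracted from a free-text contribution.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Belief:
    claim: str          # the proposition the person asserts
    confidence: float   # 0.0-1.0, how strongly they seem to hold it

@dataclass
class Preference:
    option: str         # the outcome or policy being weighed
    strength: float     # relative weight, e.g. from a quadratic-voting-style budget

@dataclass
class Contribution:
    participant_id: str
    raw_text: str       # the original unstructured input, kept for auditability
    beliefs: List[Belief] = field(default_factory=list)
    preferences: List[Preference] = field(default_factory=list)

# e.g. an extraction step (human- or AI-assisted) might produce:
c = Contribution(
    participant_id="p42",
    raw_text="Transit should be free; I'm fairly sure it cuts congestion.",
    beliefs=[Belief("free transit reduces congestion", confidence=0.7)],
    preferences=[Preference("fund free public transit", strength=0.8)],
)
```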
Unfortunately, I see this question didn’t get much engagement when it was originally posted, but I’m going to put in a vote for highly federated systems along the axes of agency, cognitive processes, and thinking, especially those that maximize transparency and determinism. I think LM agents are just a first step into this area of safety. I write more about this here: https://www.lesswrong.com/posts/caeXurgTwKDpSG4Nh/safety-first-agents-architectures-are-a-promising-path-to
For specific proposals, I’d recommend Drexler’s work on federating agency (https://www.lesswrong.com/posts/5hApNw5f7uG8RXxGS/the-open-agency-model) and on federating cognitive processes, namely memory (https://www.lesswrong.com/posts/FKE6cAzQxEK4QH9fC/qnr-prospects-are-important-for-ai-alignment-research).