> scale up to superintelligence in parallel across many different projects / nations / factions, such that the power is distributed
This has always struck me as worryingly unstable. ETA: because in this regime each faction is incentivized to pursue reckless behaviour (e.g. unconstrained recursive self-improvement) to outcompete the other AIs.
Is there a good post out there making a case for why this would work? A few possibilities:
- The AIs are all relatively good / aligned. But they could still be outcompeted by malevolent AIs. I guess this is what you’re getting at with “most of the ASIs are aligned at any given time”, so they can band together and defend against the bad AIs?
- They all decide / understand that conflict is more costly than cooperation. A darker variation on this is mutually assured destruction, which I don’t find especially comforting to live under.
- Some technological solution enabling binding / unbreakable contracts, such that reneging on your commitments is extremely costly (a toy sketch of how this changes the payoffs is below).
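To make the second and third possibilities concrete, here’s a minimal toy sketch; the payoff numbers and the framing are entirely my own, illustrative only, not anything from the post. Without enforcement the race looks like a prisoner’s dilemma and “reckless” strictly dominates; a commitment penalty larger than the temptation gap flips the best response.

```python
# Toy 2x2 game: two AI factions each choose "cautious" or "reckless"
# scaling. Payoff numbers are made up for illustration.
#
# Without enforcement, "reckless" strictly dominates, so both end up at
# (reckless, reckless) even though (cautious, cautious) is better for
# both; that is the instability worry above. A binding-commitment
# mechanism (the third possibility) is modeled crudely as a penalty
# subtracted from any player who plays "reckless" after pledging
# "cautious".

PAYOFFS = {
    # (row_choice, col_choice): (row_payoff, col_payoff)
    ("cautious", "cautious"): (3, 3),
    ("cautious", "reckless"): (0, 5),
    ("reckless", "cautious"): (5, 0),
    ("reckless", "reckless"): (1, 1),
}

def best_response(opponent_choice, penalty=0.0):
    """Row player's best reply, given a penalty for playing 'reckless'."""
    def payoff(my_choice):
        base = PAYOFFS[(my_choice, opponent_choice)][0]
        return base - (penalty if my_choice == "reckless" else 0.0)
    return max(("cautious", "reckless"), key=payoff)

# Without enforcement, reckless dominates regardless of the opponent:
assert best_response("cautious") == "reckless"
assert best_response("reckless") == "reckless"

# With a penalty larger than the temptation gap (5 - 3 = 2),
# cautious becomes the best response to a cautious opponent:
assert best_response("cautious", penalty=2.5) == "cautious"
```

The upshot, if this framing is roughly right, is that contracts of the kind in the third bullet only help when the penalty reliably exceeds the temptation payoff, and engineering that reliability seems like the hard part.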