But eventually the Board realizes this “slow and bureaucratic check-in process” is making their company sluggish and uncompetitive, so they instruct the auto-CEO more and more to act without alignment check-ins. The auto-CEO might warn them that this will decrease its overall level of per-decision alignment with them, but they say “Do it anyway; done is better than perfect” or something along those lines. All Boards wish other Boards would stop doing this, but neither they nor their CEOs manage to strike up a bargain with the rest of the world to stop it. [emphasis mine]
This is the part that is most confusing to me. Why isn’t it the case that one auto-CEO (or more likely, a number of auto-CEOs, each reasoning along similar lines, independently) comes to its board, lays out the kinds of problems (of the sort described in this post) that are likely to occur if the world keeps accelerating, and proposes some coordination scheme to move toward a Pareto-improved equilibrium? Then that company goes around and starts brokering with the other companies, many of whom are independently seeking to implement some coordination scheme like this one.
Stated differently, why don’t the pretty-aligned (single-single) AI systems develop the bargaining and coordination methods that you’re proposing we invest in now?
It seems like if we have single-single alignment solved, we’re in a pretty good place to delegate the single-multi and multi-multi problems to the AIs.
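To make the dynamic in the quoted passage concrete, here is a minimal sketch of the race as a one-shot two-player game (all payoff numbers are illustrative assumptions of mine, not anything from the post): skipping check-ins is a dominant strategy for each Board individually, even though every Board prefers the outcome where all of them keep check-ins. This is exactly the kind of Pareto-improvable equilibrium I’d expect the auto-CEOs to notice and try to bargain their way out of.

```python
# Hypothetical payoffs for the acceleration race described in the quote.
# Each Board chooses to keep alignment check-ins ("careful") or skip them
# ("fast"). "fast" strictly dominates, yet (careful, careful)
# Pareto-dominates the (fast, fast) outcome the quoted passage describes.

ACTIONS = ("careful", "fast")

# PAYOFFS[(my_action, their_action)] = my payoff (illustrative numbers only)
PAYOFFS = {
    ("careful", "careful"): 3,  # both keep check-ins: safe, competitive parity
    ("careful", "fast"):    0,  # I stay careful, rival accelerates past me
    ("fast",    "careful"): 4,  # I accelerate past a careful rival
    ("fast",    "fast"):    1,  # race to the bottom: everyone less aligned
}

def best_response(their_action: str) -> str:
    """Return my payoff-maximizing action given the rival's action."""
    return max(ACTIONS, key=lambda mine: PAYOFFS[(mine, their_action)])

if __name__ == "__main__":
    for theirs in ACTIONS:
        print(f"If the rival plays {theirs!r}, my best response is "
              f"{best_response(theirs)!r}")
    # Prints 'fast' in both cases: a dominant strategy, even though both
    # Boards prefer (careful, careful) to (fast, fast) -- exactly the
    # bargaining problem I'm asking why the auto-CEOs don't just solve.
```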