It seems like a lack of coordination on AGI strategy increases the variance? That is, without coordination somebody will quickly launch an attempt at value-aligned AGI; if they get it, we win. If they don't, we probably lose. With coordination, we might all be able to go slower to lower the risk and therefore the variance of the outcome.
I guess it depends on some details, but I don’t understand your last sentence. I’m talking about coordinating on one gamble.
Analogously to the OP, I'm thinking of AI companies making a bad bet (say, a 90% chance of loss of control and a 10% chance of gaining the tools to do a pivotal act in the next year). Losing the bet ends the betting, and winning allows everyone to keep playing. Then if many of them make similar independent gambles simultaneously, it becomes almost certain that one of them loses control.
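To make the arithmetic explicit, under the illustrative numbers above (the per-lab loss probability and the count of labs are just assumptions for the example): if $n$ labs each take an independent gamble with probability $p = 0.9$ of losing control, then

$$P(\text{at least one loss of control}) = 1 - (1 - p)^n = 1 - 0.1^n,$$

which is already $99\%$ for $n = 2$ and $99.9\%$ for $n = 3$.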