Hm, interesting. This suggests an alternative model where the US tries to negotiate, and there are four possible outcomes:
1. The US believes it can coordinate with the PRC; both create MAGIC, but the PRC secretly defects.
2. The US believes it can coordinate with the PRC; both create MAGIC, but the US secretly defects.
3. The US believes it can coordinate with the PRC; both create MAGIC, and neither defects.
4. The US believes it can’t coordinate with the PRC; both race.
One problem I see with encoding this in a model is that game theory is very brittle: correlated equilibria (which we can use here in place of Nash equilibria, both because they’re easier to compute and because the actions of both players are correlated with the difficulty of alignment) can change drastically under small changes in the payoffs.
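To make the brittleness concrete, here is a minimal sketch. It uses pure-strategy Nash equilibria rather than correlated equilibria, since they are simpler to enumerate (and every Nash equilibrium is also a correlated equilibrium, so the sensitivity carries over). The setup is a stag-hunt-style "build MAGIC vs. race" game with invented payoff numbers; shifting the temptation to defect by 0.02 removes the cooperative equilibrium entirely.

```python
def pure_nash(payoffs):
    """Enumerate pure-strategy Nash equilibria of a 2-player game.

    payoffs[(r, c)] = (row_player_payoff, col_player_payoff).
    A profile is an equilibrium if neither player gains by
    unilaterally deviating.
    """
    rows = {r for r, _ in payoffs}
    cols = {c for _, c in payoffs}
    eqs = []
    for r in rows:
        for c in cols:
            u_r, u_c = payoffs[(r, c)]
            row_ok = all(payoffs[(r2, c)][0] <= u_r for r2 in rows)
            col_ok = all(payoffs[(r, c2)][1] <= u_c for c2 in cols)
            if row_ok and col_ok:
                eqs.append((r, c))
    return sorted(eqs)

# Hypothetical payoffs: mutual MAGIC is best, defecting while the
# other cooperates pays almost as well, mutual racing is mediocre.
base = {
    ("magic", "magic"): (3.00, 3.00),
    ("magic", "race"):  (0.00, 2.99),
    ("race",  "magic"): (2.99, 0.00),
    ("race",  "race"):  (1.00, 1.00),
}
print(pure_nash(base))
# → [('magic', 'magic'), ('race', 'race')]  (cooperation is sustainable)

# Nudge the defection payoff up by 0.02 — a tiny change.
perturbed = dict(base)
perturbed[("magic", "race")] = (0.00, 3.01)
perturbed[("race", "magic")] = (3.01, 0.00)
print(pure_nash(perturbed))
# → [('race', 'race')]  (the cooperative equilibrium vanishes)
```

So if our estimates of the payoffs (e.g. how much an edge in an AI race is actually worth) are off by even a little, the model's predicted equilibrium set can flip from "coordination is stable" to "racing is the only equilibrium".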
I hadn’t informed myself very thoroughly about the different positions people take on pausing AI and their relation to racing with the PRC. My impression was that people were not being very explicit about what should be done there, and that those who were explicit were largely saying a unilateral cessation of TAI development would be better. But I’m willing to have my mind changed on that, and have updated based on your reply.
Defecting becomes unlikely if everyone can track the compute supply chain and if compute is generally supposed to be handled exclusively by the shared project.
I am not as convinced as many other people that compute governance is sufficient, for two reasons. First, I suspect there are much better architectures/algorithms/paradigms waiting to be discovered, which could require very different types of (or simply less) compute, which defectors could then use. Second, everything I’ve read so far about federated learning has strengthened my belief that part of the training of advanced AI systems could be done in federation (e.g. search). If federated learning becomes more important, then the existing stocks of compute that countries already hold also become more important.