I find that racing is better than not racing. Advocating for an international project to build TAI instead of racing turns out to be good if the probability of such advocacy succeeding is ≥20%.
Both of these sentences are false if you accept that my position is an option (racing is in fact worse than international cooperation, which is encompassed within the ‘not racing’ outcomes, and advocating for an international project is in fact not in tension with racing whenever some major party is declining to sign on).
There are actually a lot of people out there who think that advocating for a collective action commits them to carrying it out unilaterally (cargo-culting it) even if the motion fails, so this isn’t a straw-reading.
Hm, interesting. This suggests an alternative model where the US tries to negotiate, and there are four possible outcomes:
US believes it can coordinate with the PRC, MAGIC is created, and the PRC secretly defects.
US believes it can coordinate with the PRC, MAGIC is created, and the US secretly defects.
US believes it can coordinate with the PRC, MAGIC is created, and neither side defects.
US believes it can’t coordinate with the PRC, and both race.
One problem I see with encoding this in a model is that game theory is very brittle: correlated equilibria, which we can use here in place of Nash equilibria (both because they’re easier to compute and because the actions of both players are correlated with the difficulty of alignment), can change drastically with small changes in the payoffs.
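To make that brittleness concrete, here is a minimal sketch, not part of the model above and with purely illustrative payoff numbers, that encodes the join-MAGIC-vs-race choice as a 2×2 game and enumerates its pure-strategy Nash equilibria. It checks pure Nash rather than correlated equilibria for simplicity, but since every Nash equilibrium is also a correlated equilibrium, the sensitivity carries over.

```python
import numpy as np

# Row player = US, column player = PRC.
# Actions: 0 = "join MAGIC" (cooperate), 1 = "race / defect".
# Payoff numbers are purely illustrative, not taken from the model above.

def pure_nash_equilibria(U_row, U_col):
    """Return all pure-strategy Nash equilibria of a 2x2 bimatrix game."""
    eqs = []
    for r in range(2):
        for c in range(2):
            row_best = U_row[r, c] >= U_row[1 - r, c]  # US can't gain by switching
            col_best = U_col[r, c] >= U_col[r, 1 - c]  # PRC can't gain by switching
            if row_best and col_best:
                eqs.append((r, c))
    return eqs

# Baseline: stag-hunt-like payoffs -- joint MAGIC is best, racing is the "safe" option.
U_us  = np.array([[4.0, 0.0],
                  [3.0, 2.0]])
U_prc = U_us.T  # symmetric game

# Tiny perturbation: defecting against a cooperator now pays slightly more.
U_us_p = U_us.copy()
U_us_p[1, 0] = 4.1
U_prc_p = U_prc.copy()
U_prc_p[0, 1] = 4.1

labels = {0: "join MAGIC", 1: "race"}
print("baseline equilibria:  ",
      [(labels[r], labels[c]) for r, c in pure_nash_equilibria(U_us, U_prc)])
print("perturbed equilibria: ",
      [(labels[r], labels[c]) for r, c in pure_nash_equilibria(U_us_p, U_prc_p)])
```

With the baseline payoffs both (join MAGIC, join MAGIC) and (race, race) are equilibria; after a 0.1 change in the payoff for defecting against a cooperator, only (race, race) survives.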
I hadn’t informed myself super thoroughly about the different positions people take on pausing AI and its relation to racing with the PRC. My impression was that people were not being very explicit about what should be done there, and that those who were explicit were largely saying that unilaterally ceasing TAI development would be better. But I’m willing to have my mind changed on that, and have updated based on your reply.
Defecting becomes unlikely if everyone can track the compute supply chain and if compute is generally supposed to be handled exclusively by the shared project.
I am not as convinced as many other people that compute governance is sufficient, for two reasons: I suspect there are much better architectures/algorithms/paradigms waiting to be discovered, which could require very different types of (or just less) compute, which defectors could then use; and everything I’ve read so far about federated learning has strengthened my belief that part of the training of advanced AI systems could be done in federation (e.g. search). If federated learning becomes more important, then the existing stocks of compute that countries already hold also become more important.
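For what I mean by training being done in federation, here is a minimal FedAvg-style sketch on a toy linear-regression problem (illustrative only, assuming nothing beyond numpy): each participant trains on its own data shard with whatever compute it has locally, and a coordinator only averages parameter vectors, which is the property that makes me less confident that tracking large centralized compute stocks is enough.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: recover w_true from noisy linear data that is split
# across several "participants" who never share raw data or compute.
n_clients, n_features, n_local = 4, 5, 200
w_true = rng.normal(size=n_features)
clients = []
for _ in range(n_clients):
    X = rng.normal(size=(n_local, n_features))
    y = X @ w_true + 0.1 * rng.normal(size=n_local)
    clients.append((X, y))

def local_update(w, X, y, lr=0.05, steps=10):
    """Each client runs a few gradient steps on its own data only."""
    w = w.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# FedAvg-style loop: the coordinator only ever sees parameter vectors.
w_global = np.zeros(n_features)
for _ in range(20):
    local_ws = [local_update(w_global, X, y) for X, y in clients]
    w_global = np.mean(local_ws, axis=0)  # average the client models

print("parameter error after federated training:",
      np.linalg.norm(w_global - w_true))
```

Obviously a toy: training frontier-scale models this way is far harder, but the communication pattern is the part I find relevant to compute governance.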
Well, let’s fix this then?