I think open source models probably reduce profit incentives to race, but can increase strategic (e.g., national security) incentives to race. Consider that if you’re the Chinese government, you might think that you’re too far behind in AI and can’t hope to catch up, and therefore decide to spend your resources on other ways to mitigate the risk of a future transformative AI built by another country. But then an open model is released, and your AI researchers catch up to near state-of-the-art by learning from it, which may well change your (perceived) tradeoffs enough that you start spending a lot more on AI research.
It seems this is more about open models making it easier to train closed models than about nations vs. corporations, since this reasoning could also apply to a corporation that is behind.
Hmm, open models make it easier for a corporation to train closed models, but they also make that activity less profitable, whereas for a government the latter consideration doesn't apply or carries much less weight. So it seems much clearer that open models increase the overall incentive for an AI race between nations.
For corporations I assume their revenue is proportional to f(y) - f(x), where y is the capability of their model and x is the capability of the open-source model. Do you think governments would have a substantially different utility function from that?
A government might model the situation as something like “the first country/coalition to open up an AI capabilities gap of size X versus everyone else wins” because it can then easily win a tech/cultural/memetic/military/economic competition against everyone else and take over the world. (Or a fuzzy version of this to take into account various uncertainties.) Seems like a very different kind of utility function.
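The contrast between the two utility functions above can be made concrete with a small sketch. This is a hypothetical illustrative model, not anything from the thread: `f` is an assumed diminishing-returns value function, and the threshold of the government's step utility stands in for the "gap of size X" in the comment.

```python
import math

def f(capability):
    # Hypothetical diminishing-returns value of a model of a given
    # capability level (any concave function would do for this sketch).
    return math.log1p(capability)

def corporate_utility(own, open_source):
    # Corporation: revenue proportional to the value gap over the
    # freely available open-source model, f(y) - f(x).
    return f(own) - f(open_source)

def government_utility(own, rival, threshold=10.0):
    # Government (per the comment above): roughly winner-take-all once
    # a capability gap of size `threshold` opens up versus rivals.
    return 1.0 if own - rival >= threshold else 0.0
```

Under this toy model, releasing a stronger open model directly erodes the corporation's utility (`corporate_utility(20, 15) < corporate_utility(20, 10)`), while the government's utility is unchanged unless the release moves anyone across the threshold, which is one way to see why the two actors respond so differently to open releases.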