I think I agree that this is possible, and it’s closely connected to the reasons I think making alignable ASI open source definitely wouldn’t lead to egalitarian outcomes:
Artificial Superintelligence, lacking the equalizing human frailties of internal opacity, corruption (principal-agent problems), and senescence, is going to be even more prone to rich-get-richer monopoly effects than corporate agency was. You can give your opponents last-gen ASI, but if they have only a fraction of the hardware to run it on and only a fraction of the advanced manufacturing armatures, it does them little good. Being behind on the superexponential curve of recursively accelerating progress leaves them with roughly zero share of the pie.
(Remember that the megacorporations that advocate for open-source AI in the name of democratization, while holding enduring capital advantages in reserve, all realize this.)