This question is material to us, as we’re building an impact certificate market (a major component in retroactive public goods funding). If the answer is yes, we might actually want to abort, or, more likely, I’d want to put a lot of work into helping shore up mechanisms for making it sensitive to long-term negative externalities.
Another phrasing: Are there any dependencies for AGI that private/academic AI/AGI projects are failing to coordinate to produce, but that near-future foundations for developing free software would produce?
I first arrived at this question with my economist hat on, and the answer was “of course there would be,” because knowledge and software infrastructure are non-excludable goods (useful to many, but not profitable to release). But then my collaborators suggested that I take the economist hat off and remember what’s actually happening in reality, where, oh yeah, it genuinely seems like all of the open source code, software infrastructure, and knowledge required for AI is being produced and freely released by private actors, in which case our promoting public goods markets couldn’t make things worse. (Sub-question: why is that happening?)
But it’s possible that that’s not actually happening; it could be a streetlight effect: maybe I’ve only come to think that all of the progress is being publicly released because I don’t see all of the stuff that isn’t! Maybe there are a whole lot of coordination problems going on in the background that are holding back progress. Maybe OpenAI, DeepMind, the algorithmic traders, DJI, and defense researchers are all doing a lot of huge stuff that isn’t being shared and fitted together, but much of it would end up in the public cauldron if an impact cert market existed. I wouldn’t know! Can we rule it out?
It would be really great to hear from anyone working on AI, AGI, and alignment on this. When you’re working in an engineering field, you know what the missing pieces are, you know where people are failing to coordinate, you probably already know whether there’s a lot of crucial work that no individual player has an incentive to do.
Keep your economist hat on! For-profit companies release useful open source all the time, including for the following self-interested reasons:
Attracting and retaining employees who like working with cool tech
Sharing development costs of foundational tools like LLVM
“Commoditizing your complement”, e.g. free ML software is great for NVIDIA
This is sufficient incentive that, in the case of ML tools, volunteers just don’t have the resources to keep up with corporate projects. Volunteer-run projects still exist, but e.g. mygrad is not PyTorch. For a deeper treatment, I’d suggest reading Working in Public (Nadia Eghbal) for a contemporary picture of how open-source development works, then maybe The Cathedral and the Bazaar (Eric Raymond) for the historical/founding-myth view.
I’d generally expect impact-motivated open source foundations to avoid competing directly with big tech, and instead try to build out under-resourced parts of the ecosystem, like testing and verification. Regardless of the specifics here, to the extent that they work, impact certificates invoke the unilateralist’s curse, so you really do need to consider negative externalities.
The fact that there are more-than-zero contributions from for-profit companies and other sources does not mean that the optimal level of public-goods funding has been approached; nor does the fact that other public-goods efforts are crowded out by existing ones. (The fact that novel incentive, fundraising, and corporate structures in the cryptocurrency world can raise tens of billions of dollars to create public-good-ish things, while such structures still fall far short of solving ‘funding public goods’, does strongly suggest that there is an extremely large gap between those non-zero contributions and the socially optimal level of funding.)
I entirely agree that private contributions to open source are far below the socially optimal level of public-goods funding. I’d just expect that the first few billion dollars would best be spent on producing neglected goods like language-level improvements, testing, debugging, and verification, where most of the value is not captured. The state of the art in these areas is mostly set by individuals or small teams, and it would be easy to massively scale up given funding.
(disclosure: I got annoyed enough by this that I’ve tried to commercialize HypoFuzz, specifically in order to provide sustainable funding for Hypothesis. Commercialize products to which your favorite public goods are complements!)
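For readers unfamiliar with Hypothesis’s niche: it does property-based testing, which generates many random inputs and checks that an invariant holds for all of them. A minimal stdlib-only sketch of that core idea (this is an illustration of the concept, not Hypothesis’s actual API):

```python
import random

def check_property(prop, gen, trials=200, seed=0):
    """Run `prop` against many random inputs drawn from `gen`.

    Returns the first counterexample found, or None if the property
    held for every trial.
    """
    rng = random.Random(seed)
    for _ in range(trials):
        x = gen(rng)
        if not prop(x):
            return x  # counterexample: prop(x) is False
    return None

# Property: sorting is idempotent and preserves length.
def sort_is_well_behaved(xs):
    s = sorted(xs)
    return sorted(s) == s and len(s) == len(xs)

# Generator: random integer lists of random length.
def gen_list(rng):
    return [rng.randint(-100, 100) for _ in range(rng.randint(0, 20))]

assert check_property(sort_is_well_behaved, gen_list) is None
```

Real property-based testing libraries add the hard parts on top of this loop, notably shrinking counterexamples to minimal failing inputs, which is where most of the engineering effort (and hence the funding question) lives.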
If I am reading you correctly, you are trying to build an incentive structure that will accelerate the development of AGI. Many alignment researchers (I am one) will tell you that this is not a good idea; instead, you want to build an incentive structure that will accelerate the development of safety systems and alignment methods for AI and AGI.
There is a lot of open source production in the AI world, but you are right to speculate that a lot of AI code and know-how is never open sourced. Take a look at the self-driving-car R&D landscape if you want to see this in action.
As already mentioned by Zac, for-profit companies release useful open source all the time for many self-interested reasons.
One reason not yet mentioned by Zac is that an open source release may be a direct attack meant to suck the oxygen out of the business model of one or more competitors: an attack which aims to commoditize the secret sauce (the software functions and know-how) that the competitor relies on to maintain profitability.
This motivation explains why Facebook started to release big-data-handling software and open source AI frameworks: it was attacking Google’s stated long-term business strategy, which relied on Google being better at big data and AI than anybody else. To complicate matters further, Google’s market power never relied as much on big data and advanced AI as it wanted its late-stage investors to believe, so the whole move has been something of an investor-storytelling shadow war.
Personally, I am not a big fan of the idea that one might try to leverage crypto-based markets as a way to improve on this resource allocation mess.
No, I’m not sure how you got that impression (was it “failing to coordinate”?), I’m asking for the opposite reason.
I guess I got that impression from the ‘public good producers significantly accelerate the development of AGI’ in the title, and then from looking at the impactcerts website.
I somehow overlooked the bit where you state that you are also wondering whether that would be a good idea.
To be clear: my sense of the current AI open source space is that it definitely under-produces certain software components, components that could be relevant to improving AI/AGI safety.
What are some of those components? We can put them on a list.
By the way, “myopic” means “pathologically short-term”.
Good question. I don’t have a list, just a general sense of the situation; making a list would be a research project in itself. Also, different people here would give you different answers. That being said:
I occasionally see comments from alignment research orgs that run actual software experiments, saying that they spend a lot of time just building and maintaining the infrastructure needed to run large-scale experiments. You’d have to talk to those orgs to ask what they would need most; I’m currently a more theoretical alignment researcher, so I cannot offer up-to-date actionable insights here.
As a theoretical researcher, I do reflect on which useful roads are not being taken by industry and academia. One observation is that there is under-investment in public high-quality datasets for testing and training, and in the (publicly available) tools needed for dataset preparation and quality assurance. I am not the only one making that observation; see for example https://research.google/pubs/pub49953/ . Another observation is that everybody is working on open source ML algorithms, but almost nobody is working on open source reward functions that try to capture the actual complex details of human needs, laws, or morality. Also, where is the open source aligned content recommender?
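To make “open source reward functions” concrete: the artifact in question is essentially a scoring function whose rules and weights could live in a public repository and be audited and patched like any other code. A deliberately toy sketch, in which every rule, field name, and weight is a hypothetical placeholder:

```python
# Toy sketch of an openly maintained reward function for a content
# recommender. All rules, weights, and field names are hypothetical
# placeholders; the point is that such a specification could be
# versioned, reviewed, and patched in public, like model code is today.

def recommend_reward(item):
    """Score a candidate recommendation; higher is better."""
    reward = 0.0
    reward += 1.0 * item.get("predicted_engagement", 0.0)  # usual objective
    reward -= 2.0 * item.get("misinformation_score", 0.0)  # externality penalty
    reward -= 0.5 * item.get("outrage_score", 0.0)         # discourage rage-bait
    if item.get("violates_local_law", False):              # hard constraint
        return float("-inf")
    return reward

# Rank candidates by the openly specified reward rather than raw engagement.
ranked = sorted(
    [
        {"id": "a", "predicted_engagement": 0.9, "outrage_score": 0.8},
        {"id": "b", "predicted_engagement": 0.7, "misinformation_score": 0.0},
    ],
    key=recommend_reward,
    reverse=True,
)
```

The hard open problem is of course the content of such a function, not its plumbing; the sketch only shows what kind of object would need public maintenance.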
On a more practical note, AI benchmarks have turned out to be a good mechanism for drawing attention to certain problems. Many feel that these benchmarks are having a bad influence on the field of AI, and I have a lot of sympathy for that view, but you might also go with the flow. A (crypto) market that rewards progress on selected alignment benchmarks may be a thing that has value. Think of benchmarks that reward cooperative behaviour, truthfulness and morality in answers given by natural language querying systems, playing games ethically ( https://arxiv.org/pdf/2110.13136.pdf ), etc. My preference would be to reward benchmark contributions that win by building strong priors into the AI to guide and channel machine learning; many ML researchers would consider this cheating, but these are supposed to be alignment benchmarks, not machine-learning-from-blank-slate benchmarks.

I have some doubts about the benchmarks for fairness in ML which are becoming popular, looking at the latest NeurIPS: the ones I have seen offer tests which look a bit too easy, if the objective is to reward progress on techniques that promise to scale up to the more complex notions of fairness and morality you would want at the AGI level, or even for something like a simple content-recommendation AI. Some cooperative-behaviour benchmarks also strike me as too simple, in their problem statements and mechanics, to reward the type of research that I would like to see. Generally, you would want to retire a benchmark from the rewards-generating market once improvements in its score level off.
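The retirement rule at the end could be as simple as a plateau detector over the history of submitted scores. A minimal sketch, where the window size and minimum-gain threshold are hypothetical parameters a market would have to tune:

```python
def should_retire(scores, window=5, min_gain=0.01):
    """Decide whether a benchmark's reward stream should be retired.

    `scores` is the chronological list of best-submission scores.
    Retire (return True) when the best score has improved by less than
    `min_gain` over the last `window` submissions; the thresholds here
    are hypothetical placeholders, not a recommendation.
    """
    if len(scores) <= window:
        return False  # too little history to call a plateau
    best_now = max(scores)
    best_before = max(scores[:-window])
    return best_now - best_before < min_gain

# Scores have plateaued around 0.70, so the benchmark gets retired:
assert should_retire([0.5, 0.6, 0.70, 0.701, 0.702, 0.702, 0.703, 0.703])
```

A real market would also want safeguards against gaming this rule, e.g. participants withholding submissions to keep a benchmark’s reward stream alive.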