Open source software has long differentiated between “free as in speech” (libre) and “free as in beer” (gratis). In the first case, libre software has a license that gives the user the freedom to view, understand, modify, and remix the source. In the second case, gratis software does not need to be paid for, but the user doesn’t necessarily have access to the pieces, can’t make new versions, and cannot remix or change it.
...
If Open Source AI is neither gratis nor libre, then those calling free model weights “Open Source” should figure out what free means to them. Perhaps it’s “free as in oxygen” (dangerous due to the reactions it can cause), or “free as in birds” (wild, with no person responsible).
I’m not necessarily opposed to judicious release of model weights, though as with any technology, designers and developers should consider the impact of their work before making or releasing it, as LeCun has recently agreed. But calling this new competitive strategy by Facebook “Open Source” without insisting on the actual features of open source is an insult to the name.
A model is like a compiled binary, except that compilation is extremely expensive. Distributing a model alone and claiming it’s “open source” is like calling a binary distribution without source code “open source”.
The term that’s catching on is open weight models as distinct from open source models. The latter would need to come with datasets and open source training code that enables reproducing the model.
I think the compiled binary analogy isn’t quite right. For instance, in the LLM case, the vast majority of modifications and experiments people want to run are possible (and easiest) with access to just the weights.
As in, if you want to modify an LLM to be slightly different, access to the original training code or dataset is mostly unimportant.
(Edit: unlike the software case where modifying compiled binaries to have different behavior isn’t really doable without the source code.)
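The claim above, that released weights alone are enough to change a model’s behavior, can be illustrated with a toy sketch. Everything here is hypothetical and drastically simplified: the “model” is a linear map, standing in for downloaded open weights, and we fine-tune it by gradient descent on new data without ever seeing the original training data or training code.

```python
import numpy as np

# Hypothetical "released model": just a weight vector for a linear model
# y = X @ w. We never see the original training data or training code.
rng = np.random.default_rng(0)
released_w = rng.normal(size=3)  # stands in for downloaded open weights

def predict(w, X):
    return X @ w

def fine_tune(w, X_new, y_new, lr=0.1, steps=200):
    """Gradient descent on squared error, starting from the released weights."""
    w = w.copy()
    for _ in range(steps):
        grad = 2 * X_new.T @ (predict(w, X_new) - y_new) / len(y_new)
        w -= lr * grad
    return w

# A new task the original developers never saw.
X_new = rng.normal(size=(50, 3))
target_w = np.array([1.0, -2.0, 0.5])
y_new = X_new @ target_w

tuned_w = fine_tune(released_w, X_new, y_new)
```

The analogous real-world operation is fine-tuning (full or LoRA-style) starting from public checkpoints, which likewise needs only the weights, not the upstream pipeline, which is exactly why the binary analogy is contested in this thread.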
The vast majority of uses of software are via changing configuration and inputs, not modifying code and recompiling the software. (Though lots of Software as a Service doesn’t even let you change configuration directly.) But software is not open in this sense unless you can recompile, because it’s not actually giving you full access to what was used to build it.
The same is the case for what Facebook calls open-source LLMs; they do not actually give you full access to what was used to build them.
The vast majority of ordinary uses of LLMs (e.g. when using ChatGPT) are via changing and configuring inputs, not modifying code or fine-tuning the model. This still seems analogous to ordinary software, in my opinion, making Ryan Greenblatt’s point apt.
(But I agree that simply releasing model weights is not fully open source. I think these things exist on a spectrum. Releasing model weights could be considered a form of partially open sourcing the model.)
I agree that releasing model weights is “partially open sourcing”, in much the same way that freeware is “partially open sourcing” software, or that a restrictive license with code availability is.
But that’s exactly the point; you don’t get to call something X because it’s kind-of-like X, it needs to actually fulfill the requirements in order to get the label. What is being called Open Source AI doesn’t actually do the thing that it needs to.
I don’t think that’s accurate. The data is the code; the model is just the binary format the code gets compiled to.
Yeah. I think it is fairly arguable that open datasets aren’t required for open source; a dataset is not the form you’d prefer to modify the model in as a programmer, and it’s not exactly code to start with. Shakespeare didn’t write his plays as programming instructions for algorithms to generate Shakespeare-like plays. No one wants a trillion tokens that take ~$200k to ‘compile’ as their starting point to build from after someone else has already done that and made it available. (Hyperbolic; but the reasons someone wants the data generally aren’t the same reasons they’d want to compile from source code.) Open datasets are nice information to have, but they don’t come with a reasonable reproduction cost, and they offer little direct utility beyond explanation.
Llama’s status as not open source is much more strongly established by the restrictions on usage, as noted in the prior discussion. There is a meaningful distinction between open and closed datasets, but I’d describe the jump between Mistral and Llama as going from ‘open source with hidden supply chain’ to ‘open weights with restrictive licensing’, whereas the jump between Mistral and RedPajama is more ‘open source with hidden supply chain’ to ‘open source with revealed supply chain’.
I agree that the reasons someone wants the dataset generally aren’t the same reasons they’d want to compile from source code. But there’s a lot of utility for research in having access to the dataset even if you don’t recompile. Checking whether there was test-set leakage for metrics, for example, or assessing how much of LLM ability is stochastic parroting of specific passages versus recombination. And if it was actually open, these would not be hidden from researchers.
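One of the research uses mentioned above, checking for test-set leakage, can be sketched with a toy n-gram overlap check. All names and data here are hypothetical; real audits work over enormous corpora and use fuzzier matching, but the principle is the same: without the dataset, this check is simply impossible.

```python
# Toy leakage check: flag a benchmark item if a large fraction of its
# word n-grams appear verbatim in some training document.

def ngrams(text, n=5):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def leaked(test_item, train_docs, n=5, threshold=0.5):
    """Return True if >= threshold of the item's n-grams occur in any doc."""
    item_grams = ngrams(test_item, n)
    if not item_grams:
        return False
    for doc in train_docs:
        overlap = len(item_grams & ngrams(doc, n)) / len(item_grams)
        if overlap >= threshold:
            return True
    return False

# Hypothetical miniature "training corpus".
train_docs = [
    "the quick brown fox jumps over the lazy dog near the river bank",
    "an unrelated document about the history of typesetting and fonts",
]
```

Here `leaked("quick brown fox jumps over the lazy dog", train_docs)` would flag the item, because its 5-grams all occur verbatim in the first document; a genuinely novel sentence would not be flagged.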
And supply chain is a reasonable analogy, but many open-source advocates make sure that their code doesn’t depend on closed or proprietary libraries. It’s not actually “libre” if you need a closed-source component or have to pay someone to make the thing work. Some advocates, those who built or control quite a lot of the total open source ecosystem, also put effort into ensuring that the entire toolchain needed to compile their code is open, because replicability shouldn’t be contingent on companies that can restrict usage or hide things in the code. It’s not strictly required, but it’s certainly relevant.
Yes, agreed—as I said in the post, “Open Source AI simply means that the models have the model weights released—the equivalent of software which makes the compiled code available. (This is otherwise known as software.)”
Some prior discussion.
[Edit: didn’t mean to suggest David’s post is redundant.]
Thanks. I agree that this discusses the licenses, which would be enough to make Llama not qualify, but I think there’s a strong claim in the full linked piece that even if the model weights were released under a GPL license, those “open” model weights wouldn’t make it open in the sense that Open Source means elsewhere.
Have you met Mistral, Phi-2, Falcon, MPT, etc.? There are plenty of freely remixable models out there; some even link to their datasets and the recipes involved in processing them (though I wouldn’t be surprised if some relevant thing got left out because no one had yet researched that it was relevant).
Though I’m reasonably sure the Llama license doesn’t prevent viewing the source (though of course not the training data), modifying it, understanding it, and remixing it. It’s a less open license than others, but Facebook didn’t just free-as-in-beer release a compiled black box you put on your computer and can never change; research was part of the purpose, and the license needs to allow that. It’s not the best open source license, but I’m not sure being a good example of something is required to meet the definition.
“Freely remixable” models don’t generally have open datasets used for training. If you know of one, that’s great, and it would be closer to open source. (Not Mistral. And Phi-2 uses synthetic data from other LLMs; I don’t know what they released about the methods used to generate or select the text, but it’s not open.)
But the entire point is that weights are not the source code for an LLM; they are the compiled program. Yes, it’s modifiable via LoRA and similar techniques, but that’s not open source! Open source would mean I could replicate it from the ground up. For Facebook’s models, at least, the details of the training methods, the RLHF training they do, and where they get the data are all secrets. But they call it “Open Source AI” anyway.
Oh, I do, they’re just generally not quite the best available/most popular for hobbyists. Some I can find quickly enough are Pythia and OpenLLaMA, and some of the RedPajama models Together.ai trained on their own RedPajama dataset (which is freely available and described). (Also the mentioned Falcon and MPT, as well as StableLM. You might have to get into the weeds to find out how much of the data processing step is replicable.)
(It’s going to be expensive to replicate any big pretrained model though, and possibly not deterministic enough to do it perfectly; especially since datasets sometimes adjust due to removing unsafe data, the recipes for data processing included random selection and shuffling from the datasets, etc. Smaller examples where people have fine-tuned using the same recipe coincidentally or intentionally have gotten identical model weights though.)
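The reproducibility point above can be made concrete with a toy sketch. This is a hypothetical miniature “training recipe”, not anything from the models discussed: with the same data, seed, and hyperparameters, training is bit-identical across reruns, while a changed seed converges to roughly (but not exactly) the same weights, which is the sense in which replication of big runs is approximate at best.

```python
import numpy as np

def train(seed, X, y, steps=100, lr=0.1):
    # Deterministic toy "training recipe": same seed + same data + same
    # hyperparameters yields bit-identical weights on rerun.
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])  # seeded random initialization
    for _ in range(steps):
        w -= lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

# Hypothetical fixed dataset (the "open dataset" in this analogy).
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 3))
y = X @ np.array([0.3, -1.2, 2.0])

w_a = train(seed=7, X=X, y=y)
w_b = train(seed=7, X=X, y=y)  # independent rerun, identical recipe
w_c = train(seed=8, X=X, y=y)  # same recipe, different initialization
```

With the full recipe pinned down, `w_a` and `w_b` match exactly; `w_c` lands close but not bit-identical, analogous to replicating a pretrain where any seed, shuffle, or data-filtering difference perturbs the result.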
Thanks. RedPajama definitely looks like it fits the bill, but it shouldn’t need to bill itself as making “fully-open, reproducible models,” since that’s what “open source” is already supposed to mean. (Unfortunately, the largest model they have is 7B.)