The vast majority of ordinary uses of LLMs (e.g. when using ChatGPT) involve changing and configuring inputs, not modifying code or fine-tuning the model. This still seems analogous to ordinary software, in my opinion, making Ryan Greenblatt’s point apt.
(But I agree that simply releasing model weights is not fully open source. I think these things exist on a spectrum. Releasing model weights could be considered a form of partially open sourcing the model.)
I agree that releasing model weights is “partially open sourcing” in much the same way that freeware is “partially open sourcing” software, or that source-available code under a restrictive licence is.
But that’s exactly the point: you don’t get to call something X because it’s kind of like X; it needs to actually fulfill the requirements in order to get the label. What is being called Open Source AI doesn’t meet those requirements.