There are many, many actors in the open-source space, working on a huge number of AI models (even counting just fine-tunes of LLaMA/Llama 2).
To clarify, I’m imagining that this protocol would apply to the open-sourcing of foundation models. You could probably operationalize this as “any training run that consumed > X compute” for some judiciously chosen X.
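For concreteness, here’s a minimal sketch (in Python, with purely placeholder numbers) of what such a threshold check might look like; `THRESHOLD_FLOPS` stands in for the “judiciously chosen X”, and the example figure for a Llama-2-70B-scale run is only a rough back-of-envelope estimate:

```python
# Hypothetical sketch: does an open-source release fall under the protocol?
# All numbers are placeholders, not a proposal for the actual value of X.

THRESHOLD_FLOPS = 1e25  # "X": a judiciously chosen compute cutoff (placeholder)

def requires_protocol(training_flops: float) -> bool:
    """Return True if the model's training run exceeded the compute threshold."""
    return training_flops > THRESHOLD_FLOPS

# Rough estimate for a Llama-2-70B-scale run: 6 * 70e9 params * 2e12 tokens ~ 1e24 FLOPs
print(requires_protocol(1e24))  # False under this placeholder threshold
```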