This requires reporting of plans for training and deployment, as well as ownership and security of weights, for any model trained with more than 10^26 FLOPs of compute. That might be enough of a talking point with corporate leadership to stave off something like the hypothetical irreversible proliferation of a GPT-4.5-scale open-weight LLaMA 4.
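For a rough sense of where the 10^26 FLOP threshold sits, here is a minimal back-of-the-envelope sketch using the standard C ≈ 6ND approximation for dense-transformer training compute (Kaplan et al., 2020). The parameter count below is an illustrative assumption, not a figure for any real model:

```python
# Rough scale of the 1e26 FLOP reporting threshold, using the standard
# C ~= 6 * N * D approximation for dense-transformer training compute.
# The parameter count here is a hypothetical example, not any real model.

THRESHOLD_FLOP = 1e26

def training_flop(params: float, tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6 * params * tokens

# e.g. a hypothetical 500B-parameter model:
params = 5e11
tokens_at_threshold = THRESHOLD_FLOP / (6 * params)
print(f"{params:.0e} params crosses 1e26 FLOP at ~{tokens_at_threshold:.1e} tokens")
# -> 5e+11 params crosses 1e26 FLOP at ~3.3e+13 tokens
```

Under that approximation, a hypothetical 500B-parameter dense model would only trip the reporting requirement after roughly 33 trillion training tokens, so the threshold targets frontier-scale runs rather than typical open-weight releases to date.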