“If we cannot do compute governance, and we cannot do model-level governance, then I do not see an alternative solution. I only see bad options, a choice between an EU-style regime and doing essentially nothing.”
It feels like this and some other parts of the post imply that the EU AI Act does not involve compute governance or model-level governance, which seems false as far as I can tell (or is this just semantics?).
From a summary of the act: “All providers of GPAI models that present a systemic risk* – open or closed – must also conduct model evaluations, adversarial testing, track and report serious incidents and ensure cybersecurity protections.”
*“GPAI models present systemic risks when the cumulative amount of compute used for its training is greater than 10^25 floating point operations (FLOPs)”
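For intuition on what that threshold captures, here is a minimal sketch using the common ≈6·N·D approximation for total training compute (6 FLOPs per parameter per training token). The heuristic and the example model sizes are my own illustrative assumptions, not anything specified in the Act itself:

```python
# Rough check of whether a training run crosses the EU AI Act's
# 10^25 FLOP "systemic risk" threshold, using the widely cited
# compute ~= 6 * parameters * training_tokens approximation.
# Example model sizes/token counts are hypothetical, for illustration only.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute via the 6*N*D heuristic."""
    return 6 * n_params * n_tokens

examples = [
    ("70B params, 2T tokens", 70e9, 2e12),     # ~8.4e23 FLOPs: below threshold
    ("400B params, 15T tokens", 400e9, 15e12), # ~3.6e25 FLOPs: above threshold
]

for name, params, tokens in examples:
    flops = estimated_training_flops(params, tokens)
    crosses = flops > SYSTEMIC_RISK_THRESHOLD_FLOPS
    print(f"{name}: ~{flops:.1e} FLOPs -> systemic risk: {crosses}")
```

So under this rough heuristic, today's larger frontier-scale training runs land above the 10^25 FLOP line, while many smaller open models fall below it.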