Something like this may be useful, but I struggle to come up with workable versions that try to get specific about hardware details. Most options invite Goodhart problems: for example, shift the architecture a little so that real-world ML performance per watt or dollar is unaffected, but the hardware falls below the threshold because "it's not one GPU, see!" or some similar technicality. Throwing enough requirements at it might work, but it seems weaker as a category than "used in a datacenter," given how ML works at the moment.
It could be that we have to bite the bullet and push for this kind of extra restriction anyway if ML architectures shift such that internet-distributed ML becomes competitive, but I'm wary of pushing for it before that point, because the restrictions would be far more visible to consumers.
In summary: maybe? Shrug, I dunno!