If you can produce a checkable proof of this over the actual EfficientNet architecture, I’d pay out the prize. Note that this uses floating-point numbers, not the reals!
Would running the method in this paper on EfficientNet count?
What if we instead used a weaker but still sound method (e.g. based on linear programs instead of semidefinite programs)?
On a quick skim it looks like that fails both “not equivalent to executing the model” and the float32 vs R problem. It’s a nice approach, but I’d also be surprised if it scales to maintain tight bounds on much larger networks.
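To illustrate the weaker-but-sound direction: plain interval arithmetic propagates a box through each layer and stays sound (up to float rounding) with no SDP or LP machinery at all, at the cost of much looser bounds. A toy sketch with made-up weights, nothing like the real EfficientNet:

```python
import numpy as np

def affine_bounds(lo, hi, W, b):
    """Propagate the box [lo, hi] through x -> W @ x + b.

    Splits W into positive and negative parts, the standard
    interval-arithmetic trick, so the output box is sound.
    """
    W_pos = np.maximum(W, 0.0)
    W_neg = np.minimum(W, 0.0)
    out_lo = W_pos @ lo + W_neg @ hi + b
    out_hi = W_pos @ hi + W_neg @ lo + b
    return out_lo, out_hi

def relu_bounds(lo, hi):
    # ReLU is monotone, so it maps a box to a box.
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Toy two-layer network with made-up weights.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)

lo, hi = -np.ones(3), np.ones(3)  # input box [-1, 1]^3
lo, hi = relu_bounds(*affine_bounds(lo, hi, W1, b1))
lo, hi = affine_bounds(lo, hi, W2, b2)
print(lo, hi)  # sound (if loose) bounds on every output over the box
```

The looseness compounds per layer, which is exactly why tight bounds on deep networks need something stronger.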
By “checkable” do you mean “machine checkable”?
I’m confused because I understand you to be asking for a bound on the derivative of an EfficientNet model, but it seems quite easy (though perhaps kind of a hassle) to get this bound.
I don’t think the floating point numbers matter very much (especially if you’re ok with the bound being computed a bit more loosely).
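A minimal sketch of the “easy but kind of a hassle” bound: the product of per-layer spectral norms upper-bounds the Jacobian norm of the composition (toy made-up layers here, real arithmetic assumed, not the actual EfficientNet):

```python
import numpy as np

# Crude global Lipschitz bound for a feedforward net: the product of
# each layer's spectral norm bounds the norm of the input-output
# Jacobian (ReLU-like activations are 1-Lipschitz, so they drop out).
rng = np.random.default_rng(0)
weights = [rng.normal(size=(8, 8)) for _ in range(3)]  # made-up layers

bound = 1.0
for W in weights:
    bound *= np.linalg.norm(W, 2)  # largest singular value
print(f"Jacobian norm is at most {bound:.3f} everywhere")
```

By submultiplicativity this is sound but typically very loose, which is what I mean by computing the bound “a bit more loosely.”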
Ah, crux: I do think the floating-point matters! Issues of precision, underflow, overflow, and NaNs bedevil model training and occasionally deployment-time behavior. By analogy, if we deploy an AGI whose ideal mathematical form is aligned, we may still be doomed, even if it’s plausibly our best option in expectation.
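To make those failure modes concrete, here’s a quick NumPy illustration (nothing EfficientNet-specific):

```python
import numpy as np

# Three float32 failure modes that an argument over the reals never sees.
with np.errstate(over="ignore", invalid="ignore"):
    big = np.float32(3e38)
    overflowed = big * np.float32(2)   # exceeds float32 max (~3.4e38)
    print(overflowed)                  # inf
    print(overflowed - overflowed)     # inf - inf gives nan

a, b = np.float32(1e8), np.float32(1.0)
print((a + b) - a)  # 0.0, not 1.0: b vanishes below float32 resolution
```

A proof over R can be exactly right and still say nothing about what the deployed float32 graph does on these inputs.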
Checkable meaning that I or someone I trust with this has to be able to check it! Maxwell’s proposal is simple enough that I can reason through the whole thing, even over float32 rather than R, but for more complex arguments I’d probably want it machine-checkable for at least the tricky numeric parts.