most verification techniques assume unlimited fast access to the specification
Making the specification faster than the model doesn’t really help you. In this case the specification is somewhat more expensive than the model itself, but as far as I can tell that should just make verification somewhat more expensive.
That’s fair. I suspect I was assuming a really fast model. Are you imagining that both the model and the specification are quite fast, or that both are relatively slow?
I think my intuition is something like: the number of queries to the model / specification required to obtain worst-case guarantees is orders of magnitude more than the number of queries needed to train the model, and this ratio gets worse the more complex your environment is. (Though if the training distribution is large enough such that it “covers everything the environment can do”, the intuition becomes weaker, mostly because this makes the cost of training much higher.)
the number of queries to the model / specification required to obtain worst-case guarantees is orders of magnitude more than the number of queries needed to train the model, and this ratio gets worse the more complex your environment is
Not clear to me whether this is true in general—if the property you are specifying is in some sense “easy” to satisfy (e.g. it holds of the random model, holds for some model near any given model), and the behavior you are training is “hard” (e.g. requires almost all of the model’s capacity) then it seems possible that verification won’t add too much.
Yeah, I agree it is not always true and so isn’t a clear obstruction.
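The query-count intuition above can be made concrete with a toy sketch. This is a hypothetical illustration, not anything from the discussion: it assumes training touches a fixed sample of inputs, while naive worst-case verification must query the model (and the specification) on every point of a d-bit input space, so the verification-to-training ratio blows up as the environment grows.

```python
# Toy sketch (hypothetical numbers): compare queries needed to *train* a model
# on sampled inputs against queries needed to *exhaustively verify* a property
# over a d-bit input space.

def training_queries(num_samples: int, epochs: int) -> int:
    # Training touches each sampled input once per epoch.
    return num_samples * epochs

def exhaustive_verification_queries(input_bits: int) -> int:
    # Naive worst-case checking queries the model and the specification
    # on all 2**d possible inputs.
    return 2 ** input_bits

if __name__ == "__main__":
    train = training_queries(num_samples=10_000, epochs=10)
    for d in (10, 20, 30):
        verify = exhaustive_verification_queries(d)
        print(f"d={d:2d}  train={train:,}  verify={verify:,}  "
              f"ratio={verify / train:.2f}")
```

The sketch also shows the caveat raised above: if the training distribution is large enough to “cover everything the environment can do”, `num_samples` approaches `2 ** input_bits` and the ratio flattens, but only because training itself has become as expensive as verification.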