For example, although our results show CoinRun models failed to learn the general capability of pursuing the coin, the more natural interpretation is that the model has learned a robust ability to avoid obstacles and navigate the levels,[7] but the objective it learned is something like “get to the end of the level,” instead of “go to the coin.”
It seems to me that every robustness failure can be interpreted as an objective robustness failure (as aptly titled in your other post). Do you have examples of a capability robustness failure that is not an objective robustness failure?