Thoughts on when models will or won’t exploit edge cases? For example, if you made an electronic circuit using evolutionary algorithms in a high-fidelity simulation, I would expect it to take advantage of V = IR being wrong in edge cases (see the sketch below).
In other words, how much of the work do you expect to be in inducing models to play nice with abstraction?
ETA: abstractions are sometimes wrong in stable (or stabilizable) states, so you can’t always lean on chaos washing the error out.
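
Here is a toy sketch of the failure mode I have in mind (my own hypothetical setup, not anything from the post): an evolutionary search optimizes a resistor value against an idealized V = IR simulator, while the "real" component saturates at high current. The search drifts into exactly the regime where the abstraction is wrong, so simulated fitness stops transferring.

```python
import random

V_SUPPLY = 5.0   # volts
I_SAT = 0.05     # amps; the "real" part saturates here, the idealized model does not

def sim_current(resistance):
    # Idealized abstraction: I = V / R everywhere.
    return V_SUPPLY / resistance

def real_current(resistance):
    # "Ground truth": Ohm's law only holds below the saturation current.
    return min(V_SUPPLY / resistance, I_SAT)

def fitness(current):
    # Reward high current delivered to the load.
    return current

def evolve(generations=200, pop_size=50, sigma=5.0):
    population = [random.uniform(10.0, 1000.0) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=lambda r: fitness(sim_current(r)), reverse=True)
        parents = scored[: pop_size // 5]
        population = [max(1.0, random.choice(parents) + random.gauss(0.0, sigma))
                      for _ in range(pop_size)]
    return max(population, key=lambda r: fitness(sim_current(r)))

if __name__ == "__main__":
    best_r = evolve()
    print(f"best R = {best_r:.1f} ohm")
    print(f"simulated fitness: {fitness(sim_current(best_r)):.3f}")
    print(f"'real' fitness:    {fitness(real_current(best_r)):.3f}")
```

The search drives R toward the lower bound, where the simulator predicts a large current but the real component delivers only I_SAT, i.e. the optimizer ends up living in the edge case where the abstraction fails.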
When we have a good understanding of abstraction, it should also be straightforward to recognize when a distribution shift violates the abstraction. In particular, insofar as abstractions are basically deterministic constraints, we can see when the constraint is violated. And as long as we can detect it, it should be straightforward (though not necessarily easy) to handle it.
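
A minimal sketch of the "abstractions as deterministic constraints" point (my illustration, not code from the post): treat V = IR as a constraint and flag samples whose residual exceeds a tolerance, which signals that the abstraction no longer holds on the new distribution.

```python
def violates_ohms_law(v, i, r, rel_tol=0.05):
    # Constraint residual: how far V deviates from the predicted I * R,
    # relative to the prediction.
    predicted = i * r
    if predicted == 0.0:
        return abs(v) > rel_tol
    return abs(v - predicted) / abs(predicted) > rel_tol

def check_batch(samples, rel_tol=0.05):
    # samples: iterable of (V, I, R) measurements from the possibly-shifted distribution.
    return [s for s in samples if violates_ohms_law(*s, rel_tol=rel_tol)]

if __name__ == "__main__":
    in_distribution = [(5.0, 0.05, 100.0), (3.3, 0.033, 100.0)]
    shifted = [(5.0, 0.05, 100.0), (5.0, 0.012, 100.0)]  # second sample breaks V = IR
    print("in-distribution violations:", check_batch(in_distribution))
    print("shifted violations:        ", check_batch(shifted))
```

Detecting the violation is the easy half; what you do once the constraint breaks (fall back to a lower-level model, refuse to extrapolate, etc.) is the "straightforward but not necessarily easy" part.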