So in theory we could train models that violate natural abstractions by giving them access only to high-dimensional simulated environments? This even seems testable.