Funny! I’ve now been doing ML-adjacent work for long enough that I have internalized the idea that data is part of the model, not just the calculations. The separation of reality into “simple physics” plus “lots of storage for the starting/current quantum configuration” just doesn’t click for me. The data is huge, and that’s all that matters in terms of model size/complexity.
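To make the point concrete, here is a toy sketch (my own illustration, with made-up numbers, not anything from the physics side): under a description-length view, specifying “reality as a model” costs the bits for the program *plus* the bits for the initial conditions, and the data term swamps the program term.

```python
def description_length_bits(program_bits: int, data_bits: int) -> int:
    """Total bits needed to specify the model: code plus data."""
    return program_bits + data_bits

# Hypothetical magnitudes: "simple physics" as a tiny program,
# the starting quantum configuration as an astronomically large
# data blob. The sum is dominated entirely by the data term.
physics_program = 10_000       # a few KB of "laws" (assumed)
initial_conditions = 10**90    # universe's state (made up)

total = description_length_bits(physics_program, initial_conditions)
print(f"data fraction: {initial_conditions / total:.30f}")
```

On this accounting, calling the physics part “simple” barely changes the total; model size is effectively the data size.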
This goes in the same direction and may be more to your liking: How Many Bits Of Optimization Can One Bit Of Observation Unlock?
Maybe you can see it as a factoring of a model into sub-models?