I am trying to make a general statement about models and contexts, and thinking about the consequences of applying the concept to AI.
Another example could be Newtonian versus relativistic physics. There is a trade-off between something like efficiency/simplicity/interpretability and accuracy/precision/complexity. Each model has contexts in which it is the more valid one to use. If you try to force both models to exist at once, you lose both sets of advantages, and you will cause and amplify errors if you interchange them arbitrarily.
So we don’t combine the two, but instead try to understand when and why we should choose to adopt one model over the other.
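To make the physics example concrete, here is a minimal sketch (in Python, purely illustrative; the specific speeds and the 1 kg mass are my own assumptions, not part of the original argument). It compares Newtonian and relativistic kinetic energy: at everyday speeds the simpler model is effectively exact, while near the speed of light its error explodes, which is the sense in which each model has its own context of validity.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def newtonian_ke(m, v):
    """Classical kinetic energy: simple, cheap, and accurate at low speeds."""
    return 0.5 * m * v**2

def relativistic_ke(m, v):
    """Relativistic kinetic energy: more complex, valid at any speed below c."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return (gamma - 1.0) * m * C**2

# Compare the two models for a 1 kg mass across a range of speeds.
for v in (300.0, 3.0e6, 3.0e7, 1.5e8):  # m/s
    n = newtonian_ke(1.0, v)
    r = relativistic_ke(1.0, v)
    rel_err = abs(n - r) / r
    print(f"v = {v:10.1e} m/s  relative error of Newtonian model: {rel_err:.2e}")
```

The point of the sketch is only to show that "which model is right" is a question about the regime you are operating in, not about the models in the abstract.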