I don’t think these sorts of equations are good examples of what you are trying to say, since laws of physics and related equations are counterfactual and thus causal. That is, if I were to counterfactually change the length of the pipe in your equation, it would still predict the loss correctly. Invariance to change is precisely what makes these kinds of equations useful and powerful, and this invariance is causal. The fact that the equation is ‘ad hoc’ rather than deduced from a theory is irrelevant to whether the equation is causal or not. Causality has to do with counterfactual invariance (see also Hume’s counterfactual definition).
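The invariance point can be made concrete. The original pipe equation isn't shown here, so as a stand-in I'll assume the standard Darcy–Weisbach head-loss formula; the specific formula doesn't matter, only that changing an input counterfactually still yields a correct prediction:

```python
def head_loss(f, L, D, v, g=9.81):
    """Darcy-Weisbach head loss (m): h_f = f * (L/D) * v**2 / (2*g).

    f: friction factor, L: pipe length (m), D: diameter (m), v: flow speed (m/s).
    """
    return f * (L / D) * v**2 / (2 * g)

# Baseline pipe: 10 m long, 0.05 m diameter, 2 m/s flow, friction factor 0.02
base = head_loss(0.02, 10.0, 0.05, 2.0)

# "Counterfactual" pipe twice as long: the same equation still predicts
# the loss correctly -- it simply doubles, since h_f is linear in L.
doubled = head_loss(0.02, 20.0, 0.05, 2.0)
```

This is exactly the counterfactual invariance at issue: the equation keeps working when you intervene on its inputs, which is what makes it causal rather than merely a curve fit to one dataset.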
I think a better example would be something like the crazy “expert voting” algorithm that won the Netflix prize. Even in that case, though, given sufficient knowledge, a causal model would do better. Not because the winning model was causal, mind you, but because observing enough about a domain gives you causal knowledge of it as a side effect. In the Netflix prize case, which was about predicting movie ratings, ‘sufficient knowledge’ would entail detailed knowledge of the decision and preference processes of all potential users of the system. At that point, the model becomes so detailed that it inevitably encodes causal information.
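For readers unfamiliar with the idea, the “expert voting” scheme amounts to blending many predictors with weights fit on held-out data. This is only an illustrative sketch, not the actual prize-winning ensemble, and all the numbers are hypothetical:

```python
def blend(predictions, weights):
    """Weighted average of expert predictions for one (user, movie) pair.

    Each 'expert' is a separate predictive model; the weights would be
    fit on held-out data rather than chosen by hand.
    """
    assert len(predictions) == len(weights)
    return sum(p * w for p, w in zip(predictions, weights)) / sum(weights)

# Three hypothetical experts' predicted ratings for one user-movie pair
experts = [3.8, 4.2, 3.5]
weights = [0.5, 0.3, 0.2]
blend(experts, weights)  # a single blended rating, around 3.86
```

Note that nothing in this blend is causal: it is purely a statistical combination of correlational predictors, which is what makes it a cleaner example of non-causal prediction than a physical law.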