Depends on your audience, but generally I see these two sources of uncertainty split out in papers: report the modeled confidence (the interval the fitted model produces from the data), AND be clear about the likelihood of model error.
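To make the split concrete, here is a minimal sketch of one way to report the two separately; the OLS setting, the simulated data, and all variable names are hypothetical illustrations, not a prescription. The model-based confidence interval captures the first source, and a cross-validated error estimate gives a rough, separate gauge of how wrong the model itself might be out of sample.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.model_selection import KFold

# Hypothetical data for illustration only.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 2.0 * x + rng.normal(0, 1.5, 100)
X = sm.add_constant(x)

# 1. Modeled confidence: the interval the fitted model reports
#    for a prediction at a new point (here x = 5).
fit = sm.OLS(y, X).fit()
pred = fit.get_prediction([1.0, 5.0])
print("model-based 95% CI:", pred.conf_int(alpha=0.05))

# 2. Model error: cross-validated RMSE, reported separately, as a
#    gauge of how far predictions tend to fall from held-out reality.
rmses = []
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    f = sm.OLS(y[train], X[train]).fit()
    rmses.append(np.sqrt(np.mean((y[test] - f.predict(X[test])) ** 2)))
print("cross-validated RMSE:", np.mean(rmses))
```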
It’s good to know that this is a widespread practice (do you have handy examples of how others approach this issue?)
However, to clarify: my question is not whether those sources should be distinguished, but rather which confidence interval I should be reporting, given that we are making the distinction between model prediction and model error.