if there’s a domain where the model gives two incompatible predictions, then as soon as that’s noticed it has to be rectified in some way.
This feels excessive to me, but maybe you didn’t intend it as strongly as I’m interpreting it.
I do think it’s the case that if you have incompatible predictions, something is wrong. But I think often the best you can do to correct it is to say something like...
“Okay, this part of my model would predict this thing, and that part would predict that thing, and I don’t really know how to reconcile them. I don’t know which, if either, is correct, and until I understand this better I’m going to proceed with caution in this area, and not trust either of those parts of my model too much.”
Does that seem like it would satisfy the intent of what you wrote?
Yes, it does. The probabilistic part applies to different parts of my model as well as to outputs of a single model part.