This doesn’t make any sense, but people try it anyway, especially when they’re talking to high-status people and/or experts. “Oh, okay, I’ll try hard to believe what the expert said, so I look like I know what I’m talking about.”
It seems to me that sometimes you just need to ignore your model’s output and go with an expert’s opinion. For example, the expected value of trying to integrate the expert’s model might be low (even zero, if it’s just impossible for you to get it!).
Still, I agree that you shouldn’t throw away your model. Looking at your diagrams, it seems like we could take a leaf from Pearl’s Causality (Ch. 3). Pearl is talking about modelling interventions in causal graphs. That does seem kind of similar to modelling an ‘expert override’ in belief networks, but the analogy doesn’t need to be exact to steal the idea. We can either just snip the arrows from our model to the output probability and set the output to the expert’s belief, or (perhaps more realistically) add an additional node, ‘what I view to be expert consensus’, that modulates how our credence depends on our model.
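To make the ‘snip the arrows’ option concrete, here’s a toy sketch in Python. Everything in it (the stand-in model, the names, the evidence dict) is my own illustration, not anything from your post:

```python
# Sketch of the first option: a Pearl-style 'intervention' on the output node.
# The toy model and all names here are illustrative only.

def model_credence(evidence: dict) -> float:
    """Stand-in for whatever probability my own model outputs given my evidence."""
    return 0.3 if evidence.get("weak_signal") else 0.1

def expert_override(evidence: dict, expert_credence: float) -> float:
    """Snip the arrows from my model into the output probability and just set
    the output to the expert's stated credence (analogous to a do() operation)."""
    _ = model_credence(evidence)  # the model still runs; its output is simply ignored here
    return expert_credence

print(expert_override({"weak_signal": True}, expert_credence=0.8))  # -> 0.8
```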
Your diagrams even have white circles before the credence that could represent our ‘seeming’ (i.e. the probability we get from the model alone), which can then be modulated by a node representing expert consensus and how confidently we think we can diverge from it.
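And the softer option, where the ‘seeming’ node feeds into a final credence modulated by the expert-consensus node, might look something like this. The linear blend and the parameter names are just my own illustrative choices:

```python
# Sketch of the second option: the 'seeming' (my model's raw output) is blended
# with what I take the expert consensus to be, weighted by how confidently I
# think I can diverge from it. A linear blend is one simple choice, not the only one.

def final_credence(seeming: float, expert_consensus: float,
                   confidence_to_diverge: float) -> float:
    """confidence_to_diverge in [0, 1]: 1.0 = go entirely with my own seeming,
    0.0 = defer entirely to the perceived expert consensus."""
    return confidence_to_diverge * seeming + (1 - confidence_to_diverge) * expert_consensus

# My model says 0.3, I read the expert consensus as 0.8, and I give my own
# model only a quarter of the weight:
print(final_credence(0.3, 0.8, 0.25))  # 0.25*0.3 + 0.75*0.8 = 0.675
```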