I’m not sure that listening to one’s intuitions is enough to produce accurate model changes. Perhaps it is not even rational to hold a single model in your head, since your information is incomplete. Instead, one can consciously examine the situation from multiple perspectives and apply whichever model’s response scores best on your chosen metric (simpler, more consistent, whatever it may be). Alternatively, you could legitimately assume that all the models you hold have some merit and produce a response that balances their outcomes. For example, if your model of the medical profession is wrong and the person dies because of your advice, that is far worse than calling an ambulance unnecessarily (and letting the medical profession sort out the balance of resources). This would lead a rational person to simultaneously entertain many contradictory perspectives and act as if they were all potentially true. Does anyone know of any theory in this area? The modelling of models (and efficiently resolving multiple models) would be very useful in AI.
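For what it’s worth, the “balance their outcomes” idea can be written down as ordinary expected-loss minimisation over a set of weighted models (essentially Bayesian decision theory). Here is a minimal sketch of the ambulance example; the probabilities and loss numbers are my own illustrative assumptions, not anything established above:

```python
# Sketch: act under several contradictory models at once by weighting each
# model's predicted loss for an action and choosing the action with the
# lowest expected loss. All numbers are illustrative assumptions.

# Credence assigned to each model of the situation.
model_probs = {
    "symptoms_are_serious": 0.1,
    "symptoms_are_benign": 0.9,
}

# Loss of each action under each model (larger = worse outcome).
losses = {
    ("call_ambulance", "symptoms_are_serious"): 1,    # minor cost: resources used
    ("call_ambulance", "symptoms_are_benign"): 1,     # same minor cost
    ("do_nothing", "symptoms_are_serious"): 1000,     # catastrophic: they may die
    ("do_nothing", "symptoms_are_benign"): 0,         # no cost at all
}

def expected_loss(action):
    """Average the loss of `action` over all models, weighted by credence."""
    return sum(p * losses[(action, model)] for model, p in model_probs.items())

actions = ["call_ambulance", "do_nothing"]
best = min(actions, key=expected_loss)
for a in actions:
    print(f"{a}: expected loss {expected_loss(a):.1f}")
print("chosen action:", best)
```

Even with only 10% credence in the “serious” model, calling the ambulance wins, which is the sense in which you can act as if contradictory models were all potentially true without fully believing any one of them.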