A lot of explainability research has focused on instilling more trust in AI systems without asking how much trust would actually be appropriate, even though there is research showing that hiding model bias, rather than truthfully revealing it, can increase trust in an AI system.
So it might be better to ask “Does hiding model bias lead to better team performance?” (And in which environments, over what time horizon, and with what kind of participants?)
I am more sceptical that the differences between inductive and deductive explanations will hold up across different contexts.
“Overall, access to the AI strongly improved the subjects’ accuracy from below 50% to around 70%, which was further boosted to a value slightly below the AI’s accuracy of 75% when users also saw explanations.”
But this seems to be a function of the AI system’s actual performance, the human’s expectations of that performance, and the human’s own baseline performance. So I’d expect it to vary a lot between tasks and across different systems.
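To make that intuition concrete, here is a minimal toy model (my own sketch, not from the paper): treat team accuracy as a mixture of the human’s baseline accuracy and the AI’s accuracy, weighted by how often the human defers to the AI. The deferral probability, the specific accuracy numbers, and the independence assumption are all illustrative assumptions rather than anything reported in the study.

```python
# Toy model (illustrative assumption, not from the cited study): the human
# either defers to the AI or answers alone; the probability of deferring
# stands in for trust / expected AI performance.

def team_accuracy(human_acc: float, ai_acc: float, p_defer: float) -> float:
    """Expected team accuracy if the human defers to the AI with probability
    p_defer and otherwise relies on their own judgement (errors assumed
    independent of which cases the human defers on)."""
    return p_defer * ai_acc + (1 - p_defer) * human_acc

# Numbers loosely matching the quoted result: human ~50%, AI ~75%.
for p in (0.0, 0.5, 0.8, 1.0):
    print(f"p_defer={p:.1f} -> team accuracy ~ {team_accuracy(0.50, 0.75, p):.2f}")
```

With a 50% human baseline and a 75% AI, deferring roughly 80% of the time already gives the ~70% quoted above, and under this model the team can only exceed the AI’s 75% if the human overrides it selectively on cases where it is wrong, which is exactly why the human’s expectations and baseline performance should matter so much.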
I wonder how people do without an explanation.