It’s not all egotism either. When the choice was between betting on the algorithm and betting on another person, participants were still more likely to avoid the algorithm if they’d seen how it performed and therefore, inevitably, had seen it err.
My emphasis.
The authors also have a forthcoming paper on this issue:
If showing results doesn’t help avoid algorithm aversion, allowing human input might. In a forthcoming paper, the same researchers found that people are significantly more willing to trust and use algorithms if they’re allowed to tweak the output a little bit. If, say, the algorithm predicted a student would perform in the top 10% of their MBA class, participants would have the chance to revise that prediction up or down by a few points. This made them more likely to bet on the algorithm, and less likely to lose confidence after seeing how it performed.
Of course, in many cases adding human input made the final forecast worse. We pride ourselves on our ability to learn, but the one thing we just can’t seem to grasp is that it’s typically best simply to trust that the algorithm knows better.
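The mechanism the researchers describe is simple enough to sketch in code. Here is a minimal, hypothetical illustration of the bounded-adjustment idea: the person may nudge the algorithm’s prediction, but only within a small band. The function name, the ±5-point band, and the percentile scale are my own illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of the "constrained tweak" mechanism described above.
# The +/-5-point band and the 0-100 percentile scale are illustrative
# assumptions, not figures from the actual study.

def adjusted_forecast(model_prediction: float,
                      human_tweak: float,
                      max_tweak: float = 5.0) -> float:
    """Let a person nudge an algorithmic percentile prediction,
    but only within a small band around the model's output."""
    # Clamp the requested adjustment to the allowed band.
    tweak = max(-max_tweak, min(max_tweak, human_tweak))
    # Keep the result a valid percentile.
    return max(0.0, min(100.0, model_prediction + tweak))


# Example: the model puts a student at the 90th percentile; the participant
# wants to revise up by 8 points, but only 5 are allowed.
print(adjusted_forecast(90.0, 8.0))  # -> 95.0
```

The design choice worth noticing is the clamp: participants get the sense of ownership that makes them trust the forecast, while the band limits how much damage their tweak can do to its accuracy.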
Presumably another bias is at play here as well: the IKEA effect, which says that people place a higher value on products they’ve partially created themselves.