I would loosely model my own aversion to trusting algorithms as follows: Both human and algorithmic forecasters will have blind spots, not all of them overlapping. (I.e. there will be cases “obvious” to each which the other gets wrong.) We’ve been dealing with human blind spots for the entire history of civilization, and we’re accustomed to them. Algorithmic blind spots, on the other hand, are terrifying: when an algorithm makes a decision that harms you, and the decision is—to any human—obviously stupid, the resulting situation is best described as ‘Kafkaesque’.
I suppose there’s another psychological factor at work here, too: When an algorithm makes an “obviously wrong” decision, we feel helpless. By contrast, when a human does it, there’s someone to be angry at. That doesn’t make us any less helpless, but it makes us FEEL less so. (This makes me think of http://lesswrong.com/lw/jad/attempted_telekinesis/ .)
But wait! If many of the algorithm’s mistakes are obvious to any human with some common sense, then a process of algorithm plus human sanity check will probably outperform even the algorithm alone. In which case, you yourself can volunteer for the sanity-check role, and this should make you even more eager to use the algorithm.
(Yes, I’m vaguely aware of some research which shows that “sanity check by a human” often makes things worse. But let’s just suppose.)
I do think an algorithm-supported-human approach will probably beat at least an unassisted human, and I think a lot of people would be more comfortable with it than with the algorithm alone. (As long as the final discretion belongs to a human, the worst fears are allayed.)
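To make that division of labor concrete, here is a minimal Python sketch of the algorithm-plus-human-sanity-check pipeline I have in mind. Every name in it (model_forecast, human_review, and so on) is a hypothetical placeholder, and the “obviously wrong” test is a toy stand-in for a person actually eyeballing the output:

    # Minimal sketch of the pipeline: the algorithm forecasts,
    # a human has final discretion. All names are hypothetical.

    def model_forecast(case: dict) -> float:
        """Stand-in for whatever statistical forecaster is being trusted."""
        return 0.95  # hard-coded score, purely for illustration

    def looks_obviously_wrong(case: dict, prediction: float) -> bool:
        """Toy proxy for human common sense spotting a blind spot."""
        return prediction > 0.9 and case.get("contradicting_evidence", False)

    def human_judgment(case: dict) -> float:
        """Placeholder for the reviewer's own estimate when they override."""
        return 0.5

    def human_review(case: dict, prediction: float) -> float:
        """Keep the algorithm's output unless it looks obviously stupid,
        in which case the human's judgment wins."""
        if looks_obviously_wrong(case, prediction):
            return human_judgment(case)
        return prediction

    case = {"contradicting_evidence": True}
    final = human_review(case, model_forecast(case))  # -> 0.5, human override

Note that everything interesting hides inside looks_obviously_wrong: if the research alluded to above is right, that step is exactly where the combined process can end up worse than the algorithm alone.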