I’m not sure that explains why they judge the algorithm’s mistakes more harshly even after seeing the algorithm perform better. If you hadn’t seen the algorithm perform and didn’t know it had been rigorously tested, you could justify being skeptical about how it works, but seeing its performance should answer that. Besides, a human’s “expert judgment” on a subject you know little about is just as much of a black box.
If people see you as an authority and you make a mistake, they can accept that no one is perfect and mistakes happen. If they doubt the legitimacy of your authority, any mistakes will be taken as evidence of hubris and incompetence.
I think part of it is that the general population just isn't used to algorithms on a conceptual level. One can either understand the methods an algorithm uses and accept it on that basis, or get used to such algorithms over time and come to accept them that way.
And such experts are routinely denounced by people who know little about the subject in question. I leave examples as an exercise for the reader.
True, but that seems inconsistent with taking human experts but not algorithms as authorities. Maybe the people who denounce experts and the people who defer to them over algorithms tend to be different people, or maybe people are just inconsistent about how they judge human experts.
It’s worth thinking about what makes one an expert, and what convinces others of one’s expertise. Someone has to agree that you’re an expert before they take you as an authority. There’s a social dynamic at work here.