Probably because humans who don’t know much about algorithms basically have no way to observe or verify the procedure. The result of an algorithm has all the force of an appeal to authority, and we’re far more comfortable granting authority to humans.
I think people have also had plenty of experience with machines that malfunction, and object on those grounds. We can tell when a human has gone off the rails because his arguments turn into gibberish, but that’s harder to detect with computers. If an algorithm outputs gibberish, that’s one thing; the troubling cases are when it produces a seemingly reasonable number that turns out to be completely wrong.
It’s a question of whether to trust a transparent process with a higher risk of error or a black box with a lower but still non-negligible risk of error.
I’m not sure that explains why they judge the algorithm’s mistakes more harshly even after seeing the algorithm perform better. If you hadn’t seen the algorithm perform and didn’t know it had been rigorously tested, you could justify being skeptical about how it works, but seeing its performance should answer that. Besides, a human’s “expert judgment” on a subject you know little about is just as much of a black box.
If people see you as an authority and you make a mistake, they can accept that no one is perfect and mistakes happen. If they doubt the legitimacy of your authority, any mistakes will be taken as evidence of hubris and incompetence.
I think part of it is the general population just not being used to algorithms on a conceptual level. One can understand the methods used and so accept the algorithm, or one can get used to such algorithms over a period of time and come to accept them.
Besides, a human’s “expert judgment” on a subject you know little about is just as much of a black box.
And such experts are routinely denounced by people who know little about the subject in question. I leave examples as an exercise for the reader.
True, but that seems inconsistent with taking human experts but not algorithms as authorities. Maybe these tend to be different people, or they’re just inconsistent about judging human experts.
It’s worth thinking about what makes one an expert, and what convinces others of one’s expertise. Someone has to agree that you’re an expert before they take you as an authority. There’s a social dynamic at work here.