This brings to mind the infamous case of Google censoring search results in China at the government’s behest. That was a deliberate human action, but such cases will increasingly be “algorithmic byproducts” with zero human intervention. Unlike a human, an algorithm can’t be questioned, intimidated by the media, or taken to a court of law.
Legally and professionally, I suppose the product team could be held responsible, but I definitely think there needs to be a push for more scrutinizable computation. (There have been discussions along these lines in computer security. Open source is sometimes cited as a solution, but it hasn’t necessarily helped, as Heartbleed showed.)