Robot ethics [link]
The Economist has a new article on ethical dilemmas faced by machine designers.
Evidently:
1. In the event of an immoral decision by a machine, neural networks make it too hard to know who is at fault—the programmer, the operator, the manufacturer, or the designer. Thus, neural networks might be a bad idea.
2. Robots’ ethical systems ought to resonate with “most people.”
3. Proper robot consciences are more likely to arise given greater collaboration among engineers, ethicists, policymakers, and lawyers. Key quotation:
Both ethicists and engineers stand to benefit from working together: ethicists may gain a greater understanding of their field by trying to teach ethics to machines, and engineers need to reassure society that they are not taking any ethical short-cuts.
The second clause of the above sentence is quite similar to something Yudkowsky wrote, perhaps more than once, about the value of approaching ethics from an AI standpoint. I do not recall where he wrote it, nor did my search turn up the appropriate post.
Hm. I would prefer that the quote be phrased differently. It seems awkward if the engineers are doing something to “reassure society”; they should be doing it to get things right.
As for the first point, I think it is nonsense. Our current legal system operates without “detailed logs”, and humans still manage to attribute blame. A need for such logs does not rule out the use of neural networks.