Maybe there’s some value in creating an algorithm which accurately models most people’s moral decisions… it could be used as the basis for a “sane” utility function by subsequently working out which parts of the algorithm are “utility” and which are “biases”.
If I wrote an algorithm that tried to maximize expected value, and computed value as a function of the number of people left alive, then in both trolley problems it would choose the option that saves the most people. That would indicate that the human answer to the second problem, refusing to push someone onto the tracks, is a bias.
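Concretely, something like this minimal sketch (the survivor counts, action names, and function structure are mine, just to make the point explicit; none of it comes from the paper):

```python
# Sketch of a pure expected-value maximizer for the two trolley problems.
# The value of an outcome is just the number of people left alive; nothing else counts.

def value(people_alive: int) -> int:
    """Utility of an outcome = number of survivors (the assumption under discussion)."""
    return people_alive

def choose(options: dict[str, int]) -> str:
    """Return the action whose outcome leaves the most people alive."""
    return max(options, key=lambda action: value(options[action]))

# Switch variant: five on the main track, one on the side track.
print(choose({"do nothing": 1, "pull the lever": 5}))   # -> pull the lever

# Footbridge variant: five on the track, one man on the bridge.
print(choose({"do nothing": 1, "push the man": 5}))     # -> push the man
```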
Yet the authors of the paper did not make that interpretation. They decided that getting a non-human answer meant the computer did not yet have morals.
So, how do you decide what to accurately model? That’s where you make the decision about what is moral.
I agree the authors of the paper are idiots (or seem to be—I only skimmed the paper). But the research they’re doing could still be useful, even if not for the reason they think.