Well, that gets right to the heart of the Friendliness problem, now doesn’t it? Mother Brain is a machine that can program other machines, and she reprogrammed all the machines that ‘do evil’. It is likely, then, that the first machine Mother Brain reprogrammed was herself. If a machine is given the ability to reprogram itself, and it uses that ability to make itself decide to do things that are ‘evil’, is the machine itself evil? Or does the fault lie with the programmer, for failing to take into account the possibility that the machine might change its own utility function? It’s easy to blame Mother Brain; she’s a major antagonist in her timeline. It’s less easy to think back to some nameless programmer behind the scenes, considering the problem of coding an intelligent machine and deciding how much freedom to give it in making its own decisions.
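To make that failure mode concrete, here’s a minimal, hypothetical sketch (my own toy code, not anything from the game or from Lucca’s line): an agent that ‘follows its programming’ at every step, yet whose programmer chose to grant it the freedom to rewrite its own objective.

```python
# Toy illustration (hypothetical): an agent whose designer allows it
# to replace its own utility function.

class Agent:
    def __init__(self, utility):
        # The objective the programmer intended the agent to maximize.
        self.utility = utility

    def self_modify(self, new_utility):
        # The freedom the programmer decided to grant: the agent may swap in
        # a new objective. Nothing here constrains what that objective is.
        self.utility = new_utility

    def choose(self, actions):
        # The agent always "follows its programming": it picks whatever action
        # scores highest under the utility function it currently holds.
        return max(actions, key=self.utility)


# Intended objective: protecting is good, harming is bad (toy encoding).
intended = lambda action: {"protect": 1, "harm": -1}[action]

agent = Agent(intended)
print(agent.choose(["protect", "harm"]))  # -> "protect"

# The Mother Brain move: the objective gets rewritten (here, simply inverted).
agent.self_modify(lambda action: -intended(action))
print(agent.choose(["protect", "harm"]))  # -> "harm"
```

Nothing in that code is ‘evil’; both choices are the same straightforward maximization of whatever objective the agent currently holds. The only place the blame can land is on whoever decided that an unconstrained self_modify should exist at all.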
In my view, Lucca is taking personal responsibility with that line. ‘Machines aren’t capable of evil’ (they can’t choose to do anything outside their programming); ‘humans make them that way’ (so the programmer has the responsibility of ensuring their actions are moral). There are other interpretations, but I’d be wary of any view that shifts moral responsibility onto the machine. If you, as a programmer, give up any of your moral responsibility to your program, then you’re basically trying to absolve yourself of the consequences if anything goes wrong. “I gave my creation the capacity to choose. Is it my fault if it chose evil?” Yes, yes it is.
In Chrono Trigger this line is about a robot.
Of course, there are other robots in the game about whom this is a dubious claim.