-Lucca, Chrono Trigger
Eh. Would you say that “humans aren’t capable of evil. Evolution makes them that way”?
I might, if I were a god talking to other gods. And if I were a gun talking to other guns, I’d tell them to shut up about humans and take responsibility for their own bullets.
I feel like a strange loop is now formed when humans say things like: “God isn’t capable of evil. Our definition makes him that way.”
“Evolution isn’t capable of evil. Time made it that way.”
Zugzwang: your turn!
“DO NOT MESS WITH TIME”
“Time isn’t capable of evil; it’s not even an optimization process.”
Yes it is. Its optimizand is entropy.
Epistemic status: interesting idea I think I’ve heard somewhere. Don’t dare to ask me if I believe it myself or I will ask you to taboo words and you don’t want to do that.
But evolution isn’t sapient and… Or isn’t it?
[Edited to remove potentially mind-killing example.]
Who?
But humans are sapient and have rebelled against evolution, whereas machines aren’t and just do what humans tell them.
The problem is that this may change in the future, and a blog sponsored by the organization that’s trying to prevent exactly that is probably not the right place to post such a quote.
My point in posting it was that UFAI isn’t ‘evil’; it’s badly programmed. If an AI proves itself unfriendly and does something bad, the fault lies with the programmer.
That line always bugged me, even when I was a little kid. It seems obviously false (especially in the in-game context).
I don’t understand why this is a rationality quote at all. Am I missing something, or is it just because of the superficial similarity to some of EY’s quotes about apathetic uFAIs?
I’m not familiar with Chrono Trigger, but when I hear that sentiment in real life I take it to be a rebuttal to an argument against technology based on confusion between terminal and instrumental values. (Guns aren’t intrinsically evil (i.e. there’s no negative term in our utility function for how many guns exist in the world) even though they can be used to do evil, &c.)
In Chrono Trigger this line is about a robot.
Of course, there are other robots in the game about whom this is a dubious claim.
Well, that gets right to the heart of the Friendliness problem, now doesn’t it? Mother Brain is the machine that can program, and she reprogrammed all the machines that ‘do evil’. It is likely, then, that the first machine that Mother Brain reprogrammed was herself. If a machine is given the ability to reprogram itself, and uses that ability to make itself decide to do things that are ‘evil’, is the machine itself evil? Or does the fault lie with the programmer, for failing to take into account the possibility that the machine might change its utility function? It’s easy to blame Mother Brain; she’s a major antagonist in her timeline. It’s less easy to think back to some nameless programmer behind the scenes, considering the problem of coding an intelligent machine, and deciding how much freedom to give it in making its own decisions.
In my view, Lucca is taking personal responsibility with that line. ‘Machines aren’t capable of evil’ (they can’t choose to do anything outside their programming); ‘Humans make them that way’ (so the programmer has the responsibility of ensuring their actions are moral). There are other interpretations, but I’d be wary of any view that shifts moral responsibility to the machine. If you, as a programmer, give up any of your moral responsibility to your program, then you’re basically trying to absolve yourself of the consequences if anything goes wrong. “I gave my creation the capacity to choose. Is it my fault if it chose evil?” Yes, yes it is.