I believe that the only sort of seed AI anyone should ever launch has the “transparency” property, namely, that it is very clear and obvious to its creators what the seed AI’s optimization target is. (Eliezer agrees with me about that.) If you do not believe that, then it might prove impossible to persuade you of what I said before, namely, that it is foolish to create a seed AI with the intention that it will figure out morality after it is launched.
Humans emphatically do not have the “transparency” property, and consequently (for some humans) it makes sense to speak of a human’s morality changing or of a human’s figuring out what morality will command his loyalty.