Eliezer2000 is starting to think inside the black box. His reasons for pursuing this course of action—those don’t matter at all.
When we last left Eliezer2000, he was just beginning to investigate the question of how to inscribe a morality into an AI. His reasons for doing this don’t matter at all, except insofar as they happen to historically demonstrate the importance of perfectionism.
That’s two instances of Eliezer placing no moral value “at all” on his own motives in his pursuit of AI morality. Not necessarily a contradiction, but less elegant than it might be.
I don’t think he’s saying that motives are morally irrelevant—I think he’s saying that they are irrelevant to the point he is trying to make with that blog post.