Suppose we replace Wei Dai’s simple consequentialist robot with a robot that has similar behavior, but that also responds to the question, “What system do you want to answer the question of what you want for you?” with the answer, “I want to decide for myself,” and responds to the question, “What do you want to do?” with the answer, “I want to make babies happy. Oh, and help grandmother out of the burning building. Oh, and without killing her. Oh, and to preserve complex novelty. Oh, and boredom. Oh, and there should still be people in the world who are trying to improve it. Oh, and... dammit, this is complicated. Okay, never mind, I want you to ask the version of myself who I presently think is smart enough to answer this question and who knows what the right thing to do is even better than I do.”
It can answer those two questions, but if you ask it to clarify the last response, it just blows up.