Eliezer, “changes in my programming that seem to result in improvements” are sufficiently arbitrary that you may still have to face the halting problem. That is, if you are programming an intelligent being, it is going to be complicated enough that you will never prove there are no bugs in your original programming, including bugs that show no effect until it has improved itself 1,000,000 times, and by then it will be too late.
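To make the worry concrete, here is a toy sketch (all names are illustrative, not anyone’s actual method): the best a general-purpose checker can do is run a program under a step budget. It can confirm that a program terminates within the budget, but it can never distinguish “loops forever” from “the bug just hasn’t manifested yet,” which is exactly the situation with a flaw that only shows up after many self-improvement cycles.

```python
def run_with_budget(program, budget):
    """Advance a generator-based 'program' at most `budget` steps.

    Returns True if the program finished within the budget, or None if
    the budget ran out -- in which case we cannot tell whether it would
    ever finish. No finite budget turns None into a definite answer.
    """
    gen = program()
    for _ in range(budget):
        try:
            next(gen)
        except StopIteration:
            return True  # observed termination: a positive verdict is possible
    return None  # undecided: non-termination (or a late bug) is never provable this way


def terminates():
    # A well-behaved program: finishes after 10 steps.
    for _ in range(10):
        yield


def loops_forever():
    # A buggy program: never finishes, but looks identical to a slow one
    # from the checker's point of view at every finite budget.
    while True:
        yield
```

Running `run_with_budget(terminates, 100)` yields `True`, while `run_with_budget(loops_forever, 100)` yields `None`, and raising the budget never changes that `None` into a verdict.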
Apart from this, no intelligent entity can predict its own actions, i.e. it will always have a feeling of “free will.” This is necessary because whenever it faces a choice between A and B, it will always say, “I could do A, if I thought it was better,” and “I could also do B, if I thought it was better.” So its own actions are surely unpredictable to it; it cannot predict the choice until it actually makes the choice, just like us. But this implies that “insight into intelligence” may be impossible, or at least that full insight into one’s own intelligence is, and that is enough to imply that your whole project may be impossible, or at least that it may go very slowly, so Robin will turn out to be right.