You keep constructing scenarios whose intent, as far as I can tell, is to let you argue that in those scenarios any currently imaginable non-human system would be incapable of choosing a correct or defensible course of action. Implicitly, however, you must also be arguing that some human system in each of those scenarios would be capable of choosing a correct or defensible course of action. How?
And: Suppose you knew that someone was trying to understand the answer to this question, and create the field of “Artificial Ability to Choose Correct and Defensible Courses of Action The Way Humans Apparently Can”. What kinds of descriptions do you think they might give of the engineering problem at the center of their field of study, of their criteria for distinguishing between good and bad ways of thinking about the problem, and of their level of commitment to any given way in which they’ve been trying to think about the problem? Do those descriptions differ from Eliezer’s descriptions regarding “Friendly AI” or “CEV”?
You seem to be frustrated about some argument(s) and conclusions that you think should be obvious to other people. The above is an explanation of how conclusions that seem obvious to you might not seem obvious to me. Is this explanation compatible with your initial model of how aware I am of those arguments' obviousness?