I know that Word of Eliezer is that disciples won’t find it productive to read philosophy, but what you are talking about here has been discussed by analytic philosophers and computer scientists as “the frame problem” since the eighties, and it might be worth a read for you.
The issue can be talked about in terms of the frame problem, but I’m not sure that’s useful. In the classical frame problem, we have a much clearer idea of what we want; the problem is specifying enough of it that the AI does too (i.e. so that the token “loaded” corresponds to the gun actually being loaded). This is quite closely related to symbol grounding, in a way.
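To make the contrast concrete, here is a minimal sketch of the classical frame-problem setup, using the standard loaded-gun example with a STRIPS-style persistence rule. The fluent and action names are illustrative assumptions, not something from the original post:

```python
# Minimal sketch of the classical frame problem (illustrative only).
# Each action lists only the fluents it changes (its effect axioms).
# The frame problem is what licenses assuming that everything *not*
# listed stays the same -- here handled by a STRIPS-style persistence
# rule in apply().

ACTIONS = {
    # action name -> fluents it sets
    "load":  {"loaded": True},
    "shoot": {"loaded": False, "alive": False},  # effects only apply if loaded
    "wait":  {},                                 # changes nothing
}

def apply(state, action):
    """Return the successor state: stated effects override,
    every unmentioned fluent persists unchanged."""
    effects = dict(ACTIONS[action])
    if action == "shoot" and not state["loaded"]:
        effects = {}                 # shooting an unloaded gun does nothing
    new_state = dict(state)          # persistence: copy everything...
    new_state.update(effects)        # ...then overwrite the stated effects
    return new_state

state = {"loaded": False, "alive": True}
for act in ["load", "wait", "shoot"]:
    state = apply(state, act)
    print(act, "->", state)

# Note the gap the comment points at: nothing in this code grounds the
# token "loaded" in an actual gun being loaded -- that correspondence
# has to be specified (or learned) separately.
```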
When dealing with moral problems, we have the further problem that we haven’t properly defined the terms to ourselves. Across the span of possible futures, the term “loaded gun” is likely to be much more sharply defined than “living human being”. And if it isn’t, well, then we have even more problems: all our terms are becoming slippery, even the ones with no moral connotations.
But in any case, saying the problem is akin to the frame problem… still doesn’t solve it, alas!
You mean the frame problem that I talked about here? http://lesswrong.com/lw/gyt/thoughts_on_the_frame_problem_and_moral_symbol/