Minds are not chronologically commutative with respect to input data. Reading libertarian philosophy followed by Marxist philosophy will give you a different connectome than the reverse order. As a result, you will hold distinct values in each scenario and act accordingly. Put another way, human values are extremely dependent on initial conditions (your early social and educational history). Childhood brainwashing can give the resulting adult essentially arbitrary values (as evinced by such quirks as suicide bombers and voluntary eunuchs). However, in producing such a malleable organism, evolution found a very cute trick that allowed for seemingly impossible computation (the development of mathematics, science, etc.).
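To make the non-commutativity point concrete, here is a toy sketch (the learning rate and targets are arbitrary choices of mine, and this is in no way a model of an actual brain): even the simplest sequential learner ends up in a different state depending on the order in which it receives its inputs.

```python
# Toy illustration: sequential updates toward a target do not commute.
# (Arbitrary learning rate and targets; not a model of any real mind.)

def update(w, target, lr=0.5):
    """One gradient step on the loss (w - target)^2 / 2."""
    return w + lr * (target - w)

def train(w, targets):
    for t in targets:
        w = update(w, t)
    return w

w0 = 0.0
print(train(w0, [1.0, -1.0]))  # "libertarian, then Marxist": -0.25
print(train(w0, [-1.0, 1.0]))  # "Marxist, then libertarian":  0.25
```

The two orderings start from the same initial state and see the same data, yet converge to different final weights, which is all the claim above requires.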
I assume that the definition of GAI implicitly requires that the AI can do mathematics and science as well as or better than humans, so as to achieve goals that require a physical restructuring of reality. Since the only known example of a computational process capable of generating these things (humans) is so malleable in its values, what basis (mathematical or otherwise) does the SIAI have for assuming that Friendliness is achievable? Keep in mind that a GAI should be able to think and comprehend everything humans can think and have thought (including the architectural problems in Friendliness), or at least something functionally isomorphic to it.