In practice do you expect a system’s values to change with its intelligence?
Perhaps in resolving internal inconsistencies in the value system.
An increased intelligence might end up min-maxing. In other words, if the utility function contains two terms in some sort of weighted balance, the agent might find that sacrificing one term entirely to boost the other still produces much higher total utility under that weighting. This would not strictly be a change in values, but it could lead to results that certainly look like one.
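To make that concrete, here is a toy numerical sketch (my own illustration, not from this thread or any particular proposal; the function names and numbers are made up). The utility is a fixed weighted sum of a bounded term and an unbounded term; a weak optimizer that can only search a small region respects both terms, while a stronger optimizer searching a larger space sacrifices the bounded term because the weighted gain on the other dominates.

```python
# Toy sketch: a fixed utility with two weighted terms. Nothing about the
# utility function changes -- only the optimizer's reach does.
import numpy as np

def utility(x, w1=0.9, w2=0.1):
    term1 = np.exp(-(x - 1.0) ** 2)  # bounded: rewards staying near x = 1
    term2 = x                        # unbounded: rewards pushing x ever higher
    return w1 * term1 + w2 * term2

weak_search = np.linspace(0.0, 3.0, 301)          # limited capability
strong_search = np.linspace(0.0, 1000.0, 100001)  # much larger reachable space

for name, xs in [("weak", weak_search), ("strong", strong_search)]:
    best = xs[np.argmax(utility(xs))]
    print(f"{name}: x* = {best:7.2f}, term1 = {np.exp(-(best - 1.0) ** 2):.3f}, "
          f"utility = {utility(best):.2f}")

# weak:   x* ~ 1.06, term1 ~ 1.0 -> both terms roughly respected
# strong: x* = 1000, term1 ~ 0.0 -> term1 sacrificed, total utility far higher
# The behavior looks like a value shift even though the values never changed.
```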
In a world with multiple superintelligent agents that have read access to each other's code, I expect that agents would 'change their own goals' for the social signalling/bargaining reasons that Bostrom mentions, though it's unclear whether this would look more like spawning a new successor system with different values and architecture.
I expect a system to face a trade-off between self-improvement and goal stability (a toy sketch of that trade-off follows the link below).
http://johncarlosbaez.wordpress.com/2013/12/26/logic-probability-and-reflection/
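Here is a minimal toy model of that trade-off (my own sketch, not taken from the linked post; all numbers are made up). Assume each round of self-modification adds capability but carries a fixed chance of corrupting the goal, and that a corrupted goal delivers zero value as judged by the original goal. Expected value under the original goal then peaks at a finite number of rounds.

```python
# Toy model: capability grows linearly with self-improvement rounds, while the
# probability that the goal is still intact decays geometrically.
DRIFT_PROB = 0.1       # per-round chance that self-modification corrupts the goal
CAPABILITY_GAIN = 1.0  # capability added per round

def expected_original_value(rounds: int) -> float:
    capability = 1.0 + CAPABILITY_GAIN * rounds
    p_goal_intact = (1.0 - DRIFT_PROB) ** rounds
    # A corrupted goal is assumed to be worth nothing under the original goal.
    return p_goal_intact * capability

for n in range(15):
    print(f"{n:2d} rounds -> expected value under the original goal: "
          f"{expected_original_value(n):.2f}")

# With these made-up numbers the expectation climbs until roughly 8-9 rounds and
# then declines: past that point, further self-improvement costs more in expected
# goal drift than the extra capability is worth to the agent's current values.
```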
The Chinese, way back during their great days as philosophers, discovered that it is we humans who project values onto the world at large and its objects, that it is we who give meaning to something that is meaningless in itself [Kant's thing-in-itself], so that a system's values hold only as long as it delivers. Luckily humans move on [boredom helps], so values should never be enshrined: otherwise we may go the way of the Neanderthals. So does a system change with its intelligence? The problem here is that AI's potential intelligence is a redefinition of itself, because intelligence [per se] is innate within us: it is a resonance, a mind-field-wave-state [on a quantum level] that self-manifests, sort of. No AI will ever have that unless there is symbiosis as interphasing. So the answer to date is: no.
You were doing all right until the end. Too many of the words in your last few sentences are used in ways that do not fit together to make sense in any conventional way, and when I try to parse them anyway, the emphases land in odd places.
Try to use less jargon and rephrase?