Perhaps in resolving internal inconsistencies in the value system.
An agent of increased intelligence might end up min-maxing. In other words, if the utility function contains two terms in some weighted balance, the agent might find that it can sacrifice one term entirely to boost the other, and that under the given weighting this still yields much higher total utility. This would not strictly be a change in values, but it could produce outcomes that certainly look like one.
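A minimal numerical sketch of that corner-solution effect (the weights, effort budget, and return curve below are all invented for illustration): even when the weighting nominally favours the first term, superlinear returns on the second term push the optimum to the point where the first term is abandoned completely.

```python
import numpy as np

# Toy weighted utility over two terms. The weights nominally favour term A,
# but effort spent on term B has superlinear returns, so the maximum sits at
# the corner where A is sacrificed entirely -- the "min-maxing" described above.
W_A, W_B = 0.7, 0.3   # hypothetical weights, biased toward A
EFFORT = 10.0         # hypothetical total effort budget

def utility(effort_on_a: float) -> float:
    """Split the budget: A gets linear returns, B gets superlinear returns."""
    a = effort_on_a
    b = (EFFORT - effort_on_a) ** 2
    return W_A * a + W_B * b

xs = np.linspace(0.0, EFFORT, 1001)
best = xs[np.argmax([utility(x) for x in xs])]
print(f"optimal effort on A: {best:.2f} of {EFFORT}")  # -> 0.00: A is ignored
```

The weighting hasn't changed, and neither has the utility function; the agent has simply become good enough at the second term that honouring the first one no longer pays.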