There are some similarities, although I'm focusing on AI values rather than human values. Also, it seems like the value change work is thinking about humanity at the level of an overall society, whereas I'm thinking about value systematization mostly at the level of an individual AI agent. (Of course, widespread deployment of an agent could have a significant effect on its values, if it continues to be updated. But I'm mainly focusing on the internal factors.)