I have been looking for articles discussing the extent to which terminal values change. This question is important because a change in an agent's terminal values is generally very harmful to their accomplishment, as explained for AI under “Basic AI drives” here.
This article says that some values change. This paper suggests that there are core values that are unlikely to change. However, neither source says whether the values it examined are terminal values, and I’m not knowledgeable enough about psychology to determine whether they are.
Any relevant thoughts or links would be appreciated.