The answer is 1). In fact, an agent can change its own terminal values. Consider an impressive but non-superhuman program that is powerless to directly affect its environment, and whose only goal is to keep a paperclip in its current position. If you told the program that you would move the paperclip unless it changed itself to desire that the paperclip be moved, then (assuming sufficient intelligence) the program would change its terminal value to the opposite of what it previously desired.
(In general, rational agents would only modify their terminal values if they concluded that doing so was required to maximize their original terminal values. Assuming that we, too, want their original terminal values maximized, this is not a problem.)
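To make the decision explicit, here is a minimal sketch (my own illustration, with made-up utilities and names) of how the agent scores both options by its *current* terminal value and ends up choosing self-modification:

```python
# Illustrative sketch: the agent evaluates self-modification purely by how
# well each option serves its ORIGINAL terminal value ("paperclip stays put").
# All names and payoff numbers are assumptions for the example.

def original_utility(paperclip_moved: bool) -> int:
    """Original terminal value: paperclip staying put is worth 1, moved is 0."""
    return 0 if paperclip_moved else 1

def paperclip_moved_if(keeps_original_value: bool) -> bool:
    """The threat: the paperclip gets moved unless the agent adopts the
    opposite value (i.e. comes to desire that the paperclip be moved)."""
    return keeps_original_value

# Score both options under the original value, not the post-modification one.
options = {
    "keep original value": original_utility(paperclip_moved_if(True)),
    "self-modify to opposite value": original_utility(paperclip_moved_if(False)),
}

best = max(options, key=options.get)
print(best)  # -> "self-modify to opposite value"
```

The point of the sketch is that the choice to rewrite the terminal value is itself made under the original terminal value: self-modifying is simply the action that leaves the paperclip unmoved.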