It seems to me a rational agent should never change its self-consistent terminal values. To act out that change would be to act according to some other value and not the terminal values in question.
Only a static rational agent, one that is unchanging and unchangeable. In other words, a dead one.
All things change. In particular, with the passage of time both the agent himself changes and the world around him changes. I see absolutely no reason why the terminal values of a rational agent should be an exception to the universal process of change.
Why would you expect terminal values to change? Does your agent have some motivation (which leads it to choose to change) other than its terminal values? Or is it choosing to change its terminal values in pursuit of those values? Or are the terminal values changing involuntarily?
In the first case, the things doing the changing are not the real terminal values.
In the second case, that doesn’t seem to make sense: if the change is made in pursuit of the existing terminal values, then those values are still what is ultimately driving the agent, and nothing has really been replaced.
In the third case, what we’re discussing is no longer a perfect rational agent.
What exactly do you mean by “perfect rational agent”? Does such a creature exist in reality?