The case of agents in conflict. Keep your values and be destroyed, or change them and get a world partially optimized for your initial values.
The case of the unknown future. You know the class of worlds you want to be in. What you don’t know yet is that to reach them you must make choices incompatible with your values. And, to make things worse, all the choices available to you ultimately lead to worlds you definitely don’t want to be in.
Yes. That is the general class that includes ‘Omega rewards you if you make your decision irrationally’. It applies whenever the specific state of your cognitive representation interacts significantly with the environment by means independent of your behaviour.
No. You don’t need to edit yourself to make unpleasant choices. Whenever you wish you were a different person than who you are so that you could make a different choice, you just make that choice.
That works for a pure consequentialist, but if one’s values have deontology in the mix, then your suggestion effectively requires changing one’s values.
And I doubt that an instrumental value which changes terminal values can be called instrumental. An agent that adopts this value (persistence of values) will end up with different terminal values than an agent that does not.