On your last paragraph… do you mean that you expect your material-level preferences concerning the future to change? Of course they would. But would you really expect that a straight-up intelligence boost would change the axioms governing what sorts of futures you prefer?
Two answers. First: yes, I expect that a sufficiently large intelligence boost would change my terminal values. Second: even without the boost, I, in my current state, do not seek to change only in goal-stable ways.
I think that only seems to make sense because you don't know what your terminal values are. If you did, I suspect you would be a little more attached to them.