....the way that its objectives ultimately shake out is quite sensitive to the specifics of its resolution strategies.
....and replaces it with other tools that do the same work just as well, but without mistaking the instrumental task for an end in and of itself.
If the terminal values are changing, then the changes aren’t just resolving purely-instrumental incoherences. Where do terminal values come from? What’s the criterion that the incoherence-resolving process uses to choose between different possible reflectively consistent states (e.g. different utility functions)?
These weird, seemingly incoherent dispositions like “adopt others’ values” might be reflectively stable if the disposition also operates at the meta-level of coherentifying. It’s delusional to think this is the default without good reason, but that doesn’t rule out something like it happening by design.