That said, one problem I see with your concept of preference is that, presumably, the actions of the “obsessive world-rewriting robot” are supposed to modify the world around you to make it consistent with your preferences, not to modify your mind to make your preferences consistent with the world. However, it is not at all clear to me whether a meaningful boundary between these two sorts of actions can be drawn.
Preference in this sense is a rigid designator: it is defined over the world but not determined by anything in the world, so modifying my mind couldn’t make my preference consistent with the world. A robot implementing my preference would have to understand this.