When your preferences operate over high-level things in your map, the problem is not that they don’t talk about the real world. Because there is a specific way in which the map gets determined by the world, in a certain sense they already do talk about the real world; you just didn’t originally know how to interpret them this way. You can compose the process that takes the world and produces your map with the process that takes your map, with the high-level things in it, and produces a value judgment, obtaining a process that takes the world and produces a value judgment. So it’s not a problem of judgments being defined only for the map; the issue arises from gaining the ability to examine new interpretations of the same implementation of preferences, interpretations that are closer to being in terms of the world than the original intuitive perception, which described them only in terms of vague high-level concepts.
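The composition described above can be sketched in a few lines of Python. This is only an illustrative toy, not anything defined in the discussion: `perceive` and `evaluate` are hypothetical stand-ins for the world-to-map process and the map-to-judgment process, and the "world" here is just a number.

```python
def compose(evaluate, perceive):
    """Chain the two processes into a judgment procedure that is
    defined directly on world-states rather than on the map."""
    return lambda world: evaluate(perceive(world))

# Toy stand-ins: the map is a coarse high-level summary of the world,
# and the judgment only ever sees that summary, never the world itself.
perceive = lambda world: "large" if world > 100 else "small"
evaluate = lambda concept: 1.0 if concept == "large" else 0.0

judge_world = compose(evaluate, perceive)
```

`judge_world` now assigns a value to a world-state directly, even though the original `evaluate` was written only in terms of high-level concepts; this is the sense in which the map-level preference "already" talks about the world.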
The problem is that when you examine your preference in terms of judging the world, you notice that you don’t like some properties of what it does and want to modify it, but you are not sure how. There are more possibilities when you are working with a more detailed model. But gaining access to this new information doesn’t in any way invalidate the original judgment procedure, except to the extent that you are now able to make improvements. The ability to notice flaws and to make improvements means in particular that your preference includes judgments that refer to originally unknown details, as is normal for sufficiently complicated mathematical definitions.
You can compose the process that takes the world and produces your map with the process that takes your map, with the high-level things in it, and produces a value judgment, obtaining a process that takes the world and produces a value judgment.
What is the process that takes the world and produces my map? Note that whatever this process is, it needs to take as input an arbitrary possible state of the world. I can’t see what possible process you might be referring to...
As you learn more about the world, you become able to describe the way in which your brain-implemented judgment machinery depends on how the world works. This seems to be an accurate description of what has happened as a result of most of the discoveries that have changed our view of fundamental physics so far. Not being able to formally describe how that happened is a problem separate from deciding whether a preference defined in terms of vague brain-implemented concepts could be interpreted in terms of novel, unexpected physics.
The step of re-describing status quo preferences in terms of a new understanding of the world (in the trivial sense I’m discussing) doesn’t seem to involve anything interesting related to the preferences; it’s all about physical brains and the physical world. There is no point at this particular stage of updating the ontology where preferences become undefined or meaningless. The sense that they do (when they do) comes from the next stage, where you criticize the judgment procedures given the new, better understanding of what they are, but at that point the task of changing the ontology is over.