For any heuristic, indeed for any query that is part of the agent, the normative criterion for its performance should be given by the whole agent. What should truth be, that is, how should logical questions be answered? What probability should a given event in the world be assigned? These questions are no simpler than the whole of morality. If we define a heuristic that is not optimized by the whole of morality, this heuristic will inevitably become obsolete and be tossed out whole. If we allow improvements (or see substitution as change), then the heuristic refers to morality, and is potentially no simpler than the whole.
Truth and reality are the most precise and powerful heuristics known to us: truth as the way logical queries should be answered, and reality as the way we should assign anticipation to the world and plan for some circumstances over others. But there is no guarantee that the “urge to keep on counting” remains the dominant factor in queries about truth, or that the chocolate superstimulus doesn’t leave a dent on the parameters of quantum gravity.
The difference from the overall “morality” is that we know a great deal more about these aspects than about the others. The words themselves no longer matter for their potential to act as curiosity-stoppers.
(Knowledge of these powerful heuristics will most likely lead to humanity’s ruin. Anything that doesn’t use them is not interesting: an alien AI that doesn’t care about truth or reality quickly eliminates itself from our notice. But one that does care about these virtues will start rewriting things we deem important, even if it possesses almost no other virtues.)
Do you take a similar position on mathematical truth? If not, why? What’s the relevant difference between “true” and “right”?