Suppose it were true that preferences that violate the continuity axiom imply a utility function that is uncomputable. (This hardly seems worse or less convenient than the case—which is, in fact, the actual state of affairs—where continuity violations imply that no utility function can represent one’s preferences, computable or otherwise… but let’s set that aside, for now.)
How would this constitute a reason to have one’s preferences conform to the continuity axiom…?
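(For concreteness, here is the usual textbook statement of the continuity axiom, together with the standard example of a violation. This is a generic summary rather than anything from the discussion above, but it shows why a violation rules out any expected-utility representation at all, not merely a computable one.)

```latex
% Continuity (Archimedean) axiom, in its usual statement, and a standard
% violation. Textbook material, summarized here for reference.
\paragraph{Continuity.} For lotteries with $A \succ B \succ C$, there exist
$p, q \in (0,1)$ such that
\[
  pA + (1-p)C \;\succ\; B \;\succ\; qA + (1-q)C .
\]

\paragraph{A violation.} Let $A$ = win a dollar, $B$ = status quo,
$C$ = death, with $A \succ B \succ C$ but $B \succ pA + (1-p)C$ for every
$p \in (0,1)$ (no probability of death, however small, is worth the dollar).
Then no expected-utility representation exists, computable or otherwise:
any finite $u(C)$ gives $p\,u(A) + (1-p)\,u(C) > u(B)$ once $p$ is close
enough to $1$, contradicting the stated preference.
```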
Presumably, any agent which we manage to build will be computable. So to the extent our agent is using utility functions, they will be continuous.
If an agent is capable only of computable observations but has a discontinuous utility function, then whenever the universe is in a state at which the utility function is discontinuous, the agent will need to spend an infinite amount of time (or however long the universe remains in such a state) determining the utility of the current state. I think it might be possible to use this to create a more concrete exploit.
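Here is a minimal sketch of the situation that comment describes, under assumed specifics of my own (a step-function utility with a single discontinuity, and observations that return ever-narrower rational intervals around the true state; neither detail comes from the comment itself). When the state sits exactly at the discontinuity, no finite number of observations ever settles the utility:

```python
from fractions import Fraction

# Assumed setup (illustrative only): the "universe state" is a real number,
# an observation of precision n returns a rational interval of width 2**-n
# containing the true state, and utility is the step function
# u(x) = 0 for x < 1/2, u(x) = 1 for x >= 1/2.

THRESHOLD = Fraction(1, 2)

def observe(true_state: Fraction, n: int) -> tuple[Fraction, Fraction]:
    """Computable observation: an interval of width 2 * 2**-n around the state."""
    eps = Fraction(1, 2 ** n)
    return true_state - eps, true_state + eps

def utility_if_decided(lo: Fraction, hi: Fraction):
    """Return u(state) if the interval pins down which side of the
    discontinuity the state is on; otherwise None (still undecided)."""
    if hi < THRESHOLD:
        return 0
    if lo >= THRESHOLD:
        return 1
    return None

def evaluate(true_state: Fraction, max_refinements: int = 50):
    """Refine observations until the utility is decided.  If the state sits
    exactly at the discontinuity, it is never decided; the cutoff exists
    only so this sketch terminates."""
    for n in range(1, max_refinements + 1):
        lo, hi = observe(true_state, n)
        u = utility_if_decided(lo, hi)
        if u is not None:
            return u
    return None  # still undecided after max_refinements observations

print(evaluate(Fraction(3, 10)))  # 0    (decided after a few refinements)
print(evaluate(Fraction(7, 10)))  # 1    (decided after a few refinements)
print(evaluate(Fraction(1, 2)))   # None (never decided, however far we refine)
```

The `max_refinements` cutoff is there only to keep the sketch terminating; in the scenario the comment describes, the refinement loop would run for as long as the state remained at the discontinuity.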
Presumably, any agent which we manage to build will be computable. So to the extent our agent is using utility functions, they will be continuous.
There are several objections one could make to this line of reasoning. Here are two.
First: do you believe that we humans are uncomputable? If we are uncomputable, then it is clearly possible to construct an uncomputable agent. If, conversely, we are computable, then whatever reasoning you apply to an agent we build can be applied to us as well. Do you think it does apply to us?
Second: supposing your reasoning holds, why should it not be a reason for our constructed agent not to use utility functions, rather than a reason for said agent to have continuous preferences?
(This is a good time to mention, again, that this entire tangent is moot, as violating the continuity axiom—or any of the axioms—means that no utility function, computable or not, can be constructed from your preferences. But even if that weren’t the case, the above objections apply.)