Presumably, any agent we manage to build will be computable. So, to the extent that our agent uses utility functions, those functions will be continuous (the implicit premise being the classical result of computable analysis that every computable real-valued function is continuous).
There are several objections one could make to this line of reasoning. Here are two.
First: do you believe that we humans are uncomputable? If we are, then it is clearly possible to construct an uncomputable agent. If, conversely, we are computable, then whatever reasoning you apply to an agent we build can be applied to us as well. Do you think it does apply to us?
Second: supposing your reasoning holds, why should it not count as a reason for our constructed agent to forgo utility functions altogether, rather than as a reason for said agent to have continuous preferences?
(This is a good time to mention, again, that this entire tangent is moot, as violating the continuity axiom—or any of the axioms—means that no utility function, computable or not, can be constructed from your preferences. But even if that weren’t the case, the above objections apply.)
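For readers who want the precise statement, here is the standard form of the VNM continuity (Archimedean) axiom, written out in LaTeX; the lottery names and the preference notation ($\succeq$ for weak preference, $\sim$ for indifference) are the usual textbook conventions rather than anything drawn from the text above.

```latex
% Standard statement of the VNM continuity (Archimedean) axiom.
% \succeq denotes weak preference; \sim denotes indifference.
% For any lotteries A, B, C ranked A \succeq B \succeq C, there is
% some mixing probability p that makes the agent indifferent
% between B and the compound lottery pA + (1-p)C:
\[
  A \succeq B \succeq C
  \;\implies\;
  \exists\, p \in [0,1] :\quad
  p\,A + (1 - p)\,C \;\sim\; B .
\]
```

An agent whose preferences violate this condition (lexicographic preferences are the textbook example) admits no real-valued utility representation at all, continuous or otherwise, which is exactly the point the parenthetical above is making.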