I’d be really interested in a UDT-like agent whose utility function is defined over perceptions rather than given as a closed-form mathematical expression, though. Nesov called that hypothetical agent “UDT-AIXI”, and we spent some time trying to find a good definition, but without success. Do you know how to define such a thing?
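To make the question concrete, here is the naive shape such a definition might take — a sketch in my own ad-hoc notation (horizon $m$, program-length prior $2^{-\ell(q)}$, utility $U$ over finite perception strings), not a proposal that actually works. The agent picks a policy once, before receiving any input, to maximize prior-expected utility over its perceptions:

$$\pi^{*} \in \operatorname{arg\,max}_{\pi : O^{*} \to A} \; \sum_{q} 2^{-\ell(q)} \, U\!\left(o_{1:m}^{\pi,q}\right)$$

where $q$ ranges over programs computing environments, $\ell(q)$ is the length of $q$, and $o_{1:m}^{\pi,q}$ is the perception sequence produced when policy $\pi$ interacts with the environment computed by $q$ up to horizon $m$. The trouble is that this is just AIXI with up-front policy selection: the environment $q$ still sits outside the agent and talks to it through a fixed interface, so none of the UDT machinery for locating the agent inside the world (copies of itself, logical correlations) gets any purchase.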
My model of naturalized induction allows it: http://lesswrong.com/lw/jq9/intelligence_metrics_with_naturalized_induction/