A utility function is a function, not a program; you can sensibly ask whether it's computable. Since you can find a utility function by randomly putting the agent into various universes and seeing what happens, it's computable.
1. Some utility functions can be found by randomly putting agents into various universes and seeing what happens.
2. The universe is computable.
3. Therefore, all utility functions are computable.
3 does not follow from 1 and 2.
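Spelled out, the gap is a quantifier shift. In a minimal formalization (the notation is mine, not from the thread), write $F(u)$ for "$u$ can be found by simulating agents in computable universes" and $C(u)$ for "$u$ is computable"; the premises establish at most the left-hand side, while the conclusion is the right-hand side:

```latex
% Premises 1 and 2, at best: every simulation-findable utility function
% is computable, and some utility functions are simulation-findable.
% These do not entail the conclusion, which quantifies over ALL u.
\forall u\,\bigl(F(u) \to C(u)\bigr) \;\wedge\; \exists u\, F(u)
\quad\nvdash\quad \forall u\, C(u)
```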
Implicit utility functions can be found that way. Explicit utility functions need not be computable: you can go around saying that you want to put rocks into piles whose sizes correspond to programs that never halt, but what you'll actually be doing is putting them into piles whose sizes correspond to programs that you think never halt (either ones that provably don't, or ones that pass some heuristic).
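A toy sketch of that gap (the generator-based "programs" and the step budget are illustrative stand-ins, not anything from the thread): the stated utility function rewards pile sizes that match genuinely non-halting programs, but the check an agent can actually run is a fallible proxy like this one.

```python
from typing import Callable, Iterator

def seems_to_never_halt(program: Callable[[], Iterator[None]],
                        budget: int = 10_000) -> bool:
    """Fallible proxy for 'this program never halts': step it up to
    `budget` times and classify it as non-halting if it is still going.
    The intended criterion (true non-termination) is undecidable, so
    any real rock-sorter is implicitly using something like this."""
    steps = program()
    for _ in range(budget):
        try:
            next(steps)
        except StopIteration:
            return False  # halted within the budget
    return True  # passed the heuristic; could still halt on step budget + 1

# Toy "programs" represented as generators.
def halts_quickly() -> Iterator[None]:
    for _ in range(3):
        yield

def loops_forever() -> Iterator[None]:
    while True:
        yield

print(seems_to_never_halt(halts_quickly))  # False
print(seems_to_never_halt(loops_forever))  # True (correct here, but the
                                           # heuristic is not always right)
```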
OrphanWilde appears to be talking about morality, not decision theory. The moral utility function of utilitarianism is not necessarily the decision-theoretic utility function of any agent (unless you happen to have a morally perfect agent lying around), so your procedure would not work.
Since you can find a utility function by randomly putting the agent into various universes and seeing what happens, it's computable.

Empirically determinable and computable are not the same thing. For example, consider the hypothetical of the halting problem being encoded in the digits of the fine structure constant: each digit could be determined by measurement, but no program could compute the sequence.
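A sketch of the distinction under that hypothetical (the oracle, the lookup table, and the function names are all made up for illustration): the digits would be empirically determinable, yet provably not computable.

```python
# Hypothetical universe: the n-th digit of the fine structure constant
# is 1 iff the n-th Turing machine halts. Every digit is empirically
# determinable (measure it in the lab), but the whole sequence encodes
# the halting problem, so no algorithm can output it.

MEASURED_DIGITS = {0: 1, 1: 0, 2: 1}  # stand-in for laboratory results

def measure_digit(n: int) -> int:
    """Empirical determination: in the hypothetical, physics hands you
    digit n. A lookup table of 'measurements' plays that role here."""
    return MEASURED_DIGITS[n]

def compute_digit(n: int) -> int:
    """A computable version would have to decide whether machine n
    halts, which is impossible in general (Turing, 1936)."""
    raise NotImplementedError("this is exactly the halting problem")

print(measure_digit(1))  # 0 -- determinable by experiment
# compute_digit(1)       # no program can fill this in for all n
```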