My definition of utility function is one commonly used in AI. It is a mapping from states to real numbers, u: E → ℝ, where E is the set of all possible states and ℝ is the one-dimensional real line.
What definition are you using? I don’t think we can have a productive conversation until we both understand each other’s definitions.
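For concreteness, here is a minimal Python sketch of that definition. The three-state world and its numeric values are purely illustrative, and note that floats can only approximate ℝ:

```python
# A minimal sketch of the u: E -> R definition above. The three-state
# world and the numeric values are hypothetical, chosen only to
# illustrate the mapping; floats merely approximate the reals.
E = ["sunny", "cloudy", "rainy"]  # E: the set of all possible states

utility = {
    "sunny": 1.0,
    "cloudy": 0.2,
    "rainy": -0.5,
}

def u(state: str) -> float:
    """Map a state in E to a (floating-point approximation of a) real number."""
    return utility[state]

assert u("sunny") > u("rainy")  # u induces a preference ordering over E
```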
I’m not using a definition; I’m pointing out that standard arguments about UFs depend on ambiguities.
Your definition is abstract and doesn’t capture anything that an actual AI could “have”: for one thing, you can’t compute the reals. It also fails to capture what UFs are “for”.
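A quick Python illustration of the computability point: machine floats are a finite set of rationals, not the reals, so any implemented “utility function” only approximates a map into ℝ.

```python
import math

# Floats are a finite set of rationals, not the reals, so equalities
# that hold over R fail in any actual implementation:
print(0.1 + 0.2 == 0.3)        # False
print(math.sqrt(2) ** 2 == 2)  # False: sqrt(2) is irrational, and its
                               # float is only the nearest double
```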
Go read a textbook on AI. You clearly do not understand utility functions.
AI researchers, a group of people fairly disjoint from LessWrongians, may have a rigorous and stable definition of UF, but that is not relevant. The point is that writings from MIRI and LessWrong use, and in fact depend on, shifting and ambiguous definitions.