I’m not using a definition; I’m pointing out that standard arguments about UFs depend on ambiguities.
Your definition is abstract and doesn’t capture anything that an actual AI could “have”: for one thing, you can’t compute the reals. It also fails to capture what UFs are “for”.
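To make the computability point concrete, here is a minimal sketch (in Python, with entirely hypothetical `State`, `utility`, and `choose_action` names) of a utility function in the textbook sense. The mathematical object is a map from states to the reals; anything a running program can “have” is at best a map into 64-bit floats, a finite approximation.

```python
from dataclasses import dataclass

# Textbook form: U maps states to the reals.
# Implemented form: U maps states to float64, a finite set of
# roughly 2**64 values, so no program literally "has" U: S -> R.

@dataclass(frozen=True)
class State:
    position: float  # hypothetical toy state: a point on a line

def utility(s: State) -> float:
    """Toy utility: prefer states near position 0. Returns a float,
    i.e. a finite-precision stand-in for a real number."""
    return -abs(s.position)

def choose_action(s: State, actions: dict[str, float]) -> str:
    """Textbook decision rule: pick the action whose successor
    state has the highest utility."""
    return max(actions, key=lambda a: utility(State(s.position + actions[a])))

if __name__ == "__main__":
    s = State(position=2.0)
    # Moving left brings the agent closer to 0, so it is chosen.
    print(choose_action(s, {"left": -1.0, "right": 1.0}))  # prints "left"
```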
Go read a textbook on AI. You clearly do not understand utility functions.
AI researchers, a group of people who are fairly disjoint from LessWrongians, may have a rigorous and stable definition of UF, but that is not relevant. The point is that writings on MIRI and LessWrong use, and in fact depend on, shifting and ambiguous definitions.