An example of a “fake number” is utility. I’ve never seen a concrete utility value used anywhere, though I always hear about nice mathematical laws that it must obey.
I’m sure that in AI research many programs have been written around a specific well-defined utility function. Or, by “utility”, do you mean utility for a human? The “complexity of value” thesis is that the latter is very hard to define or measure. I’m not sure that makes it a “bad” concept.
The corresponding object in reinforcement learning systems is usually called a reward function, by analogy with behaviorist psychology; the fitness function of genetic programming is also related.
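To make the point concrete, here is a minimal sketch of the kind of well-defined reward function such programs are built around; the toy gridworld, the state encoding, and every name in it are illustrative assumptions, not any particular library’s API:

```python
# A toy gridworld reward function: a concrete numeric "utility" of the kind
# reinforcement-learning programs are written around. Everything here is
# an illustrative assumption, not a real library's interface.

GOAL = (3, 3)   # reaching this cell pays off
PIT = (1, 2)    # falling in here is penalized

def reward(state: tuple[int, int]) -> float:
    """Map a state to a concrete numeric reward."""
    if state == GOAL:
        return 1.0
    if state == PIT:
        return -1.0
    return -0.04  # small per-step cost encourages short paths

def expected_reward(outcomes: dict[tuple[int, int], float]) -> float:
    """Expected reward of a stochastic transition.

    `outcomes` maps successor states to their probabilities; the result is
    a single concrete number of the sort the expected-utility laws govern.
    """
    return sum(p * reward(s) for s, p in outcomes.items())

if __name__ == "__main__":
    # An action that reaches the goal 80% of the time and the pit 20%:
    print(expected_reward({GOAL: 0.8, PIT: 0.2}))  # 0.8*1.0 + 0.2*(-1.0) = 0.6
```

The point of the sketch is only that nothing stops a program from using fully concrete utility values; the hard part, per the “complexity of value” thesis, is writing down such a function for human preferences.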