but to the extent that any agent makes coherent goal-driven decisions, it has a utility function
That is not obvious to me. Why is it so? (defining “utility function” might be helpful)
If we take the VNM axioms as the definition of “coherent”, then the VNM theorem proves precisely that “coherent” implies “has a utility function”.
Anyway, the context of the original post was that humans had an advantage through not having a utility function. So in that context the VNM theorem raises the question “Exactly which of the axioms is it advantageous to violate?”.
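For concreteness, here is an informal statement of the theorem (a paraphrase from memory, not a quotation, so treat the details with care). Write $L \preceq M$ for “the agent weakly prefers lottery $M$ to lottery $L$”, where a lottery is a probability distribution over outcomes. The four axioms are:

1. Completeness: for all $L, M$, either $L \preceq M$ or $M \preceq L$.
2. Transitivity: if $L \preceq M$ and $M \preceq N$, then $L \preceq N$.
3. Continuity: if $L \preceq M \preceq N$, then $pL + (1-p)N \sim M$ for some $p \in [0,1]$.
4. Independence: $L \preceq M$ if and only if $pL + (1-p)N \preceq pM + (1-p)N$, for every lottery $N$ and every $p \in (0,1]$.

The conclusion is that there exists a function $u$ on outcomes such that

$$ L \preceq M \iff \mathbb{E}_L[u] \le \mathbb{E}_M[u], $$

and that $u$ is unique up to positive affine transformation ($u' = au + b$ with $a > 0$). So “exactly which of the axioms is it advantageous to violate?” has exactly four candidate answers.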
Sure, but that’s an uninteresting tautology. If we define A as a set of conditions sufficient for B to happen then lo and behold! A implies B.
Come on, mathematics is sometimes interesting, right?
The VNM theorem shows only that a utility function exists. It doesn’t say anything about how to find it or how to evaluate it, never mind in real time.
It’s like asking why humans don’t do Solomonoff induction all the time: “there must be a reason, what is it?”
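To make “finding it is a separate problem” concrete, here is a minimal sketch in Python of the inference problem (my own toy illustration, nothing from the theorem itself; the three-outcome world, the lotteries, and the margin are all made up). Given finitely many observed choices between lotteries, finding a consistent utility function is a feasibility problem, here solved by linear programming:

# Toy sketch (my illustration): recover a utility function consistent
# with finitely many observed preferences, via linear programming.
import numpy as np
from scipy.optimize import linprog

# Three outcomes; a lottery is a probability vector over them.  Each pair
# (better, worse) records one observed choice; all made up for the demo.
observed = [
    (np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0])),  # outcome 2 beat outcome 0
    (np.array([0.0, 1.0, 0.0]), np.array([0.5, 0.0, 0.5])),  # outcome 1 beat a 50/50 mix of 0 and 2
]

MARGIN = 1e-3  # turns strict preference into a usable inequality

# Feasibility LP: find u in [0,1]^3 with E_better[u] >= E_worse[u] + MARGIN
# for every observed comparison.  linprog expects A_ub @ u <= b_ub.
A_ub = np.array([worse - better for better, worse in observed])
b_ub = np.full(len(observed), -MARGIN)

result = linprog(c=np.zeros(3), A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, 1.0)] * 3)
print(result.x)  # one utility function consistent with the data, if any exists

The theorem guarantees that some consistent $u$ exists if the agent is coherent; actually pinning it down means querying the agent’s entire preference relation, and each observed comparison only narrows the feasible set a little. That is the gap between “exists” and “can be found and evaluated in real time”.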
Yeah okay, I agree with this. In other words the VNM theorem says that our AGI has to have a utility function, but it doesn’t say that we have to be thinking about utility functions when we build it or care about utility functions at all, just that we will have “by accident” created one.
I still think that using utility functions actually is a good idea, but I agree that that isn’t implied by the VNM theorem.
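On the “by accident” point, here is a toy illustration (hypothetical, mine): an agent whose code never mentions utility, but whose decision rule is exactly expected-utility maximization for a step-shaped $u$.

# Toy sketch (hypothetical): a decision rule with no "utility" anywhere in it.
GOAL = 100  # dollars; the agent just wants to reach this

def prob_reaches_goal(lottery):
    # A lottery is a list of (dollar_outcome, probability) pairs.
    return sum(p for amount, p in lottery if amount >= GOAL)

def choose(lotteries):
    # The agent's entire decision procedure: maximize the chance of success.
    return max(lotteries, key=prob_reaches_goal)

# Nothing above says "utility", but the rule is exactly expected-utility
# maximization for u(x) = 1 if x >= GOAL else 0, since
# prob_reaches_goal(L) == sum(p * u(x) for x, p in L) == E_L[u].
safe = [(99, 1.0)]
gamble = [(0, 0.5), (200, 0.5)]
print(choose([safe, gamble]))  # the gamble: a 0.5 chance of the goal beats 0

The designer here was thinking in terms of a goal, not a utility function, yet a utility function describing the agent’s preferences exists all the same.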
In other words the VNM theorem says that our AGI has to have a utility function
Still nope. The VNM theorem says that if our AGI sticks to the VNM axioms, then a utility function describing its preferences exists. Exists somewhere in the rather vast space of mathematical functions. The theorem doesn’t say that the AGI “has” it: neither that it knows it, nor that it can calculate it.
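One way to see the gap between “exists” and “has”: the textbook construction of $u$ (roughly, for the bounded case) calibrates each outcome against a best outcome $b$ and a worst outcome $w$. Continuity gives, for each outcome $x$, a probability $p_x$ with

$$ x \sim p_x\, b + (1 - p_x)\, w, $$

and one sets $u(x) := p_x$. So evaluating $u$ at even a single point means searching the preference relation for an indifference point; nothing in the theorem says the agent can carry out that search, any more than knowing Solomonoff induction exists lets you run it.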
I’m not sure how rhetorical your question is but you might want to look at the Von Neumann–Morgenstern utility theorem.
I’m quite familiar with the VNM utility, but here we are talking about real live meatbag humans, not about mathematical abstractions.
Still nope. The VNM theorem says that if our AGI sticks to the VNM axioms, then a utility function describing its preferences exists. Exists somewhere in the rather vast space of mathematical functions. The theorem doesn’t say that the AGI “has” it: neither that it knows it, nor that it can calculate it.
That’s what I meant.