but to the extent that any agent makes coherent goal-driven decisions, it has a utility function
That is not obvious to me. Why is it so? (defining “utility function” might be helpful)
I’m not sure how rhetorical your question is, but you might want to look at the Von Neumann–Morgenstern utility theorem.
I’m quite familiar with the VNM utility, but here we are talking about real live meatbag humans, not about mathematical abstractions.
You asked why it is so. Taking the VNM axioms as the definition of “coherent”, the VNM theorem proves precisely that “coherent” implies “has a utility function” (a formal statement is sketched below).
Anyway, the context of the original post was that humans had an advantage through not having a utility function. So in that context the VNM theorem raises the question “Exactly which of the axioms is it advantageous to violate?”.
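For readers who want the precise claim being invoked, here is a sketch of the standard statement of the von Neumann–Morgenstern theorem. The notation (⪰ for weak preference; L, M, N for lotteries over a finite outcome set X; u for the utility function) is editorial shorthand, not something from the thread.

```latex
% Sketch of the standard VNM statement (notation is editorial, not from the thread).
\begin{align*}
&\text{Let } \succeq \text{ be a preference relation over lotteries } L, M, N
  \text{ on a finite outcome set } X. \\
&\text{If } \succeq \text{ satisfies} \\
&\quad \text{(1) completeness: } L \succeq M \ \text{ or } \ M \succeq L, \\
&\quad \text{(2) transitivity: } L \succeq M \text{ and } M \succeq N \implies L \succeq N, \\
&\quad \text{(3) continuity: } L \succeq M \succeq N \implies \exists\, p \in [0,1]
  \text{ with } pL + (1-p)N \sim M, \\
&\quad \text{(4) independence: } L \succeq M \implies pL + (1-p)N \succeq pM + (1-p)N
  \ \text{ for all } p \in (0,1], \\
&\text{then there exists } u : X \to \mathbb{R} \text{ with }
  L \succeq M \iff \mathbb{E}_{x \sim L}[u(x)] \ge \mathbb{E}_{x \sim M}[u(x)], \\
&\text{and } u \text{ is unique up to positive affine transformation.}
\end{align*}
```

The uniqueness-up-to-affine-transformation clause is what licenses talking about “the” utility function of a VNM-coherent agent.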
Sure, but that’s an uninteresting tautology. If we define A as a set of conditions sufficient for B to happen, then lo and behold! A implies B.
The VNM theorem posits that a utility function exists. It doesn’t say anything about how to find it or how to evaluate it, never mind in real time.
It’s like asking why humans don’t do Solomonoff induction all the time—“there must be a reason, what is it?” (For a sense of what “finding it” would even involve, see the sketch below.)
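To make “it doesn’t say anything about how to find it” concrete, here is a minimal Python sketch of the textbook standard-gamble elicitation, under the strong assumption that we already have a black-box `prefers(lottery_a, lottery_b)` oracle that answers pairwise comparisons consistently with the VNM axioms. The oracle, the outcome list, and the tolerance are hypothetical scaffolding for the illustration; the point is just that even in this idealized setting, pinning down u takes a binary search’s worth of queries per outcome, and real agents don’t expose such an oracle at all.

```python
# Sketch: recovering a VNM utility function from a hypothetical preference oracle.
# A "lottery" here is a dict mapping outcomes to probabilities.

def standard_gamble_utility(outcomes, prefers, tol=1e-3):
    """Assign u(best)=1 and u(worst)=0, then for each other outcome x find the
    probability p at which the agent is indifferent between x for sure and the
    gamble p*best + (1-p)*worst.  That p is u(x)."""
    best, worst = outcomes[0], outcomes[-1]   # assumes outcomes are listed best-to-worst
    utilities = {best: 1.0, worst: 0.0}
    for x in outcomes[1:-1]:
        lo, hi = 0.0, 1.0
        while hi - lo > tol:                  # binary search for the indifference point
            p = (lo + hi) / 2
            gamble = {best: p, worst: 1.0 - p}
            if prefers({x: 1.0}, gamble):     # x still (weakly) preferred: indifference point is higher
                lo = p
            else:
                hi = p
        utilities[x] = (lo + hi) / 2
    return utilities


if __name__ == "__main__":
    # Toy stand-in for the oracle: an agent whose *hidden* utility we happen to know.
    hidden_u = {"A": 1.0, "B": 0.7, "C": 0.0}
    expected = lambda lot: sum(p * hidden_u[o] for o, p in lot.items())
    prefers = lambda a, b: expected(a) >= expected(b)

    print(standard_gamble_utility(["A", "B", "C"], prefers))
    # roughly {'A': 1.0, 'C': 0.0, 'B': 0.7}, at ~10 oracle queries per intermediate outcome
```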
Come on, mathematics is sometimes interesting, right?
Yeah okay, I agree with this. In other words, the VNM theorem says that our AGI has to have a utility function, but it doesn’t say that we have to be thinking about utility functions when we build it, or care about utility functions at all; just that we will have created one “by accident”.
I still do think that using utility functions is a good idea, but I agree that this isn’t implied by the VNM theorem.
In other words, the VNM theorem says that our AGI has to have a utility function
Still nope. The VNM theorem says that if our AGI sticks to the VNM axioms, then a utility function describing its preferences exists. Exists somewhere in the rather vast space of mathematical functions. The theorem doesn’t say that the AGI “has” it—neither that it knows it, nor that it can calculate it.
That’s what I meant.
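A toy illustration of the “exists” versus “has” distinction above (the agent, the fruit outcomes, and the helper names are all invented for this sketch): the agent below is nothing but a hard-coded pairwise choice rule, with no utility represented anywhere in it, yet an outside observer can still construct an ordinal utility function that reproduces its choices just by sorting outcomes using the agent’s own answers.

```python
# Sketch: a utility function can exist (an observer can construct one consistent
# with the agent's choices) without the agent representing or computing it.
from functools import cmp_to_key

class FruitAgent:
    """An agent defined only by a hard-coded pairwise choice rule; no utility inside."""
    _rule = {("apple", "banana"): "apple",
             ("banana", "cherry"): "banana",
             ("apple", "cherry"): "apple"}

    def choose(self, a, b):
        if a == b:
            return a
        return self._rule.get((a, b)) or self._rule.get((b, a))

def observed_utility(agent, outcomes):
    """Observer-side construction: sort outcomes by asking the agent to choose,
    then use each outcome's rank as an (ordinal) utility.  This only works if
    the agent's choices are transitive."""
    def cmp(a, b):
        if a == b:
            return 0
        return 1 if agent.choose(a, b) == a else -1
    ranked = sorted(outcomes, key=cmp_to_key(cmp))   # worst ... best
    return {o: rank for rank, o in enumerate(ranked)}

if __name__ == "__main__":
    agent = FruitAgent()
    print(observed_utility(agent, ["apple", "banana", "cherry"]))
    # {'cherry': 0, 'banana': 1, 'apple': 2} -- the agent never evaluated this
    # function; it "has" one only in the sense that one exists that fits its choices.
```

Of course this only works because the toy choice rule happens to be transitive; the whole debate upthread is about whether real humans are.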
The most defensible use of the term is described as Ordinal Utility, but this is a little weaker than I commonly see it used around here. I’d summarize it as “a predictive model for how much goodness an agent will experience conditioned on some decision”. Vincent Yu has a more formal description in [this comment](http://lesswrong.com/lw/dhd/stupid_questions_open_thread_round_3/72z3).
There’s a lot of discussion about whether humans have a utility function or not, with the underlying connotation being that a utility function implies consistency in decision-making, so inconsistency proves the lack of a utility function. One example: Do Humans Want Things? I prefer to think of humans as having a utility function at any given point in time, but not one that’s consistent over time (a toy model of this is sketched after this comment).
A semi-joking synonym for “I care about X” for some of us is “I have a term for X in my utility function”. Note that this (for me) implies a LOT of terms in my function, with very different coefficients that may not be constant over time.
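As a toy model of “a utility function at any given point in time, but not one that’s consistent over time” (the numbers and the hyperbolic-discounting form are illustrative assumptions, not anything from the linked discussions): at each moment the agent below has a perfectly well-defined utility over the same two delayed rewards, but the ranking flips as the earlier reward gets close, which is exactly the sort of time inconsistency at issue.

```python
# Sketch: a time-indexed utility that is coherent at each moment but reverses
# its ranking between moments (hyperbolic discounting).

def u_at(now, reward, arrives_at, k=1.0):
    """Utility, evaluated at time `now`, of `reward` arriving at `arrives_at`."""
    delay = max(arrives_at - now, 0)
    return reward / (1 + k * delay)          # hyperbolic discount factor

small_soon = (10, 10)   # reward 10 arriving at t=10
large_late = (25, 12)   # reward 25 arriving at t=12

for now in (0, 10):
    v_small = u_at(now, *small_soon)
    v_large = u_at(now, *large_late)
    pick = "small_soon" if v_small > v_large else "large_late"
    print(f"t={now}: u(small)={v_small:.2f}, u(large)={v_large:.2f} -> prefers {pick}")

# t=0: u(small)=0.91, u(large)=1.92 -> prefers large_late
# t=10: u(small)=10.00, u(large)=8.33 -> prefers small_soon
```

Exponential discounting would never produce this reversal, which is why it is usually treated as the “consistent over time” case.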
A “utility function” as applied to humans is an abstraction, a model. And just like any model, it is subject to the George Box maxim “All models are wrong, but some are useful”.
If you are saying that your model is “humans … [have] a utility function at any given point in time, but not one that’s consistent over time”, well, how useful is this model? You can’t estimate this utility function well and it can change at any time… so what does this model give you?