A person’s behavior can always be understood as optimizing a utility function; it’s just that if they are irrational (as in the Allais paradox) the utility function starts to look ridiculously complex. If all else fails, a utility function can be used that has a strong dependency on time, in whatever way is required to match the observed behavior of the subject: “The subject had a strong preference for sneezing at 3:15:03pm on October 8, 2011.”
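For concreteness, a minimal sketch of such a post-hoc utility function (hypothetical code, written only to illustrate the idea; the recorded history is invented):

    # A "fitted after the fact" utility function: it assigns utility 1 to the
    # exact history the subject was observed to produce and 0 to every other
    # history, so whatever the subject did is trivially "optimal".
    from typing import Tuple

    # Hypothetical observed history: (timestamp, action) pairs recorded after the fact.
    OBSERVED_HISTORY: Tuple[Tuple[str, str], ...] = (
        ("2011-10-08T15:15:03", "sneeze"),
    )

    def post_hoc_utility(history: Tuple[Tuple[str, str], ...]) -> float:
        """Return 1.0 for the one history that actually happened, 0.0 otherwise."""
        return 1.0 if history == OBSERVED_HISTORY else 0.0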
This is the Texas Sharpshooter fallacy again. Labelling what a system does with 1 and what it does not with 0 tells you nothing about the system. It makes no predictions. It does not constrain expectation in any way. It is woo.
Woo need not look like talk of chakras and crystals and angels. It can just as easily be dressed in the clothes of science and mathematics.
This is the Texas Sharpshooter fallacy again. Labelling what a system does with 1 and what it does not with 0 tells you nothing about the system.
You say “again”, but in the cited link it’s called the “Texas Sharpshooter Utility Function”. The word “fallacy” does not appear. If you’re going to claim there’s a fallacy here, you should support that statement. Where’s the fallacy?
It makes no predictions. It does not constrain expectation in any way. It is woo.
The original claim was that human behavior does not conform to optimizing a utility function, and I offered the trivial counterexample. You’re talking like you disagree with me, but you aren’t actually doing so.
If the only goal is to predict human behavior, you can probably do it better without using a utility function. If the goal is to help someone get what they want, so far as I can tell you have to model them as though they want something, and unless there’s something relevant in that Wikipedia article about the Allais paradox that I don’t understand yet, that requires modeling them as though they have a utility function.
You’ll surely want a prior distribution over utility functions. Since they are computable functions, the usual Universal Prior works fine here, so far as I can tell. With this prior, TSUF-like utility functions aren’t going to dominate the set of utility functions consistent with the person’s behavior, but mentioning them makes it obvious that the set is not empty.
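To spell out the prior being assumed here: under the usual Solomonoff-style universal prior, a computable utility function U receives weight roughly

    P(U) ∝ 2^{−K(U)},

where K(U) is the length in bits of the shortest program that computes U. A TSUF that hard-codes a subject’s entire behavioral history needs roughly as many bits as a compressed description of that history, so it receives correspondingly little weight, while any shorter program that accounts for the same behavior is favored.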
You’ll surely want a prior distribution over utility functions. Since they are computable functions, the usual Universal Prior works fine here, so far as I can tell. With this prior, TSUF-like utility functions aren’t going to dominate the set of utility functions consistent with the person’s behavior
How do you know this? If that’s true, it can only be true by being a mathematical theorem, which will require defining mathematically what makes a UF a TSUF. I expect this is possible, but I’ll have to think about it.
With [the universal] prior, TSUF-like utility functions aren’t going to dominate the set of utility functions consistent with the person’s behavior
How do you know this? If that’s true, it can only be true by being a mathematical theorem...
No, it’s true in the same sense that the statement “I have hands” is true. That is, it’s an informal empirical statement about the world. People can be vaguely understood as having purposeful behavior. When you put them in strange situations this breaks down a bit, and if you wish to understand them as having purposeful behavior you have to contrive the utility function a bit, but for the most part people do things for a comprehensible purpose. If TSUFs were the simplest utility functions that described humans, then human behavior would be random, which it isn’t. Thus the simplest utility functions that describe humans aren’t going to be TSUF-like.
You say “again”, but in the cited link it’s called the “Texas Sharpshooter Utility Function”. The word “fallacy” does not appear. If you’re going to claim there’s a fallacy here, you should support that statement. Where’s the fallacy?
I was referring to the same fallacy in both cases. Perhaps I should have written out TSUF in full this time. The fallacy is the one I just described: attaching a utility function post hoc to what the system does and does not do.
The original claim was that human behavior does not conform to optimizing a utility function, and I offered the trivial counterexample. You’re talking like you disagree with me, but you aren’t actually doing so.
I am disagreeing, by saying that the triviality of the counterexample is so great as to vitiate it entirely. The TSUF is not a utility function. One might as well say that a rock has a utility of 1 for just lying there and 0 for leaping into the air.
If the goal is to help someone get what they want, so far as I can tell you have to model them as though they want something
You have to model them as if they want many things, some of them being from time to time in conflict with each other. The reason for this is that they do want many things, some of them being from time to time in conflict with each other. Members of LessWrong regularly make personal posts on such matters, generally under the heading of “akrasia”, so it’s not as if I was proposing here some strange new idea of human nature. The problem of dealing with such conflicts is a regular topic here. And yet there is still a (not universal but pervasive) assumption that acting according to a utility function is the pinnacle of rational behaviour. Responding to that conundrum with TSUFs is pretty much isomorphic to the parable of the Heartstone.
I know the von Neumann–Morgenstern theorem on utility functions, but since it begins by assuming a total preference ordering on states of the world, it would be begging the question to cite it in support of human utility functions.
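For reference, the assumptions in question, stated informally in their standard textbook form: for all lotteries A, B, C over outcomes,

- Completeness: A ≽ B or B ≽ A;
- Transitivity: if A ≽ B and B ≽ C, then A ≽ C;
- Continuity: if A ≽ B ≽ C, then pA + (1−p)C ∼ B for some p in [0,1];
- Independence: A ≽ B if and only if pA + (1−p)C ≽ pB + (1−p)C for every p in (0,1] and every C.

Completeness (together with transitivity) is the total preference ordering being referred to above.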
The fallacy is the one I just described: attaching a utility function post hoc to what the system does and does not do.
A fallacy is a false statement. (Not all false statements are fallacies; a fallacy must also be plausible enough that someone is at risk of being deceived by it, but that doesn’t matter here.) “Attaching a utility function post hoc to what the system does and does not do” is an activity. It is not a statement, so it cannot be false, and it cannot be a fallacy. You’ll have to try again if you want to make sense here.
The TSUF is not a utility function.
It is a function that maps world-states to utilities, so it is a utility function. You’ll have to try again if you want to make sense here too.
We’re nearly at the point where it’s not worth my while to listen to you because you don’t speak carefully enough. Can you do something to improve, please? Perhaps get a friend to review your posts, or write things one day and reread them the next before posting, or simply make an effort not to say things that are obviously false.
It is a function that maps world-states to utilities, so it is a utility function.
As lessdazed has said, that is simply not what the word “fallacy” means. Neither is a utility function, in the sense of VNM, merely a function from world states to numbers; it is a function from lotteries over outcomes to numbers that satisfies their axioms. The TSUF does not satisfy those axioms. No function whose range includes 0, 1, and nothing in between can satisfy the VNM axioms. The range of a VNM utility function must be an interval of real numbers.
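Spelling out that last step: a VNM utility function is linear in probability mixtures, so if there are lotteries A and B with u(A) = 0 and u(B) = 1, then for every p in [0,1]

    u(pB + (1−p)A) = p·u(B) + (1−p)·u(A) = p,

and every value between 0 and 1 is attained. A function whose range contains 0 and 1 with nothing in between therefore cannot satisfy the axioms.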
Not a pattern of an invalid argument?
Tim, lessdazed has just spoken for me.
Perhaps you are not reading carefully enough.
We’re nearly at the point where it’s not worth my while to listen to you because you don’t speak carefully enough.
Ignored.