You say “again”, but in the cited link it’s called the “Texas Sharpshooter Utility Function”. The word “fallacy” does not appear. If you’re going to claim there’s a fallacy here, you should support that statement. Where’s the fallacy?
I was referring to the same fallacy in both cases. Perhaps I should have written out TSUF in full this time. The fallacy is the one I just described: attaching a utility function post hoc to what the system does and does not do.
The original claim was that human behavior does not conform to optimizing a utility function, and I offered the trivial counterexample. You’re talking like you disagree with me, but you aren’t actually doing so.
I am disagreeing, by saying that the triviality of the counterexample is so great as to vitiate it entirely. The TSUF is not a utility function. One might as well say that a rock has a utility of 1 for just lying there and 0 for leaping into the air.
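To make the rock example concrete, here is a minimal Python sketch of such a post-hoc construction (the names `make_tsuf` and `rock_utility` are invented for this illustration):

```python
# A minimal sketch of a Texas Sharpshooter Utility Function: observe what
# the system actually did, then declare a function assigning utility 1 to
# exactly those behaviours and 0 to everything else.

def make_tsuf(observed_behaviours):
    """Build a 'utility function' post hoc from observed behaviour."""
    observed = set(observed_behaviours)
    return lambda behaviour: 1 if behaviour in observed else 0

rock_utility = make_tsuf(["lie there"])
print(rock_utility("lie there"))          # 1: the rock "maximises utility"
print(rock_utility("leap into the air"))  # 0: anything the rock did not do
```

By construction it fits the observed behaviour perfectly, which is exactly why it explains nothing.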
If the goal is to help someone get what they want, so far as I can tell you have to model them as though they want something
You have to model them as if they want many things, some of them being from time to time in conflict with each other. The reason for this is that they do want many things, some of them being from time to time in conflict with each other. Members of LessWrong regularly make personal posts on such matters, generally under the heading of “akrasia”, so it’s not as if I was proposing here some strange new idea of human nature. The problem of dealing with such conflicts is a regular topic here. And yet there is still a (not universal but pervasive) assumption that acting according to a utility function is the pinnacle of rational behaviour. Responding to that conundrum with TSUFs is pretty much isomorphic to the parable of the Heartstone.
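One way to make that conflict concrete (an invented example, not anything from the thread): pairwise choices that form a cycle cannot be reproduced by any single utility function, as this brute-force check shows.

```python
import itertools

# Invented example: pairwise choices forming a cycle, the kind of pattern
# conflicting wants can produce. Each pair is (winner, loser): the first
# option was chosen over the second.
choices = [("work", "exercise"), ("exercise", "sleep"), ("sleep", "work")]

def representable_by_utility(choices):
    """Return True if some assignment of numbers to options satisfies
    u[winner] > u[loser] for every observed choice."""
    options = sorted({option for pair in choices for option in pair})
    for ranking in itertools.permutations(options):
        u = {option: -rank for rank, option in enumerate(ranking)}
        if all(u[winner] > u[loser] for winner, loser in choices):
            return True
    return False

print(representable_by_utility(choices))  # False: no single utility fits
```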
I know the von Neumann-Morgenstern theorem on utility functions, but since its axioms begin by assuming a total preference ordering on states of the world, it would be begging the question to cite it in support of human utility functions.
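For reference, the completeness axiom in question is standardly stated as follows (the textbook formulation, not a quotation from anyone here):

```latex
% Completeness: any two lotteries L, M are assumed comparable under the
% agent's preference relation -- the very property in dispute for humans.
\forall L, M \in \mathcal{L} : \quad L \preceq M \ \text{ or } \ M \preceq L
```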
The fallacy is the one I just described: attaching a utility function post hoc to what the system does and does not do.
A fallacy is a false statement. (Not all false statements are fallacies; a fallacy must also be plausible enough that someone is at risk of being deceived by it, but that doesn’t matter here.) “Attaching a utility function post hoc to what the system does and does not do” is an activity. It is not a statement, so it cannot be false, and it cannot be a fallacy. You’ll have to try again if you want to make sense here.
The TSUF is not a utility function.
It is a function that maps world-states to utilities, so it is a utility function. You’ll have to try again if you want to make sense here too.
We’re nearly at the point where it’s not worth my while to listen to you because you don’t speak carefully enough. Can you do something to improve, please? Perhaps get a friend to review your posts, or write things one day and reread them the next before posting, or simply make an effort not to say things that are obviously false.
It is a function that maps world-states to utilities, so it is a utility function.
As lessdazed has said, that is simply not what the word “fallacy” means. Neither is a utility function, in the sense of VNM, merely a function from world states to numbers; it is a function from lotteries over outcomes to numbers that satisfies their axioms. The TSUF does not satisfy those axioms. No function whose range includes 0, 1, and nothing in between can satisfy the VNM axioms. The range of a VNM utility function must be an interval of real numbers.
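The interval claim is just the expected-utility form at work: taking lotteries A and B with u(A) = 0 and u(B) = 1, the mixtures fill in everything between.

```latex
% Utility of a p-mixture of lotteries is the mixture of their utilities:
u\bigl(pB + (1-p)A\bigr) = p\,u(B) + (1-p)\,u(A) = p, \qquad p \in [0,1].
% So if 0 and 1 are both in the range, so is every p between them; a range
% of exactly {0, 1}, as with the TSUF, cannot satisfy the VNM axioms.
```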
We’re nearly at the point where it’s not worth my while to listen to you because you
Not a pattern of an invalid argument?
Tim, lessdazed has just spoken for me.
Perhaps you are not reading carefully enough.
Ignored.