A few minor points:
You mention utility (“can be adapted to any utility function”) before defining what utility is. Also, you make it sound like the concept of utility is specific to CDT rather than being common to all of the decision theories mentioned.
Utility isn’t the same as utilitarianism. There are only certain classes of utility functions that could reasonably be considered “utilitarian”, but decision theories work for any utility function.
What exactly do you mean by a “zero-sum game”? Are we talking about two-player games only? (talking about “the other players” threw me off here)
Thanks! I’ve made some edits.
I think the concept of utility functions is widespread enough that I can get away with it (and I can’t find an aesthetically pleasing way to reorder that section and fix it).
Nowhere in this post am I talking about Benthamist altruistic utilitarianism. I realize the ambiguity of the terms, but again I don’t see a good way to fix it.
Oops, good catch.
Ah, sorry: it was the link "but that's a different topic" that I was referring to; I realize I didn't make that clear. I was expecting that the justification for assigning outcomes a utility would link to something about the von Neumann-Morgenstern axioms, which I think are less controversial than altruistic utilitarianism. But it's only a minor point.
Ah. It may be a matter of interpretation, but I view that post and this one as more enlightening on the need for expected-utility calculation, even for non-altruists, than the von Neumann-Morgenstern axioms.
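Since a couple of these points turn on the claim that decision theories accept any utility function (not just utilitarian ones), here is a minimal sketch of that idea — all names and numbers are illustrative, not from the post: expected-utility maximization only needs some function from outcomes to numbers, and a purely selfish preference ordering works just as well as an altruistic one.

```python
def expected_utility(action, outcomes, utility):
    """Sum the utility of each outcome, weighted by its probability given the action."""
    return sum(p * utility(o) for o, p in outcomes[action].items())

def best_action(outcomes, utility):
    """Pick the action with the highest expected utility."""
    return max(outcomes, key=lambda a: expected_utility(a, outcomes, utility))

# Hypothetical outcome distributions for two actions.
outcomes = {
    "safe":  {"small_win": 1.0},
    "risky": {"big_win": 0.5, "loss": 0.5},
}

# An arbitrary (here, purely self-interested) utility function works fine.
utility = {"small_win": 10, "big_win": 30, "loss": -20}.get

print(best_action(outcomes, utility))  # "safe": 10 vs. 0.5*30 + 0.5*(-20) = 5
```

Swapping in a different `utility` (including a Benthamist one that sums everyone's welfare) changes which action wins, but not the decision procedure itself — which is the sense in which utility here is broader than utilitarianism.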