I feel this way. The linear theories are usually nothing but first-order approximations.
Also, the very idea of summing individual agents’ utilities… that’s, frankly, nothing but pseudomathematics. Each agent’s utility function can be modified (by any positive affine transformation, for instance) without changing the agent’s behaviour in any way. The utility function is a phantom: it isn’t defined in a way that would let you add two of them together. You can map the same agent’s preferences (whenever they are well-ordered) onto an infinite variety of real-valued ‘utility functions’.
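A minimal sketch of that non-summability, assuming VNM-style utilities (which are unique only up to a positive affine transformation); the agents, options and numbers are invented for illustration. Rescaling one agent’s utilities changes nothing about that agent’s behaviour, yet flips which option the naive sum picks:

```python
# Two agents with opposed preferences over two options. A utility function
# u can be replaced by a*u + b (a > 0) without changing any choice the
# agent makes, so the "sum of utilities" across agents is arbitrary.

options = ["x", "y"]

u_alice = {"x": 0.0, "y": 1.0}  # Alice prefers y to x
u_bob   = {"x": 1.0, "y": 0.0}  # Bob prefers x to y

def rescale(u, a, b=0.0):
    """Positive affine transform: identical preferences, identical behaviour."""
    assert a > 0
    return {o: a * v + b for o, v in u.items()}

def summed_winner(u1, u2):
    """The option maximizing the naive utility sum."""
    return max(options, key=lambda o: u1[o] + u2[o])

# Same two agents, behaving identically in both cases; opposite "social optima":
print(summed_winner(rescale(u_alice, 10), u_bob))  # 'y': Alice's scale dominates
print(summed_winner(u_alice, rescale(u_bob, 10)))  # 'x': Bob's scale dominates
```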
Yes. The trouble with “shut up and multiply”, beyond assuming that humans have a utility function at all, is assuming that utility works like conventional arithmetic and that you can in fact multiply.
There’s also measuring and shut-up-and-multiplying the wrong thing: e.g., seeing people willing to pay about the same in total to save 2000 birds or 20,000 birds and claiming this constitutes “scope insensitivity.” The error is concluding that people are scope-insensitive, rather than realising that people aren’t buying saved birds at all: they’re paying what they’re willing to pay for warm fuzzies in general, which is a constant amount.
The attraction of utilitarianism is that calculating actions would be so much simpler if utility functions existed and their output could be added with the same sort of rules as conventional arithmetic. This does not, however, constitute non-negligible evidence that any of the required assumptions hold.
It even tends to count against it, by the A+B rule: if items are selected by a high enough combined score on two criteria A and B, then among the selected items there will tend to be a negative correlation between A and B. Theories get adopted partly for convenience and partly for being well-supported, so among adopted theories, convenience is weak evidence against support.
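Since the A+B rule may be unfamiliar, here is a quick simulation of the selection effect it describes; nothing here is specific to utilitarianism, A and B are just independent scores:

```python
# Berkson-style selection: A and B start out independent, but conditioning
# on a high combined score A + B induces a negative correlation between them.

import random

random.seed(0)
population = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(100_000)]
selected = [(a, b) for a, b in population if a + b > 2.0]

def corr(pairs):
    """Pearson correlation of a list of (a, b) pairs."""
    n = len(pairs)
    ma = sum(a for a, _ in pairs) / n
    mb = sum(b for _, b in pairs) / n
    cov = sum((a - ma) * (b - mb) for a, b in pairs) / n
    va = sum((a - ma) ** 2 for a, _ in pairs) / n
    vb = sum((b - mb) ** 2 for _, b in pairs) / n
    return cov / (va * vb) ** 0.5

print(corr(population))  # roughly 0: independent before selection
print(corr(selected))    # roughly -0.8: strongly negative after selection
```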
I don’t know who’s making that error. Seems like scope insensitivity and purchasing of warm fuzzies are usually discussed together around here.
Anyway, if there’s an error here then it isn’t about utilitarianism vs. something else, but about declared vs. revealed preferences. People believe that they care about the birds; they don’t act as if they care about the birds. For those who accept deliberative reasoning as an expression of human values, it’s a failure of the decision-making intuitions, and it’s called scope insensitivity. For those who believe that true preference is revealed through behavior, it’s a failure of reflection. Neither position seems inconsistent with utilitarianism. In fact it might be easier to be a total utilitarian if you go all the way and conclude that humans really care only about power and sex. Just give everybody nymphomania and megalomania, prohibit birth control and watch that utility counter go. ;)
An explanatory reply from the downvoter would be useful. I’d like to think I could learn.
I don’t think it’s even linearly combinable. Suppose there were 4 copies of me in total: one pair doing some identical thing, the other pair doing 2 different things. The second pair is worth more. When I see someone go linear on morals, that strikes me as evidence of poverty of moral value and/or poverty of the mathematical language they have available.
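A toy formalization of the copies intuition; the activity names and the all-or-nothing novelty discount are invented purely for illustration. A linear valuation cannot tell the two pairs apart, while a diversity-sensitive one can:

```python
# Linear aggregation scores each copy independently, so duplicated work
# counts double; a diversity-sensitive valuation counts each distinct
# activity once, so the pair doing two different things comes out ahead.

def linear_value(activities):
    return sum(1.0 for _ in activities)   # every copy counts fully

def diversity_value(activities):
    return float(len(set(activities)))    # duplicates add nothing new

pair_same      = ["prove_theorem", "prove_theorem"]
pair_different = ["prove_theorem", "write_symphony"]

print(linear_value(pair_same), linear_value(pair_different))        # 2.0 2.0
print(diversity_value(pair_same), diversity_value(pair_different))  # 1.0 2.0
```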
Then there’s the consequentialism. The consequences are hard to track: you’ve got to model the worlds resulting from an uncertain initial state, which is really, really computationally expensive. Everything is going to use heuristics, even Jupiter brains.
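A rough sketch of where the expense comes from, under the simplifying assumption that the initial state has n independent binary unknowns: an exact expected-utility calculation already sums over 2^n worlds before simulating any of their dynamics.

```python
# Exact expected utility over an uncertain initial state: with n binary
# unknowns (each true with probability `prob`), there are 2**n worlds.

from itertools import product

def expected_utility(n_unknowns, utility, prob=0.5):
    total = 0.0
    for world in product([0, 1], repeat=n_unknowns):  # 2**n_unknowns terms
        k = sum(world)  # how many unknowns came out true
        p = prob ** k * (1 - prob) ** (n_unknowns - k)
        total += p * utility(world)
    return total

print(expected_utility(10, utility=sum))  # 1,024 worlds: instant, prints 5.0
# expected_utility(40, utility=sum) would enumerate ~10**12 worlds, and a real
# consequentialist still has to model each world's future, not just score it.
```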
Well, “willing to pay for warm fuzzies” is a bad way to put it, IMO. There’s a limited amount of money available in the first place; caring about birds rather than warm fuzzies doesn’t make you a billionaire.
The figures people would pay to save 2000, 20,000, or 200,000 birds were $80, $78 and $88 respectively, which oughtn’t be so much that the utility of money for most WEIRD people would be significantly non-linear over that range. (A much stronger effect, IMO, could be people taking the “2000” or the “20,000”, possibly subconsciously, as evidence about the total population of that bird species.)
Utilitarians don’t have to sum different utility functions. A utilitarian has a utility function that happens to be defined as a sum of intermediate values assigned to each individual. Those intermediate values are also (confusingly) referred to as utility, but they don’t come from evaluating any of the infinite variety of ‘true’ utility functions of every individual. They come from evaluating the total utilitarian’s model of individual preference satisfaction (or happiness, or whatever).
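A minimal sketch of that distinction, where welfare_model is a placeholder for the total utilitarian’s own model rather than any standard definition: one common scale, chosen by the evaluator, applied to every individual and then summed.

```python
# The utilitarian's utility function is the sum of *their model's* outputs,
# not the sum of each agent's own (scale-arbitrary) utility function.

def welfare_model(person):
    # The evaluator's model of how well this individual's preferences are
    # satisfied; the scale is the evaluator's choice, shared by everyone.
    return person["needs_met"] / person["needs_total"]

def total_utility(world):
    # One utility function, defined as a sum of intermediate model values.
    return sum(welfare_model(p) for p in world)

world = [
    {"name": "ann", "needs_met": 8, "needs_total": 10},
    {"name": "bob", "needs_met": 3, "needs_total": 10},
]
print(total_utility(world))  # 1.1
```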
Or at least it seems to me that it should be that way. If I see a simple technical problem that doesn’t really affect the spirit of the argument, then the best thing to do is to fix the problem and move on. If total utilitarianism really is commonly defined as summing every individual’s utility function, then that is silly, but it’s a problem of confused terminology and not really a strong argument against utilitarianism.
But the spirit of the argument is ungrounded in anything. What evidence is there that you can do this stuff at all using actual numbers without repeatedly bumping into “don’t do non-normative things even if you got that answer from a shut-up-and-multiply”?
Well, then you can have a model where the modelled individual is sad when the real individual is happy, and vice versa, and there would be no problem with that.
You’ve got to ground the symbols somewhere. The model has to be defined to approximate reality for it to make sense, and for the model to approximate reality it has to somehow process the individual’s internal state.