The only similarity between those cases is that they involve utility calculations you disagree with. Otherwise every single detail is completely different (e.g. the sort of utility considered, two negative utilities being traded against each other vs. trading utility elsewhere (positive and negative) for positive utility, which side of the trade the single person with the large individual utility difference is on, the presence of perverse incentives, etc., etc.).
If anything it would be more logical to equate Felix with the tortured person and treat this as a reductio ad absurdum of your position on the dust speck problem. (But that would be wrong too, since the numbers aren't actually the problem with Felix; the problem, among other things, is that there's an incentive to manipulate your own utility function that way.)
You aren't seeing the forest for the trees… the thing that is identical is that you are trading utilities across people, which is fundamentally problematic and leads to either a tortured child or a utility monster, or both.
Omelas is a goddamned paradise. Omelas without the tortured child would be better, yeah, but Omelas as described is still better than any human civilization that has ever existed. (For one thing, it only contains one miserable child.)
Well, it seems to me they are trading N dust specks vs. torture in Omelas. edit: Actually, I don't like Omelas [as an example]. I think that miserable child would only make the society way worse, with people just opting to e.g. kill someone whenever it ever so slightly increases their personal expected utility. The child in Omelas puts them straight onto the slippery slope, and making everyone aware of the slippage makes people slide down for fun and profit.
Our 'civilization', though, is of course a goddamn jungle, so it's pretty damn bad. It's pretty hard to beat on the moral-wrongness scale from first principles; you have to take our current status quo and modify it to get something worse (or take our earlier status quo).
Your edit demonstrates that you really don’t get consequentialism at all. Why would making a good tradeoff (one miserable child in exchange for paradise for everyone else) lead to making a terrible one (a tiny bit of happiness for one person in exchange for death for someone else)?
People are individual survival machines, that's why. Each bastard in Omelas knows at the gut level (not in some abstract way) that there's a child being miserable specifically for a tiny bit of his happiness. His, personally. He will then kill for a larger bit of his happiness. He isn't society. He's an individual. It is all between him and that child. At the very best, between him and his family, and that child. Society ain't part of the equation. (And if it were, communism should have worked perfectly in that universe.) [assuming that the individual believes he won't be caught]
edit: Also, I think you don't understand the story. They didn't take the child apart for much-needed organs to save other folks in Omelas. The child is miserable for the purpose of bringing a sense of unity to the commune, for the purpose of making them value their happiness. That is already very irrational, and not only that, but also entirely contrary to how Homo sapiens behave when exposed to gross injustice.
edit: To explain my use of language: we are not talking about rational agents and what they ought to decide. We are talking about irrational agents that are supposedly (per the premise of the story) made better behaved by participation in a pointless and evil ritual, which is the opposite of the known effect that direct participation in that sort of ritual has on a populace. That's why the story makes a poor case against utilitarianism: the stated consequence is grossly implausible.
Whatever. The reason I don't like that story much is that I do not believe, given the way Homo sapiens are, that showing people that child in Omelas would have the consequence stated in the story, even if they are told that this is the consequence. It's too much of a stretch. The effect on H. sapiens that I would forecast would be exactly the opposite. Omelas is doing something more like how you break in soldiers for an effective Holocaust death squad: the soldiers who later kill others, or themselves, outside of orders. You make the soldiers participate in something like that all together. That's why I don't like this as an example; I'm arguing against my own point in bringing it up. Because the reason we don't like Omelas is that keeping a child like this won't have a positive consequence (and for it to have the stated positive consequence, the people already have to have a grossly irrational reaction to exposure to that child).
the thing that is identical is that you are trading utilities across people,
This is either wrong (the utility functions of the people involved aren’t queried in the dust speck problem) or so generic as to be encompassed in the concept of “utility calculation”.
Aggregating utility functions across different people is an unsolved problem, but not necessarily an unsolvable one. One way of avoiding utility monsters would be to normalize utility functions. The obvious way to do that leads to problems such as arachnophobes getting less cake even if they like cake equally much, but IMO that’s better than utility monsters.
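A toy sketch of the kind of normalization I mean (hypothetical agents and made-up numbers; it also assumes we could elicit the raw utilities in the first place, which is of course the hard part):

```python
# Toy illustration: rescale each person's utilities over the outcomes to [0, 1]
# before summing, so no single agent's scale can dominate (no utility monster).

def normalize(utilities):
    """Rescale one person's utilities to the [0, 1] range."""
    lo, hi = min(utilities.values()), max(utilities.values())
    span = (hi - lo) or 1.0  # indifferent agent: avoid division by zero
    return {outcome: (u - lo) / span for outcome, u in utilities.items()}

def aggregate(people):
    """Sum normalized utilities per outcome across people."""
    totals = {}
    for raw in people:
        for outcome, u in normalize(raw).items():
            totals[outcome] = totals.get(outcome, 0.0) + u
    return totals

# Felix-style utility monster: his raw numbers are enormous...
felix = {"cake_for_felix": 1_000_000, "cake_for_others": 0}
# ...but after normalization he counts for no more than anyone else.
alice = {"cake_for_felix": 0, "cake_for_others": 10}
bob   = {"cake_for_felix": 0, "cake_for_others": 10}

print(aggregate([felix, alice, bob]))
# -> {'cake_for_felix': 1.0, 'cake_for_others': 2.0}; the monster is tamed.
# The known cost: an arachnophobe whose dread of spiders anchors their scale has
# their liking for cake compressed, so they end up getting "less cake".
```

The arachnophobe problem falls straight out of the min-max rescaling: whoever has the widest spread between their best and worst outcomes gets everything else squeezed.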
This is either wrong (the utility functions of the people involved aren’t queried in the dust speck problem) or so generic as to be encompassed in the concept of “utility calculation”.
The utilities of many people are a vector; you are to map it to a scalar value, which loses a lot of information in the process, and it seems to me that however you do it, you get some sort of objectionable outcome. edit: I have a feeling one could define it reasonably with some sort of Kolmogorov-complexity-like metric that would grow incredibly slowly for the dust specks and would never equal whatever hideously clever thing our brain does to most of its neurons when we suffer; the suffering would beat the dust specks on complexity (you'd have to write down the largest number you can write down in as many bits as there are bits being tortured in the brain; only then would that number of dust specks start getting to the torture level). We need to understand how pain works before we can start comparing pain vs. dust specks.
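To make the vector-to-scalar point concrete, here is a purely illustrative comparison with invented disutility numbers (neither map is a real proposal, least of all the Kolmogorov one):

```python
import math

# Purely illustrative numbers: a "disutility" scale invented for this sketch.
TORTURE = 1_000.0     # one person tortured for 50 years
SPECK   = 1e-9        # one person getting a dust speck in the eye
N       = 10**15      # number of speck victims (still nothing like 3^^^3)

def additive(harm_per_person, n_people):
    """Straight summation across people: enough specks always beat the torture."""
    return harm_per_person * n_people

def severity_weighted(harm_per_person, n_people):
    """The worst individual harm dominates; head-count adds only a slowly growing factor."""
    return harm_per_person * (1.0 + math.log(n_people))

for total in (additive, severity_weighted):
    specks, torture = total(SPECK, N), total(TORTURE, 1)
    print(total.__name__, "-> specks outweigh torture:", specks > torture)
# additive -> specks outweigh torture: True
# severity_weighted -> specks outweigh torture: False
```

Same vector of harms, opposite verdicts; the disagreement lives entirely in the choice of the vector-to-scalar map.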
Really? Every use of utilities I have seen either uses a real-world measure (such as money) with a note that it isn't really utility, or goes directly for unfalsifiable handwaving. So far I haven't seen anything to suggest "aggregating utility functions" is even theoretically possible. For that matter, most of what I have read suggests that even an individual's "utility function" is usually unmanageably fuzzy, or even unfalsifiable, itself.
Tapping out, inferential distance too wide.
You must know that every pyramid Felix does not get causes him a dust-speck-sized pain.
Humanity's suffering is only several billion times as much as a single 50-year torture of one individual.
Nothing compared to 3^^^3 “dust specks”??!
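(For scale, a quick check on the first two steps of the tower behind that notation; the third step, 3^^^3 itself, cannot even be written out:)

```python
# Knuth up-arrows: 3^^3 = 3**(3**3), and 3^^^3 = 3^^(3^^3), i.e. a power tower
# of 3s whose height is 3^^3. "Several billion" does not register on this scale.
step1 = 3 ** 3          # 27
step2 = 3 ** step1      # 3^^3 = 7,625,597,484,987 (about 7.6 trillion)
print(step1, step2)
# 3^^^3 is a tower of 3s that is `step2` levels tall; its value can be defined
# but never printed.
```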