I think they sometimes do, or at least it is eminently plausible that they sometimes do. The classic trolley problem (especially in its bridge formulation) is widely considered an example of a way in which the act-omission distinction is at odds with consequentialism. I’m sure you’re aware of the trolley problem, so I’m not bringing it up as something you haven’t encountered, but rather to note that I’m confused as to why, given that you’re aware of it, you think it doesn’t defy consequentialism.
For another example, on one plausible theory in population ethics (the total view), creating a happy person at happiness level x adds to the total amount of happiness in the world, and is therefore just as valuable as increasing an existing person’s level of happiness by x. Thus, not creating this person when you could goes against consequentialism.
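(A minimal sketch of the arithmetic the total view implies, assuming purely for illustration that happiness levels can be summed on a single common scale, which is exactly the assumption contested below: let the existing population have happiness levels $h_1, \dots, h_n$, so the total is
\[
W = \sum_{i=1}^{n} h_i, \qquad W_{\text{create}} = W + x, \qquad W_{\text{boost}} = \Bigl(\sum_{i \ne j} h_i\Bigr) + (h_j + x) = W + x .
\]
On this accounting, creating a new person at level $x$ and raising an existing person’s level by $x$ both yield $W + x$, so the two options come out equally valuable, and declining to create the person forgoes the same gain.)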
There are ways to argue that these asymmetries are actually optimal from a consequentialist perspective, but it seems to me the default view would be that they aren’t, so I’m confused why you think that they so obviously are. (I’m not sure that the fact that these asymmetries defy consequentialism would make them confusing—I don’t think (most) humans are intuitive consequentialists, at least not about all cases, so it seems to me not at all confusing that some of our intuitions would prescribe actions that aren’t optimal from a consequentialist perspective.)
The classic trolley problem (especially in its bridge formulation) is widely considered an example of a way in which the act-omission distinction is at odds with consequentialism.
It is no such thing. Anyone who considers it thus is wrong.
A world where a bystander has murdered a specific fat person by pushing him off a bridge to prevent a trolley from hitting five other specific people, and a world where a trolley was speeding toward a specific person, and a bystander has done nothing at all (when he could, at his option, have flipped a switch to make the trolley crush five other specific people instead), are very different worlds. That means that the action in question, and the omission in question, have different consequences.
For another example, on one plausible theory in population ethics (the total view), creating a happy person at happiness level x adds to the total amount of happiness in the world, and is therefore just as valuable as increasing an existing person’s level of happiness by x.
Valuable to whom?
Thus, not creating this person when you could goes against consequentialism.
No, it doesn’t. This scenario is nonsensical for various reasons (the incomparability of “level of happiness”, and the general implausibility of treating “level of happiness” as a ratio scale, are two big ones), but from a person-centered view (which is the only kind of view that isn’t absurd), creating a new person and raising an existing person’s happiness have vastly different consequences.
… one plausible theory in population ethics (the total view) …
The total view (construed in the way that is implied by your comments) is not a plausible theory.
A world where a bystander has murdered a specific fat person by pushing him off a bridge to prevent a trolley from hitting five other specific people, and a world where a trolley was speeding toward a specific person, and a bystander has done nothing at all (when he could, at his option, have flipped a switch to make the trolley crush five other specific people instead), are very different worlds. That means that the action in question, and the omission in question, have different consequences.
Technically it is true that there are different consequences, but a) most consequentialists don’t think that the differences are very morally relevant, and b) you can construct examples where these differences are minimised without changing people’s responses very much. For instance, by specifying that you would be given amnesic drugs after the trolley problem, so that there’s no difference in your memories.
The total view (construed in the way that is implied by your comments) is not a plausible theory.
Yet many people seem to find it plausible, including me. Have you written up a justification of your view that you could point me to?
a) most consequentialists don’t think that the differences are very morally relevant
That may very well be, but if—for instance—the “most consequentialists” to whom you refer are utilitarians, then the claim that their opinion on this is manifestly nonsensical is exactly the claim I am making in the first place… so any such majoritarian arguments are unconvincing.
For instance, by specifying that you would be given amnesic drugs after the trolley problem, so that there’s no difference in your memories.
The more outlandish you have to make a scenario to elicit a given moral intuition, the less plausible that moral intuition is, and the less weight we should assign to it. In any case, even if the consequences are the same in the modified scenario, that in no way at all means that they’re also the same in the original, unmodified, scenario.
The total view (construed in the way that is implied by your comments) is not a plausible theory.
Yet many people seem to find it plausible, including me. Have you written up a justification of your view that you could point me to?
Criticisms of utilitarianism (or even of total utilitarianism in particular, or of other similarly aggregative views) are not at all difficult to find. I don’t, in principle, object to providing references for some of my favorite ones, but I won’t put in the effort to do so if the request to provide them is made only as a rhetorical move. So, are you asking because you haven’t encountered such criticisms? Or because you have, but found them unconvincing (and if so, which sort have you encountered)? Or because you have, and are aware of convincing counterarguments?
(To be clear: for my part, I have never encountered convincing responses to any of [what I consider to be] the standard criticisms. At most, there are certain evasions[1], or handwaving, etc.)
Everyone’s being silly. Consequentialism maximizes the expected utility of the world. Said understands “world” to mean “universe configuration history”. The others understand “world” to mean “universe configuration”.
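(To make the two readings concrete, here is a rough sketch, granting for the moment the “expected utility” framing that is disputed just below, with notation of my own choosing rather than anything anyone here has committed to:
\[
a^{*} = \arg\max_{a} \; \mathbb{E}\bigl[\, U(s_0, s_1, \dots, s_T) \mid a \,\bigr]
\qquad \text{vs.} \qquad
a^{*} = \arg\max_{a} \; \mathbb{E}\bigl[\, U(s_T) \mid a \,\bigr],
\]
where $s_0, \dots, s_T$ is the entire history of universe configurations and $s_T$ is only the final configuration. On the history reading, an act and an omission can differ in value even if they terminate in the same final configuration; on the configuration reading they cannot.)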
Consequentialism maximizes the expected utility of the world.
Consequentialist moral frameworks do not require the agent to have[1] a utility function. Without a utility function, there is no “expected utility”.
In general, I would advise avoiding such “technical-language” rephrasings of standard definitions; they often (such as here) create inaccuracies where there were none.
Said understands “world” to mean “universe configuration history”. The others understand “world” to mean “universe configuration”.
Unless you’re positing a last-Thursdayist sort of scenario where we arrive at some universe configuration “synthetically” (i.e., by divine fiat, rather than by the universe evolving into the configuration “naturally”), this distinction is illusory. Barring such bizarre, wholly hypothetical scenarios, you cannot get to a state where, for instance, people remember an event happening, there are records and other evidence of the event happening, etc., without that event actually having happened.
Said, your “[1]” is not a link.
It wasn’t meant to be a link, it was meant to be a footnote reference (as in this comment); however, I seem to have forgotten to add the actual footnote, and now I don’t remember what it was supposed to be… perhaps something about so-called “normalizing assumptions”? Well, it’s not critical.
[1] Here “have” should be taken to mean “have preferences that, due to obeying certain axioms, may be transformed into”.
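(The axioms alluded to are presumably the von Neumann–Morgenstern ones: completeness, transitivity, continuity, and independence. If an agent’s preferences $\succeq$ over lotteries satisfy them, there exists a function $u$, unique up to positive affine transformation, such that
\[
A \succeq B \;\iff\; \mathbb{E}[u(A)] \ge \mathbb{E}[u(B)] .
\]
An agent whose preferences violate any one of these axioms can still rank actions by their consequences, but has no “expected utility” in this sense.)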
I only meant to unpack consequentialism’s definition in order to get a handle on the “world” term. I’m fine with “Consequentialism chooses actions based on their consequences on the world.”
The distinction is relevant for, for example, whether to care about an AI simulating humans in detail in order to figure out their preferences.
Quantum physics combines amplitudes of equal universe configurations regardless of their history. A quantum computer could arrive in the same state through different paths, some of which had it run morally relevant algorithms.
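(Roughly, in sum-over-histories form, as a sketch rather than anyone’s committed formalism: the amplitude of a final configuration is
\[
\psi(s_T) \;=\; \sum_{\substack{\text{histories } h \\ h \text{ ends in } s_T}} A(h),
\]
so once the contributing histories have interfered, the configuration itself carries no record of which of them ran the morally relevant computation.)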
Even if the distinction is illusory, it seems to be the crux of everyone’s disagreement.