Part One:
Methodology: Why think that intuitions are reliable? What is reflective equilibrium, other than reflecting on our intuitions? If it is some process by which we balance first-order intuitions against general principles, why think this process is reliable?
Metaethics: Realism vs. error theory vs. expressivism?
Part Two:
2.6 I don’t see the collapse—an axiology may be paired with different moralities, e.g. a satisficing morality or a maximizing morality. Maybe all that is meant by the collapse is that the right is a function of the good? If so, ‘collapse’ is misleading.
Part Four:
4.2 Taking actions that make the world better is different from taking actions that make the world best. Consequentialism says that only consequences matter—a controversial claim that hasn’t been addressed.
4.4 This makes a straw man of the deontologist. Deontologists differ from consequentialists in ways other than avoiding dirtying their hands / guilt. They may care about not using others as means, or about distinctions like doing/allowing and killing/letting die, which apply to some trolley cases and (purportedly) justify not producing the best consequences. More argument is needed to show that this precludes morality from ‘living in the world’.
Part Five:
5.4 Not obvious that different consequentialisms converge on most practical cases. Some people desire pain. Some desire authenticity, achievement, relationships, etc. (and so would refuse the experience machine). Some desire not to be cheated on, not to have their wills disregarded, etc.
Part Seven:
7.3 Doesn’t address the strongest form of the objection. A stronger form is: we know that certain acts or institutions are necessarily immoral (gladiatorial games, slavery); utilitarianism could (whether or not it does) require that we promote these; therefore utilitarianism is false. I like the utility monster as an example of this. The response in 7.5 to the utility monster case is bullet-biting—this should be the response in 7.3. The response that utilitarianism probably won’t tell us to promote these is inadequate. The mistake is remade by the three responses in 7.4 (prior to the appeal to ideal rather than actual preferences).
7.6 Similar problem here. The response quibbles with contingent facts, but the force of the objection is that vicious, repugnant, petty, stupid, etc., preferences have no less weight in principle, i.e. in virtue of their status as such.
7.7 Response misses the point. The objection is that it’s hard to see how utilitarianism can accommodate the intuitive distinction between higher and lower pleasures. Sure, utilitarians have nothing against symphonies, but would a world with symphonies be best? (Would an FAI-generated world contain symphonies?)
7.9 Rather quick treatment of the demandingness objection. One relevant issue in the vicinity is that of agent-centered permissions—permissions to do less than the best (in consequentialist terms), e.g. to favor those with whom we have special relations. Many philosophers and folk alike believe in such permissions—utilitarianism has a counterintuitive result here.
Suggestions for further content:
(1) How are we to conceive of ‘better’ consequences? Perhaps any of the answers given by the aforementioned systems would suffice—pleasure, preference satisfaction, ideal preference satisfaction. But I’m not convinced these are practically/pragmatically equivalent. For instance, there may be different best methods for investigating what produces the most pleasure vs. what would best satisfy our ideal preferences, and so different practical recommendations.
(2) What’s our axiology? Is it total utilitarian, egalitarian, prioritarian, maximizing, satisficing, etc.? How do the interests of animals, future time slices, and future individuals weigh against present human interests? A total utilitarian approach seems to be advocated, but that faces its own set of problems (the repugnant conclusion, fanaticism, etc.).
P1: Intuitions being “reliable” requires that the point of intuitions be to correspond to something outside themselves. I’m not sure moral intuitions have this point.
P2: Point taken.
P4.2: I agree that it should be actions that make the world better rather than best, and will rephrase. I don’t understand the point of your second sentence.
4.4: Concerns about not using others as means, or about doing/allowing distinctions, seem to me, common-sensically, not to be about states of the world. I’m not sure what further argument is possible, let alone necessary. The discussion of guilt claims only that guilt is the sole state-of-the-world-relevant difference.
5.4: Would you agree that most of the philosophically popular consequentialisms (act, rule, preference, etc.) usually converge?
7.3 and below: I don’t think slavery and gladiatorial games are necessarily wrong. I can imagine situations in which they would be okay (I’ve mentioned some for gladiators above), and I remain open to moral argument from people who want to convince me they’re okay in our own world (although I don’t expect such argument to succeed any more than I expect to be convinced that the sky is green).
If the belief that slavery is wrong is not an axiom, but instead derives from deeper moral principles that, when formalized under reflective equilibrium, give you consequentialism, then I think it’s fair to say that consequentialism proves slavery and gladiatorial games wrong. In a counterfactual world where consequentialism proved them right, I would either have intuitions that they were right, or be willing to discard my intuition that they were wrong after considering the consequentialist arguments against it.