I think I broadly agree with all the arguments that characterize the problem and motivate indefinability as a solution, but I have different (meta-)meta-level intuitions about how palatable indefinability would be, and as a result I’ve been thinking about similar issues within a differently drawn framework. While you seem to advocate for “salvaging the notion of ‘one ethics’” while highlighting that we then need to live with indefinability, I usually think of it in terms of: “Most of this is underdefined, and that’s unsettling at least in some (but not necessarily all) cases, and if we want to make it less underdefined, the notion of ‘one ethics’ has to give.” Maybe one reason I find indefinability harder to tolerate is that in my own thinking, the problem already arises forcefully at an earlier/higher-order stage, and therefore the range of views over which “ethics” is indefinable is larger and already includes questions of high practical significance. Having said that, I think there are some important pragmatic advantages to an “ethics includes indefinability” framework, and that might be reason enough to adopt it. While different frameworks tend to differ in the underlying intuitions they highlight or move into the background, I think there is more than one parsimonious framework in which people can “do moral philosophy” in a complete and unconfused way. Translation between frameworks can be difficult, though (which is one reason I started to write a sequence about moral reasoning under anti-realism, to establish a starting point for disagreements, but then I got distracted – it’s on hold now).
Some more unorganized comments (apologies for “lazy” block-quote commenting):
Moral indefinability is the term I use for the idea that there is no ethical theory which provides acceptable solutions to all moral dilemmas, and which also has the theoretical virtues (such as simplicity, precision and non-arbitrariness) that we currently desire.
This idea seems correct to me. And as you indicate later in the paragraph, we can add that it’s plausible that the “theoretical virtues” are not well-specified either (e.g., there’s disagreement between people’s theoretical desiderata, or there’s vagueness in how to cash out a desideratum such as “non-arbitrariness”).
My claim is that eventually we will also need to change our meta-level intuitions in important ways, because it will become clear that the only theories which match them violate key object-level intuitions.
This recommendation makes sense to me (insofar as one can still do that), but I don’t think it’s completely obvious. Because both meta-level intuitions and object-level intuitions are malleable in humans, and because there’s no (obviously) principled distinction between these two types of intuitions, it’s an open question to what degree people will want to adjust their meta-level intuitions in order to avoid biting the largest bullets.
If the only reason people were initially tempted to bite the bullets in question (e.g., accept a counterintuitive stance like the repugnant conclusion) was that they had a cached thought that “Moral theories ought to be simple/elegant”, then it makes a lot of sense to adjust this one meta-level intuition after realizing that it seems ungrounded. However, maybe “Moral theories ought to be simple/elegant” is more than just a cached thought for some people:
Some moral realists buy the “wager” that their actions matter infinitely more if moral realism is true. I suspect that an underlying reason why they find this wager compelling is that they have strong meta-level intuitions about what they want morality to be like, and it feels to them that it’s pointless to settle for something other than that.
I’m not a moral realist, but I find myself having similarly strong meta-level intuitions about wanting to do something that is “non-arbitrary” and in relevant ways “simple/elegant”. I’m confused about whether that’s literally the whole intuition, or whether I can break it down into another component. But motivationally it feels like this intuition is importantly connected to what makes it easy for me to go “all-in” for my ethical/altruistic beliefs.
A second reason to believe in moral indefinability is the fact that human concepts tend to be open texture: there is often no unique “correct” way to rigorously define them.
I strongly agree with this point. I think even very high-level concepts in moral philosophy, or in the philosophy of reason/self-interest, have this kind of open texture. In your post you seem to start from the assumption that people have a rough, shared sense of what “ethics” is about. But if the fuzziness already sets in at this very high level, it calls into question whether you can find a solution that seems satisfying to different people’s (fuzzy and underdetermined) sense of what the question/problem is even about.
For instance, there are narrow interpretations such as “ethics as altruism/caring/doing good” (which I think roughly captures at least large parts of what you assume, and it also captures the parts I’m personally most interested in). There’s also “ethics as cooperation or contract”. And maybe the two blend into each other.
Then there’s the broader (I label it “existentialist”) sense in which ethics is about “life goals” or “Why do I get up in the morning?”. Within this broader interpretation, you suddenly get narrower subdomains like “realism about rationality” or “What makes up a person’s self-interest?”, where the connection to the other narrow domains (e.g. “ethics as altruism”) is not always clear.
I think indefinability is a plausible solution (or meta-philosophical framework?) for all of these. But when the scope over which we observe indefinability becomes this broad, it’s easier to see why it might feel frustrating to some people: without clearly delineated concepts it can be harder to make progress, so a framework in which indefinability plays a central role could in some cases obscure conceptual progress in subareas where such progress is possible (at least at the “my personal morality” level, though not necessarily at the level of a “consensus morality”).
(I’m not sure I’m disagreeing with you BTW; probably I’m just adding thoughts and blowing up the scope of your post.)
I would guess that many anti-realists are sympathetic to the arguments I’ve made above, but still believe that we can make morality precise without changing our meta-level intuitions much—for example, by grounding our ethical beliefs in what idealised versions of ourselves would agree with, after long reflection. My main objection to this view is, broadly speaking, that there is no canonical “idealised version” of a person, and different interpretations of that term could lead to a very wide range of ethical beliefs.
I agree. The second part of my comment here tries to talk about this as well.
And even if idealised reflection is a coherent concept, it simply passes the buck to your idealised self, who might then believe my arguments and decide to change their meta-level intuitions.
Yeah. I assume most of us are familiar with a deep sense of uncertainty about whether we’ve found the right approach to ethical deliberation. One can maybe avoid this uncomfortable feeling of uncertainty by deferring to idealized reflection. But it’s not obvious that this lastingly solves the underlying problem: maybe we’ll always feel uncertain whenever we enter the mode of “actually making a moral judgment”. If I found myself as a virtual person who is part of a moral reflection procedure such as Paul Christiano’s indirect normativity, I wouldn’t suddenly know and feel confident about how to resolve my uncertainties. And the extra power, together with the fact that life in the reflection procedure would be very different from the world I currently know, introduces further risks and difficulties. I think there are still reasons why one might want to value particularly open-ended moral reflection, but maybe it’s important that people don’t use the uncomfortable feeling of “maybe I’m doing moral philosophy wrong” as their sole reason to value it. If the reality is that this feeling never goes away, then there seems to be something wrong with the underlying intuition that valuing particularly open-ended moral reflection is by default the “safe” or “prudent” thing to do. (And I’m not saying it’s wrong for people to value particularly open-ended moral reflection; I suspect it depends on one’s higher-order intuitions: for every perspective there’s a place where the buck stops.)
From an anti-realist perspective, I claim that perpetual indefinability would be better.
It prevents fanaticism, which is a big plus. And it plausibly creates more agreement, which is also a plus in some weirder sense (there’s a “non-identity problem” type thing about whether we can harm future agents by setting up the memetic environment such that they’ll end up having less easily satisfiable goals, compared to an alternative where they’d find themselves in larger agreement and therefore with more easily satisfiable goals). A drawback is that it can mask underlying disagreements and maybe harm underdeveloped positions relative to the status quo.
That may be a little more difficult to swallow from a realist perspective, of course. My guess is that the core disagreement is whether moral claims are more like facts, or more like preferences or tastes
That’s a good description. I sometimes use the analogy of “morality is more like career choice than scientific inquiry”.
I don’t think that’s a coincidence: psychologically, humans just aren’t built to be maximisers, and so a true maximiser would be fundamentally adversarial.
This is another good instrumental/pragmatic argument for why anti-realists interested in shaping the memetic environment in which humans do moral philosophy might want to promote the framing of indefinability rather than “many different flavors of consequentialism, and (eventually) we should pick”.
Thanks for the detailed comments! I only have time to engage with a few of them:
Most of this is underdefined, and that’s unsettling at least in some (but not necessarily all) cases, and if we want to make it less underdefined, the notion of ‘one ethics’ has to give.
I’m not that wedded to ‘one ethics’, more like ‘one process for producing moral judgements’. But note that if we allow arbitrariness of scope, then ‘one process’ can be a piecewise function which uses one subprocess in some cases and another in others.
I find myself having similarly strong meta-level intuitions about wanting to do something that is “non-arbitrary” and in relevant ways “simple/elegant”. …motivationally it feels like this intuition is importantly connected to what makes it easy for me to go “all-in” for my ethical/altruistic beliefs.
I agree that these intuitions are very strong, and they are closely connected to motivational systems. But so are some object-level intuitions like “suffering is bad”, and so the relevant question is what you’d do if it were a choice between that and simplicity. I’m not sure your arguments distinguish one from the other in that context.
one can maybe avoid this uncomfortable feeling of uncertainty by deferring to idealized reflection. But it’s not obvious that this lastingly solves the underlying problem
Another way of phrasing this point: reflection is almost always good for figuring out what’s the best thing to do, but it’s not a good way to define what’s the best thing to do.
there’s a “non-identity problem” type thing about whether we can harm future agents by setting up the memetic environment such that they’ll end up having less easily satisfiable goals, compared to an alternative where they’d find themselves in larger agreement and therefore with more easily satisfiable goals
I hadn’t heard of that before, I’m glad you mentioned it. Your comment (as a whole) was both interesting/insightful/etc. and long, and I’d be interested in reading any future posts you make.
For the record, this is probably my key objection to preference utilitarianism, but I didn’t want to dive into the details in the post above (for a very long post about such things, see here).