Good models of moral language should be able to reproduce the semantics that normal people use every day.
Agreed. So much the worse for classic emotivism and error theory.
But semantics seems secondary to you (as it is for many meta-ethicists, frankly – semantic ascent is often just used as a technique for avoiding talking past one another, allowing e.g. anti-realist views to be voiced without begging the question. I think many are happy to grab whatever machinery from symbolic logic they need to make the semantics fit the metaphysical/epistemological views they hold more dearly.) I’d like to get clear on just what it is you have strong/weak credence in. How would you distribute your credences over the following (very non-exhaustive and simplified) list?
Classic Cultural Relativism: moral rules/rightness are to be identified with cultural codes (and for simplicity, say that goodness is derivative). Implication for moral epistemology: like other invented social games, to determine what is morally right (according to the morality game) we just need to probe the rulemakers/keepers (perhaps society at large or a specific moral authority).
Boyd’s view (example of naturalist realism): moral goodness is to be identified with the homeostatic clusters of natural (read: regular, empirically observable) properties that govern the (moral) use of the term “good” in basically the same way that tigerness is to be identified with homeostatic clusters of natural properties that govern the (zoological) use of the term “tiger.” To score highly on tigerness is to score highly on various traits e.g. having orange fur with black stripes, being quadrupedal, being a carnivore, having retractable claws… We’ve learned more about tigers (tigerness) as we’ve encountered more examples (and counterexamples) of them and refined our observation methods/tools; the same goes (and will continue to go) for goodness and good people. Implication for moral epistemology: “goodness” has a certain causal profile – investigate what regulates that causal profile, the same way we investigate anything else in science. No doubt mind-dependent things like your own preferences or cultural codes will figure among the things that regulate the term “good” but these will rarely have the final say in determining what is good or not. Cultural codes and preferences will likely just figure as one homeostatic mechanism among many.
Blackburn’s Projectivism or Gibbard’s Norm-Expressivism (sophisticated versions of expressivism, examples of naturalist anti-realism): morality is reduced to attitudes/preferences/plans.
According to Blackburn we talk as if moral properties are out there to be investigated the way Boyd suggests we can, but strictly speaking this is false: his view is a form of moral fictionalism. He believes there is no general causal profile to moral terms: nothing besides our preferences/attitudes regulates our usage of these terms. The only thing to “discover” is what our deepest preferences/attitudes are (and if we don’t care about having coherent preferences/attitudes, we can also note our incoherencies). Implication for moral epistemology: learn about the world while also looking deep inside yourself to see how you are moved by that new knowledge (or something to this effect).
According to Gibbard normative statements are expressions of plans – “what to do.” The logical structure of these expressions helps us express, probe and revise our plans for their consistency within a system of plans, but ultimately, no one/nothing outside of yourself can tell you what system of plans to adopt. Implication for moral epistemology: determine what your ultimate plans are and do moral reasoning with others to work out any inconsistencies in your system of plans.
If I had to guess you’re in the vicinity of Blackburn (3.a). Can you confirm? But now, how does your preferred view fit your three bullet points of data better than the others? Your 4th data point, matching normal moral discourse (more like a dataset), is another story. E.g. I think (1) pretty clearly scores worse on this one compared to the others. But the others are debatable, which is part of my point – it’s not obvious which theory to prefer. And there is clearly disagreement between these views – we can’t hold them all at once without some kind of incoherence: there is a choice to be made. How are you making that choice?
As for this:
In terms of epistemology of morality, the average philosopher has completely dropped the ball. But since, on average, they think that as well, surely I’m only deferring to those who have thought longer on this when I say that.
I’m sorry but I don’t follow. Care to elaborate? You’re saying philosophers have, on average, failed to develop plausible/practical moral epistemologies? Are you saying this somehow implies you can safely disregard their views on meta-ethics? I don’t see how: the more relevant question seems to be what our current best methodology for meta-ethics is and whether you or some demographic (e.g. philosophers) are comparatively better at applying it. Coming up with a plausible/practical moral epistemology is often treated as a goal of meta-ethics. Of course the criteria for success in that endeavor will depend on what you think the goals of philosophy or science are.
If I had to guess you’re in the vicinity of Blackburn (3.a). Can you confirm?
Can confirm. Although between Boyd and Blackburn, I’d point out that the question of realism falls by the wayside (they both seem to agree we’re modeling the world and then pointing at some pattern we’ve noticed in the world, whether you call that realism or not is murky), and the actionable points of disagreement are things like “how much should we be willing to let complicated intuitions be overruled by simple intuitions?”
And there is clearly disagreement between these views – we can’t hold them all at once without some kind of incoherence
If two people agree about how humans form concepts, and one says that certain abstract objects we’ve formed concepts for are “real,” and another says they’re “not real,” they aren’t necessarily disagreeing about anything substantive.
Sometimes people disagree about concept formation, or (gasp) don’t even give it any role in their story of morality. There’s plenty of room for incoherence there.
But along your Boyd-Blackburn axis, arguments about what to label “real” are more about where to put emphasis, and often smuggle in social/emotive arguments about how we should act or feel in certain situations.
(Re: The Tails Coming Apart As Metaphor For Life. I dunno, if most people, upon reflection, find that the extremes prescribed by all straightforward extrapolations of our moral intuitions look ugly, that sounds like convergence on… not following any extrapolation into the crazy scenarios and just avoiding putting yourself in the crazy scenarios. It might just be wrong for us to have such power over the world as to direct ourselves into any part of Extremistan. Maybe let’s just not go to Extremistan – let’s stay in Mediocristan (and rebrand it as Satisficistan). If at first something sounds exciting and way better than where you are now, but on reflection looks repugnant – worse than where you are now – then maybe don’t go there. If utilitarianism, Christianity etc. yield crazy results in the limit, so much the worse for them. Repugnance keeps striking you as you gaze upon tails that have come apart? Maybe that’s because what you care about are actually homeostatic property clusters: the good doesn’t “boil down” to one simple thing like happiness or a few commands written on a stone tablet. Maybe you care about a balance of things – about following all four Red, Yellow, Blue and Green lines (along with 100 other ones no doubt) – never one thing at the unacceptable expense of another. But this is a topic for another day and I’m only gesturing vaguely at a response.)
(Sorry for delay! Was on vacation. Also, got a little too into digging up my old meta-ethics readings. Can’t spend as much time on further responses...)
Although between Boyd and Blackburn, I’d point out that the question of realism falls by the wayside...
I mean fwiw, Boyd will say “goodness exists” while Blackburn is arguably committed to saying “goodness does not exist” since in his total theory of the world, nothing in the domain that his quantifiers range over corresponds to goodness – it’s never taken as a value of any of his variables. But I’m pretty sure Blackburn would take issue with this criterion for ontological commitment, and I suspect you’re not interested in that debate. I’ll just say that we’re doing something when we say e.g. “unicorns don’t exist” and some stories are better than others regarding what that something is (though of course it’s an open question as to which story is best).
they both seem to agree we’re modeling the world and then pointing at some pattern we’ve noticed in the world
I think the point of agreement you’re noticing here is their shared commitment to naturalism. Neither thinks that morality is somehow tied up with spooky acausal stuff. And yes, to talk very loosely, they are both pointing at patterns in the world and saying “that’s what’s key to understanding morality.” But contra:
If two people agree about how humans form concepts, and one says that certain abstract objects we’ve formed concepts for are “real,” and another says they’re “not real,” they aren’t necessarily disagreeing about anything substantive.
they are having a substantive disagreement, precisely over which patterns are key to understanding morality. They likely agree more or less on the general story of how human concepts form (as I understand you to mean “concept formation”), but they disagree about the characteristics of the concept [goodness] – its history, its function, how we learn more about its referent (if it has any), etc. Blackburn’s theory of [goodness] (a theory of meta-ethics) points only to feeling patterns in our heads/bodies (when talking “external” to the moral linguistic framework, i.e. in his meta-ethical moments; “internal” to that framework he points to all sorts of things. I think it’s an open question whether he can get away with this internal/external dance,[1] but I’ll concede it for now). Boyd just straightforwardly points to all sorts of patterns, mostly in people’s collective and individual behavior, some in our heads, some in our physiology, some in our environment… And now the question is, who is correct? And how do we adjudicate?
Maybe I can sharpen their disagreement with a comparison. What function does “tiger” serve in our discourse? To borrow terms from Huw Price, is it an e-representation which serves to track or co-vary with a pattern (typically in the environment), or is it an i-representation which serves any number of other “in-game” functions (e.g. signaling a logico-inferential move in the language game, or maybe using/enforcing/renegotiating a semantic rule)? Relevant patterns to determine the answer to such questions: the behaviour of speakers. Also, we will need to get clear on our philosophy of language/linguistic theory: not everyone agrees with Price that this “new bifurcation” is all that important – people will try to subsume one type of role under another.[2] Anyway, suppose we now agree that “tiger” serves to refer, to track certain patterns in the environment. Now we can ask, how did “tiger” come to refer to tigers? Relevant patterns seem to include:
the evolution of a particular family of species – the transmission and gradual modification of common traits between generations of specimens
the evolution of the human sensory apparatus, which determines what sorts of bundles of patterns humans tend to track as unified wholes in their world models
the phonemes uttered by the first humans to encounter said species, and the cultural transmission/evolution of that guttural convention to other humans
...and probably much more I’m forgetting/glossing over/ignoring.
We can of course run the same questions for moral terms. And on nearly every point Blackburn and Boyd will disagree. None of these are spooky questions, but they seem relevant to helping us get clear on our collective project to study tigers – what it is and how to go about it. Of course zoologists don’t typically need to go to the same lengths ethicists do, but I think it’s fair to chalk that up to how controversial moral talk is. It’s important to note that neither Blackburn nor Boyd is in the business of revising the function/referents of moral talk: they don’t want to merely stipulate the function/referent of “rightness” but instead take the term as they hear it in the mouths of ordinary speakers and give an account of its associated rules of use, its function, and the general shape of its referent (if it has one).
At this point you might object: what’s the point? How does this have any bearing on what I really care about, the first-order stuff – e.g. whether stealing is wrong or not? One appeal of meta-ethics, I think, is that it presents a range of non-moral questions that we can hopefully resolve in more straightforward ways (especially if we all agree on naturalism), and that these non-moral questions will allow us to resolve many first-order moral disputes. On the (uncontroversial? in any case, empirically verifiable) assumption that our moralizing (moral talk, reflection, judgment) serves some kind of function or is conducive to some type of outcome, then hopefully, if we can get a better handle on what we’re doing when we moralize, we can do it better by its own lights.[3]
Assuming of course one wants to moralize better – no one said ethics/meta-ethics would be of much interest to the amoralist. Here is indeed a meta-preference – the usual one appealed to in order to motivate the (meta-)ethicist’s enterprise. (Most people aren’t anti-moralists, who are only interested in meta-ethics insofar as it helps them do moralizing worse. And few are interested in making accurate predictions about homo sapiens’ moralizing for its own sake, without applying it to their own lives.) But I don’t see this as threatening or as differentiating meta-ethics from other scientific endeavours. It’s not threatening (i.e. the bootstrapping works) because, as with any inquiry, we begin with some grasp of our subject matter, the thing we’re interested in. We point and say “that’s what I want to investigate.” As we learn more about it, refining the definition of our subject matter, our interest shifts to track this refinement too (either in accordance with meta-preferences, or through shifts in our preferences in no way responsive to our initial set of preferences). This happens in any inquiry though. Suppose I care about solving a murder, but in the course of my investigation I discover no one killed the alleged victim – they died of unrelated causes. At that point, I may drop all interest upon realizing no murder occurred, or I might realize what I really wanted to solve was the death of this person.
Might we end up not caring about the results of meta-ethics? I find that highly unlikely, assuming we have the meta-preference of wanting to do this morality thing better, whatever it turns out to be. This meta-preference assumes as little as possible about its subject, in the same way that an interest in solving a death assumes less about its subject than an interest in solving a murder. Meta-ethicists are like physicists who are interested in understanding what causes the perturbations in Mercury’s orbit, whatever it turns out to be: they are not married to a specific planet-induced-perturbations hypothesis, dropping all interest once Vulcan was found missing.
Hopefully we agree on the first-order claim that one should want to do this morality thing better – whatever “doing morality better” turns out to be! In much the same way that an athlete will, upon noting that breathing is key to better athletic performance, want to “do breathing better” whatever breathing turns out to be. The only difference with the athlete is that I take “doing morality better” to be among my terminal goals, insofar as it’s virtuous to try and make oneself more virtuous. (It’s not my only terminal goal of course – something something shard theory/allegory of the chariot.)
To make sure things are clear: naturalists all agree there is a process as neutral as any other scientific process for doing meta-ethics – for determining what it is homo sapiens are doing when they engage in moralizing. This is the methodological (and ultimately, metaphysical) point of agreement between e.g. Blackburn and Boyd. We need to e.g. study moral talk, observe whether radical disagreement is a thing, study other behaviour, etc. (Also taken as constraints: leaving typical moral discourse/uncontroversial first-order claims intact.) Naturalist realists start to advance a meta-ethical theory when they claim that there is a process as neutral as any other scientific process for determining what is right and what is wrong. On naturalist realist accounts our first-order ethics is (more or less) in the same business as every other science: getting better at predictions in a particular domain (according to LW’s philosophy of science). To simplify massively: folk morality is the proto-theory for first-order ethics; moral talk is about resolving whose past predictions about rightness/wrongness were correct, and the making of new predictions. None of this is a given of course – I’m not sure naturalist realist meta-ethics is correct! But I don’t see why it’s obviously false.
This brings me back to my original point: it’s not obvious what homo sapiens are doing when they engage in moralizing! It seems to me we still have a lot to learn! It’s not at all obvious to me that our moral terms are not regulated by pretty stable patterns in our environment+behaviour and that together they don’t form an attractor.
If we have a crux, I suspect it’s in the above, but just in case I’ll note some other, more “in the weeds” disagreements between Blackburn and Boyd. (They are substantive, for the broad reasons given above, but you might not feel what’s at stake without having engaged in the surrounding theoretical debates.)
Blackburn won’t identify goodness with any of the patterns mentioned earlier – arguably he can’t strictly (i.e. external to the moral linguistic framework) agree we can determine the truth of any moral claims (where “truth” here comes with theoretical baggage). Ultimately, moral claims to him are just projections of our attitudes, not claims on the world, despite remaining “truth-apt.” (He would reject some of this characterization, because he wants to go deflationist about truth, but then his view threatens to collapse into realism – see the Taylor paper below.) Accordingly, and contra Yudkowsky, he does not take “goodness” to be a two-place predicate with its predication relativized to the eye of the beholder. (“Goodness” is best formalized as an operator, not a predicate, according to Blackburn.) This allows him to deny that what’s good depends on the eye of the beholder. You can go with subjectivists (moral statements are reports of attitudes, and attitudes are what determine what is good/bad relative to the person with those attitudes), who point to basically the same patterns as Blackburn regarding “what is key to understanding morality,” and now you don’t have to do this internal/external dance. But this comes with other implications: moral disagreement becomes very hard to account for (when I say “I like chocolate” and you say “I like vanilla” are we really disagreeing?), and one is committed to saying things like “what’s good depends on the eye of the beholder.”
I know it can sound like philosophers are trying to trap you/each other with word games and are actually just tripping on their own linguistic shoelaces. But I think it’s actually just really hard to say all the things I think you want to say without contradiction (or to be a person with all the policies you want to have): that’s part of what I’m trying to point out in the previous paragraph. In the same vein, perhaps the most interesting recent development in this space has been to investigate whether views like Blackburn’s don’t just collapse into “full-blown” realism like that of Boyd (along with all its implications for moral epistemology). This is the Taylor paper I sent you a few months ago (but see FN 2 below). Similarly, Egan 2007 points out how Blackburn’s quasi-realism could (alternatively) collapse into subjectivism.
the actionable points of disagreement are things like “how much should we be willing to let complicated intuitions be overruled by simple intuitions?”
I suspect their disagreement is deeper than you think, but I’m not sure what you mean by this: care to clarify?
I use Carnap’s internal-external distinction but IIRC, Blackburn’s view isn’t exactly the same, since Carnap’s distinction is meant to apply to all linguistic frameworks, whereas Blackburn seems to be trying to make a special carve-out specifically for moral talk. But it’s been a while since I properly read through these papers. I’m pretty sure Blackburn draws on Carnap though.
I mention Price’s theory, because his global expressivism might be the best chance anti-realists like Blackburn have for maintaining their distance from realism while retaining their right to ordinary moral talk. There is still much to investigate!
“By its own lights” here is not spooky. We notice certain physical systems that have collections of mechanisms that each support one another in maintaining certain equilibria: each mechanism is said to have a certain function in this system. We can add to/modify mechanisms in the system in order to make it more or less resilient to shocks, more or less reliably reach and maintain those equilibria. We’re “helping” the system by its lights when we make it more resilient/robust/reliable; “hindering” it when we make it less resilient/robust/reliable.
To make sure things are clear: naturalists all agree there is a process as neutral as any other scientific process for doing meta-ethics – for determining what it is homo sapiens are doing when they engage in moralizing. This is the methodological (and ultimately, metaphysical) point of agreement between e.g. Blackburn and Boyd
How come they disagree on all those apparently non-spooky questions about relevant patterns in the world? I’m curious how you reconcile these.
In science the data is always open to some degree of interpretation, but a combination of the ability to repeat experiments independent of the experimenter and the precision with which predictions can be tested tends to gradually weed out different interpretations that actually bear on real-world choices.
If long-term disagreement is maintained, my usual diagnosis would be that the thing being disagreed about does not actually connect to observation in a way amenable to science. E.g. maybe even though it seems like “which patterns are important?” is a non-spooky question, actually it’s very theory-laden in a way that’s only tenuously connected to predictions about data (if at all), and so when comparing theories there isn’t any repeatable experiment you could just stack up until you have enough data to answer the question.
Alternately, maybe at least one of them is bad at science :P
It’s not at all obvious to me that our moral terms are not regulated by pretty stable patterns in our environment+behaviour and that together they don’t form an attractor.
In the strong sense that everyone’s use of “morality” converges to precisely the same referent under some distribution of “normal dynamics” like interacting with the world and doing self-reflection? That sort of miracle doesn’t occur for the same reason coffee and cream don’t spontaneously un-mix.
But that doesn’t happen even for “tiger”—it’s not necessary that everyone means precisely the same thing when they talk about tigers, as long as the amount of interpersonal noise doesn’t overwhelm the natural sparsity of the world that allows us to have single-word handles for general categories of things. You could still call this an attractor, it’s just not a pointlike attractor—there’s space for different people to use “tiger” in different ways that are stable under normal dynamics.
If that’s how it is for “morality” too (“if morality is as real as tigers” being a cheeky framing), then if we could somehow map where everyone is in concept space, I expect everyone can say “Look how close together everyone gets under normal dynamics, this can be framed as a morality attractor!” But it would be a mistake to then say “Therefore the most moral point is the center, we should all go there.”
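(A toy sketch of the non-pointlike-attractor picture, purely my own illustration – the pattern location, pull strength and noise level are made-up parameters: if each agent’s usage of a term is noisily pulled toward a stable environmental regularity, the population ends up tightly clustered regardless of where it started, but never collapses to a single point; the residual spread is set by the interpersonal noise.)

```python
import random

random.seed(0)

WORLD_PATTERN = 0.0   # the stable regularity in the world ("tigerness")
PULL = 0.1            # strength of normal dynamics pulling usage toward it
NOISE = 0.05          # interpersonal noise injected each step

# Agents start with widely scattered, idiosyncratic usages of the term.
agents = [random.uniform(-5, 5) for _ in range(200)]

# Normal dynamics: drift toward the world pattern, plus noise.
for _ in range(500):
    agents = [x + PULL * (WORLD_PATTERN - x) + random.gauss(0, NOISE)
              for x in agents]

center = sum(agents) / len(agents)
spread = max(agents) - min(agents)
print(f"center ~ {center:.2f}, spread ~ {spread:.2f}")
```

The initial spread of ~10 shrinks to well under 1, but never to 0: an attractor region, not a point, and nothing in the dynamics privileges the center as “the most moral point.”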
the actionable points of disagreement are things like “how much should we be willing to let complicated intuitions be overruled by simple intuitions?”
I suspect their disagreement is deeper than you think, but I’m not sure what you mean by this: care to clarify?
I forget what I was thinking, sorry. Maybe the general gist was: “if you strip away the supposedly-contingent disagreements like ‘is there a morality attractor,’ what are the remaining fundamental disagreements about how to do moral reasoning?”
How come they disagree on all those apparently non-spooky questions about relevant patterns in the world?
tl;dr: I take meta-ethics, like psychology and economics ~200 years ago, to be asking questions we don’t really have the tools or know-how to answer. And even if we did, there is just a lot of work to be done (e.g. solving meta-semantics, which no doubt involves solving language acquisition. Or e.g. doing some sort of evolutionary anthropology of moral language). And there are few to do the work, with little funding.
Long answer: I take one of philosophy’s key contributions to the (more empirical) sciences to be the highlighting of new or ignored questions, conceptual field clearing, the laying out of non-circular pathways in the theoretical landscape, the placing of landmarks at key choice points. But they are not typically the ones with the tools to answer those questions or make the appropriate theoretical choices informed by finer data. Basically, philosophy generates new fields and gets them to a pre-paradigmatic stage: witness e.g. Aristotle on physics, biology, economics etc.; J. S. Mill and Kant on psychology; Yudkowsky and Bostrom on AI safety; and so on. Give me enough time and I can trace just about every scientific field to its origins in what can only be described as philosophical texts. Once developed to that stage, putatively philosophical methods (conceptual analysis, reasoning by analogy, logical argument, postulation and theorizing, sporadic reference to what coarse data is available) won’t get things much further – progress slows to a crawl or authors might even start going in circles until the empirical tools, methods, interest and culture are available to take things further.
(That’s the simplified, 20-20 hindsight view with a mature philosophy and methodology of science in hand: for much of history, figuring out how to “take things further” was just as contested and confused as anything else, and was only furthered through what was ex ante just more philosophy. Newton was a rival of Descartes and Leibniz: his Principia was a work of philosophy in its time. Only later did we start calling it a work of physics, as pertaining to a field of its own. Likewise with Leibniz and Descartes’ contributions to physics.)
Re: meta-ethics, I don’t think it’s going in circles yet, but do recognize the rate at which it has produced new ideas (found genuinely new choice points) has slowed down. It’s still doing much work in collapsing false choice points though (and this seems healthy: it should over-generate and then cut down).
One thing it has completely failed to do is sell the project to the rest of the scientific community (hence why I write). But it’s also a tough sell. There are various sociological obstacles at work here:
20th century ethical disasters: I think after the atrocities committed in the name of science during the (especially early) 20th century, scientists rightly want nothing to do with anything that smells normative. In some sense, this is a philosophical success story: awareness of the naturalistic fallacy has increased substantially. The “origins and nature of morality” probably raises a lot of alarm bells for many scientists (though, yes, I’m aware there are evolutionary biologists who explore the topic. I want to see more of this). To be clear, the wariness is warranted: this subject is indeed a normative minefield. But that doesn’t mean it can’t be crossed and that answers can’t be found. (I actually think, in the specific case of meta-ethics, part of philosophy’s contribution is to clear or at least flag the normative mines – to keep the first and second order claims as distinct as possible.)
Specialization: As academia has specialized, there has been less cross-departmental pollination.
Philosophy as a dirty word: I think “hard scientists” have come to associate “philosophy” (and maybe especially “ethics”) with “subjective” or something, and therefore to be avoided. Like, for many it’s just negative association at this point, with little reason attached to it. (I blame Hegel – he’s the reason philosophy got such a bad rap starting in the early 20th century).
Funding: How many governments or private funding institutions in today’s post-modern world do you expect prioritize “solving the origins and nature of morality” over other more immediately materially/economically useful or prestigious/constituent-pleasing research directions?
There are also methodological obstacles: the relevant data is just hard to collect; the number of confounding variables, myriad; the dimensionality of the systems involved, incredibly high! Compare, for example, with macroeconomics: natural experiments are extremely few and far between, and even then confounding variables abound; the timescales of the phenomena of interest (e.g. sustained recessions vs sustained growth periods) are very long, and as such we have very little data – there’ve only been a handful of such periods since record keeping began. We barely understand/can predict macro-econ any better than we did 100 years ago, and it’s not for a lack of brilliance, rigor or funding.
Alternately, maybe at least one of them is bad at science :P
In the sense that I take you to be using “science” (forming a narrow hypothesis, carefully collecting pertinent data, making pretty graphs with error bars), probably neither of them is doing it well.[1] But we shouldn’t really expect them to? Like, that’s not what the discipline is good for.
I’d bet they liberally employ the usual theoretical desiderata (explanatory power, ontological parsimony, theoretical conservatism) to argue for their views, but they probably only make cursory reference to empirical studies. And until they do engage with more empirical work, they won’t converge on an answer (or improve our predictions, if you prefer). But, again, I don’t expect them to, since I think most of the pertinent empirical work is yet to be done.
“if morality is as real as tigers” being a cheeky framing
I’m not surprised you find this cheeky, but just FYI I was dead serious: that’s pretty much literally what I and many think is possibly the case.
it’s not necessary that everyone means precisely the same thing when they talk about tigers, as long as the amount of interpersonal noise doesn’t overwhelm the natural sparsity of the world that allows us to have single-word handles for general categories of things. You could still call this an attractor, it’s just not a pointlike attractor—there’s space for different people to use “tiger” in different ways that are stable under normal dynamics. [...] But it would be a mistake to then say “Therefore the most moral point is the center, we should all go there.”
So this is very interesting to me, and I think I agree with you on some points here, but that you’re missing others. But first I need to understand what you mean by “natural sparsity” and what your (very very rough) story is of how our words get their referents. I take it you’re drawing on ML concepts and explanations, and it sounds like a story some philosophers tell, but I’m not familiar with the lingo and want to understand this better. Please tell me more. Related: would you say that we know more about water than our 1700s counterparts, or would you just say “water” today refers to something different than what it referred to in the 1700s? In which case, what is it we’ve gained relative to them? More accurate predictions regarding… what?
Maybe the general gist was “if you strip away the supposedly-contingent disagreements like ‘is there a morality attractor,’” what are the remaining fundamental disagreements about how to do moral reasoning?
Thanks, yep, I’m not sure. Whether or not there is an attractor (and how that attraction is supposed to work) seems like the major crux – certainly in our case!
One thing I want to defend and clarify: someone the other day objected that philosophers are overly confident in their proposals, overly married to them. I think I would agree in some sense, since I think their work is often in doing pre-paradigmatic work: they often jump the gun and declare victory, take philosophizing to be enough to settle a matter. Accordingly, I need to correct the following:
Meta-ethicists are like physicists who are interested in understanding what causes the perturbations Uranus’ orbit, whatever it turns out to be: they are not married to a specific planet-induced-perturbations hypothesis, dropping all interest once Vulcan was found missing.
I should have said the field as whole is not married to any particular theory. But I’m not sure having individual researchers try so hard to develop and defend particular views is so perverse. Seems pretty normal that in trying to advance theory, individual theorists heavily favor one or another theory – the one they are curious about, want to develop, make robust and take to its limit. One shouldn’t necessarily look to one particular frontier physicist to form your best guess about their frontier – instead one should survey the various theories being advanced/developed in the area.
For posterity, we discussed in-person, and both (afaict) took the following to be clear predictive disagreements between the (paradigmatic) naturalist realists and anti-realists (condensed for brevity here, to the point of really being more of a mnemonic device):
Realists claim that:
1. (No Special Semantics): Our use of “right” and “wrong” is picking up, respectively, on what would be appropriately called the rightness and wrongness features in the world.
2. (Non-subjectivism/non-relativism): These features are largely independent of any particular homo sapiens attitudes and very stable over time.
3. (Still Learning): We collectively haven’t fully learned these features yet – the sparsity of the world does support and can guide further refinement of our collective usage of moral terms, should we collectively wish to get better at identifying the presence of said features. This is the claim that leads to talk of a “moral attractor.”
Anti-realists may or may not disagree with (1) depending on how they cash out their semantics, but they almost certainly disagree with something like (2) and (3) (at least in their meta-ethical moments).
Agreed. So much the worse for classic emotivism and error theory.
But semantics seems secondary to you (along with many meta-ethicists, frankly – semantic ascent is often just used as a technique for avoiding talking past one another, allowing e.g. anti-realist views to be voiced without begging the question. I think many are happy to grab whatever machinery from symbolic logic they need to make the semantics fit the metaphysical/epistemological views they hold more dearly.) I’d like to get clear just what it is you have strong/weak credence in. How would you distribute your credences over the following (very non-exhaustive and simplified) list?
1. Classic Cultural Relativism: moral rules/rightness are to be identified with cultural codes (and for simplicity, say that goodness is derivative). Implication for moral epistemology: like other invented social games, to determine what is morally right (according to the morality game) we just need to probe the rulemakers/keepers (perhaps society at large or a specific moral authority).
2. Boyd’s view (an example of naturalist realism): moral goodness is to be identified with the homeostatic clusters of natural (read: regular, empirically observable) properties that govern the (moral) use of the term “good,” in basically the same way that tigerness is to be identified with homeostatic clusters of natural properties that govern the (zoological) use of the term “tiger.” To score highly on tigerness is to score highly on various traits, e.g. having orange fur with black stripes, being quadrupedal, being a carnivore, having retractable claws… We’ve learned more about tigers (tigerness) as we’ve encountered more examples (and counterexamples) of them and refined our observation methods/tools; the same goes (and will continue to go) for goodness and good people. Implication for moral epistemology: “goodness” has a certain causal profile – investigate what regulates that causal profile, the same way we investigate anything else in science. No doubt mind-dependent things like your own preferences or cultural codes will figure among the things that regulate the term “good,” but these will rarely have the final say in determining what is good or not. Cultural codes and preferences will likely just figure as one homeostatic mechanism among many.
3. Blackburn’s Projectivism or Gibbard’s Norm-Expressivism (sophisticated versions of expressivism, examples of naturalist anti-realism): morality is reduced to attitudes/preferences/plans.
3.a. According to Blackburn, we talk as if moral properties are out there to be investigated in the way Boyd suggests, but strictly speaking this is false: his view is a form of moral fictionalism. He believes there is no general causal profile to moral terms: nothing besides our preferences/attitudes regulates our usage of these terms. The only thing to “discover” is what our deepest preferences/attitudes are (and if we don’t care about having coherent preferences/attitudes, we can also note our incoherencies). Implication for moral epistemology: learn about the world while also looking deep inside yourself to see how you are moved by that new knowledge (or something to this effect).
3.b. According to Gibbard, normative statements are expressions of plans – “what to do.” The logical structure of these expressions helps us express, probe and revise our plans for their consistency within a system of plans, but ultimately, no one/nothing outside of yourself can tell you what system of plans to adopt. Implication for moral epistemology: determine what your ultimate plans are and do moral reasoning with others to work out any inconsistencies in your system of plans.
If I had to guess you’re in the vicinity of Blackburn (3.a). Can you confirm? But now, how does your preferred view fit your three bullet points of data better than the others? Your 4th data point, matching normal moral discourse (more like a dataset), is another story. E.g. I think (1) pretty clearly scores worse on this one compared to the others. But the others are debatable, which is part of my point – it’s not obvious which theory to prefer. And there is clearly disagreement between these views – we can’t hold them all at once without some kind of incoherence: there is a choice to be made. How are you making that choice?
As for this:
I’m sorry but I don’t follow. Care to elaborate? You’re saying philosophers have, on average, failed to develop plausible/practical moral epistemologies? Are you saying this somehow implies you can safely disregard their views on meta-ethics? I don’t see how: the more relevant question seems to be what our current best methodology for meta-ethics is and whether you or some demographic (e.g. philosophers) are comparatively better at applying it. Coming up with a plausible/practical moral epistemology is often treated as a goal of meta-ethics. Of course the criteria for success in that endeavor will depend on what you think the goals of philosophy or science are.
Can confirm. Although between Boyd and Blackburn, I’d point out that the question of realism falls by the wayside (they both seem to agree we’re modeling the world and then pointing at some pattern we’ve noticed in the world, whether you call that realism or not is murky), and the actionable points of disagreement are things like “how much should we be willing to let complicated intuitions be overruled by simple intuitions?”
If two people agree about how humans form concepts, and one says that certain abstract objects we’ve formed concepts for are “real,” and another says they’re “not real,” they aren’t necessarily disagreeing about anything substantive.
Sometimes people disagree about concept formation, or (gasp) don’t even give it any role in their story of morality. There’s plenty of room for incoherence there.
But along your Boyd-Blackburn axis, arguments about what to label “real” are more about where to put emphasis, and often smuggle in social/emotive arguments about how we should act or feel in certain situations.
(Re: The Tails Coming Apart As Metaphor For Life. I dunno, if most people, upon reflection, find that the extremes prescribed by all straightforward extrapolations of our moral intuitions look ugly, that sounds like convergence on… not following any extrapolation into the crazy scenarios and just avoiding putting yourself in the crazy scenarios. It might just be wrong for us to have such power over the world as to direct ourselves into any part of Extremistan. Maybe let’s just not go to Extremistan – let’s stay in Mediocristan (and rebrand it as Satisficistan). If at first something sounds exciting and way better than where you are now, but on reflection looks repugnant – worse than where you are now – then maybe don’t go there. If utilitarianism, Christianity, etc. yield crazy results in the limit, so much the worse for them. Repugnance keeps hitting your gaze upon tails that have come apart? Maybe that’s because what you care about are actually homeostatic property clusters: the good doesn’t “boil down” to one simple thing like happiness or a few commands written on a stone tablet. Maybe you care about a balance of things – about following all four Red, Yellow, Blue and Green lines (along with 100 others no doubt) – never one thing at the unacceptable expense of another. But this is a topic for another day and I’m only gesturing vaguely at a response.)
(Sorry for delay! Was on vacation. Also, got a little too into digging up my old meta-ethics readings. Can’t spend as much time on further responses...)
I mean fwiw, Boyd will say “goodness exists” while Blackburn is arguably committed to saying “goodness does not exist” since in his total theory of the world, nothing in the domain that his quantifiers range over corresponds to goodness – it’s never taken as a value of any of his variables. But I’m pretty sure Blackburn would take issue with this criterion for ontological commitment, and I suspect you’re not interested in that debate. I’ll just say that we’re doing something when we say e.g. “unicorns don’t exist” and some stories are better than others regarding what that something is (though of course it’s an open question as to which story is best).
I think the point of agreement you’re noticing here is their shared commitment to naturalism. Neither thinks that morality is somehow tied up with spooky acausal stuff. And yes, to talk very loosely, they are both pointing at patterns in the world and saying “that’s what’s key to understanding morality.” But contra:
they are having a substantive disagreement, precisely over which patterns are key to understanding morality. They likely agree more or less on the general story of how human concepts form (as I understand you to mean “concept formation”), but they disagree about the characteristics of the concept [goodness] – its history, its function, how we learn more about its referent (if it has any) etc. Blackburn’s theory of [goodness] (a theory of meta-ethics) points only to feeling patterns in our heads/bodies (when talking “external” to the moral linguistic framework, i.e. in his meta-ethical moments; “internal” to that framework he points to all sorts of things. I think it’s an open question whether he can get away with this internal/external dance,[1] but I’ll concede it for now). Boyd just straightforwardly points to all sorts of patterns, mostly in people’s collective and individual behavior, some in our heads, some in our physiology, some in our environment… And now the question is, who is correct? And how do we adjudicate?
Maybe I can sharpen their disagreement with a comparison. What function does “tiger” serve in our discourse? To borrow terms from Huw Price, is it an e-representation which serves to track or co-vary with a pattern (typically in the environment), or is it an i-representation which serves any number of other “in-game” functions (e.g. signaling a logico-inferential move in the language game, or maybe using/enforcing/renegotiating a semantic rule)? Relevant patterns to determine the answer to such questions: the behaviour of speakers. Also, we will need to get clear on our philosophy of language/linguistic theory: not everyone agrees with Price that this “new bifurcation” is all that important – people will try to subsume one type of role under another.[2] Anyway, suppose we now agree that “tiger” serves to refer, to track certain patterns in the environment. Now we can ask, how did “tiger” come to refer to tigers? Relevant patterns seem to include:
the evolution of a particular family of species – the transmission and gradual modification of common traits between generations of specimens
the evolution of the human sensory apparatus, which determines what sorts of bundles of patterns humans tend to track as unified wholes in their world models
the phonemes uttered by the first humans to encounter said species, and the cultural transmission/evolution of that guttural convention to other humans
...and probably much more I’m forgetting/glossing over/ignoring.
We can of course run the same questions for moral terms. And on nearly every point Blackburn and Boyd will disagree. None of these are spooky questions, but they seem relevant to helping us get clear on our collective project to study tigers – what it is and how to go about it. Of course zoologists don’t typically need to go to the same lengths ethicists do, but I think it’s fair to chalk that up to how controversial moral talk is. It’s important to note that neither Blackburn nor Boyd is in the business of revising the function/referents of moral talk: they don’t want to merely stipulate the function/referent of “rightness” but instead, take the term as they hear it in the mouths of ordinary speakers and give an account of its associated rules of use, its function, the general shape of its referent (if it has one).
At this point you might object: what’s the point? How does this have any bearing on what I really care about, the first-order stuff – e.g. whether stealing is wrong or not? One appeal of meta-ethics, I think, is that it presents a range of non-moral questions that we can hopefully resolve in more straightforward ways (especially if we all agree on naturalism), and that these non-moral questions will allow us to resolve many first-order moral disputes. On the (uncontroversial? in any case, empirically verifiable) assumption that our moralizing (moral talk, reflection, judgment) serves some kind of function or is conducive to some type of outcome, then hopefully if we can get a better handle on what we are doing when we moralize maybe we can do it better by its own lights.[3]
Assuming of course one wants to moralize better – no one said ethics/meta-ethics would be of much interest to the amoralist. Here is indeed a meta-preference – the usual one appealed to in order to motivate the (meta-)ethicists’ enterprise. (Most people aren’t anti-moralists, who are only interested in meta-ethics insofar as it helps them do moralizing worse. And few are interested in making accurate predictions about homo sapiens’ moralizing for its own sake, without applying it to one’s own life). But I don’t see this as threatening or differentiating from other scientific endeavours. It’s not threatening (i.e. the bootstrapping works) because, as with any inquiry, we begin with some grasp already of our subject matter, the thing we’re interested in. We point and say “that’s what I want to investigate.” As we learn more about it, refining the definition of our subject matter, our interest shifts to track this refinement too (either in accordance with meta-preferences, or through shifts in our preferences in no way responsive to our initial set of preferences). This happens in any inquiry though. Suppose I care about solving a murder, but in the course of my investigation I discover no one killed the alleged victim – they died of unrelated causes. At that point, I may drop all interest upon realizing no murder occurred, or I might realize what I really wanted to solve was the death of this person.
Might we end up not caring about the results of meta-ethics? I find that highly unlikely, assuming we have the meta-preference of wanting to do this morality thing better, whatever it turns out to be. This meta-preference assumes as little as possible about its subject, in the same way that an interest in solving a death assumes less about its subject than an interest in solving a murder. Meta-ethicists are like physicists who are interested in understanding what causes the perturbations in Uranus’ orbit, whatever it turns out to be: they are not married to a specific planet-induced-perturbations hypothesis, dropping all interest once Vulcan was found missing.
Hopefully we agree on the first-order claim that one should want to do this morality thing better – whatever “doing morality better” turns out to be! In much the same way that an athlete will, upon noting that breathing is key to better athletic performance, want to “do breathing better” whatever breathing turns out to be. The only difference with the athlete is that I take “doing morality better” to be among my terminal goals, insofar as it’s virtuous to try and make oneself more virtuous. (It’s not my only terminal goal of course – something something shard theory/allegory of the chariot).
To make sure things are clear: naturalists all agree there is a process as neutral as any other scientific process for doing meta-ethics – for determining what it is homo sapiens are doing when they engage in moralizing. This is the methodological (and ultimately, metaphysical) point of agreement between e.g. Blackburn and Boyd. We need to e.g. study moral talk and other behaviour, observe whether radical disagreement is a thing, etc. (Also taken as constraints: leaving typical moral discourse/uncontroversial first-order claims intact.) Naturalist realists start to advance a meta-ethical theory when they claim that there is a process as neutral as any other scientific process for determining what is right and what is wrong. On naturalist realist accounts our first-order ethics is (more or less) in the same business as every other science: getting better at predictions in a particular domain (according to LW’s philosophy of science). To simplify massively: folk morality is the proto-theory for first-order ethics; moral talk is about resolving whose past predictions about rightness/wrongness were correct, and the making of new predictions. None of this is a given of course – I’m not sure naturalist realist meta-ethics is correct! But I don’t see why it’s obviously false.
This brings me back to my original point: it’s not obvious what homo sapiens are doing when they engage in moralizing! It seems to me we still have a lot to learn! It’s not at all obvious to me that our moral terms are not regulated by pretty stable patterns in our environment+behaviour and that together they don’t form an attractor.
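(To make the attractor talk slightly more concrete, here is a deliberately crude toy model. Everything in it – the 1-D “usage” values, the update weights, the fixed environmental signal – is my own invention for illustration, not anyone’s considered theory. The question between us is roughly whether moral-term usage is regulated only by social imitation, or also by some stable pattern in the world; in the latter case the population’s usage gets pinned near that pattern no matter where it started.)

```python
import random

def simulate(env_weight, agents=50, steps=2000, seed=0):
    """Toy model: each agent holds a 1-D 'usage' value for a term.
    Each step, one randomly chosen agent nudges its value toward a
    random peer's value (social transmission) and, if env_weight > 0,
    toward a fixed environmental signal at 0.0 (a stable worldly
    pattern regulating the term). Returns the final population mean."""
    rng = random.Random(seed)
    usages = [rng.uniform(-10, 10) for _ in range(agents)]
    for _ in range(steps):
        i = rng.randrange(agents)
        peer = usages[rng.randrange(agents)]
        usages[i] += 0.2 * (peer - usages[i])        # imitate a peer
        usages[i] += env_weight * (0.0 - usages[i])  # worldly pull, if any
        usages[i] += rng.gauss(0, 0.05)              # idiosyncratic noise
    return sum(usages) / agents

# With a worldly regulator (env_weight > 0) the population ends up
# clustered near the signal regardless of starting point; with
# env_weight = 0 the group still coheres socially, but *where* it
# coheres is historically contingent (vary the seed and the cluster's
# location wanders).
for seed in range(3):
    print(simulate(env_weight=0.1, seed=seed), simulate(env_weight=0.0, seed=seed))
```

On the realist picture, something like the env_weight term is nonzero for moral terms (the homeostatic mechanisms doing the regulating); on a Blackburn-style picture it isn’t – though note that the social term alone still produces stable local clusters, which I take to be your non-pointlike-attractor point.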
If we have a crux, I suspect it’s in the above, but just in case I’ll note some other, more “in the weeds” disagreements between Blackburn and Boyd. (They are substantive, for the broad reasons given above, but you might not feel what’s at stake without having engaged in the surrounding theoretical debates.)
Blackburn won’t identify goodness with any of the patterns mentioned earlier – arguably he can’t strictly (i.e. external to the moral linguistic framework) agree we can determine the truth of any moral claims (where “truth” here comes with theoretical baggage). Ultimately, moral claims to him are just projections of our attitudes, not claims on the world, despite remaining “truth-apt.” (He would reject some of this characterization, because he wants to go deflationist about truth, but then his view threatens to collapse into realism – see Taylor paper below). Accordingly, and contra Yudkowsky, he does not take “goodness” to be a two-place predicate with its predication relativized to the eye of the beholder. (“Goodness” is best formalized as an operator, and not a predicate, according to Blackburn.) This allows him to deny that what’s good depends on the eye of the beholder. You can go with subjectivists (moral statements are reports of attitudes, attitudes are what determine what is good/bad relative to the person with those attitudes), who point to basically the same patterns as Blackburn regarding “what is key to understanding morality,” and now you don’t have to do this internal/external dance. But this comes with other implications: moral disagreement becomes very hard to account for (when I say “I like chocolate” and you say “I like vanilla” are we really disagreeing?), and one is committed to saying things like “what’s good depends on the eye of the beholder.”
I know it can sound like philosophers are trying to trap you/each other with word games and are actually just tripping on their own linguistic shoelaces. But I think it’s actually just really hard to say all the things I think you want to say without contradiction (or to be a person with all the policies you want to have): that’s part of what I’m trying to point out in the previous paragraph. In the same vein, perhaps the most interesting recent development in this space has been to investigate whether views like Blackburn’s don’t just collapse into “full-blown” realism like that of Boyd (along with all its implications for moral epistemology). This is the Taylor paper I sent you a few months ago (but see FN 2 below). Similarly, Egan 2007 points out how Blackburn’s quasi-realism could (alternatively) collapse into subjectivism.
I suspect their disagreement is deeper than you think, but I’m not sure what you mean by this: care to clarify?
[1] I use Carnap’s internal-external distinction but IIRC, Blackburn’s view isn’t exactly the same, since Carnap’s internal-external distinction is meant to apply to all linguistic frameworks, whereas Blackburn seems to be trying to make a special carve out specifically for moral talk. But it’s been a while since I properly read through these papers. I’m pretty sure Blackburn draws on Carnap though.
[2] I mention Price’s theory, because his global expressivism might be the best chance anti-realists like Blackburn have for maintaining their distance from realism while retaining their right to ordinary moral talk. There is still much to investigate!
[3] “by its own lights” here is not spooky. We notice certain physical systems that have collections of mechanisms that each support one another in maintaining certain equilibria: each mechanism is said to have a certain function in this system. We can add to/modify mechanisms in the system in order to make it more or less resilient to shocks, more or less reliably reach and maintain those equilibria. We’re “helping” the system by its lights when we make it more resilient/robust/reliable; “hindering” it when we make it less resilient/robust/reliable.
How come they disagree on all those apparently non-spooky questions about relevant patterns in the world? I’m curious how you reconcile these.
In science the data is always open to some degree of interpretation, but a combination of the ability to repeat experiments independent of the experimenter and the precision with which predictions can be tested tends to gradually weed out different interpretations that actually bear on real-world choices.
If long-term disagreement is maintained, my usual diagnosis would be that the thing being disagreed about does not actually connect to observation in a way amenable to science. E.g. maybe even though it seems like “which patterns are important?” is a non-spooky question, actually it’s very theory-laden in a way that’s only tenuously connected to predictions about data (if at all), and so when comparing theories there isn’t any repeatable experiment you could just stack up until you have enough data to answer the question.
Alternately, maybe at least one of them is bad at science :P
In the strong sense that everyone’s use of “morality” converges to precisely the same referent under some distribution of “normal dynamics” like interacting with the world and doing self-reflection? That sort of miracle doesn’t occur for the same reason coffee and cream don’t spontaneously un-mix.
But that doesn’t happen even for “tiger”—it’s not necessary that everyone means precisely the same thing when they talk about tigers, as long as the amount of interpersonal noise doesn’t overwhelm the natural sparsity of the world that allows us to have single-word handles for general categories of things. You could still call this an attractor, it’s just not a pointlike attractor—there’s space for different people to use “tiger” in different ways that are stable under normal dynamics.
If that’s how it is for “morality” too (“if morality is as real as tigers” being a cheeky framing), then if we could somehow map where everyone is in concept space, I expect everyone can say “Look how close together everyone gets under normal dynamics, this can be framed as a morality attractor!” But it would be a mistake to then say “Therefore the most moral point is the center, we should all go there.”
I forget what I was thinking, sorry. Maybe the general gist was “if you strip away the supposedly-contingent disagreements like ‘is there a morality attractor,’” what are the remaining fundamental disagreements about how to do moral reasoning?
tl;dr: I take meta-ethics, like psychology and economics ~200 years ago, to be asking questions we don’t really have the tools or know-how to answer. And even if we did, there is just a lot of work to be done (e.g. solving meta-semantics, which no doubt involves solving language acquisition. Or e.g. doing some sort of evolutionary anthropology of moral language). And there are few to do the work, with little funding.
Long answer: I take one of philosophy’s key contributions to the (more empirical) sciences to be the highlighting of new or ignored questions, conceptual field clearing, the laying out of non-circular pathways in the theoretical landscape, the placing of landmarks at key choice points. But they are not typically the ones with the tools to answer those questions or make the appropriate theoretical choices informed by finer data. Basically, philosophy generates new fields and gets them to a pre-paradigmatic stage: witness e.g. Aristotle on physics, biology, economics etc.; J. S. Mill and Kant on psychology; Yudkowsky and Bostrom on AI safety; and so on. Give me enough time and I can trace just about every scientific field to its origins in what can only be described as philosophical texts. Once developed to that stage, putatively philosophical methods (conceptual analysis, reasoning by analogy, logical argument, postulation and theorizing, sporadic reference to what coarse data is available) won’t get things much further – progress slows to a crawl or authors might even start going in circles until the empirical tools, methods, interest and culture are available to take things further.
(That’s the simplified, 20-20 hindsight view with a mature philosophy and methodology of science in hand: for much of history, figuring out how to “take things further” was just as contested and confused as anything else, and was only furthered through what was ex ante just more philosophy. Newton was a rival of Descartes and Leibniz: his Principia was a work of philosophy in its time. Only later did we start calling it a work of physics, as pertaining to a field of its own. Likewise with Leibniz and Descartes’ contributions to physics.)
Re: meta-ethics, I don’t think it’s going in circles yet, but do recognize the rate at which it has produced new ideas (found genuinely new choice points) has slowed down. It’s still doing much work in collapsing false choice points though (and this seems healthy: it should over-generate and then cut down).
One thing it has completely failed to do is sell the project to the rest of the scientific community (hence why I write). But it’s also a tough sell. There are various sociological obstacles at work here:
20th century ethical disasters: I think after the atrocities committed in the name of science during the (especially early) 20th century, scientists rightly want nothing to do with anything that smells normative. In some sense, this is a philosophical success story: awareness of the naturalistic fallacy has increased substantially. The “origins and nature of morality” probably raises a lot of alarm bells for many scientists (though, yes, I’m aware there are evolutionary biologists who explore the topic. I want to see more of this). To be clear, the wariness is warranted: this subject is indeed a normative minefield. But that doesn’t mean it can’t be crossed and that answers can’t be found. (I actually think, in the specific case of meta-ethics, part of philosophy’s contribution is to clear or at least flag the normative mines – keep the first and second order claims as distinct as possible).
Specialization: As academia has specialized, there has been less cross-departmental pollination.
Philosophy as a dirty word: I think “hard scientists” have come to associate “philosophy” (and maybe especially “ethics”) with “subjective” or something, and therefore to be avoided. Like, for many it’s just negative association at this point, with little reason attached to it. (I blame Hegel – he’s the reason philosophy got such a bad rap starting in the early 20th century).
Funding: How many governments or private funding institutions in today’s post-modern world do you expect to prioritize “solving the origins and nature of morality” over other, more immediately materially/economically useful or prestigious/constituent-pleasing research directions?
There are also methodological obstacles: the relevant data is just hard to collect; the number of confounding variables, myriad; the dimensionality of the systems involved, incredibly high! Compare, for example, with macroeconomics: natural experiments are extremely few and far between, and even then confounding variables abound; the timescales of the phenomena of interest (e.g. sustained recessions vs sustained growth periods) are very long, and as such we have very little data – there’ve only been a handful of such periods since record keeping began. We barely understand/can predict macro-econ any better than we did 100 years ago, and it’s not for a lack of brilliance, rigor or funding.
Alternately, maybe at least one of them is bad at science :P
In the sense that I take you to be using “science” (forming a narrow hypothesis, carefully collecting pertinent data, making pretty graphs with error bars), neither of them is probably doing it well.[1] But we shouldn’t really expect them to? Like, that’s not what the discipline is good for.
I’d bet they liberally employ the usual theoretical desiderata (explanatory power, ontological parsimony, theoretical conservatism) to argue for their view, but they probably only make cursory reference to empirical studies. And until they do refer to more empirical work, they won’t converge on an answer (or improve our predictions, if you prefer). But, again, I don’t expect them to, since I think most of the pertinent empirical work is yet to be done.
I’m not surprised you find this cheeky, but just FYI I was dead serious: that’s pretty much literally what I and many think is possibly the case.
So this is very interesting to me, and I think I agree with you on some points here, but that you’re missing others. But first I need to understand what you mean by “natural sparsity” and what your (very very rough) story is of how our words get their referents. I take it you’re drawing on ML concepts and explanations, and it sounds like a story some philosophers tell, but I’m not familiar with the lingo and want to understand this better. Please tell me more. Related: would you say that we know more about water than our 1700s counterparts, or would you just say “water” today refers to something different than what it referred to in the 1700s? In which case, what is it we’ve gained relative to them? More accurate predictions regarding… what?
Thanks, yep, I’m not sure. Whether or not there is an attractor (and how that attraction is supposed to work) seems like the major crux – certainly in our case!
One thing I want to defend and clarify: someone the other day objected that philosophers are overly confident in their proposals, overly married to them. I think I would agree in some sense, since I think they are often doing pre-paradigmatic work: they often jump the gun and declare victory, taking philosophizing to be enough to settle a matter. Accordingly, I need to correct the following:
I should have said the field as a whole is not married to any particular theory. But I’m not sure having individual researchers try so hard to develop and defend particular views is so perverse. It seems pretty normal that, in trying to advance theory, individual theorists heavily favor one or another theory – the one they are curious about, want to develop, make robust, and take to its limit. One shouldn’t necessarily look to one particular frontier physicist to form one’s best guess about their frontier – instead one should survey the various theories being advanced/developed in the area.
For posterity, we discussed in-person, and both (afaict) took the following to be clear predictive disagreements between the (paradigmatic) naturalist realists and anti-realists (condensed for brevity here, to the point of really being more of a mnemonic device):
Realists claim that:
(No Special Semantics): Our uses of “right” and “wrong” are picking up, respectively, on what would be appropriately called the rightness and wrongness features in the world.
(Non-subjectivism/non-relativism): These features are largely independent of any particular homo sapiens attitudes and very stable over time.
(Still Learning): We collectively haven’t fully learned these features yet – the sparsity of the world does support and can guide further refinement of our collective usage of moral terms should we collectively wish to generalize better at identifying the presence of said features. This is the claim that leads to claims of there being a “moral attractor.”
Anti-realists may or may not disagree with (1) depending on how they cash out their semantics, but they almost certainly disagree with something like (2) and (3) (at least in their meta-ethical moments).